I’ve written a tool to assist intrepid code archeologists trying to comprehend the structure of ancient codebases. It’s called ifdex, and it comes with a backstory. Grab your fedora and your bullwhip, we’re going in…
One of the earliest decisions we made on NTPsec was to replace its build system. It had become so difficult to understand and modify that we knew it would be a significant drag on development.
Ancient autoconf builds tend to be crawling horrors and NTP’s is an extreme case – 31KLOC of kludgy macrology that defines enough configuration symbols to make getting a grasp on its interface with the codebase nigh-impossible even when you have a config.h to look at. And that’s a problem when you’re planning large changes!
One of our guys, Amar Takhar, is an expert on the waf build system. When he tentatively suggested moving to that I cheered the idea resoundingly. Months later he was able to land a waf recipe which, while not complete, would at least produce binaries that could be live-tested.
When I say “not complete” I mean that I could tell that there were configuration #defines in the codebase that the waf build never set. Quite a few of them – in some cases fossils that the autoconf build didn’t touch either, but in others … not. And these unreached configuration knobs tended to get lost amidst a bunch of conditional guards looking at #defines set by system headers and the compiler.
And we’re not talking a handful or even dozens. I eventually counted over 670 distinct #defines being used in #if/#ifdef/#ifndef/#elif guards – 2430 such guards in all, as A&D regular John D. Bell pointed out in a comment on my last post. I needed some way to examine these and sort them into groups – this is from a system header, that’s a configuration knob, and over there is something else…
So I wrote an analyzer. It parses every compile-time conditional in a code tree for symbols, then reports them either as a bare list or GCC-like file/line error messages that you can step through with Emacs compilation mode.
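For the curious, the core scanning loop is conceptually simple. Here’s a minimal sketch of the approach in Python – not the actual ifdex code; the file-extension filter and the symbol regex are my assumptions, and all the real value is in the classification and exclusion machinery described below:

import os, re, sys

# Match #if/#ifdef/#ifndef/#elif lines (whitespace allowed after the hash).
COND = re.compile(r'^\s*#\s*(?:ifdef|ifndef|elif|if)\b(.*)')
# Candidate symbols inside the conditional expression.
SYM = re.compile(r'\b[A-Za-z_][A-Za-z0-9_]*\b')
NOISE = {"defined"}   # preprocessor keywords that aren't configuration symbols

def scan(tree):
    for root, _, files in os.walk(tree):
        for name in files:
            if not name.endswith((".c", ".h")):
                continue
            path = os.path.join(root, name)
            with open(path, errors="replace") as fp:
                for lineno, line in enumerate(fp, start=1):
                    m = COND.match(line)
                    if not m:
                        continue
                    for sym in SYM.findall(m.group(1)):
                        if sym not in NOISE:
                            # GCC-like report, steppable with Emacs compilation mode
                            print("%s:%d: %s" % (path, lineno, sym))

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")

(Backslash-continued conditionals and a few other corner cases are glossed over here; this is just the shape of the thing.)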
To reduce noise, it knows about a long list of guard symbols (almost 200 of them) that it should normally ignore – things like the __GNUC__ symbol that GCC predefines, or the O_NONBLOCK macro used by various system calls.
The symbols are divided into groups that you can choose to ignore individually with a command-line option. So, if you want to ignore all standardized POSIX macros in the list but see anything OS-dependent, you can do that.
Another important feature is that you can build your own exclusion lists, with comments. The way I’m exploring the jungle of NTP conditionals is by building a bigger and bigger exclusion list describing the conditional symbols I understand. Eventually (I hope) the report of unknown symbols will shrink to empty. At that point I’ll know what all the configuration knobs are with certainty.
As of now I have knocked out about 300 of them and have 373 to go. That’s most of a week’s work, even with my spiffy new tool. Oh well, nobody ever said code archeology was easy.
Does it know about `#pragma once` guards equivalent?
#ifndef FOO_H
#define FOO_H
….
#endif /* FOO_H */
>Does it know about `#pragma once` guards equivalent?
No. I thought about trying to make it recognize those but decided it would be too failure-prone.
In NTPsec I’ve renamed all those symbols to have a “GUARD_” prefix, so I can just put GUARD_ in my exclude file and ignore them.
>> Does it know about `#pragma once` guards equivalent?
> No. I thought about trying to make it recognize those but decided it would be too failure-prone.
I don’t think it would be error prone if you use the whole context:
1.) include guards are named after the filename (uppercased, with non-word characters replaced by underscore, e.g. hello.h -> HELLO_H), i.e. basename of file and guard should match when using relaxed compare / collation function
2.) the ifdef must be of #ifndef form, must start at the beginning of the file (skipping empty lines and comments), and must stop at the end of the file (again, skipping empty lines and comments)
3.) #ifndef FOO_H should be immediately followed by #define FOO_H
This should eliminate false positives (treating non-guard ifdefs as include guards). And it is not possible with regexp-based exclusion – a regexp does not have the full context.
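Something along those lines, as a rough sketch of rules 1–3 (ignoring multi-line comments and other corner cases):

import os, re

def include_guard(path):
    # Return the guard symbol if `path` looks like a classically guarded
    # header under rules 1-3 above, else None.
    lines = []
    with open(path, errors="replace") as fp:
        for raw in fp:
            s = raw.strip()
            # rule 2: skip empty lines and single-line comments
            if not s or s.startswith("//") or (s.startswith("/*") and s.endswith("*/")):
                continue
            lines.append(s)
    if len(lines) < 3:
        return None
    m = re.match(r'#\s*ifndef\s+(\w+)\s*$', lines[0])
    if not m:
        return None
    sym = m.group(1)
    # rule 3: the #ifndef must be immediately followed by the matching #define
    if not re.match(r'#\s*define\s+' + re.escape(sym) + r'\b', lines[1]):
        return None
    # rule 2: the guard must close at the very end of the file
    if not re.match(r'#\s*endif\b', lines[-1]):
        return None
    # rule 1: relaxed compare of guard name against the basename
    expected = re.sub(r'\W', '_', os.path.basename(path)).upper()
    return sym if expected in sym.upper() else None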
OTOH, C99-ification would move header guards to #pragma once…
@esr –
Minor bug – the distribution tarball (from catb.org) has the extension “.tar.gz”, but it is not compressed.
>Minor bug – the distribution tarball (from catb.org) has the extension “.tar.gz”, but it is not compressed.
I checked and it is. As others have said, the browser is being overly helpful.
The project I’m working on right now is an old codebase that heavily uses autoconf, automake, autogen, autoopt, and probably autofootgun. I’ve only needed to do a few tweaks to get things working with the newest versions of these tools, but I already long to replace it all with a few lisp macros.
@John D Bell – some webhosts will “helpfully” declare any gzipped file, even if it really is gzipped on the server and has a .gz filename, as “Content-Encoding: gzip”, which will cause browsers to auto-decompress it (but not change the filename).
@esr –
Nit-picky questions about a gnarly detail of the code (please remember that my Python-fu is very weak):
ifdex, line 322:
ifre = re.compile("^# *(ifdef|ifndef|elif|if)")
Shouldn’t the whitespace after the hash mark be “[ \t]*” ? (There are a few places in the NTP sources where exactly the pattern “hash – tab – if…” occurs.) And couldn’t the alternation just
be “(if|elif)” (since “if” is a proper prefix of both “ifdef” and “ifndef”)?
>Shouldn’t the whitespace after the hash mark be “[ \t]*” ?
Hm. Yes, actually. Thanks for spotting that.
>And couldn’t the alternation just be “(if|elif)” (since “if” is a proper prefix of both “ifdef” and “ifndef”)?
It could be. I wrote it this way to make the intended semantics clearer.
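So the fixed line would look something like this (keeping the explicit alternation for clarity):

ifre = re.compile("^#[ \t]*(ifdef|ifndef|elif|if)")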
Excellent. Scons for gpsd, now waf for NTPsec.
We need to search and destroy all the GNU build crap in open source. And yes, it is crap. It should know whether it has glibc (or whatever Sun SysV had) by checking a few things, but instead it tests for memcpy or bcopy and a dozen other things individually. It can take longer to run ./configure than to compile the actual code, and if there is an error after 15 minutes that you can correct, it takes another 15 minutes to get back to that point.
The proper model is to start with POSIX, or whatever should be common to most (80–98%) targets; headers or macros can test for 64/32-bit or endianness. #ifdef __EXOTIC__ can use macros or shims.
I can appreciate the diversity that existed when autotools was created – there were dozens, if not hundreds, of platforms. But today there has been much more standardization and convergence.
I recently came across a program called “cppp” which claims to be a “partial C preprocessor” and is aimed at cleaning ifdefs out of legacy codebases. I’ve not used it, but it sounds like it would complement ifdex quite nicely.
http://www.muppetlabs.com/~breadbox/software/cppp.html
Random832: That looks like what’s happening here. From the HTTP response:
HTTP/1.1 200 OK
[…]
Content-Type: application/x-gzip
Content-Encoding: x-gzip
This is followed by the (gzipped) tarball. Irritatingly, this happens even if the request doesn’t include an Accept-Encoding header, or if the Accept-Encoding header explicitly says not to. Some browsers have workarounds for this buggy behavior, and others follow the HTTP spec more literally.
What always bothered me about the autoconf stuff is that it checked for features the program doesn’t use. If I don’t call bzero(), why are you checking to see if the system has a working bzero()?
@Garrett isn’t that just a matter of poorly written autoconf scripts?
The autoconf for atool (which was linked here recently – it’s the universal tar unpacker someone mentioned in a recent thread) does basically nothing but check where perl is installed.
>@Garrett isn’t that just a matter of poorly written autoconf scripts?
No. There’s a set of features that autoconf checks for whether you ask or not.
> No. There’s a set of features that autoconf checks for whether you ask or not.
Checking if environment is sane?
http://www.xkcd.com/371/ <– "Title text: Checking whether build environment is sane… build environment is grinning and holding a spatula. Guess not."
>Checking if environment is sane?
Sanity checking is all fine and good, but if a program doesn’t care about the sanity of obscureFunction23() then autoconf shouldn’t care about it either: the entire point of this kind of tool is so the programmer doesn’t have to worry about the details of 10,000 platforms of widely varying sanity.
But all of this is ignoring a larger question: Is autoconf sane?
I think most here would say no, which puts autoconf in the interesting position of being a fundamentalist preacher railing against the horrors of sex-for-anything-other-than-reproduction while cheating on his wife with another man.
Auto* must die. It is full of cruft.
ifdex? Not ifdefix ;-) ?
Though I see that in English translation the name of Asterix dog is “Dogmatix”, not “Idéfix”…
>ifdex? Not ifdefix ;-) ?
Alas, a missed opportunity.
>Though I see that in English translation the name of Asterix dog is “Dogmatix”, not “Idéfix”…
Yes, which I think is a very clever rendering. Most English speakers aren’t familiar enough with the French phrase “idée fixe” to appreciate the original, even though a closely parallel usage does occur occasionally in English.
Fortunately, the punning names “Asterix” and “Obelix” work as well in English as they do in French. The same is not so true of many of the other character names. For the druid, I prefer the French original “Panoramix” or the German translation “Miraculix” to “Getafix”.
> No. There’s a set of features that autoconf checks for whether you ask or not.
The atool case proves, at least, that bzero is not part of that set.
>The atool case proves, at least, that bzero is not part of that set.
You’re right. I’m looking at the giflib configure – pretty close to minimal, it checks for 4 header files (limits.h, fcntl.h, stdint.h, and stdarg.h) and that’s it. In the resulting config.h, I don’t see a #define for bzero, but I do see several excess #defines. Checks for dlfcn.h, fcntl.h, memory.h, string.h, the way obsolete BSD strings.h, and sys/stat.h are done even though I didn’t order them up.
I’m actually surprised to see there is so little excess. In the past I remember seeing checks for functions I wasn’t interested in. Either the maintainers have been cleaning up or many of the excess tests cascade off things one does request.
> Sanity checking is all fine and good, but if a program doesn’t care about the sanity of obscureFunction23() then autoconf shouldn’t care about it either
It turns out that “checking if build environment is sane” actually means:
– Directory path doesn’t contain shell metacharacters.
– “ls -t” works (this is a side effect of the fact that the next test depends on it).
– The system clock is set to a reasonable value (i.e. the configure script itself has a timestamp in the past)
I was surprised that atool used autoconf at all (being a perl script), but it does serve as an existence proof against almost anything (e.g. bzero, obscureFunction23) that you might assume autoconf “always” checks.
Though ifdefix would also be #ifdef + fix; ifdex is (?) if + regex ???
…it’s a pity that naming a program ‘#ifdefix’ would not be a good idea (unless you want to annoy users) :-PPP
>Though ifdefix would be also #ifdef + fix; ifdex is (?) if + regex ???
It’s intended to be a portmanteau of “ifdef” and “index”.
literally #ifdefix
Please use things like \s in these sorts of cases. Two reasons:
1) you don’t have to worry about whether it’s a tab or a space, and it is clear semantically
2) while probably not an issue in this case I have seen stupid text editors wrap lines on the space in a regex which leads to all sorts of unpleasantness
and 2a) occasional moron developers will add or remove spaces not realizing that that space was intentional in the match and thereby screw it up
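In this case that would be something like (using a raw string so the backslash survives):

ifre = re.compile(r"^#\s*(ifdef|ifndef|elif|if)")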
>> Though ifdefix would also be #ifdef + fix; ifdex is (?) if + regex ???
> It’s intended to be a portmanteau of “ifdef” and “index”.
That’s actually a good description of what the program does…
@ESR, only tangentially related (as in, open source software development), but have you seen Urbit, esp. the stuff released just three days ago? http://urbit.org/preview/~2015.9.25/materials/whitepaper It tends to press “not 100% sure about what it is about, but smells like pure genius” buttons in my brain.
>not 100% sure about what it is about, but smells like pure genius” buttons in my brain.
Either pure genius or utter crackpottery – not sure which. I will examine more carefully.
Genius and crackpottery are not mutually exclusive: I submit TempleOS as evidence in support.
Building the case for Urbit/Nock/Hoon as a superposition of genius and crackpottery is this seminal fact: it’s Mencius Moldbug’s baby.
Well, either I have not been exposed to the right programming languages, or there is a serious semantics gap (too many new/invented terms) between the content of that paper and a more normal programming/systems discussion. Or both. When I looked at it, I got a gist, but tended to skim….
But it’s interesting it’s from Moldbug – I didn’t know that when I was reading the paper. I actually find his political writing easier to understand – but sometimes I skim there too. And I would still be interested in seeing what ESR thinks about the DE, NRx and Moldbug – it’s been a while since the initial prefatory posts where he said he would get into more detail later.
Of course, he’s been really busy recently, but still…
I’m an urbit developer, so if anyone has questions about urbit, feel free to ask me either here or by email (philip@tlon.io).
Hi Philip,
This is a clean, smart, logical, very good idea, which is why in its current incarnation it is destined to fail: generally, dirty, illogical, bad ideas tend to win because they match the mindset of users, businesses and less professional programmers better.
Consider Google, which won by denying what every university programming course taught: that you search for things by primary key or by browsing a filtered list, but certainly not with something as chaotic and error-prone as free-text search. Consider PHP, which is a programming language for non-programmers who have IRC access, so they can ask how to multiply something by 10 and then the helpful people on IRC say that out of the 4000 built-in functions there is a multiply_by_ten(x) for you.
I never understood why the programming blogosphere always talks about scaling _up_. The success of something like PHP came from scaling _down_ – it enabled the average IT guy working at an average backwoods small business to cruft together a guest book somehow.
So my question is the following. Urbit can be a very good basis. But how are you going to make it enough of a bad and dirty thing to actually make it popular? Are you going to build a PHP on top of it – i.e. something that enables any clueless loser to build a webshop or suchlike? Because that is how to be popular. Can you provide the kind of technology and platform, some kind of a loser-friendly quick and dirty mess, that actually attracts the countless amateurs who generated the vast majority of web code? Besides PHP, wakanda.org is an absolutely excellent example of that. If Nock is a LISP – it looks like one if you look kinda squinty at it – and thus ideal for DSLs, will you make a stupid-friendly DSL and basically advertise that, not Nock and Hoon itself?
@Philip Do you have any insight into why Urbit uses 3-character CVC “syllables” to represent 8 bits rather than the already-established Bubble Babble CVCVC 16-bit “words”? I always wonder why people reinvent a wheel.
@Dividuals, I think you overestimate the “smartness” of Urbit. It’s really quite dumb. Hoon, for example, might be a purely functional language with type inference, but it’s surely the dumbest one in that class. It feels a lot like an imperative language, and Hindley-Milner is nowhere to be found.
Urbit is clean, though. What we’ve found is that the easiest systems to hack on are those ones which are simple and clean to start with. Too many hacks stacked on top of each other, and you can’t do any more hacking or the whole structure will come crashing down like a game of Jenga. It’s harder to hack on the Unix/Internet than it was.
For example, back in the day, the HTML was clean, and you could just write a scraper to get data from any website. Nowadays, there’s so much javascript and other cruft that it’s nearly impossible, at least with traditional, dumb tools. Back in the day, the average IT guy could cruft together a guest book, and it would be sufficient for the task. You don’t see that anymore. The web got too complicated, and it drifted outside the reach of the average IT guy.
You can slop mud onto a system for a long time with no problems, but eventually, it starts to fall over under its own weight. That’s where Unix and the Internet are. It takes professional mudslingers to make it stick without bringing the rest of the system down.
Unix and the Internet were a sufficient substrate for 40+ years of hacking. We’re trying to be the substrate of the future. We’re clean on the bottom so that you can be dirty on the top.
@The Monster, two reasons come to mind:
Firstly, we don’t like the sound of Bubble Babble words. They’re hard to pronounce, and they don’t seem like something that it would be easy to grow attached to. CVC syllables are easier to pronounce and remember. Bubble Babble was designed for communicating fingerprints, whereas our system was designed for remembering. It’s important for our system to be vaguely “human-like”.
Secondly, we care about the eight-bit case. Our “galaxies” are eight-bit addresses, and we would like them to have names that are distinct from longer addresses. “~zod” is much better for our purposes than “xexax”.
@Philip
I see you using the “galaxies/stars/planets/comets” terminology, which matches the linked paper, but in other places I’ve seen “carriers/cruisers/destroyers/subs”. Which is it? (And why was the other one ever mentioned?)
@The Monster
It’s “galaxies/stars/planets/moons/comets”. Until a few months ago, we used a warship metaphor, but we changed it for a variety of reasons. For one thing, it’s the 32-bit addresses that are human-sized, so saying that everyone gets their own “destroyer” is confusing without explaining the whole metaphor. Plus, space is cool. Anyway, old habits die hard.
@Philip
I’ve downloaded the code but couldn’t get it to build, but that’s not your problem.
So I downloaded with brew (Mac OSX):
brew install --HEAD homebrew/head-only/urbit
and was able to run it (without a network invitation).
When I look at the urbit material (code, docs) I’m reminded of the Codex Seraphinianus.
It’s all vaguely familiar, but enigmatic. So much so that it seems deliberate.
Why?
@John Franklin
The company behind urbit is named Tlon, from the Borges story. Interpret that as you will.
I can assure you that we are not attempting to be enigmatic. If it looks that way, it’s because Curtis (Moldbug) spent over a decade working on this project alone, and if his fertile imagination isn’t kept in check you end up with something absolutely incomprehensible. Two years ago the first outside developers started making significant contributions and the company was born. Since then, it’s gotten so much more understandable than it used to be, but we’ve still got a long way to go. We recognize that its unnecessary strangeness is a problem, and we’re trying to fix it. We apologize.
If you were able to run urbit, did you ever join urbit-meta? It can take a few minutes for the backlog to download, but there’s a helpful community there that can help you understand it all better.
@Philip
Yes, I joined urbit-meta.. at least I talked to someone, somewhere. Asked some questions, got some answers.
Thanks for answering my question here.
> Grab your fedora and your bullwhip, we’re going in…
Forgive me for saying this, but it sounds a bit more like you caught dysentery and then common sense kicked in:
https://www.youtube.com/watch?v=ua_TZ84hmEA
Assuming that you replace Unix and the internet with an Urbit substrate, who’s to say it won’t accrete mud till it collapses also?
Part of the problem, which sets off people’s crackpottery alarms, is that you’re blaming “Unix and the internet” for cruft that accumulated in the layers above those things. In the nineties, Unix supported nearly all the features present in a modern social network: presence information, status updates (.plan files), real time messaging (email, talk, irc), etc. It was simple, relatively robust, and *distributed*. The centralization of online resources happened because companies like Facebook (and earlier, AOL) saw money to be made in the curation of walled gardens, not because of irreparable design flaws in Unix and the internet. We abandoned IRC, email, and fingerd instead of upgrading and hardening them not because the ideas behind them were inherently broken, but because it was easier for people (especially those approaching the profile of the mythical Average User) to switch to Twitter, Facebook, WhatsApp, and whatever walled garden was being heavily marketed to them. The mess we’re in is Endless September fallout, not mistakes in the design of Unix and the network. Could those things be improved? Certainly — the improved versions are called Plan 9 and IPv6. Both of which are experiencing uptake problems; and you think a slash-and-burn “radical rethink” of the platform will fare better?
I’m reminded of Project COSA, whose creator asserts that software is inherently broken because it is based on algorithms, an inherently flawed model; and that the way to fix software is to base it on a “synchronous, signal based model” akin to digital circuit design because digital circuits are reliable and NEVER exhibit bugs. It’s that whole attitude of “Everything you know about software design is fatally flawed! But I, the lone genius, shall correct the mistakes myself; all you need to do is abandon your traditions, knowledge, and experience and come follow me!”
Urbit has more legs than Project COSA, but the best I can say for it is that it could be the PICO-8 of cloud computing.
@Jeff Read
Presence information, .plan, email, talk, irc all work acceptably only with always-on servers. Only one of those do consumers use nowadays, and they use email through some cloud offering like gmail. Email is only distributed if you own your own mail server. Companies do this because they absolutely must have privacy, nerds do this because they’re nerds, and Hillary Clinton did this because she’s rich enough to pay people to run her server. The problem with Unix is not that it’s impossible to write distributed programs — the problem is that consumers don’t have personal servers. Because who wants a Linux server? It’s a full-time job, complete with the job title “sysadmin”, to run a Linux server.
Urbit’s eventual goal is to make managing your personal server as easy as managing your smartphone. You decide what apps to install, make sure it has enough resources, and then get on with your day.
> Email is only distributed if you own your own mail server.
Unfortunately, nowadays, thanks to the proliferation of closed-source cloud email offerings and spam, it is not easy to run one’s own email server, as Bradley M. Kuhn described in “Exercising Software Freedom in the Global Email System”:
https://sfconservancy.org/blog/2015/sep/15/email/
Summary: Bradley Kuhn describes in detail the trouble that Software Freedom Conservancy has encountered by running its own, in-house mail server – specifically, the manual process required to get the server removed from automated blacklists maintained by large email services (the blacklisting probably happened because of IPv4 address reuse, the address likely having belonged to a spammer before). Outlook.com, to be exact.
So reducing barrier to entry to “managing your personal server as easy as managing your smartphone” is not enough…
> Presence information, .plan, email, talk, irc all work acceptably only with always-on servers.
The whole damn internet only works acceptably with always-on servers. We just live in the world where (except for email) the typical user no longer has a local internet service provider willing to operate those servers for them.
@Jakub Narebski
Indeed, I recall vividly the time when I was forced away from my own email server because gmail had me on a blacklist. Even email, a protocol that used to be truly distributed, has become centralized into just a few cloud services. This is one reason sandstorm.io is insufficient — it can’t solve those problems.
The specific problem there was IPv4 reuse. In the Urbit whitepaper we write “Any network with disposable identities can only be an antisocial network, because disposable identities cannot be assigned default positive reputation.” If an identity can be sold for $10, then your spam better net you $10 before it’s blacklisted or else you’d be better off selling it. In the Urbit world, your address has value, and it’s yours forever (or at least until you sell it).
This is exactly the sort of problem that comes with Unix and the internet, and it’s why we layered over it completely. It’s hard to build a distributed system on top of an antisocial network, so we fixed the network. There’s a whole set of problems like this, and we dealt with as many as we could. Making managing your personal server easy is truly a large problem.
Basically, Unix and the internet were designed in the ’70s to solve ’70s problems. The designs have proven quite effective and malleable, but they’re starting to fall apart at the seams.
@Random832
Exactly. Unfortunately, the double-whammy of changing the behavior of ISPs and getting users to keep servers always on and updated dooms any hope for redeeming the current internet. Unix and the internet were built back when computers were accessible only at institutions that were willing to put significant effort into maintaining servers.
Yeah, except no, no it’s not. IP address reuse is a social problem and there’s nothing in the Urbit infrastructure that I’ve seen so far that prevents someone from buying up blocks of Urbit addresses and then distributing them randomly to clients, reusing them as clients log off, much like ISPs do today. IP address reuse is in part motivated by a painfully small address space; the solution for that is IPv6 and we’re having enough trouble getting uptake on that.
Address reuse, data silos, the “behavior of ISPs” — these are all social problems that exist in the layers on top of the internet; they are not intrinsic to its architecture. You go from these problems to grand pronouncements like “the internet has failed” and “we need a replacement for the internet from the ground up” while leaving out the bits in the middle that have to be established in order to conclude that B follows from A, hoping your audience doesn’t notice.
If you can migrate the world onto a shiny new platform, you could just as easily solve the social problems of the current infrastructure without having to go that far. Recapitulating handwavy talking points from your white paper only further convinces me that you are engaged in a bit of rhetorical sleight-of-hand.
@Jeff Read
Apologies for the delay in responding.
There’s a very simple technical solution to that social problem: don’t let ISPs take back addresses. In Urbit, while most addresses are allocated paracentrally (except for the 128-bit comet space, where your address is just a fingerprint of your public key), once an address is allocated, it can’t be retracted. “Allocation” is the act of the parent Urbit signing the new Urbit’s public key. Once it’s done that, there’s no going back, because the new Urbit can prove that the parent did in fact allocate this address. Any further updates to the public key are signed by the previous key. Thus, addresses are cryptographically owned, and there’s no way for the parent to revoke it.
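If it helps, here’s a toy sketch of that ownership chain in Python – emphatically not Urbit’s actual code, key formats, or PKI, just the general shape, using Ed25519 signatures from the `cryptography` package and a made-up address:

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def raw(pub):
    # Raw bytes of an Ed25519 public key.
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

# Allocation: the parent signs (address, initial child key) exactly once.
parent = Ed25519PrivateKey.generate()
child0 = Ed25519PrivateKey.generate()
address = b"~sampel-palnet"                      # made-up address
allocation_sig = parent.sign(address + raw(child0.public_key()))

# Rotation: each replacement key is signed by the key it replaces.
child1 = Ed25519PrivateKey.generate()
rotation_sig = child0.sign(raw(child1.public_key()))

def current_owner(parent_pub, address, first_pub, allocation_sig, rotations):
    # Walk the chain: the parent vouches for key 0, each key vouches for
    # its successor. verify() raises InvalidSignature on any forgery.
    parent_pub.verify(allocation_sig, address + raw(first_pub))
    key = first_pub
    for new_pub, sig in rotations:
        key.verify(sig, raw(new_pub))
        key = new_pub
    return key      # the key currently entitled to speak for `address`

owner = current_owner(parent.public_key(), address, child0.public_key(),
                      allocation_sig, [(child1.public_key(), rotation_sig)])
assert raw(owner) == raw(child1.public_key())

The point is that nothing in the verification path ever consults the parent again after the initial signature, so the parent has no way to take the address back.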
It’s definitely a valid criticism that we haven’t yet done a good enough job describing our technical solutions to the problems we perceive on the internet. Most of the people who understand them do so because they understand the code. We’re writing more documentation on these issues (we plan to release a more thorough description of the network architecture very soon, for example), and hopefully that’ll help. In the meantime, I try to answer questions as well as I can.
I make no defense of neoreactionary thought or rhetorical techniques, for I am far from a neoreactionary. However, with Urbit, I think our problem is rather the opposite of what you suggest: the code contains all the meat, and we’ve been so deep in it that we haven’t given much attention to explaining it to the outside world. We’re trying to change that now, but it’s a nontrivial process for a project the size of Urbit.