Mar 24

Scenes from the Life of a System Architect

I’ve been doing some heavy work on the core code of gpsd recently, and realized it would be a good idea to explain the whys and wherefores to my co-developers on the project. After I wrote my explanation and reread it, I realized I had managed to generate something that might be relatively accessible, and perhaps even interesting, to people who aren’t intimate with the GPSD codebase.

I guess I’m aiming this at junior programmers and particularly curious non-programmers. It’s a slice of what software systems design — the thing that project leads and architects do — is like in the real world, where the weight of history is often as pressing as today’s requirements list. I think this note shows an example of doing it right, or at least not getting it too badly wrong.

If you find the technicalese in here difficult, it may be useful to refer back to some of my previous posts about this project:

GPSD and Code Excellence

GPSD-NG: A Case Study in Application Protocol Evolution

Why GPSes suck, and what to do about it

Continue reading

Mar 13

Subversion to GIT Migration: A Tale of Two Gotchas

I’ve been wanting to migrate the GPSD codebase off Subversion to a distributed version control system for many months now. GPSD has a particular reason to want a DVCS; our developers often have to test GPS sensors outdoors and aren’t necessarily in range of WiFi when they do it.

GPSD also needs to change hosting sites, for reliability reasons I’ve written about before. Though I’m a fan of Mercurial, I determined that moving to git would give us a wider range of hosting options. Also, git and hg are similar enough to make intermigration really easy – moving from SVN to either one gets you 90% of the way to the other.

This blog entry records two problems I ran into, and solutions for them. One is that the standard way of converting repos does unfortunate things with tags directories. The second is that the CIA hook scripts for git are stale and rather broken.

Continue reading

Mar 10

On Learning Haskell

I’ve had learning the computer language Haskell on my to-do list for some time. I’m actually stepping up to learn it now, thanks to a temporary lull in my other activities and a vicious cold that has left me disinclined to strenuous work. I may associate Haskell with the taste of zinc gluconate for the rest of my days; both have an astringent and medicinal quality and produce fervent claims of effectiveness about which I am moderately optimistic but which I have not yet personally verified.

Haskell is very obviously a language built by mathematical logicians for mathematical logicians. Category theory lurks behind it in the same way that the lambda calculus lurks behind LISP. The following is an effort to make Haskell’s way of carving up the world intelligible, written partly to clarify my own thoughts.

Continue reading

Mar 04

Greed kills: Why smartphone lock-in will fail and open source win

In a previous post, How smartphones will disrupt PCs, I explained how and why I think small, ultra-portable, general-purpose computers that we’ll think of and use as “smartphones” are going to displace the PC. I promised then to explain why the software of these devices will be open source.

Go read Androids Will Challenge the iPad. It isn’t about smartphones, but the logic that will break the iPhone business model is clearly set out in it for anyone who’s paying attention. What we’re about to see in the smartphone and tablet markets is a repeat of the way the IBM PC shouldered aside the Apple II after 1980. Google’s deliberately slow-balled launch of Android via the G1 was just prelude; it’s with the Motorola Droid, the unlocked Nexus One and the generic Android tablets that the game begins in earnest.

Continue reading

Feb 13

When you see a heisenbug in C, suspect your compiler’s optimizer

This is an attempt to throw a valuable debugging heuristic into the ether where future Google searches will see it.

Yesterday, my friend and regular A&D commenter Jay Maynard called me about a bug in Hercules, an IBM 360 emulator that he maintains. It was segfaulting on interpretation of a particular 360 assembler instruction. But building the emulator with either -g for symbolic debugging or its own internal trace facility enabled made the bug go away.

This is thus a classic example of a heisenbug: one that goes away when you try to observe or probe it. When he first called, I couldn’t think of anything helpful. But there was a tickle in the back of my brain, some insight trying to break into full consciousness, and a few minutes later it succeeded.

I called Jay back and said “Turn off your compiler’s optimizer”.

Compiler optimizers take the output stream from some compiler stage and transform it to use fewer instructions. They may operate at the level of serialized expression trees, or of a compiler intermediate representation at a slightly later stage, or on the stream of assembler instructions emitted very late (just before assembly and linking). They look for patterns in the output and rewrite them into more economical patterns.
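To make the pattern-rewriting idea concrete, here is a toy sketch (in Python, and nothing like GCC’s real internals) of a peephole pass over a made-up instruction list; the instruction format and the store/load rule are purely illustrative.

    # Toy peephole pass: scan a linear list of pseudo-instructions and drop a
    # load that is redundant because the same register was just stored to the
    # same address. Purely illustrative; real optimizers work on richer IRs.
    def peephole(insns):
        """insns: list of tuples like ('store', 'r1', '0x40')."""
        out, i = [], 0
        while i < len(insns):
            cur = insns[i]
            nxt = insns[i + 1] if i + 1 < len(insns) else None
            # Pattern: 'store Rn, addr' followed immediately by 'load Rn, addr'.
            # The value is already in Rn, so the load can be rewritten away.
            if nxt and cur[0] == "store" and nxt[0] == "load" and cur[1:] == nxt[1:]:
                out.append(cur)
                i += 2
            else:
                out.append(cur)
                i += 1
        return out

    print(peephole([("store", "r1", "0x40"), ("load", "r1", "0x40"), ("add", "r1", "r2")]))
    # -> [('store', 'r1', '0x40'), ('add', 'r1', 'r2')]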

Optimizer pattern rewrites aren’t supposed to change the behavior of the code in any way other than making it faster and smaller. Unfortunately, proving the correctness of an optimization is excruciatingly difficult and mistakes are easy. Mistaken optimizations that almost always work are, though rare in absolute terms, among the most common compiler bugs.

Optimization bugs have a strong tendency to be heisenbugs. Enabling debugging symbols with -g can change the output stream just enough that the optimizer no longer sees the pattern that triggers the defective rule. So can enabling the conditioned-out code for a trace facility.

When I told Jay this, he reported that Hercules normally builds with -O3, which under GCC is a very aggressive (that is to say somewhat risky) optimization level.

“OK, set your optimizer to -O0,” I told Jay, “and test. If it fails to segfault, you have an optimizer bug. Walk the optimization level upwards until the bug reproduces, then back off one.”
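Here is a minimal sketch of that walk as a driver script, assuming a make-based build that honors a CFLAGS override and a hypothetical ./reproduce command that exits nonzero when the segfault occurs; both names are placeholders for whatever your project actually uses.

    #!/usr/bin/env python
    # Walk-the-optimization-level sketch. Assumes 'make' honors a CFLAGS
    # override and './reproduce' exits nonzero when the segfault happens;
    # both are hypothetical stand-ins for your real build and test commands.
    import subprocess

    def crashes_at(level):
        subprocess.check_call(["make", "clean"])
        subprocess.check_call(["make", "CFLAGS=-O%d" % level])
        return subprocess.call(["./reproduce"]) != 0

    if crashes_at(0):
        print("Still crashes at -O0; probably not an optimizer bug.")
    else:
        for level in (1, 2, 3):
            if crashes_at(level):
                print("First reproduces at -O%d; back off to -O%d." % (level, level - 1))
                break
        else:
            print("Never reproduced with this build; recheck the original flags.")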

I knew of this technique because I’ve been in this kind of mess myself more than once – most recently with the code for interpreting IS-GPS-200, the low-level bit-serial protocol used on GPS satellite-to-ground radio links. It was compromised by an optimizer heisenbug that was later fixed in GCC 4.0.

This morning Jay left a message in my voicemail confirming that my diagnosis was correct.

I said above that optimizer bugs have a strong tendency to be heisenbugs. If you are coding with an optimizing compiler, the reverse implication is also true, especially of segfault heisenbugs. The first thing to try when you trip over one of these is to turn off your optimizer.

You won’t hit this failure case very often — I’ve seen it maybe three or four times in nearly thirty years of C programming. But when you do, knowing this heuristic can save you many, many hours of grief.

Dec 04

GPSD and Code Excellence

There’s a wonderfully tongue-in-cheek project called The Alliance for Code Excellence (“Building a better tomorrow — one line of code at a time.”) that sells Bad Code Offset certificates. They fund open source projects to produce good code that will, in theory, offset all the bad code out there and mitigate the environmental harm it does. They’ve asked software authors to write essays on how their projects drive out bad code, offering $500 prizes.

I sat down to write an essay about GPSD in the same vein of high drollery as the Alliance’s site, then realized that GPSD actually has a serious case to make. We really do drive out bad code, in both direct and indirect ways, and we supply examples of good practice for emulation.

Continue reading

Nov 15

The pragmatics of webscraping

Here’s an amplification of my previous post, Structure Is Not Meaning. It’s an excerpt from the ForgePlucker HOWTO on writing code to web-scrape project data out of forge systems.

Your handler class’s job is to extract project data. If you are lucky, your target forge already has an export feature that will dump everything to you in clean XML or JSON; in that case, you have a fairly trivial exercise using BeautifulStoneSoup or the Python-library JSON parser and can skip the rest of this section.
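For that lucky case the extraction step really can be tiny; the file name and key names below are invented, standing in for whatever the forge’s export feature actually dumps.

    import json

    # Hypothetical: 'project-export.json' is whatever the forge's export
    # feature handed us; from here on it's ordinary data-structure traversal.
    with open("project-export.json") as fp:
        state = json.load(fp)

    for issue in state.get("issues", []):   # key names are invented
        print("%s: %s" % (issue["id"], issue["summary"]))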

Usually, however, you’re going to need to extract the data from the same pages that humans use. This is a problem, because these pages are cluttered with all kinds of presentation-level markup, headers, footers, sidebars, and site-navigation gorp — any of which is highly likely to mutate any time the UI gets tweaked.

Here are the tactics we use to try to stay out of trouble:

1. When you don’t see what you expect, use the framework’s self.error() call to abort with a message. And put in lots of expect checks; it’s better for a handler to break loudly and soon than to return bad data. Fixing the handler to track a page mutation won’t usually be hard once you know you need to – and knowing you need to is why we have regression tests.

2. Use peephole analysis with regexps (as opposed to HTML parsing of the whole page) as much as possible. Every time you get away with matching on strictly local patterns, like special URLs, you avoid a dependency on larger areas of page structure which can mutate.

3. Throw away as many irrelevant parts of the page as you can before attempting either regexp matching or HTML parsing. (The most mutation-prone parts of pages are headers, footers, and sidebars; that’s where the decorative elements and navigation stuff tend to cluster.) If you can identify fixed end strings for headers or fixed start strings for footers, use those to trim (and error out if they’re not there); that way you’ll be safe even if the headers and footers mutate. This is what the narrow() method in the framework code is for. (There’s a sketch of this tactic and the previous one just after this list.)

4. Rely on forms. You can assume you’ll be logged in with authentication and permissions to modify project data, which means the forge will display forms for editing things like issue data and project-member permissions. Use the forms structure, as it is much less likely to be casually mutated than the page decorations.

5. When you must parse HTML, BeautifulSoup is available to handler classes. Use it rather than hand-rolling a parser, unless the markup is so badly malformed that even BeautifulSoup can’t cope with it.
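Here is a minimal sketch of tactics 2 and 3 working together. The trim markers and the link pattern are made up for illustration; a real handler would use whatever fixed strings and URL shapes its target forge actually emits, and would report failures through the framework’s self.error() call per tactic 1.

    import re

    def narrow_page(page, head_end="<!-- end header -->", foot_start="<!-- begin footer -->"):
        """Tactic 3: throw away the mutation-prone header and footer, and
        fail loudly if the trim markers aren't where we expect them."""
        head = page.find(head_end)
        foot = page.find(foot_start)
        if head == -1 or foot == -1:
            raise ValueError("page layout changed: trim markers not found")
        return page[head + len(head_end):foot]

    # Tactic 2: peephole-match a strictly local pattern (an issue-detail link)
    # instead of depending on the DOM structure around it.
    ISSUE_LINK = re.compile(r'href="[^"]*func=detail[^"]*aid=(\d+)')

    def issue_ids(page):
        return sorted(set(int(m.group(1)) for m in ISSUE_LINK.finditer(narrow_page(page))))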

Actual field experience shows that throwing out portions of a page that are highly susceptible to mutation is a valuable tactic. Also, think about where in the site a page lives. Entry pages and other highly visible ones tend to get tweaked the most often, so the tradeoffs push you towards peephole methods and away from relying on DOM structure. Deeper in the site, especially on pages that are heavily tabular and mostly consist of one big form, relying on DOM structure is less risky.

Nov 05

Structure Is Not Meaning

So, I announce ForgePlucker, and within a day I’ve got some guy from Y Combinator sneering at me for using regular expressions to parse HTML. Says it’s “crappy code”. The poor fool…he has fallen victim to a conceptual trap which I, fortunately, learned to avoid decades ago. I could spout a freshet of theory about it, but instead I’m just going to utter a maxim: Never confuse structure with meaning.

Continue reading

Nov 04

Announcing ForgePlucker

I’ve been strongly hinting in recent blog entries that I planned to do something concrete about the data-jail problems of present open-source hosting sites. Because I believe in underpromising and overperforming, I decided at the outset not to announce a project until I could not only show working code, but code with wide enough coverage to make it crystal-clear that the project goals are achievable with a relatively modest amount of effort.

That time has arrived. I am very pleased to announce ForgePlucker, a project aimed at developing project-state extractor software for backup, offline analysis, and (eventually) re-importation. The proof-of-concept code can extract complete issue-tracker state from Berlios, Gna!, or Savane — and issue trackers are probably the hardest part of the job. I expect extraction of repository histories and developer permissions tables to be easier. Extraction of mailing-list state is probably a bit trickier than either of those, but doable.

Continue reading

Oct 29

The future of software forges

I’m still not going to talk about my attack on the forge infrastructure problems quite yet; the software is coming along nicely, but I intend to announce only after it handles its fourth forge type (yes, that was a tease). But I will say this: I now think I know what the future of forges looks like. It’s called Roundup, and it is astonishingly elegant and potentially more powerful than anything out there. Anything, not excluding the clever decentralized systems like Fossil or Bugs Everywhere.

Here are the big wins:

1. Mailing lists, issue trackers, and online forums unify into *one* message queue that can be filtered in various ways.

2. Scriptable via XML-RPC or an email responder ‘bot.

3. Small base system with good extensibility – just three base classes (User, Msg, File) and the ability to define new classes. ‘Issue’ is a class built on top of these.

4. Arbitrary attributes per issue are basically free, with baked-in support for defining controlled vocabularies.

5. There’s a uniform way, called “designators”, for messages and other objects to refer to each other in text.

6. Small, clean implementation written in Python.

There are some things it needs, though… (Read the Roundup design document before continuing.)

Continue reading

Oct 26

Hacker superstitions about software licensing

Hackers have a lot of odd superstitions about software licensing. I was reminded of this recently when a project maintainer asked me whether he needed to get a sign-off from each and every one of his contributors before switching from Apache v1 to Apache v2. Here’s what I told him:

My opinion is this. Under U.S. law — and I believe European codes are not different in this respect, because both are controlled by the Berne convention — a license change on a collection is grounds for protest or legal action only if the rights of the contributors are materially affected by the change. That is, a court would have to be persuaded that the change caused a monetary loss or at least damage to a contributor’s public reputation. If there is no such possibility, then there is no harm and no grounds for complaint.

Continue reading

Oct 12

How Not To Tackle the Mess around Forges

In my previous two posts I have diagnosed a significant weakness in the open-source infrastructure. The architecture of the code behind the major SourceForge-descended hosting sites is rotten, with all kinds of nasty consequences — data seriously jailed, poor or completely absent capabilities for scripting and project migration. I said I was going to do something about it, and I’m working the problem now — actually writing code.

The rest of this post is not an announcement, because it will be mostly about things I have figured out I should not try to do. Yet. But it is a teaser. I see a path forward, and shortly I expect to have some working code to exhibit that shows the way. Actually, I have working code that attacks the problem in an interesting way now, but I’m still adding capabilities to make it a more impressive demonstration.

Here are some approaches I’ve considered, or had suggested to me by others, and rejected:

Continue reading

Oct 09

Looking Deeper into Forges, And Not Liking What I See

In my previous post, Three Systemic Problems With Open-Source Hosting Sites, I identified some missing features that create serious brittleness in our project-hosting infrastructure. The question naturally arises: why don’t existing hosting systems already have these facilities? I have looked into this question, actually examining the codebases of Savane and GForge/FusionForge, and the answer appears to go back to the original SourceForge. It offered such exciting, cutting-edge capabilities that nobody noticed its internal architecture was a tar-pit full of nasty kluges. The descendants — Savane, GForge, and FusionForge — inherited that bad architecture.

Continue reading

Oct 08

Three Systemic Problems with Open-Source Hosting Sites

I’ve been off the air for several days due to a hosting-site failure last Friday. After several months of deteriorating performance and various services being sporadically inaccessible, Berlios’s webspace went 404 and the Subversion repositories stopped working…taking my GPSD project down with them. I had every reason to fear this might be permanent, and spent the next two days reconstructing as much as possible of the project state so we could migrate to another site.

Berlios came back up on Monday. But I don’t trust it will stay that way. This weekend rubbed my nose in some systemic vulnerabilities in the open-source development infrastructure that we need to fix. Rant follows.

Continue reading

Jul 30

GPSD-NG: A Case Study in Application Protocol Evolution

I’ve been doing some serious redesign work on GPSD recently. I had planned to do a blog posting about lessons learned, but the result grew enough in length and structure to turn into an actual technical paper. You can read it here; comments and criticism will be welcomed.

Note, everything described in the paper has already been implemented in gpsd. There’s work still to be done; for those of you familiar with the software, I still need to do equivalents of the old-protocol commands B C J N R Z $. I do not expect these to pose any significant difficulties.

May 22

News from the Linux-adoption front

Well, now. This is interesting: A study of corporate Linux adoption polling 1,275 IT professionals says:

Linux desktop roll out is easier than expected for properly targeted end-user groups

Those with experience are much more likely to regard non-technical users as primary targets for Linux. The message here is that in practice, Linux is easier to deploy to end users than many imagine before they try it.

It’s become fashionable lately to be pessimistic about Linux’s future on the desktop, but I have to say this matches my experience pretty well. The handful of Ubuntu deployments I’ve done in the last couple years for end-users have indeed been easier than one might have expected.

Continue reading