Nov 16

NTPsec’s beta is released

You’ve heard me uttering teasers about it for months. Now it’s here. The repository is available for cloning; we’re shipping the 0.9.0 beta of NTPsec. You can browse the web pages, clone the git repository by any of several methods, or use wget to fetch a tarball.

This is an initial beta and has some rough edges, mostly due to the rather traumatic (but utterly necessary) replacement of the autoconf build system. Also, our range of ports is still narrow; if you’re on anything but Linux or a recent FreeBSD the build may not work for you yet. These things will be fixed.

However, the core function – syncing your clock via NTP – is solid, and using 0.9.0 for production might be judged a bit adventurous but wouldn’t be crazy. The next few beta releases will rapidly get more polished. Expect them to come quickly, like within weeks.

Most of the changes are under the hood and not user-visible. A few auxiliary tools have been renamed, most notably sntp to ntpdig. If you read documentation, you will notice that what’s there has been massively revised and improved.

The most important change you can’t see is that the code has been very seriously security-hardened, not only by plugging all publicly disclosed holes but by internal preventive measures that close off entire classes of vulnerabilities (for example, by replacing all function calls that can produce buffer overruns with memory-safe equivalents).
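To make that concrete, here is a minimal sketch of the kind of substitution involved – an illustration of the technique, not an actual NTPsec diff; the function and buffer names are hypothetical:

```c
#include <stdio.h>

#define HOSTLEN 256    /* dst is assumed to hold HOSTLEN bytes */

/* Hypothetical example of hardening a string copy. */
void set_peer_name(char *dst, const char *src)
{
    /* The classic unsafe call:
     *     strcpy(dst, src);
     * writes past the end of dst whenever src is longer than the
     * buffer.  snprintf() is bounded and always NUL-terminates, so
     * an oversized (possibly hostile) src gets truncated instead
     * of smashing adjacent memory. */
    snprintf(dst, HOSTLEN, "%s", src);
}
```

Retiring every unbounded call this way doesn’t fix one bug; it retires the whole buffer-overrun class.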

We’ve already established good relations with security-research and InfoSec communities. Near-future releases will include security fixes currently under embargo.

If you consider this work valuable, please support it by contributing at my Patreon page.

Oct 30

Hieratic documentation

Here’s where I attempt to revive and popularize a fine old word in a new context.

hieratic, adj. Of or concerning priests; priestly. Often used of the ancient Egyptian writing system of abridged hieroglyphics used by priests.

Earlier today I was criticizing the waf build system in email. I wanted to say that its documentation exhibits a common flaw: it reads less like an explanation than like a memory aid for people who are already initiates of its inner mysteries. But this was not the main thrust of my argument; I wanted to make the observation as an aside.

Continue reading

Oct 23

NTPsec is not quite a full rewrite

In the wake of the Ars Technica article on NTP vulnerabilities, and Slashdot coverage, there has been sharply increased public interest in the work NTPsec is doing.

A lot of people have gotten the idea that I’m engaged in a full rewrite of the code, however, and that’s not accurate. What’s actually going on is more like a really massive cleanup and hardening effort. To give you some idea how massive, I report that the codebase is now down to about 43% of the size we inherited – in absolute numbers, down from 227KLOC to 97KLOC.

Details, possibly interesting, follow. But this is more than a summary of work; I’m going to use it to talk about good software-engineering practice by example.

Continue reading

Oct 15

SPDX: boosting the signal

High on my list of Things That Annoy Me When I Hack is source files that contain huge blobs of license text at the top. That is valuable territory which should be occupied by a header comment explaining the code, not by a boatload of boilerplate I’ve seen hundreds of times before.

Hackers have a lot of superstitious ideas about IP law, and one is that these blobs are necessary for the license to be binding. They are not: incorporation by reference is a familiar concept to lawyers and courts. It suffices to unambiguously name the license you want to apply rather than quoting it in full.

This is what I do in my code. But to make the practice really comfortable for lawyers we need a registry of standardized license identifiers and an unambiguous way of specifying that we intend to include by reference.

Comes now the Software Package Data Exchange to solve this problem once and for all. It’s a great idea, I endorse it, and I will be using it in all my software projects from now on.
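For illustration, here’s what incorporation by reference looks like with an SPDX tag at the top of a C source file – the filename, copyright line, and license choice below are placeholders, not taken from any particular project:

```c
/*
 * frob.c - frobnicates the bar subsystem
 *
 * Copyright (C) 2015 by the project's contributors
 * SPDX-License-Identifier: BSD-2-Clause
 */
```

One machine-parseable line names a license from the SPDX registry; the full text lives elsewhere, and the header comment gets its territory back.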

Continue reading

Oct 09

I improved time last night

Sometimes you find performance improvements in the simplest places. Last night I improved the time-stepping precision of NTP by a factor of up to a thousand. With a change of less than 20 lines.

I was able to do this because the NTP code had not caught up to a change in the precision of modern computer clocks. When it was written, you set the time with settimeofday(2), which takes a structure containing seconds and microseconds. But modern POSIX-conformant Unixes have clock_settime(2), which takes a structure containing seconds and nanoseconds.
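Here’s a sketch of the difference as it appears at the API level – illustrative code, not the actual NTP patch; error checking and privilege handling are omitted for brevity:

```c
#include <sys/time.h>   /* settimeofday(2), struct timeval  */
#include <time.h>       /* clock_settime(2), struct timespec */

/* Old way: struct timeval carries microseconds. */
void step_clock_old(time_t secs, long usecs)
{
    struct timeval tv = { .tv_sec = secs, .tv_usec = usecs };
    settimeofday(&tv, NULL);            /* microsecond granularity */
}

/* New way: struct timespec carries nanoseconds - the interface
 * can express steps a thousand times finer. */
void step_clock_new(time_t secs, long nsecs)
{
    struct timespec ts = { .tv_sec = secs, .tv_nsec = nsecs };
    clock_settime(CLOCK_REALTIME, &ts); /* nanosecond granularity */
}
```

The factor-of-up-to-a-thousand improvement falls straight out of the units: a microsecond is a thousand nanoseconds.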

Continue reading

Oct 07

The FCC must not lock down device firmware!

The following is a comment I just filed on FCC Docket 15-170, “Amendment of Parts 0, 1, 2, 15, and 18 of the Commission’s Rules et al.”

Thirty years ago I had a small hand in the design of the Internet. Since then I’ve become a senior member of the informal collegium that maintains key pieces of it. You rely on my code every time you use a browser or a smartphone or an ATM. If you ever ride in a driverless car, the nav system will critically depend on code I wrote, and Google Maps already does. Today I’m deeply involved in fixing Internet time service.

I write to endorse the filings by Dave Taht and Bruce Perens (I gave Dave Taht a bit of editorial help). I’m submitting an independent comment because while I agree with the general thrust of their recommendations I think they may not go far enough.

Continue reading

Sep 08

On open-source pharma

(This copies a comment I left on Derek Lowe’s blog at Science Magazine.)

I was the foundational theorist of open-source software development back in the 1990s, and have received a request to respond to your post on open-source pharma.

Is there misplaced idealism and a certain amount of wishful thinking in the open-source pharma movement? Probably. Something I often find myself pointing out to my more eager followers is that atoms are not bits; atoms are heavy, which means there are significant limiting factors of production other than human attention, and a corresponding problem of capital costs that is difficult to make go away. And I do find people who get all enthusiastic and ignore economics rather embarrassing.

On the other hand, even when that idealism is irrational it is often a useful corrective against equally irrational territoriality. I have observed that corporations have a strong, systemic hunker-down tendency to overprotect their IP, overestimating the amount of secrecy rent they can collect and underestimating the cost savings and additional options generated by going open.

I doubt pharma companies are any exception to this; when you say “the people who are actually spending their own cash to do it have somehow failed to realize any of these savings, because Proprietary” as if it’s credulous nonsense, my answer is “Yes. Yes, in fact, this actually happens everywhere”.

Thus, when I have influence I try to moderate the zeal but not suppress it, hoping that the naive idealists and the reflexive hunker-downers will more or less neutralize each other. It would be better if everybody just did sound praxeology, but human beings are not in general very good at that. Semi-tribalized meme wars fueled by emotional idealism seem to be how we roll as a species. People who want to change the world have to learn to work with human beings as they are, not as we’d like them to be.

If you’re not inclined to sign up with either side, I suggest pragmatically keeping your eye on the things the open-source culture does well and asking if those technologies and habits of thought can be useful in drug discovery. Myself, I think the long-term impact of open data interchange formats and public, cooperatively-maintained registries of pre-competitive data could be huge and is certainly worth serious investment and exploration even in the most selfish ROI terms of every party involved.

The idealists may sound a little woolly at times, but at least they understand this possibility and have the cultural capital to realize it – that part really is software.

Then…we see what we can learn. Once that part of the process has been de-territorialized, options to do analogous things at other places in the pipeline may become more obvious.

P.S.: I’ve been a huge fan of your “Things I Won’t Work With” posts. More, please?

Aug 18

Yes, NTPsec is real and I am involved

A couple of stories by Charles Babcock and (my coincidentally old friend) Steven J. Vaughan-Nichols have mentioned the existence of an ‘NTPsec’ project being funded by the Core Infrastructure Initiative as an alternative and perhaps eventual replacement for the reference implementation of Network Time Protocol maintained by Harlan Stenn and the Network Time Foundation.

I confirm that NTPsec does exist, and that I am deeply involved in it.

The project has not yet officially released code, though a preliminary web page is already up. For various complicated political reasons, a full public discussion of the project’s genesis and goals should wait until we go fully public. You probably won’t have to wait long for this.

I can, however, disclose several facts that I think will be of interest to readers of this blog…

Continue reading

Jul 13

How to submit a drive-by patch and get it accepted

I think it’s weird that I have to write this post in 2015, but earlier today I had to explain to someone with the technical skills to submit a good patch that he was doing the process wrong in some basic and extremely annoying ways.

Googling revealed that most explanations of patch etiquette are rather project-specific in their advice. So I’m going to explain the basics of patch submission that apply to just about any open-source project, with a focus on how to do it right when you aren’t a regular committer – that is, when you’re submitting what’s often called a drive-by patch. Here we go…

Continue reading

Jun 23

How to spot a high-quality repository conversion

In my last post, I inveighed against using git-svn to do whole-repository conversions from Subversion to git (as opposed to its intended use, which is working a Subversion repository live through a git remote).

Now comes word that hundreds of projects a week seem to be fleeing SourceForge because of its evil we’ll-hijack-your-repo-and-crapwarify-your-installer policy. And they’re moving to GitHub via its automatic importer. Which, sigh, uses git-svn.

I wouldn’t trust that automatic importer (or any other conversion path that uses git-svn) with anything I write, so I don’t know how badly it messes things up.

But as a public service, I follow with a description of how a really well-done repository conversion – the kind I would deliver using reposurgeon – differs from a crappy one.

Continue reading

May 28

Don’t do svn-to-git repository conversions with git-svn!

This is a public-service warning.

It has come to my attention that some help pages on the web are still recommending git-svn as a conversion tool for migrating Subversion repositories to git. DO NOT DO THIS. You may damage your history badly if you do.

Reminder: I am speaking as an expert, having done numerous large and messy repository conversions. I’ve probably done more Subversion-to-git lifts than anybody else, I’ve torture-tested all the major tools for this job, and I know their failure modes intimately. Rather more intimately than I want to…

Continue reading

May 18

Zeno tarpits

There’s a deeply annoying class of phenomena which, if you write code for any length of time, you will inevitably encounter. I have found it to be particularly prevalent in transformations to clean up or canonicalize large, complex data sets; repository export tools hit variants of it all the time, and so does my doclifter program for lifting [nt]roff markup to XML-DocBook.

It goes like this. You write code that handles a large fraction (say, 80%) of the problem space in a week. Then you notice that it’s barfing on the 20% remaining edge cases. These will be ugly to handle and greatly increase the complexity of your program, but it can be done, and you do it.

Once again you have solved 80% of the remaining cases, and again it took about a week – because your code is more complex than it used to be, testing it and guarding against regressions is about twice as difficult. The next crop of edge cases can also be handled, at the cost of doubling your code complexity yet again, and you do it. Congratulations! You now handle 80% of what was left. Then you notice that it’s barfing on the 20% of still trickier edge cases….

…lather, rinse, repeat. If the problem space is seriously gnarly you can find yourself in a seemingly never-ending cycle in which each iteration costs multiplicatively more effort and yields multiplicatively smaller returns. This is especially likely if your test range is expanding to include weirder data sets – in my case, older and gnarlier repositories or newer and gnarlier manual pages.
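To put toy numbers on that (an illustrative model of my own, nothing more): suppose each round clears 80% of the remaining cases while the doubled complexity doubles the cost of the round. Measuring effort in units of the first week:

```latex
\text{fraction unhandled after } n \text{ rounds} = 0.2^{\,n},
\qquad
\text{cumulative effort} \approx \sum_{k=0}^{n-1} 2^{k} = 2^{n} - 1.
```

Coverage converges geometrically while cost grows exponentially: every further factor-of-five cut in the residue costs twice as much as the cut before it. That asymmetry is the tarpit.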

I think this is a common enough hazard of programming to deserve a name.

Continue reading

Apr 16

A belated response to “A Generation Lost in the Bazaar”

Back in 2012, Poul-Henning Kamp wrote a disgruntled article in ACM Queue, A Generation Lost in the Bazaar.
It did not occur to me to respond in public at the time, but a comment on a G+ thread about the article recently revived the topic. Rereading my reaction, I think it is still worth sharing for its fundamental point about scaling and chaos.

Continue reading

Apr 02

My Gitorious projects have moved.

Gitorious – which I preferred to GitHub for being totally open-source – is shutting down sometime in May. I had no fewer than 26 projects on there, including reposurgeon, cvs-fast-import, doclifter, and INTERCAL.

Now they’ve moved. This won’t affect most of my users, as the web pages and distribution tarballs are still in their accustomed locations. If you’re a committer on any of these Gitorious repos, of course, the move actually matters.

For now the repositories have moved to a temporary home; here’s the entire list. They may not stay there, but getting them off Gitorious was 90% of the work of moving them anywhere else, and now I can consider options at my leisure.

Mar 08

Why I won’t mourn Mozilla

An incredibly shrinking Firefox faces endangered-species status, says Computerworld, which reports its market share at 10% and dropping. It doesn’t look good for the Mozilla Foundation – especially not with so much of its funding coming from Google, which of course has its own browser to push.

I wish I could feel sadder about this. I was there at the beginning, of course – the day Netscape open-sourced the code that would become Mozilla and later Firefox was the shot heard ’round the world of the open source revolution, and the event that threw The Cathedral and the Bazaar into the limelight. It should be a tragedy – personally, for me – that the project is circling the drain.

Instead, all I can think is “They brought the fate they deserved on themselves.” Because principles matter – and in 2014 the Mozilla Foundation abandoned and betrayed one of the core covenants of open source.

Continue reading

Feb 22

GPSD 3.12 has shipped – bulletproofed from below

I’ve been radio silent the last couple of weeks mainly because I’ve been concentrating furiously on getting a GPSD release out the door. This one is a little more noteworthy than usual because it may actually have fixed a well-hidden flaw or vulnerability of some significance.

Regular readers may recall from back in 2013 that I published a heads-up titled No, GPSD is not the battery-killer on your Android! addressing a power-drain bug reported from a handful of Android phones.

I believed at the time that the proximate cause of the bug was in the kernel serial device-drivers somewhere specific to particular hardware on those phones. I still believe that, because if it had been a purely GPSD problem the error would likely have been much more widespread and I’d have been flooded with complaints.

However, I’ve been concerned ever since that GPSD might not have been doing everything it could to armor itself against bugginess in the layers below it. And a couple of weeks ago I found a problem…
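To illustrate the kind of armoring I mean – a sketch of the principle only, not GPSD’s actual code or the actual bug:

```c
#include <errno.h>
#include <unistd.h>

#define BUFSIZE 1024

/* Hypothetical defensive read: treat everything the serial layer
 * hands you as untrusted, because the driver below may be buggy. */
ssize_t careful_device_read(int fd, char buf[BUFSIZE])
{
    ssize_t n = read(fd, buf, BUFSIZE - 1);

    if (n < 0) {
        if (errno == EINTR || errno == EAGAIN)
            return 0;    /* transient; let the caller poll again   */
        return -1;       /* hard error; close and re-open the port */
    }
    buf[n] = '\0';       /* never assume the driver NUL-terminated */
    return n;
}
```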

Continue reading

Dec 13

Progress towards the extinction of CVS

The Great Beast, designed for converting large CVS repos, is now in full production. It hasn’t killed off any specimens in the wild yet (and I’ll explain why in a bit), but it’s doing spectacularly well on our test repositories.

As a representative large example, the entire Emacs CVS history, 1985-2009, 113309 CVS commits, lifts clean in 37 seconds at a sustained rate of 3K CVS commits a second. Yes, three thousand.

The biggest beast known to us, the NetBSD src repository, converts in 22 minutes. To give some idea of what a speedup this is, the first time I ran a lift on it – on one of Wendell’s Xeon machines – it took a bit under six hours. That’s about a factor of sixteen.

Judging by performance on the other project devs’ machines the Beast is good for a 2x to 3x speedup over a conventionally-balanced PC design (that is, one with worse RAM latency, narrower caches, more cores but somewhat lower single-thread speed). That’s a big enough advantage to validate the design and be practically significant on large repositories.

Continue reading