Major progress on the NTPsec front

I’ve been pretty quiet on what’s going on with NTPsec since posting Yes, NTPsec is real and I am involved just about a month ago. But it’s what I’m spending most of my time on, and I have some truly astonishing success to report.

The fast version: in three and a half months of intensive hacking, since the NTP Classic repo was fully converted to Git on 6 June, the codebase is down to 47% of its original size. Live testing on multiple platforms seems to indicate that the codebase is now solid beta quality, mostly needing cosmetic fixes and more testing before we can certify it production-ready.

Here’s the really astonishing number…

In fifteen weeks of intensive hacking, code cleanup, and security-hardening changes, the number of user-visible bugs we introduced was … one (1). Turns out that during one of my code-hardening passes, when I was converting as many flag variables as possible to C99 bool so static analyzers would have more type constraint information, I flipped one flag initialization. This produced two different minor symptoms (strange log messages at startup and incorrect drift-statistics logging).

Live testing revealed two other bugs, one of which turned out to be a build-system issue and the other some kind of Linux toolchain problem with glibc or pthreads that doesn’t show up under FreeBSD, so it doesn’t really count.

Oh, and that build system bug? Happened while we were reducing the build from 31KLOC of hideous impacted autotools cruft to a waf recipe that runs at least an order of magnitude faster and comes in at a whole 900 lines. Including the build engine.

For those of you who aren’t programmers, just two iatrogenic bugs after fifteen weeks of hacking on a 227-thousand-line codebase full of gnarly legacy crud from as far back as the 1980s – and 31KLOC more of autotools hair – is a jaw-droppingly low, you’ve-got-to-be-kidding-me, this-never-happens error rate.

This is the point at which I would normally make self-deprecating noises about how good the other people on my team are, especially because in the last week and a half they really have been. But for complicated and unfortunate reasons I won’t go into, during most of that period I was coding effectively alone. Not by choice.

Is it bragging when I say I didn’t really know I was that good? I mean, I thought I might be, and I’ve pulled off some remarkable things before, and I told my friends I felt like I was doing the best work of my life on this project, but looking at those numbers leaves me feeling oddly humbled. I wonder if I’ll ever achieve this kind of sustained performance again.

The August release announcement was way premature (see complicated and unfortunate reasons I won’t go into, above). But. Two days ago I told the new project manager – another A&D regular, Mark Atwood – that, speaking as architect and lead coder, I saw us as being one blocker bug and a bunch of cosmetic stuff from a release I’d be happy to ship. And yesterday the blocker got nailed.

I think what we have now is actually damn good code – maybe still a bit overengineered in spots, but that’s forgivable if you know the history. Mostly what it needed was to have thirty years of accumulated cruft chiseled off of it – at times it was such an archeological dig that I felt like I ought to be coding with a fedora on my head and a bullwhip in hand. Once I get replicable end-to-end testing in place the way GPSD has, it will be code you can bet your civilizational infrastructure on. Which is good, because you probably are going to be doing exactly that.

I need to highlight one decision we made early on and how much it has paid off. We decided to code to a POSIX.1-2001/C99 baseline and ruthlessly throw out support for legacy OSes that didn’t meet that. Partly this was informed by my experience with GPSD, from which I tossed out all the legacy-Unix porting shims in 2011 and never got a this-doesn’t-port complaint even once afterwards – which might impress you more if you knew how many weird-ass embedded deployments GPSD has. Tanks, robot submarines, you name it…

I thought that commitment would allow us to chisel off 20% or so of the bulk of the code, maybe 25% if we were lucky.

This morning it was up to 53%. 53%! And we’re not done. If reports we’ve been hearing of good POSIX conformance in current Windows are accurate, we may soon have a working Windows port and be able to drop most of another 6 KLOC.

(No, I won’t be doing the Windows port. It’ll be Chris Johns of the RTEMS project behind that, most likely.)

I don’t have a release date yet. But we are starting to reach out to developers who were not on the original rescue team. Daniel Franke will probably be the first to get commit rights. Public read-only access to the project repo will probably be made available some time before we ship 1.0.

Why didn’t we open up sooner? I’m just going to say “politics” and leave it at that. There were good reasons. Not pleasant ones, but good ones – and don’t ask because I’m not gonna talk about it.

Finally, a big shout-out to the Core Infrastructure Initiative and the Linux Foundation, who are as of about a month ago actually (gasp!) paying me to work on NTPsec. Not enough that I don’t still have some money worries, because Cathy is still among the victims-of-Obamacare unemployed, but enough to help. If you want to help and you haven’t already, there’s my Patreon page.

I have some big plans and the means to make them happen. The next six months should be good.


50 comments

    1. >Would you post links about Windows POSIX compliance?

      Can’t yet. All I have is reports from developers who work that space and seem trustworthy. One said that the only POSIX thing Windows still lacks is fork(2).

  1. Yay! Sounds like you brought all of your experience in code decrufting to bear precisely where it was needed most.

    Just remember: when coding wearing a fedora and holding a bullwhip, sometimes it’s best to drop the whip, draw the 1911, and blow something the hell away.

    1. >Just remember: when coding wearing a fedora and holding a bullwhip, sometimes it’s best to drop the whip, draw the 1911, and blow something the hell away.

      Trust me, I am all about the 1911. Still deleting code.

  2. Well, Git for Windows _somehow_ did it… with the help of MSYS, but it did port a POSIX program to MS Windows.

  3. Kudos!

    I sincerely hope you receive the fullest recognition for the Very Important Work you are doing :)

  4. Not to rain on your parade, but given the large number of LOC you’re reporting cut, it also sounds as if a lot of your great performance here can be attributed to the existing code base being that bad. :-D (How bad would you and your peers say the original version was?)

    1. >(How bad would you and your peers say the original version was?)

      It’s not horrible. Actually it was very good, state-of-the-art Unix code – for about 1995.

      The architecture is sound, but…when I got at it, the code was full of portability shims and special hacks for obsolete Unix big iron. This morning, for example, I ripped out a bunch of code that only worked with the SunOS4 version of STREAMS.

      Sometimes the art was in figuring out what could be safely deleted. Other times I had to replace old code that was a Really Bad Idea. The worst case of this was a thing called autogen which (I kid you not) combined a sort of getopt(3) on steroids with a document-templating system. Not actually a terrible idea for small enough programs but at the scale of NTP it’s a nasty complexity hairball.

      In still other cases good practice had changed and I had to globally audit the code and change it to do things differently. The largest single case of this was the boolification changes. C99 bools weren’t even a gleam in someone’s eye when this code was written; I went through and used them everywhere I could to improve readability and give static checkers more traction.
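
      To make that concrete, here’s a small illustration of the kind of change involved – the variable name and surrounding code are invented for the example, not lifted from the NTP tree:

          #include <stdbool.h>   /* C99: bool, true, false */
          #include <stdio.h>

          /* Before boolification this would have been
           * "static int stats_enabled = 1;" -- an int carrying a truth value,
           * about which a static analyzer can say very little. */
          static bool stats_enabled = true;

          int main(void)
          {
              /* With a real bool, any assignment that isn't true or false is
               * something a checker can complain about. */
              if (stats_enabled)
                  puts("logging drift statistics");
              return 0;
          }

      The one user-visible bug mentioned in the post was the equivalent of flipping one such initialization – writing false where the old code meant 1.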

  5. Eric, as a fan of your code-quality articles, I’d be interested to hear more about simplifying the build system. Specifically, how much of the simplification comes from compatibility layers for legacy platforms that ntpd no longer has to build on, and how much of it comes from moving from autotools to waf? About how many lines of code would an autotools script for just the current 47 percent of classic ntpd need?

    On a similar but independent note, to what extent do you think your C99 success stories in gpsd and ntpd generalize to other projects? In your Software Release Practice Howto, you advise people to “use the full ANSI features. […] The old-style K&R compilers are history.” Am I correct in suspecting that you would now change “full ANSI” to “full C99” and “old-style K&R” to “old-style C89”?

    1. >Specifically, how much of the simplification comes from compatibility layers for legacy platforms that ntpd no longer has to build on, and how much of it comes from moving from autotools to waf?

      Most of it is waf. A lot of the porting shims were gated by platform-type symbols that autoconf computes without much effort by working from the results of config.guess.

      >About how many lines of code would an autotools script for just the current 47 percent of classic ntpd need?

      Hm. I’d say maybe 10% fewer. Remember, autoconf does a lot more probing and grinding than any individual program typically uses. It has to because of the way it’s built. Also, the implementation quality of autoconf is poor – they started with weak languages and just piled layer upon layer of nasty kludges on top to get around the weakness. A modern build engine like waf is about 10,000% better engineered; the difference is like CVS vs. git.

      >Am I correct in suspecting that you would now change “full ANSI” to “full C99” and “old-style K&R” to “old-style C89”?

      Yes, in fact I think I’ll go do that now.

      1. >Yes, in fact I think I’ll go do that now.

        When I looked, I found I already had. Someone’s asleep over at TLDP again; the new version shipped 2015-07.

        Synopsis: Gitorious is dead; GitLab is new. Use git-format-patch when you can. PyLint has replaced PyChecker. In C, code to POSIX.1-2001 and C99. Much more about good patching practice.

  6. @esr –

    >while we were reducing the build from 31KLOC of hideous impacted autotools cruft to a waf recipe

    I thought you were using SCons for the build system.

    1. >I thought you were using SCons for the build system.

      No, that’s GPSD.

      One of the NTPsec guys, Amar, is a waf expert. He did the build conversion; it’s the one major piece of work from before about two weeks ago that wasn’t mostly mine.

  7. > One of the NTPsec guys, Amar, is a waf expert. He did the build conversion; it’s the one major piece of work from before about two weeks ago that wasn’t mostly mine.

    So he did conversion from autotools to waf, and you excised autotools ifdefs?

    1. >So he did conversion from autotools to waf, and you excised autotools ifdefs?

      That’s right. Plus a lot of code guarded by platform and compiler ifdefs that autoconf didn’t have to set.

      That may make the job sound simpler than it was. Those #ifdefs were a freaking jungle fit to blunt your machete. I still have … counting … 458 guard symbols to investigate. That’s 458 different macros not instances of the same few. Down from over 600 to start with.
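
      If you haven’t stared into one of these thickets, here’s a made-up illustration of what a small patch of that jungle looks like – the guard symbols below are invented for the example, not actual symbols from the tree:

          #include <stdio.h>

          /* Invented example: real logic buried under guards for platforms
           * nobody has shipped in decades. */
          static void report_refclock_path(void)
          {
          #if defined(SYS_SUNOS4) && defined(STREAM)
              puts("SunOS 4 STREAMS ioctl path");   /* dead big-iron branch */
          #elif defined(SYS_PTX) || defined(SYS_AUX3)
              puts("Sequent/A-UX workaround");      /* another dead branch */
          #else
              puts("plain POSIX.1-2001 path");      /* the only branch that still matters */
          #endif
          }

          int main(void)
          {
              report_refclock_path();
              return 0;
          }

      Multiply that by 458 distinct guard symbols and you have some idea of what “investigate” means here.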

  8. > In fifteen weeks of intensive hacking, code cleanup, and security-hardening changes, the number of user-visible bugs we introduced was … one (1).

    When I find abnormally low bug counts in my programs, it makes me nervous. I worry that it’s more likely bugs are being missed than that I didn’t produce them. Nothing is scarier than a program that works right the first time. (or second time…or fifth time…)

    > When I looked, I found I already had. Someone’s asleep over at TLDP again; the new version shipped 2015-07.

    Where is the shipped version? I am reminded that I should read it again (I find myself working on potentially-public projects for the first time), but I note that your own FAQ page points to the TLDP one.

    1. >Where is the shipped version?

      Should have been published at tldp.org but apparently isn’t. I may have to give up on those people.

  9. > Those #ifdefs were a freaking jungle fit to blunt your machete. I still have … counting … 458 guard symbols to investigate.

    Sounds like a job for a tool. Something that can diff the code configurations and with guidance from a human mind, square it up.

    1. >Sounds like a job for a tool. Something that can diff the code configurations and with guidance from a human mind, square it up.

      Yes. I’ll publish it later today or tomorrow.

  10. > a fork() sort-of exists, just an exec() does not. Though this makes me curious how Interix implemented both fork() and exec() then.

    I don’t know about Interix, but I know the Cygwin story. On a normal POSIX system, fork() is copy-on-write – the child process gets “read-only” versions of the parent process’s pages, and new copies of the pages are created as needed. If exec() is called soon after, almost no memory duplication happens.

    Windows has a different child process model, so to emulate fork(), a Cygwin parent process creates a new process, suspends it, and copies over its entire virtual address space into the new process. And then you run into problems where Windows shared libraries assume a fixed base address…

    I believe exec() follows a similar story – Cygwin has to keep track of its own PIDs, and you can end up with multiple Windows PIDs pointing to the same Cygwin PID.

    Short version: you can emulate fork() in Windows, but it just fails sometimes. Yuck.

  11. You know, someone once wrote a book about how Unix does things, and had a whole section on what happens when you do things differently from Unix…

  12. You know, this is the kind of problem that posix_spawn was invented to solve. If you want to be portable to systems that can’t handle fork or exec, don’t rely on them existing.
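
    For the curious, here’s a minimal sketch of what that looks like – /bin/date is just a stand-in child program, and error handling is pared to the bone:

        #include <spawn.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        extern char **environ;

        int main(void)
        {
            pid_t pid;
            char *child_argv[] = { "date", "-u", NULL };

            /* One call stands in for the usual fork()+exec() pair, so the C
             * library (or the OS) can create the child however the platform
             * prefers -- no copy-on-write address-space tricks required. */
            int err = posix_spawn(&pid, "/bin/date", NULL, NULL,
                                  child_argv, environ);
            if (err != 0) {
                fprintf(stderr, "posix_spawn: %s\n", strerror(err));
                return 1;
            }
            waitpid(pid, NULL, 0);   /* reap the child */
            return 0;
        }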

  13. Is fascinated by the security hole you mention in the middle. 700k systems sounds remarkably low. There are, I happen to know, tens of millions of NTP servers on the internet and, last time I looked, low millions that still permit DDoS through the monlist command.

  14. > You know, someone once wrote a book about how Unix does things, and had a whole section on what happens when you do things differently from Unix…

    Random832 is right: about the only OS family that actually implements Unix process spawning semantics is Unix. If you want to be portable, something like posix_spawn(3) is the correct approach.

    Note that not being Unix isn’t “bad”. True, you might get Windows, but you might also get VMS — a system which, once properly configured, WILL NOT hang, WILL NOT be hacked into, and will even be robust against sysadmin fat fingers.

  15. Not to contradict Our Gracious Host, but I think he may be understating the task he has performed –

    Just for shits and giggles, I just downloaded the compressed tarball of the latest production release of “NTP Classic” (version 4.2.8p3, dated 2015/06/29), unpacked it, and ran two commands:

    find . -name '*.[ch]' -print0 | xargs -0 wc -l, which gave me a total line count of 372865 source code lines.

    find . -name '*.[ch]' -print0 | xargs -0 cat | grep '^#[[:space:]]*if' | sed -e 's/[[:space:]][[:space:]]*/ /g' | sort -u | wc -l,
    which, if I’ve done this correctly, should give a count of all the unique “#if”s and “#ifdef”s in the code.

    It gave me 2403. Two thousand, four hundred and three unique conditional compilation options.

    /me head ASPLODES!

    1. John, I used David Wheeler’s sloccount program. I was not aware that the discrepancy between that and a naive line count could get so large.

      If you feel like doing some forensics on this, that might be interesting.

    2. >It gave me 2403. Two thousand, four hundred and three unique conditional compilation options.

      It’s not quite as bad as that suggests. There were, I think, 670 distinct unidentified guard symbols when I started reducing. Now there are 420.

  16. Follow-up:

    [1] Obviously, my source code line count “one liner” counts blank lines and comments. Don’t have a quick way to smash them out (cheesy little perl script??), but – 100k lines of comments??

    [2] There are obviously some typos in those commands. Exact .bash_history files available upon request, along with rationale for why I did it that way.

  17. “… some kind of Linux toolchain problem with glibc or pthreads …”

    Could you elaborate on that part?

  18. > The August release announcement was way premature

    It would probably be a good idea to remove it from the ntpsec.org front page, then.

    1. >It would probably be a good idea to remove it from the ntpsec.org front page, then.

      See “unfortunate circumstances”. We have someone new taking over website maintenance; it should be fixed soon.

  19. Non-programmer here. First of all, congratulations. :-)

    Next, a couple of corrections: in the OP, you wrote “at least an an order” and “about a a month”.

    Finally, a couple of (possibly unwarranted) questions: at suckless.org, they have this to say about Waf: “waf code is dropped into the compilee’s build tree, so it does not benefit from updated versions and bugfixes”. Has this been an issue for you? If so, how did you tackle it?

    1. >“waf code is dropped into the compilee’s build tree, so it does not benefit from updated versions and bugfixes”. Has this been an issue for you? If so, how did you tackle it?

      It has not been an issue for us yet. I expect to keep an eye on waf’s progress and refresh our copy occasionally.

  20. >>It gave me 2403. Two thousand, four hundred and three unique conditional compilation options.
    >It’s not quite as bad as that suggests.

    The command he used obviously counts #ifs that combine multiple options (#if defined(a) && defined(b)) as unique options, which might account for some of the difference. And ones that test the value of a macro being greater or less than some value, etc.

  21. > …but you might also get VMS — a system which, once properly configured, WILL NOT hang, WILL NOT be hacked into, and will even be robust against sysadmin fat fingers.

    Back in the early to mid-’80s, I was involved in hanging VMS on a VAX-11/780. I imagine VMS has improved since then.

    I was working with some software with some mysterious problem… something like running forever without doing anything – little or no I/O and few, if any, page faults. We (my supervisor, the manager, the sys-admin and I) decided to bump its priority way up to see if that made any difference.

    I started the program with normal time-sharing priority and the sys-admin upped its priority – the system hung. IIRC, we realized that the priority had been set higher than that of the swapper – not a good idea.

  22. > VMS…once properly configured, … WILL NOT be hacked into,

    Mostly because the remaining instances are sitting deep in TS/SCI territory, or in bank datacenters being run by adults who get “security”.

    1. >In the new sense of it, you are just about one of the least fedora-positive programmers out there

      LOL. Poor lost beta males. Those pictures are awful. I see what you mean.

      Here’s a hint, hipster kids: to do the fedora thing and not look like a pathetic refugee from your mother’s basement, you have to project successful masculinity in other ways that are consonant with the hat. Assertive body language, muscles, confidence in your eyes, that sort of thing. If the rest of you doesn’t look like it can cash the check the hat is writing, you lose.

  23. Wow. Just, wow. Smugging is definitely indicated on your part, ESR, as is polishing your fedora-and-monocle smile.

    Are you planning to wring NTPsec out, test and verifiability wise, to anywhere near the same extent as GPSD?

    Or are you shaking your head at me taking so long to ask such a question, since you’ve been doing that since project start?

  24. It also helps to be wearing clothes that a fedora is actually appropriate for (in the case of your analogy, that’d be the full Indiana Jones getup with a proper shirt, leather jacket, and khaki pants – more conventionally it’d be a suit) rather than a T-shirt or polo shirt.

    1. >It also helps to be wearing clothes that a fedora is actually appropriate for (in the case of your analogy, that’d be the full Indiana Jones getup with a proper shirt, leather jacket, and khaki pants – more conventionally it’d be a suit) rather than a T-shirt or polo shirt.

      I usually wore mine with an A-2 flight jacket, L.L. Bean shirt, and black jeans. It worked.

  25. I tend to associate fedoras with pre-WWII woollen suits and the like — topped with a (tan) trench coat if one is going full film noir. I’ve seen hipsters willing to take the retro garb to such levels of completeness — while still rocking the tats, the nose rings, and the ear piercings you can drive a truck through.

    Then there are the neo-Victorians replete with sideburns sporting mohawks…

  26. > http://knowyourmeme.com/memes/fedora-shaming

    You know, if you’re going to “shame” someone for wearing an article of clothing you ought to at least know the difference between a Trilby, a Fedora, and a, well, not-fedora, not-cowboy hat thingy that looks REALLY useful for keeping the sun and rain off your face and the back of your neck.

  27. This project is an example of something going right in the world. A critically important piece of global software is being updated with an efficient, well planned, and innovatively superior replacement. Amidst all the bad news that dominates mainstream culture, this archetype of human productivity is hopeful.

  28. So, ESR, does that mean you’ve stepped up your game (wrt cockups/kloc) by a half order of magnitude over GPSD?
