Yes, NTPsec is real and I am involved

A couple of stories by Charles Babcock and (coincidentally, my old friend) Steven J. Vaughan-Nichols have mentioned the existence of an ‘NTPsec’ project being funded by the Core Infrastructure Initiative as an alternative to, and perhaps eventual replacement for, the reference implementation of Network Time Protocol maintained by Harlan Stenn and the Network Time Foundation.

I confirm that NTPsec does exist, and that I am deeply involved in it.

The project has not yet officially released code, though you can view a preliminary web page at ntpsec.org. For various complicated political reasons a full public discussion of the project’s genesis and goals should wait until we go fully public. You probably won’t have to wait long for this.

I can, however, disclose several facts that I think will be of interest to readers of this blog…

NTPsec is a fork of the Mills implementation of NTP (which we think of as “NTP Classic”). Early major objectives include security hardening, removal of the pre-POSIX legacy cruft that presently makes NTP Classic hard to audit and maintain, and building a really good test suite so we can demonstrate the code’s correctness.

I am deeply involved, and have been working hard on this project behind the scenes for about eight months (this in part accounts for the light blogging recently). I am the architecture/technology lead, and presently the most active coder. A&D regular Susan Sons (aka HedgeMage) is also on the team, and in fact recruited me onto it.

Some team members (including me) are being paid to work full-time on this project. More may be hired. For that to happen, more funding commitments have to land. And probably will land; we’re hearing a lot of clamor from industry for a better-maintained, more secure NTP and have been pressed to release somewhat sooner than I would prefer.

I do expect this to have some negative impact on the amount of time I spend on other projects. But one of the reasons I took the gig is that GPSD is now sufficiently mature and stable not to need large amounts of my time. And time service is really, really important.

There is enough technical work on this project to which I am near-ideally suited to keep it top of my list for a minimum of 2.5 to 3 years. That’s even if I don’t end up designing the next-generation NTP protocol, an outcome I now consider to have over 50% odds.

Those of you guessing that my recent work on improving and documenting time service for GPSD led into this project are of course correct. But there’s more to it than that; it turns out that NTP daemons have a remarkably large amount in common with gpsd. Both are network-aware device managers, with closely comparable KLOC scale and porting challenges. They share arcana like dealing with 1PPS signals, and quite a bit of specialized knowledge just maps right across.

Another aspect of my skills profile that fits me well for the project is knowledge of ancient pre-standardization Unix APIs acquired over four decades. NTP is as full of this stuff as GPSD used to be before I removed it all several years back, and one of the principal tasks is to remove the cruft from NTP in order to reduce code volume and attack surface. We have already ripped out approximately 17% (40KLOC) of the NTP Classic codebase.

Finally, let me note that this code is not really living up to its reputation for impenetrability. There’s a longstanding legend that only Dave Mills ever really understood the Byzantine time-synchronization algorithms at NTP’s heart, but I used to be a mathematician and I think I already get most of it outside of a few arcana about statistical filtering of noisy signals. And most of the code isn’t those Byzantine algorithms anyway, but rather the not terribly surprising plumbing around them. Modifying it is high-end systems programming and not for the faint of heart, to be sure, but it’s not a thesis research project.

I think any top-grade systems engineer with a solid background in applied math or physics could grok NTP, really. Either that or, as I joked on G+, I actually have “read ancient code” as a minor superpower. Which joke I report mainly because I think Dave Taht was much funnier when he figuratively raised a Spock-like eyebrow back at me and said “Minor?”

90 comments

  1. Getting paid to work on a project for which you are almost uniquely suited in terms of both skills and passion. That’s awesome. Congratulations!

  2. I just claim to be a Software archeologist and microsurgeon.
    Usually “first do no harm” applies.

    1. >Drop me a line if you’re looking for a code auditor or help with any crypto involved in the next-gen protocol.

      I will. I already had you on the mental short list of people I’m pretty sure are equipped to grok the time-reconciliation algorithms.

  3. > I will. I already had you on the mental short list of people I’m pretty sure are equipped to grok the time-reconciliation algorithms.

    Any suggested background reading?

    1. >Any suggested background reading?

      The protocol documentation in the NTP Classic distribution itself. Which is quite good, actually.

  4. Good work.

    Funny coincidence. Just this June a thriller by Andy Updegrove came out in which the NTP protocol was used as a vector to infect computers (this is a minor spoiler). The book is also funny in that it came close to predicting the “interesting” aspects of the current Republican primaries.

    The Lafayette Campaign: a Tale of Deception and Elections (Frank Adversego Thrillers Book 2)
    http://www.amazon.com/The-Lafayette-Campaign-Deception-Elections-ebook/dp/B010RF882O

    So tech savvy thriller authors have to look for another plot vehicle now?

  5. I wonder if security of NTPSec would have anything in common (ideas, mostly) with DNSSEC…

  6. You (yes, even you!) may find this book of some interest. It taught me a bunch of ideas about how to clean up legacy code, and especially how to make changes to sensitive and incomprehensible bits inside old software. The book is a bit wordy but the content is absolutely good. The general topic is, “so, you wish this big bag of code you were working on had been written using test driven development, but it wasn’t — how do you slowly evolve it towards maintainability?”

    http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052

    1. >You (yes, even you!) may find this book of some interest.

      I found a PDF by Feathers from 2002 that seems to be the core of the book. And read some reviews of it.

      I believe there’s sound thinking here, but the applicability to NTP and other C code is somewhat compromised because the book mostly assumes the code already has an OO organization.

    1. >Vinge was on to something when he coined the term Programmer-Archeologist.

      Yes, he was. I’ve spent a percentage of my life doing this sort of thing that would surprise most programmers – there is a lot of important, ancient code out there and if you work fixing infrastructure the way I do you’re often going to find yourself bumping into it.

  7. In my line of work, I frequently find myself having to rediscover the meaning of various database table columns, full of data still being fed from somewhere, feeding some business query elsewhere. The owner of the DB knows it’s important, because certain scripts or websites break if it’s removed, but the documentation is long gone – if it had even existed.

    “This is where data mining becomes data archaeology” is one of my stock quips to bring up at any meeting where we pitch our value. I find the visual of picking through bits buried in tape archives like shards of chipped pottery very apt.

  8. > There’s a longstanding legend that only Dave Mills ever really understood the Byzantine time-synchronization algorithms at NTP’s heart, but I used to be a mathematician and I think I already get most of it outside of a few arcana about statistical filtering of noisy signals.

    Time-code comprehension is NOT a new issue. Centuries ago the great Jewish sage Maimonides said of the Jewish Calendar (which sometimes has twelve months and sometimes thirteen) that “any smart boy can learn to use this calendar after a week or two of careful study.” Not super relevant, perhaps, but one of my favorite quotes.

  9. Great of you to work on it! It’s important for sure.

    I’d be interested in what prior work you’ll draw on for the possible redesign of NTPsec. Note that HTTPS has had real problems with its certificate authorities, so it would be good to avoid that problem in a new iteration. Alternate designs would be:

    – Treat the server’s key as part of the identifying information on the server, like with a YURL.
    – Cache the server’s key like ssh does, and complain if the key ever changes.
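
    For illustration, the second option is essentially ssh’s known_hosts model (trust on first use). A minimal Python sketch; the cache-file path and format are invented for the example, not anything from NTP or the project:

    ```python
    import hashlib
    import os

    KNOWN_KEYS = os.path.expanduser("~/.ntp_known_keys")  # invented cache path

    def check_server_key(host, pubkey):
        """Trust-on-first-use: pin a server's key fingerprint, balk if it changes."""
        fingerprint = hashlib.sha256(pubkey).hexdigest()
        known = {}
        if os.path.exists(KNOWN_KEYS):
            with open(KNOWN_KEYS) as f:
                known = dict(line.split() for line in f if line.strip())
        if host not in known:
            with open(KNOWN_KEYS, "a") as f:
                f.write("%s %s\n" % (host, fingerprint))  # first contact: pin it
        elif known[host] != fingerprint:
            raise RuntimeError("key for %s changed - possible MITM!" % host)
    ```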

    1. >Note that HTTPS has had real problems with its certificate authorities, so it would be good to avoid that problem in a new iteration.

      Agreed, but I’m not enough of a crypto and crypto-protocols geek to do that redesign myself. I can follow that kind of thinking intelligently but don’t have the specialist knowledge to originate it.

    1. >Hopefully you’re thinking about security and making the protocol/implementations less susceptible to DDoS abuse.

      Yes, that’s how Susan got a term grant from the NSF to start putting a team together. A lot of the work I’ve already done is hardening against buffer overruns and reduction of attack surface.

  10. Triggerfinger:
    > Vinge was on to something when he coined the term Programmer-Archeologist.

    I have “systems archeologist and process anthropologist” as a sub-title on my resume. Mostly because of Vinge.

  11. Errr… wasn’t “DDoS abuse” about using NTP servers in an amplification attack?

    1. >Errr… wasn’t “DDoS abuse” about using NTP servers in an amplification attack?

      Yes. The DDoS incidents got the Feds very worried about general security issues in the codebase.

  12. Eric, you know there’s another project out there calling itself “ntpsec”, right? It’s a dead project, but it outranks the current one on searches. You might want to make it clear that isn’t you.

  13. Hah, I knew it!

    The coincidence of your comment “Why, I’m repairing the very fabric of time… itself!” in your post “The Great Beast is armored!”, appearing at the same time as the first ‘Father Time’ story on InformationWeek, was just too great. It was clear to me you were doing something with NTP (and I said so, in a contemporaneous comment on Slashdot on the ‘Father Time’ story).

  14. > Yes. The DDoS incidents got the Feds very worried about general security issues in the codebase.

    Glad to hear it, but I’m not SO worried about the codebase so much as some of the artifacts of the protocol. Feel free to email me (better to s/di2.nu/gmail.com/ on the email addy though I do check di2.nu too) if you want a big discussion of this.

    The current protocol, combined with source address spoofing (and the fact that large SPs allow this today is a whole separate problem), allows for enormous amplification of an attack. The main reason is that the telemetry uses the same port and protocol as the service itself. It would be really nice if you could have the time query service use a different port than the various telemetry/metadata/control functions. Or – at the very least – ensure that the latter ONLY use TCP and not UDP.

    The NTP service doesn’t provide for real amplification (the standard response is a <100-byte packet) but the various telemetry functions certainly do, hence the popularity of NTP as a DDoS amplification mechanism. If you could stop that (so that it could just be a reflection instead of reflection + amplification) that would be a significant benefit.

    It’s tempting to say that the whole lot should be TCP only, but I think that you need UDP for the query service, as you have to know exactly when a query/response packet was sent and TCP doesn’t give you that certainty so easily. Also, I’m not sure of the load/traffic on the average pool.ntp.org server, but it would not surprise me if adding TCP session handling put a significant additional load on the server for no obvious benefit in terms of DDoS mitigation – a TCP SYN(ACK) packet isn’t much smaller than an NTP UDP packet.

    It would also be really useful to be able to determine whether a supposed client is in fact a DDoS victim instead of a genuine enquirer after knowledge, and react accordingly if it’s a victim. You might look at what the BIND folks have done with response rate limiting (RRL) in DNS and see if you can emulate it.
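
    As a toy illustration of what BIND-style rate limiting boils down to, here is a Python sketch; the per-source token-bucket policy and the numbers are invented for the example (real RRL also buckets by response content and can truncate instead of drop):

    ```python
    import time
    from collections import defaultdict

    RATE = 5.0    # responses per second per source address - invented number
    BURST = 10.0  # bucket depth - invented number

    buckets = defaultdict(lambda: [BURST, time.monotonic()])

    def allow_response(src_ip):
        """Per-source token bucket; False means drop (or truncate) the response."""
        tokens, last = buckets[src_ip]
        now = time.monotonic()
        tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last seen
        allowed = tokens >= 1.0
        buckets[src_ip] = [tokens - 1.0 if allowed else tokens, now]
        return allowed
    ```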

  15. The way I would like to solve the reflected DDoS problem is to completely eliminate NTP’s unauthenticated mode of operation. NTP already has (bad, but serviceable) packet authentication built in, but almost nobody uses it because it requires a pre-shared key. That, however, can be solved without even touching the existing protocol. Instead, introduce a separate key exchange protocol that operates over TCP. At the completion of this protocol, the two parties share a secret key and have validated each other’s IP addresses. Now that key can be used to authenticate all UDP traffic. Any UDP packet which arrives with a bad or missing MAC, or which has a source address which doesn’t match the one that negotiated the key, can be dropped without further processing.
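
    To make the receive path concrete, a rough Python sketch; the trailer layout and the `session` object are invented names for this illustration, not anything in the existing protocol:

    ```python
    import hashlib
    import hmac

    MAC_LEN = 32  # HMAC-SHA256 tag appended as a trailer - invented layout

    def accept_packet(payload, src_addr, session):
        """Drop any UDP packet lacking a valid MAC from the negotiated address.

        `session` (invented) holds the key and peer address agreed during
        the TCP key exchange described above.
        """
        if src_addr != session.peer_addr:
            return None  # source doesn't match the address that negotiated the key
        if len(payload) <= MAC_LEN:
            return None  # missing MAC
        body, tag = payload[:-MAC_LEN], payload[-MAC_LEN:]
        expected = hmac.new(session.key, body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None  # bad MAC: discard without further processing
        return body
    ```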

    1. >The way I would like to solve the reflected DDoS problem is to completely eliminate NTP’s unauthenticated mode of operation.

      This seems plausible. I’ll relay it to the project list.

  16. > At the completion of this protocol, the two parties share a secret key and have validated each other’s IP addresses. Now that key can be used to authenticate all UDP traffic. Any UDP packet which arrives with a bad or missing MAC, or which has a source address which doesn’t match the one that negotiated the key, can be dropped without further processing.

    Sounds like the DNS TSIG mechanism. Although, since TSIG as defined requires synchronized clocks (within a fudge factor), you probably don’t want to actually copy that for NTP, which is what you use to get the time sync in the first place.

  17. Is there any suggestion of gutting the multitude of leap-second offset timescales called UTC and replacing them with TAI?

    1. >Is there any suggestion of gutting the multitude of leap-second offset timescales called UTC and replacing them with TAI?

      No. It’s not in NTP’s remit to do anything but supply a best approximation of UTC.

      Even if I could change that, I wouldn’t. I’m in the philosophical camp that thinks computer time should approximate solar time as closely as feasible, because that’s what fits human intuitions and expectations about clock times best. Leap-second offsets are a huge PITA but in my view a price worth paying to maintain human-friendliness.

  18. > Daniel, how does that accommodate NTP as a public time server?

    Through public key cryptography. One straightforward implementation of the key exchange protocol would be for the client to establish a TLS connection to the server and then for the server to send the client a MAC key via the TLS-encrypted channel. For public time servers, the client would validate the server’s certificate against a known CA but wouldn’t have to provide a certificate of its own (just like the way HTTPS is typically deployed). For restricted-access time servers or for NTP’s symmetric association mode, the TLS connection would be mutually authenticated (i.e. the client and server both supply certificates), or the MAC key would simply be pre-shared like it is today on the few installations that actually bother.

    Public servers with many clients can avoid having to maintain large state tables by having the MAC key they provide be the output of a key derivation function salted by the client’s IP address. That way they don’t have to keep the keys in memory; they can recompute them on the fly each time a packet comes in.
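
    A sketch of that stateless derivation, using HMAC-SHA256 as the key derivation function (my arbitrary choice; any keyed PRF would do, and the names are invented):

    ```python
    import hashlib
    import hmac

    SERVER_SECRET = b"long-lived random secret"  # rotated occasionally - invented

    def client_mac_key(client_ip):
        """Recompute a client's MAC key from its IP; no per-client table needed."""
        return hmac.new(SERVER_SECRET, client_ip.encode(), hashlib.sha256).digest()

    # At key exchange: send client_mac_key(client_ip) down the TLS channel.
    # Per UDP packet: recompute from the source address and verify the MAC.
    ```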

  19. Actually, the KDF idea mentioned in the last paragraph of my previous comment needs some work. You don’t want to hand out the same MAC key twice to two clients that connect with the same IP address. That’s fine if all you’re trying to accomplish is DDoS prevention, but it loses some of the other security benefits afforded by packet authentication. Putting the output of a connection counter into the keyid field and including that in the KDF input would almost solve this, but the keyid field is only 32 bits and so it’s prone to rollover. Adding an extension field might be necessary. I’ll think this over some more after I’ve gone through RFC5905 in more detail.

  20. Buried in the NTP docs and spec I just found something called “autokey” which seems to be closely along the lines of what I outlined above. Interesting…

    1. >Buried in the NTP docs and spec I just found something called “autokey” which seems to be closely along the lines of what I outlined above. Interesting…

      AFAIK autokey has never worked. A past NTP dev whose judgment and veracity I trust says that Dave Mills kept futzing with it, never got it quite right, and there were never production users. An independent line of evidence for this is a comment in some in-tree documentation begging successful autokey users to make themselves known to the dev team.

  21. Well, I’ll look more at autokey and figure out whether it’s salvageable. If not, we should rip it out (if you haven’t already) and design something new. If so, we should clean it up and get it so it works by default with no manual configuration required, at the very least for clients and maybe for servers too by way of Let’s Encrypt.

    1. >If not, we should rip [autokey] out (if you haven’t already) and design something new.

      That’s what’s in the current technical plan. We’ll need someone like you to do the crypto end. I can handle the systems/network-programming/architecture aspects just fine, but not that part.

  22. >Even if I could change that, I wouldn’t. I’m in the philosophical camp that thinks computer time should approximate solar time as closely as feasible

    Yes, but “NTPng” does not need to be the thing to provide that, at least in the high-security UDP packet exchange phase.

    It could provide TAI and leave knowledge of when a leap second is inserted or removed to a separate “conversation” (perhaps that key exchange over TLS) that does not have the same need for precise timing. Since leap seconds are scheduled semi-annually, information about UTC offsets from TAI for the next four months can be gathered quarterly and be quite up to date.

  23. Autokey as it’s currently specified sounds like it makes the DDoS amplification problem worse, not better. You send the server a subject name, and it sends you back a certificate. Yuck.

  24. Based on the comments above, one option would be to keep allowing standard time functions to return data to unsigned requests but require authentication/negotiation to get telemetry.

    Hell, if you get a real setup for using certificates and negotiation, you could support one of two configurations:
    1) Everybody requires key exchange
    2) Public time requests supported, everything else requires negotiation.
    Note that there’s no support for unauthenticated “everything else”, to minimize the openings for DDoS in the default configuration.

    Not requiring key negotiation for basic time information allows for bootloaders and embedded systems to do an NTP date request in very little code.
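
    To illustrate how little code that is, here is a Python sketch of a complete unauthenticated SNTP-style query (mode-3 client packet per the RFC 4330 layout; the pool server is just an example):

    ```python
    import socket
    import struct

    NTP_TO_UNIX = 2208988800  # seconds from the 1900 NTP epoch to the Unix epoch

    def sntp_time(server="pool.ntp.org"):
        """One unauthenticated client query; returns Unix time from the reply."""
        packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2.0)
            s.sendto(packet, (server, 123))
            reply, _ = s.recvfrom(512)
        seconds = struct.unpack("!I", reply[40:44])[0]  # transmit timestamp, integer part
        return seconds - NTP_TO_UNIX
    ```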

    1. >Not requiring key negotiation for basic time information allows for bootloaders and embedded systems to do an NTP date request in very little code.

      That is a strong point in favor of not eliminating unauthenticated time queries.

  25. It’s probably okay for basic time queries, where the reply contains no extension fields, to be unauthenticated, because in that case there’s no amplification. It still allows unamplified reflection attacks, but without a pre-shared key it’s inevitable that there will be some place in the protocol where that’s possible. For example, if you do the key establishment handshake over TCP, then that place is the initial SYN/SYN-ACK of the TCP handshake. Reflected SYN floods are, of course, old news, and internet operators have gotten pretty good at dealing with them. Unauthenticated time queries are no more and no less of a hazard.

  26. One thing that ought to be considered is that casual, unauthenticated time queries ought to be handled within an organization, and only when an NTP packet crosses network boundaries does the need for authentication even need to come into play.

    A DHCP/bootp server ought to be providing its clients with information about local time servers (unauthenticated connections), making for no configuration at all for most computers. Consumer-grade routers should run an NTP daemon to relieve the load on the ISP’s NTP servers, which in turn would only need to serve the ISP’s own customers.

    If some sort of certificate were part of the DHCP exchange, it could be renewed along with the DHCP lease.

  27. You can’t require heavy-duty key authentication for basic time queries in any case; secure key authentication works with timestamps to prevent a replay, which means that both sides of the key exchange must agree about what time it is. Devices making NTP time queries can be assumed not to trust their own guess at the time, so they can’t do things that require it to be accurate.

    Of course, if some three-letter agency wants a target to have trouble using TLS (say), they may try to feed the target bogus time data over unauthenticated NTP, so you want an authenticable mode. And the same three-letter agency, if it can’t feed a querent bogus time data, may resort to feeding the querent garbage packets to prevent authentication.

    Basically, security is hard.

  28. @esr:
    >Even if I could change that, I wouldn’t. I’m in the philosophical camp that thinks computer time should approximate solar time as closely as feasible, because that’s what fits human intuitions and expectations about clock times best. Leap-second offsets are a huge PITA but in my view a price worth paying to maintain human-friendliness.

    Frankly, what I’d say is that any system that needs to provide accuracy better than a tenth of a second or so should dispense with trying to sync with such imprecise things as human calendars or the rotation of the Earth, and should just be a flat second count from an epoch. (I use a tenth of a second as a border because human reaction times are on that order, thus forming a lower bound on what timescales can be considered “human-scale”).

    Meanwhile, any system that only needs to provide human-scale accuracies should dispense with measuring time, per se, and should instead use the number of radians that the Earth has rotated through since an epoch (in a rotating frame that holds the sun stationary) for measuring days and fractions of days, the number of radians that the Earth has orbited through since an epoch for measuring years and multiples of years, and should then build all the vagaries of traditional calendars on top of those two.

  29. @Jon Brase
    “Frankly, what I’d say is that any system that needs to provide accuracy better than a tenth of a second or so should dispense with trying to sync with such imprecise things as human calendars or the rotation of the Earth, and should just be a flat second count from an epoch.”

    Harlan Stenn talked about how the leap seconds were needed to keep the UTC zero at Greenwich.

  30. @Winter
    >Harlan Stenn talked about how the leap seconds were needed to keep the UTC zero at Greenwich.

    Yes, but that begs the question of why NTP needs to provide UTC rather than TAI, leaving the TAI=>UTC conversion to something that subscribes to leap-second tables. Note that this is not a question of “doing away with UTC” in favor of TAI; it’s a question of whether the system that gives us fine-grained time accuracy needs to concern itself with leap second calculations, or whether leap seconds can be moved to another layer in the “time stack”.

    Traditionally, the thinking has been that there are but two layers: UTC and local time, with tzinfo handling the translation between them, but given some of the problems associated with UTC, there are good arguments to be made for creating a third layer, or perhaps letting the tzinfo database include leap-second info as well, so that NTP doesn’t have to provide that, at least at the most-basic level of “what time is it now?” query.
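
    Such a layer could be as simple as a lookup against a subscribed leap-second table. A Python sketch, with an abbreviated table whose entries are illustrative (a real one would track IERS Bulletin C):

    ```python
    # (TAI second count at which the entry takes effect, TAI-UTC offset in seconds)
    # Abbreviated and illustrative; a real table is updated from IERS Bulletin C.
    LEAP_TABLE = [
        (63072000,   10),  # 1972-01-01: TAI-UTC = 10 s
        (1435708836, 36),  # 2015-07-01: TAI-UTC = 36 s
    ]

    def tai_to_utc(tai_seconds):
        """Convert a flat TAI second count to UTC by leap-table lookup."""
        offset = 0
        for effective, delta in LEAP_TABLE:
            if tai_seconds >= effective:
                offset = delta
        return tai_seconds - offset
    ```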

  31. @Winter:
    >Harlan Stenn talked about how the leap seconds were needed to keep the UTC zero at Greenwich.

    And that’s why a precision timekeeping system should be just a flat second count: No days, months, or years, no trying to make sure that the sun is always in a certain position in the sky as seen from a particular location when the second count modulo 86400 is a certain number.

    Then your non-precision timekeeping can base itself entirely on the rotation and motion of the Earth and also stop using leap seconds, as it uses the Earth itself as the reference clock.

    Of course, establishing such a thing is *way* beyond the remit of NTP.

  32. > I used to be a mathematician and I think I already get most of it outside of a few arcana about statistical filtering of noisy signals.

    Nothing to be ashamed of. Claude Shannon does have a metric unit named after him, after all.

    w.r.t. Earth-reference times: as much of a pain as leap seconds are, Earth is less consistent as a time reference than the cesium atom, and I’m reasonably certain that whatever would be implemented to reconcile the variable Earth-second with the cesium-second in such a system would be much more painful than the current leap second system. (By the way, does anyone know if the mysterious NYSE lockup of July 8 is related to the leap second? I haven’t kept up.)

  33. > Even if I could change that, I wouldn’t. I’m in the philosophical camp that thinks computer time should approximate solar time as closely as feasible, because that’s what fits human intuitions and expectations about clock times best.

    Solar time? That’s ridiculous. At present (including both daylight saving and being in an extreme western extension of my timezone), my computer’s time is an hour and 45 minutes off mean solar time.

    Why should I care about the mean solar time in London? I don’t live there.

  34. A few seconds is nothing next to even the normal variations of +/- 30 minutes within a timezone.

    1. >A few seconds is nothing next to even the normal variations of +/- 30 minutes within a timezone.

      That is true. However, eliminating leap-second corrections would cause the average divergence from solar time to rise inexorably until it exceeded not just 30 minutes but any time-zone width. Eventually, noon would be midnight.

  35. @Winter:
    But IAT tries to maintain the trappings of traditional timekeeping, which is why the phrase “Zero IAT lies 0.15 degrees east of zero UT” even makes sense. Under my proposal, zero PAT (“Proposed Atomic Time”) is whatever epoch is chosen (for the purpose of this discussion, let’s assume the Unix epoch), and units other than seconds, kiloseconds, megaseconds, etc. are not valid in PAT timestamps. Not only are leap seconds not inserted, but the phrase “leap second” is meaningless as we explicitly do not count days. “3232005641” and “0xC0A47E09” are valid PAT timestamps (the same timestamp, actually). “3.23 megaseconds” would be a valid PAT timestamp. “Wed Jun 1 06:20:41 CDT 2072” would not be a valid PAT timestamp.

    @Terry:
    >w.r.t. Earth-reference times: as much of a pain as leap seconds are, Earth is less consistent as a time reference than the cesium atom, and I’m reasonably certain that whatever would be implemented to reconcile the variable Earth-second with the cesium-second in such a system would be much more painful than the current leap second system.

    But for most everyday purposes, the rotational state of the Earth (particularly as reflected by the circadian rhythms of human beings) is of more importance than to-the-nanosecond (or even to-the-millisecond) atomic time. Anyways, my proposal is that the idea of reconciling the two be abandoned, and that conversions be done strictly by lookup to tables of historically recorded data (or by projection from current data, for future dates).

  36. I have network switches in wiring closets that have no RTC. After every reboot, their logs start on 1990-01-01 and then they jump forward ~800 megaseconds when they boot up enough to sync up to the local network’s notion of civil time.

    The best argument that I can think of for UTC over TAI is that simple systems need a lightweight way to sync up to civil time.

    In a perfect world, we could start from scratch and solve all of the problems with a new set of protocols. One long running protocol to discipline the local oscillator and keep the seconds coming at the right rate. One intermittent protocol to establish the (TAI) identity of a given second. One very infrequent protocol to establish civil meaning (leap seconds, local TZ and DST rules, whatever) of the TAI stream.

  37. By the way, I once spent a while discussing decentralized inter-entity timekeeping with some bitcoin guys. The basic idea was to use solar radio noise to create overlapping agreement about when events happen.

    It seems possible, but probably impractical, to maintain decentralized agreement both on the duration of the second and on the location of noon. Anyone looking to score some extra libertarian points?

  38. @Jon Brase
    “Under my proposal, zero PAT (“Proposed Atomic Time”) is whatever epoch is chosen (for the purpose of this discussion, let’s assume the Unix epoch), and units other than seconds, kiloseconds, megaseconds, etc. are not valid in PAT timestamps.”

    Maybe that is called GPS Time.

    “GPS Time (GPST) is a continuous time scale (no leap seconds) defined by the GPS Control segment on the basis of a set of atomic clocks at the Monitor Stations and onboard the satellites. It starts at 0h UTC (midnight) of January 5th to 6th 1980 (6.d0). At that epoch, the difference TAI−UTC was 19 seconds, thence GPS−UTC = n − 19 s. GPS time is synchronised with the UTC(USNO) at 1 microsecond level (modulo one second), but actually is kept within 25 ns.”

    However, TAI is simply a second count from an epoch converted to a “normal” time.

  39. The problem with decoupling from Earth rotation is that at very extreme time precision (though probably not at nanosecond scale) you see relativistic effects: clocks desynchronize because one was moved at human walking speed, or because they are on different floors ;-)

    1. >clocks desynchronize because one was moved at human walking speed, or because they are on different floor

      I think I remember reading somewhere that NIST has devices precise enough to see the different-floor effect.

  40. > > clocks desynchronize because one was moved at human walking speed, or because they are on different floor

    > I think I remember reading somewhere that NIST has devices precise enough to see the different-floor effect.

    Not sure about that, but I know that the GPS satellites themselves have to account for the relativistic slow-down of their clocks due to their orbital velocity.

  41. A different floor? You do not have to go that far:

    http://www.dailymail.co.uk/sciencetech/article-1314656/Scientists-prove-time-really-does-pass-quicker-higher-altitude.html

    In one experiment, the researchers raised one of the experimental clocks by 12 inches.

    ‘Sure enough, the higher clock ran at a slightly faster rate than the lower clock, exactly as predicted,’ said a spokesman for the NIST.

    ‘The difference is much too small for humans to perceive directly – adding up to approximately 90 billionths of a second over a 79-year lifetime.’

  42. I think your readers are missing the point of this. But I get it.

    You figure that the code you wrote for the image libraries is on nearly every visual device in the world including cell phones, desktops and laptops. However, there are millions of headless systems out there that haven’t had your software seed deposited on them.
    And so this is Eric’s attempt to spread his code onto every computer in the world.

    It is all part of the Evil Conspiracy.

    1. >And so this is Eric’s attempt to spread his code onto every computer in the world. It is all part of the Evil Conspiracy.

      Congratulations, you have qualified for a rewarding position as Principal Minion. We offer generous compensation and perks including: a corner office in the castle tower, all the peasants you can terrorize, and a modish leather catsuit. Bring your own whip.

  43. I don’t know what happened to my previous message. The spam filter ate it, I suppose. Twice.

    Anyway, the timescale in which an atomic-based *UTC would drift away to the point of being worth bothering with is enough that it can be dealt with by moving timezone boundaries or changing timezone offsets. If *UTC noon is midnight, then your timezone should obviously be +12 or -12 (which one depending on which direction the drift happened in).

    1. >Anyway, the timescale in which an atomic-based *UTC would drift away to the point of being worth bothering with is enough that it can be dealt with by moving timezone boundaries or changing timezone offsets.

      This would not solve any problems. It would just relocate the pain.

  44. And since I mentioned +12, let me clarify there’s no reason for the dateline to move – it would just change from a boundary between (circa) +12 and -12, to a boundary between +18 and -6, to +24 and -0, and so forth.

  45. The “pain” of jurisdictions changing timezones is something that already exists now, on timescales far shorter than this. At the rate that jurisdictions change their timezone offset anyway, the added change for “oh, hey, the sun’s rising a bit later than it used to this time of year” wouldn’t even be noteworthy.

    The point is, “A timescale which is an integer number of seconds offset from TAI and which is within a second or so of London’s Mean Solar Time” is an artificial creation that is wholly unnecessary. It’s an extra level of indirection between local civil time and atomic time, which doesn’t need to exist. You’re not “relocating” the pain, you’re reducing the amount of pain from two offsets to manage to only one.

    1. > You’re not “relocating” the pain, you’re reducing the amount of pain from two offsets to manage to only one.

      Reducing it to one? Sigh…I really thought you were smarter than this.

      You’re “reducing” it from one offset maintained by the IERS and USNO to some indeterminately large number of offsets maintained by national authorities who are notoriously flaky and liable to futz with them for political reasons.

      You think a timescale which is an integer number of seconds offset from TAI and which is within a second or so of London’s Mean Solar Time is wholly unnecessary. This demonstrates that you aren’t a marine navigator, an astronomer, or (where it bites especially hard) an aviator. It’s from these people that the real-world pushback against decoupling international standard time from mean solar is coming, and they have good reasons.

  46. > It is all part of the Evil Conspiracy

    Although finding NTP bugs has very obvious Evil potential, and admittedly I have always wanted my own satellite, I think that idea would have at best earned me a B- from my old Diabolical World Domination Plans professor back at the Academy of Evil. I’m actually playing a much longer game.

    You see, NTPsec is my early retirement plan. When the epochalypse of 2038 is drawing near, I intend to have the perfect résumé item to distinguish myself head-and-shoulders above any other 52-year-old UNIX graybeards who are still around to compete with me to collect exorbitant hourly consulting rates from panicked CIOs who have the board breathing down their neck. And if whatever sage advice I dispense turns out to be insufficient to prevent the collapse of civilization, I won’t give a damn because I’ll be reading about it by the fireside in my cabin in the Yukon with a bear-skin rug and a bottle of whiskey.

    BWAHAHAHAHAHAHA!

    1. >BWAHAHAHAHAHAHA!

      Well said, stripling. Ah, it’s heartening to see the torch of evil mad science passing to a new generation.

      Just remember the old adage about age and treachery. Odds are good I’ll still be around for the epochalypse, probably with a sinister master plan even more diabolical than yours.

  47. Sssssh! This is a rare example of negotiations between mad scientists. Normally they take place in concrete bunkers deep beneath active volcanos via high-bandwidth video, but thanks to the wonders of open source, we can see this one at close range as it happens. Note how both elder and challenger circle each other looking for weaknesses, and spar verbally with the subtle barb, a mixture of threat and compliment. Perhaps this negotiation will end with the youth gaining limited access to the elder’s hunting grounds, or conveying a token of recognition? Or perhaps, intimidated, the newcomer will slink away with nothing to show for it.

    This is the Nature of software development…

    1. >This is a rare example of negotiations between mad scientists,

      My wife Cathy said: “I thought negotiations between mad scientists usually consisted of them pointing doomsday machines at each other.”

      I replied: “Ah. That’s normally the second stage.”

  48. Drat… a rival holding such unparalleled experience with time itself cannot be left unchecked. I shall return to my airship to brood upon a theatrical and needlessly convoluted means of dispatching this menace.

  49. Your spam filter is acting up again, and yet again it has mysteriously disappeared my message rather than saying it is awaiting moderator approval.

  50. Attempt 4.

    > Reducing it to one? Sigh…I really thought you were smarter than this. (Rest of quote omitted to try to get by the spam filter)

    No, I’m reducing it from both. UTC alone is useless.

  51. And with the pain shifted to national timezone offsets, the people who can get by without them (i.e. they only care about a universal timescale) have less pain to deal with.

  52. > You think a timescale which is an integer number of seconds offset from TAI and which is within a second or so of London’s Mean Solar Time is wholly unnecessary.

    Then they, and the IERS, can maintain it themselves, without imposing it on civil time!

    1. >Then they, and the IERS, can maintain it themselves, without imposing it on civil time!

      You still don’t understand the actual demand pattern here. Go talk to an aviator about this.

  53. I guess what you haven’t explained is why all the miscellaneous groups you named need, say, Eastern time to be an integer number of hours from the within-one-second-of-London-mean-time scale, rather than that scale standing on its own, where everyone who’s not an aviator/astronomer/navigator/etc. doesn’t have to care about it, and not being used as a basis for civil timezones.

  54. By me, the issue is that *civil time* is UTC +- a fixed offset; that alone is enough reason why this service should return that, rather than some other timescale, as its default. If it wants to return other stuff *as well*, that’s fine.

    I concur that the availability of unauthenticated time is very important — the entire point of NTP as I’ve always understood it was that you could get your time from a large and widely disparate enough cluster of sources that it was effectively impossible for anyone to spoof it — especially if one source is a hardware clock or GPS.

  55. “This is a temporary website, the project is expected to launch the week of 2015-08-24.”

    Is there a revised launch date?
