How not to design a wire protocol

A wire protocol is a way to pass data structures or aggregates over a serial channel between different computing environments. At the very lowest level of networking there are bit-level wire protocols to pass around data structures called “bytes”; further up the stack streams of bytes are used to serialize more complex things, starting with numbers and working up to aggregates more conventionally thought of as data structures. The one thing you generally cannot successfully pass over a wire is a memory address, so no pointers.

Designing wire protocols is, like other kinds of engineering, an art that responds to cost gradients. It’s often gotten badly wrong, partly because of clumsy technique but mostly because people have poor intuitions about those cost gradients and optimize for the wrong things. In this post I’m going to write about those cost gradients and how they push towards different regions of the protocol design space.

My authority for writing about this is that I’ve implemented endpoints for nearly two dozen widely varying wire protocols, and designed at least one wire protocol that has to be considered widely deployed and successful by about anybody’s standards. That is the JSON profile used by many location-aware applications to communicate with GPSD and thus deployed on a dizzying number of smartphones and other embedded devices.

I’m writing about this now because I’m contemplating two wire-protocol redesigns. One is of NTPv4, the packet format used to exchange timestamps among cooperating time-service programs. The other is an unnamed new protocol in IETF draft, deployed in prototype in NTPsec and intended to be used for key exchange among NTP daemons authenticating to each other.

Here’s how not to do it…

NTPv4 is a really classic example of one extreme in wire protocol design. A base NTP packet is 48 bytes of packed binary blob that looks like this:

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |LI | VN  |Mode |    Stratum    |     Poll      |  Precision    |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                         Root Delay                            |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                         Root Dispersion                       |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                          Reference ID                         |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      +                     Reference Timestamp (64)                  +
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      +                      Origin Timestamp (64)                    +
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      +                      Receive Timestamp (64)                   +
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      +                      Transmit Timestamp (64)                  +
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The numbers are bit widths. If I showed you an actual packet dump it would be a random-looking blob of characters with no significance at the character level; only the bits matter.

It’s not very relevant to this episode what the detailed semantics of those fields are, though you can make some guesses from the names and probably be right; just think of it as a clock sample being passed around. The only two we’re going to care about here are VN, which is a three-bit protocol version field normally set to 0b100 = 4, and mode – three more bits of packet type. Most of the others are interpreted as binary numbers except for “Reference ID”, which is either an IPv4 address or a 4-character string.
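
To make “packed binary blob” concrete, here is a minimal sketch (Python, not drawn from any real NTP implementation) of what an endpoint has to do just to get at those fields; every offset and width is hardcoded from the diagram above, because nothing in the packet itself tells you any of it:

    import struct

    def parse_ntp_header(pkt):
        # Illustrative only: widths and offsets come straight from the diagram.
        if len(pkt) < 48:
            raise ValueError("not an NTPv4 base packet")
        (flags, stratum, poll, precision,
         root_delay, root_disp, refid,
         ref_ts, origin_ts, recv_ts, xmit_ts) = struct.unpack("!2B2b3I4Q", pkt[:48])
        return {
            "li":   flags >> 6,           # leap indicator: top 2 bits of the first byte
            "vn":   (flags >> 3) & 0x07,  # version: next 3 bits
            "mode": flags & 0x07,         # mode: low 3 bits
            "stratum": stratum, "poll": poll, "precision": precision,
            "root_delay": root_delay, "root_dispersion": root_disp, "refid": refid,
            "reference_ts": ref_ts, "origin_ts": origin_ts,
            "receive_ts": recv_ts, "transmit_ts": xmit_ts,
        }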

Here’s a GPSD report that exemplifies the opposite extreme in wire-protocol design. This is an actual if somewhat old Time-Position-Velocity packet capture from the GPSD documentation:

{"class":"TPV","time":"2010-04-30T11:48:20.10Z","ept":0.005,
               "lat":46.498204497,"lon":7.568061439,"alt":1327.689,
                "epx":15.319,"epy":17.054,"epv":124.484,"track":10.3797,
                "speed":0.091,"climb":-0.085,"eps":34.11,"mode":3}

Those of you with a web-services background will recognize this as a JSON profile.

You don’t have to guess what the principal fields in this report mean; they have tags that tell you. I’ll end the suspense by telling you that “track” is a course bearing and the fields beginning with “e” are 95%-confidence error bars for some of the others. But again, the detailed field semantics don’t matter much to this episode; what we want to do here is focus on the properties of the GPSD protocol itself and how they contrast with NTPv4.

The most obvious difference is discoverability. Unless you know you’re looking at an NTP packet in advance, seeing the data gives you no clue what it means. On the other hand, a GPSD packet is full of meaning to the naked eye even if you’ve never seen one before, and the rest is pretty transparent once you know what the field tags mean.
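
That discoverability translates directly into decode effort. Here, for contrast with what we’ll see on the NTPv4 side, is a sketch of the entire client-side parse using nothing but a stock JSON library (report text abbreviated from the example above):

    import json

    line = '{"class":"TPV","time":"2010-04-30T11:48:20.10Z","lat":46.498204497,"lon":7.568061439,"mode":3}'
    report = json.loads(line)                 # that is the whole decode step
    if report["class"] == "TPV":
        print(report["time"], report["lat"], report["lon"])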

Another big difference is bit density. Every bit in an NTPv4 packet is significant; you’re squeezing the information into as short a packet as is even theoretically possible. The GPSD packet, on the other hand, has syntactic framing and tags that tell you about itself, not its payload.

These two qualities are diametrically opposed. The bits you spend on making a wire protocol discoverable are bits you’re not spending on payload. That both extremes exist in the world is a clue: it means there’s no one right way to do things, and the cost gradients around wire protocols differ wildly in different deployments.

Before I get to a direct examination of those cost gradients I’m going to point out a couple of other contrasting properties. One is that the base NTPv4 packet has a fixed length locked in; it’s 48 bytes, it’s never going to be anything but 48 bytes, and the 32- or 64-bit precision of the numeric fields can never change. The GPSD packet embodies the opposite choice; on the one hand it is variable-length as the number of decimal digits in the data items changes, on the other hand it is quite easy to see how to ship more precision in the GPSD packet if and when it’s available.

Hardware independence is another important difference. A decimal digit string is a decimal digit string; there’s no real ambiguity about how to interpret it, certainly not if you’ve ever seen a JSON-based protocol before. The binary words in an NTPv4 packet, on the other hand, may need to be byte-swapped to turn into local machine words, and the packet itself does not imply what the right decoding is. You need to have prior knowledge that they’re big-endian…and getting this kind of detail wrong (byte-swapping when you shouldn’t, or failing to when you should) is a really notorious defect attractor.
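
A small illustration of why that prior knowledge matters: the same four bytes decode to wildly different numbers depending on the byte order you assume (Python here, but the trap is the same in any language):

    import struct

    raw = bytes([0x00, 0x00, 0x02, 0x8f])    # a 32-bit field as it sits on the wire
    print(struct.unpack("!I", raw)[0])        # 655        -- network (big-endian) order
    print(struct.unpack("<I", raw)[0])        # 2399272960 -- the little-endian misreading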

More generally, these protocols differ greatly in two related qualities; extensibility is one. The other doesn’t have a term of art; it’s whether data encoded in the protocol can mix gracefully with other payload types traveling on the same wire. I’ll call it “sociability”.

(And why does sociability matter? One reason is because the friction cost of poking new holes for new protocols in network firewalls is considerable; it triggers security concerns. This is why so much stuff is multiplexed on HTTP port 80 these days; it isn’t only for convenience with browsers.)

Adding a new field to a JSON datagram (or, more generally, to any other kind of self-describing protocol) is not difficult. Even if you’ve never seen JSON before, it’s pretty easy to see how a new field named (say) “acceleration” with a numeric value would fit in. Having different kinds of datagrams on the wire is also no problem because there’s a class field. GPSD actually ships several other reports besides TPV over the same service port.

It’s trickier to see how to do the analogous things with an NTPv4 packet. It is possible, and I’m now going to walk you through some fairly painful details not because they’re so important in themselves but because they illustrate some systematic problems with packed binary protocols in general. There will be no quiz afterwards and you can forget them once you’ve absorbed the general implications.

In fact NTPv4 has an extension-field mechanism, but it depends on a quirk of the transmission path: NTPv4 packets are UDP datagrams and arrive with a length. This gives you a dodge; if you see a length longer than 48 bytes, you can assume the rest is a sequence of extension fields. Here’s what those look like:

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |         Type field             |      Payload length          |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      |                        Payload (variable)                     |
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Good luck eyeballing that! It’s simple in concept, but it’s more binary blob. As with the base packet, you need a special tool like Wireshark and a detailed spec in front of you just to interpret the type fields, let alone whatever wacky private encodings get invented for the payload parts.
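
For illustration, here is roughly what walking that extension chain looks like, assuming (per the diagram above) that the length field counts just the payload, and pretending for the moment that the complication described next doesn’t exist:

    import struct

    def walk_extensions(pkt):
        # Sketch only; a real NTP parser also has to disambiguate extension
        # fields from the optional MAC trailer discussed below.
        offset = 48                                   # skip the 48-byte base header
        while offset + 4 <= len(pkt):
            ftype, plen = struct.unpack("!HH", pkt[offset:offset + 4])
            end = offset + 4 + plen
            if end > len(pkt):
                break                                 # malformed; stop walking
            yield ftype, pkt[offset + 4:end]
            offset = end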

Actually, this last section was partly a fib. Detecting NTPv4 extension fields is tricky because it interacts with a different, older extension – an optional cryptographic signature which can itself have two different lengths:

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                          Key Identifier                       |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      |                        dgst (128 or 160)                      |
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

It is possible to work out whether one or both kinds of extension are present by doing some tricky modular arithmetic, but I’ve tortured you enough without getting into exactly how. The thing to take away is that gymnastics are required compared to what it takes to add extensions to a JSON-based protocol, and this isn’t any accident or evidence that NTPv4 is especially ill-designed. This kind of complexity is generic to packed binary protocols, and that has implications we’ll focus in on when we get to cost gradients.

In fact NTPv4 was not badly designed for its time – the Internet protocol design tradition is pretty healthy. I’ve seen (and been forced by standards to implement) much worse. For please-make-it-stop awfulness not much beats, for example, the binary packet protocol used in Marine AIS (Automatic Identification System). One of its packet types, 22 (Channel Management), even has a critical mode bit controlling the interpretation of an address field located after the address field rather than before. That is wrap-a-tire-iron-around-somebody’s-head stupid; it complicates writing a streaming decoder and is certain to attract bugs. By comparison the NTPv4 design is, with all its quirks, quite graceful.

It is also worth noting that we had a narrow escape here. UDP protocols are now unusual, because they have no retransmission guarantees. Under TCP, you don’t get a whole datagram and a length when you read off the network. A TCP equivalent of the NTPv4 packet protocol would either have been fixed at 48 bytes, no extensions, forever – or have needed to give you some way to compute the expected packet length from data that’s within a minimum-size distance of the start of packet.

JSON evades this whole set of complications by having an unambiguous end delimiter. In general under TCP your packets need to have either that or an early length field. Computing a length from some constellation of mode bits is also available in principle, but it’s asking for trouble. It is…say it with me now…a defect attractor. In fact it took six years after the NTPv4 RFC to issue a correction that clarified the edge cases in the combination of crypto-signature and extensions.
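
GPSD, for example, terminates each JSON report with a newline, so the receiver’s framing loop under TCP is about as small as framing code gets. A sketch, assuming a file-like wrapper around the socket:

    import json

    def read_reports(stream):
        # One report per newline-terminated line; the delimiter, not a length
        # field or mode-bit arithmetic, tells us where each record ends.
        for line in stream:
            line = line.strip()
            if line:
                yield json.loads(line)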

What about sociability? The key to it is those version and mode fields. They’re at fixed locations in the packet’s first 32-bit word. We could use them to dispatch among different ways of interpreting everything past those first 8 bits, allowing the field structure and packet length to vary.

NTPv4 does in fact do this. You might actually see two different kinds of packet structure on the wire. The diagram above shows a mode 2 or 3 packet; there’s a mode 6 packet used for control and monitoring that (leaving out an optional authentication trailer) looks like this instead:

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |LI | VN  | 6   |R|E|M|  Opcode  |          Sequence            |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |               Status           |       Association ID         |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |               Offset           |            Count             |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      .                                                               .
      .                        Payload (variable)                     .
      .                                                               .
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The count field tells you the length of the variable part. Self-description!

Two packet structures, eight potential mode values. You might be wondering what happened to the other five – and in fact this illustrates one of the problems with the small fixed-length fields in packed-binary formats. Here’s the relevant table from RFC5905:

                      +-------+--------------------------+
                      | Value | Meaning                  |
                      +-------+--------------------------+
                      | 0     | reserved                 |
                      | 1     | symmetric active         |
                      | 2     | symmetric passive        |
                      | 3     | client                   |
                      | 4     | server                   |
                      | 5     | broadcast                |
                      | 6     | NTP control message      |
                      | 7     | reserved for private use |
                      +-------+--------------------------+

You don’t have to know the detailed meanings of all of these to get that the mode field mixes information about the packet structure with other control bits. In fact values 1 through 5 all have the same structure, mode 6 has the one I just diagrammed, and all bets are off with mode 7.

When you’re optimizing for highest bit density – which is what they were doing in 1985 when this protocol was originally designed – the temptation to do this sort of thing is pretty near irresistible. The result, 34 years later, is that all the bits are taken! We can hardly get any more multivalent with this field without committing a backward incompatibility – not a really safe thing to do when there are lots of big-iron legacy implementations still out there, pinned in place by certification requirements and sheer bureaucratic inertia.

OK, in theory we could claim mode 0. But I’ve seen several of the decoders out there and I would not warrant that a mode 0 packet won’t ever slip past anyone’s sanity checks to be misinterpreted as something else. On the other hand, decoders do check the version field; they have to, because versions 0 to 3 have existed and there could be ancient time servers out there still speaking them. So the version field gives us a way out; as long as the version field reads 5, 6, or 7, the rest of the packet past that first byte could look like anything we like and can write an RFC for.

I’ve walked you through this maze to illustrate an important point: packed binary formats are extremely brittle under the pressure of changing requirements. They’re unsociable, difficult to extend without defect-attracting trickery, and eventually you run into hard limits due to the fixed field sizes.

NTP has a serious functional problem that stems directly from this. Its timestamps are 64-bit but only half of that is whole seconds; those counters are going to wrap around in 2036, a couple of years before the more widely anticipated Unix-timestamp turnover in 2038. In theory the existing implementations will cope with this smoothly using more clever modular arithmetic. In practice, anybody who knows enough to have gamed out the possible failure scenarios is nervous…and the more we know the more nervous-making it gets.
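
The arithmetic behind that date fits in three lines: NTP’s era-0 seconds counter starts at 1900-01-01 and is 32 bits wide (leap seconds ignored here):

    from datetime import datetime, timedelta

    era0_start = datetime(1900, 1, 1)
    print(era0_start + timedelta(seconds=2**32))   # 2036-02-07 06:28:16 -- the wraparound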

This is why I’m thinking about NTPv5 now. 2019 is not too soon. Closing the circle, all this would have been avoided if NTP timestamps had looked like “2010-04-30T11:48:20.10Z”, with variable-length integer and decimal parts, from the beginning. So why wasn’t it done that way?

To address that question let’s start by looking at where the advantages in self-describing textual formats vs. packed binary ones stack up. For self-describing: auditability, hardware independence, extensibility, and sociability. For packed binary: highest possible bit density.

A lot of people would add “faster, simpler decoding” to the list of advantages for packed binary. But this (at least in the “simpler” part) is exactly where people’s design intuitions often start to go wrong, and the history of NTPv4 demonstrates why. Packed protocols start out with “simpler”, but they don’t stay that way. In the general case you always end up doing things like tricky modular arithmetic to get around those fixed limits. You always exhaust your mode-bit space eventually. The “faster” advantage does tend to be stable over time, but the “simpler” does not.

(And oh, boy, will you ever have this lesson pounded home to you if, as I did, you spend a decade on GPSD implementing decoders for at least nineteen different GPS wire protocols.)

If you are a typically arrogant software engineer or EE, you may be thinking at this point “But I’m smarter! And I’ve learned from the past! I can design an optimum-bit-density wire-format that avoids these problems!”

And this is my forty years of field experience, with specific and proven domain expertise in wire-protocol design, telling you: This. Never. Happens. The limitations of that style of protocol are inherent, and they are more binding than you understand. You aren’t smart enough to evade them, I’m not, and nobody else is either.

Which brings us back to the question of why NTPv4 was designed the way it was. And when it is still appropriate to design wire protocols in that packed-binary style. Which means that now it’s time to look at cost gradients and deployment environments.

One clue is that the NTP wire protocol was designed decades ago when computing cycles and bits-per-second on the wire were vastly more expensive than they are now. We can put numbers on that. NTP was designed under the cost profile of early ARPANET to operate well with connection speeds not much higher than 50 Kbps. Today (2019) the average U.S. broadband speed is 64 Mbps. That’s a factor of 10^3 difference. Over the same period processor speeds have gone up by about 10^3-10^4. There’s room for argument there based on different performance measures, but assuming the low end of that range we’re still looking at about the same cost change as bits on the wire.

Now let me throw an interesting number at you that I hope brings home the implications of that change. A few weeks ago we at NTPsec had an email conversation with a guy who is running time service out of the National Metrology Institute in Germany. This is undoubtedly Deutschland’s most heavily loaded Stratum 1 NTP provider.

We were able to get his requests-per-second figure, do a bit of back-of-the-envelope calculation, and work out that the production NTP load on a national time authority for the most populous nation in Europe (excluding transcontinental Russia) wouldn’t come even close to maxing out my home broadband or even just one of the Raspberry Pi 3s on the windowsill above my desk. With all six of them and only a modest bandwidth increase I could probably come pretty close to servicing the Stratum 2 sites of the entire planet in a pinch, if only because time service demand per head is so much lower outside North America/Europe/Japan.

Now that I have your attention, let’s look at the fundamentals behind this. That 10^3 drop tracks the change in one kind of protocol cost that is basically thermodynamic. How much power do you have to use, what kind of waste heat do you generate, if you throw enough hardware at your application to handle its expected transaction load? Most of what are normally thought of as infrastructure costs (equipping your data center, etc.) are derivative of that thermodynamic cost. And that is the cost you are minimizing with a packed binary format.

In the case of NTP, we’ve just seen that cost is trivial. The reason for this is instructive and not difficult to work out. It’s because NTP transaction loads per user are exceptionally low. This ain’t streaming video, folks – what it takes to keep two machines synchronized is a 48-byte call and a 48-byte response at intervals which (as I look at a live peers display just now) average about 38 minutes.
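
Back-of-the-envelope, with a deliberately generous and purely hypothetical client count, the load looks like this:

    clients  = 1_000_000      # hypothetical client population, for illustration
    interval = 38 * 60        # mean poll interval in seconds, from above
    payload  = 48 + 48        # request plus response, transport headers excluded

    tps = clients / interval                 # ~440 transactions per second
    print(tps, tps * payload * 8 / 1e6)      # ~0.34 Mbit/s of NTP payload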

There’s just a whisper of a nuance of a hint there that just mmmaybe, three decades after it was first deployed, optimizing NTP for bit density on the wire might not be the most productive use of our effort!

Maybe, in another application with 10^3 more transaction volume per user, or with a 10^3 increase in userbase numbers, we’d incur as much thermodynamic cost as landed on a typical NTP server in 1981, and a packed binary format would make the kind of optimization sense it did then. But that was then, this is now, and people’s intuitions about this tend to be grossly out of whack. It’s almost as though a lot of software engineers and EEs who really ought to know better are still living in 1981 even if they weren’t born yet.

OK, so what should we be optimizing NTP for instead? Take a moment to think about this before you keep reading, because the answer is really stupid obvious.

We should be designing to minimize the cost of human attention. Like thermodynamic cost, attention cost unifies a lot of things we normally think of as separate line items. Initial development. Test. Debugging. Coping with the long-term downstream defect rate. Configuring working setups. Troubleshooting not-quite-working setups. And – this is where you should start to hear triumphant music – dealing gracefully with changes in requirements.

It is also a relevant fact that the cost of human attention has not dropped by 10^3 along with thermodynamic cost per unit of information output since 1981. To a good first approximation it has held constant. Labor-hours are labor-hours are labor-hours.

Now let’s review where the advantages of discoverable/textual formats are. Auditability. Hardware independence. Sociability. Extensibility. These are all attention-cost minimizers. They’re, very specifically, enablers of forward design. In the future you’ll always need what a Go player calls “aji” (potential to extend and maneuver). Discoverable textual wire protocols are good at that; packed binary protocols are bad at it.

But I’m absolutely not here to propose a cost model under which discoverability is in a simple linear struggle with bit-density that discoverability always wins in the end. That’s what you might think if you notice that the ratio between attention cost and thermodynamic cost keeps shifting to favor discoverability as thermodynamic cost falls while attention cost stays near constant. But there’s a third factor that our NTP estimation has already called out.

That factor is transaction volume. If you pull that low enough, your thermodynamic costs nearly vanish and packed binary formats look obviously idiotic. That’s where we are with NTP service today. Consequently, my design sketch for NTPv5 is a JSON profile.

On the other hand, suppose you’re running a Google-sized data center, the kind that’s so big you need to site it very near cheap power as though it were an aluminum smelter. Power and heat dissipation are your major running costs; it’s all about the thermodynamics, baby.

Even in that kind of deployment, NTP service will still be thermodynamically cheap. But there will be lots of other wire protocols in play that have transaction volumes many orders of magnitude higher…and now you know why protocol buffers, which are sure enough packed binary, are a good idea.

The thought I want to leave you all with is this: to design wire protocols well, you need to know what your cost drivers really are, how their relative magnitudes stack up. And – I’m sorry, but this needs emphasizing – I constantly run into engineers (even very bright and capable ones) whose intuitions about this are spectacularly, ludicrously wrong.

You, dear reader, might be one of them. If it surprised you that a credit-card-sized hobby computer could supply Stratum 1 service for a major First-World country, you are one of them. Time to check your assumptions.

I think I know why people get stuck this way. It’s what Frederic Bastiat called a “things seen versus things not seen” problem in economic estimation. We over-focus on metrics we can visualize, measure, and estimate crisply; thermodynamic costs and the benefits of perfect bit density tend to be like that. Attention costs are squishier and more contingent, it’s more difficult to value options, and it’s especially easy to underestimate the attention cost of having to do a major re-engineering job in the future because the design you’re sketching today is too rigid and didn’t leave you the option to avoid a disruption.

One of the deeper meanings of the quote “Premature optimization is the root of all evil” (often misattributed to Donald Knuth but actually by Tony Hoare) is that you should constantly beware of doing that. Nassim Taleb, the “Black Swan” guy, would rightly call it fragilista behavior, over-planner’s arrogance. In the real world, aji usually beats arrogance – not every time, but that’s the way to bet.

93 comments

  1. > I think I know why people get stuck this way.

    There’s a simpler explanation that I’ve directly observed: for many engineers, micro-optimization and over-engineering is fun. So it’s not so much that they have poor intuitions about costs, it’s that in the absence of incentives not to, they’ll geek out on patterns and micro details just for the sheer intellectual exercise (and maybe some showing-off). This is especially prevalent in recent CS graduates I’ve mentored. From what I can tell, they just spent four years being rewarded for this behavior, and it takes a while to unlearn.

    1. >There’s a simpler explanation that I’ve directly observed: for many engineers, micro-optimization and over-engineering is fun.

      Good point. I may fold this into the book version of the essay.

  2. “202J”.

    Stupid speech recognition. It’s a half duplex protocol, 1200 bps in one direction and 75 in the other.

    [ it appears it didn’t actually post the original comment, and I’m too tired to rewrite it now. Sorry. ]

  3. It is true that you need to know whether a binary number is big or little endian on the wire, but you should not normally need to worry about whether to “byte swap” or not in any environment that is at least as high-level as C, since you can write platform-endianness-independent code anyway, and leave the byte-fiddling to the compiler, e.g. https://godbolt.org/z/oBObQZ

    But, I do of course still believe you that this is a common source of bugs.

    1. There is one given byte-order for network protocols; in the C library there are htons()/htonl() and ntohs()/ntohl() to handle differences between *host* order and *network* order, isn’t it?

      1. There is, but some people will ignore that and design binary wire protocols or file formats using little-endian. They will justify it with “all our architectures, currently and for foreseeable future, are little-endian or bi-endian operating in little-endian mode, so we won’t have to byteswap everything every time”.

  4. “And this is my forty years of field experience, with specific and proven domain expertise in wire-protocol design, telling you: This. Never. Happens. The limitations of that style of protocol are inherent, and they are more binding than you understand. You aren’t smart enough to evade them, I’m not, and nobody else is either.” <– Eric

    Counterexample: ASCII is a bit level protocol.

    1. Well, somewhere the rubber has got to meet the road.

      Additionally all the attempts to extend it for other writing systems/more characters have encountered a large number of problems.

      * old systems only supporting actual 7bit values
      * no way to accurately distinguish between different non-ascii encodings without fragile heuristics
      * several ad-hoc ways of embedding data into ASCII (base-32/58/62/64/etc)
      * all the problems unicode encountered that weren’t self-inflicted or inherent to their problem domain.

    2. >Counterexample: ASCII is a bit level protocol.

      A fact I mentioned in the first paragraph of the post.

      Don’t be tendentious. It’s not even the least bit difficult to figure this one out.

    3. And the limitations of ASCII resulted in issues that produced a whole welter of mutually-incompatible workarounds regarding “national” character sets and methods of encoding, despite the fact that the forward-looking creators deliberately engineered ASCII to include an escaping mechanism to support other character sets.

      Now, if you want to claim UTF-8 as a counterexample, that’s fine. But that was designed after dozens of attempts at an ASCII successor, aimed at a known-finite problem (encoding the world’s writing systems), and took quite a while to propagate.

      So, if you’ve got a well-characterized and known-finite task, the ability to iterate dozens of times by many top-end coders, and a testing base of millions, you too might finally outsmart the inherent difficulties of a bit protocol.

  5. Another example: Git transfer protocol is, in the negotiation phase, text (in the form of pkt-lines, length+content). It is very extensible… though it was not without its own design gotchas (retrofitting the v2 protocol into the git:// i.e. plain socket transfer was PITA).

    I don’t think that one would be able to use JSON – the final part of the interaction in a single connection (or “connection” in the case of HTTP(S) transport) is sending the binary packfile: a large amount of densely packed binary data.

    1. My “go to” protocol lately has been this:

      1 byte of length for up to 255 bytes for a “type” string, then 4 bytes of length for that much JSON.

      Pre-specifying a type allows statically-typed languages to figure out how to unmarshal the message in advance. I used “4 bytes of length + that much JSON” for a while but kept getting in trouble when I needed at least a clue about the type in advance. However, that’s a pretty decent protocol if you’ve only got dynamic languages like Python in the mix.

      And then, if you carefully write the protocol implementation so that it never reads ahead of the “that much JSON” part, you can easily write a JSON message that says “OK, after me, there’s a 35,351,235 byte tar.xz file coming, receive it, then go back to receiving JSON”. Or I’ve got some implementations where the client & server negotiate some details between themselves, then the protocol says “And then the TCP socket became a pipe (in basically the UNIX sense).”, and the JSON is forever done after that.
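
      For a rough idea of how little machinery the receive side takes, here it is sketched in Python (names invented, and I’m assuming the 4-byte length is big-endian; the real implementation is the Go one mentioned below):

        import json, struct

        def read_message(f):                          # f: buffered file over the socket
            tlen = f.read(1)[0]                       # 1 length byte for the type string
            mtype = f.read(tlen).decode("ascii")
            jlen = struct.unpack(">I", f.read(4))[0]  # 4 length bytes, big-endian assumed
            return mtype, json.loads(f.read(jlen))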

      You can see there’s not much to the implementation, but I have a Go implementation on GitHub. Here’s a simple file transfer example that transfers the file as a binary.

      It still isn’t the be-all, end-all of protocols by any means, but from experience, damn but it’s a convenient default to reach for. So much work has gone into making JSON slick and you can harness all of that. You just have to mentally keep the limits in mind and not go crazy, but as Eric says, the limits are greater than a lot of people realize.

      1. >It still isn’t the be-all, end-all of protocols by any means, but from experience, damn but it’s a convenient default to reach for.

        That’s a good design. I may steal it sometime.

        1. For cases where bits are still expensive, I’d go along with that – and I’ve seen protocols like that in the past. It is an elegant solution to a problem I hope we will see less and less of. There will always be itty bitty computers where you might be tight, or low speed IoT gadgets or whatever. I still occasionally do consulting where the entire computer for a complex commercial device is an 8-bit PIC (with, ugh, Harvard architecture), and those are memory and speed constrained, which allows my customers to buy the parts at under a dollar.

          But in general, for something like the proposal by Jeremy above, instead of one byte, and four bytes, make it one string of characters, and four strings or a string four times as long, or something a bit more readable.

          After almost 30 years of bits and bytes, I transitioned to ASCII ( before Unicode made sense) back in 1989 for a major and still widely used industry protocol, and it was a wonderful thing. But even today, I find myself unpacking US National Weather Service data messages that are bits and bytes and 16-bit words, and other uglinesses, and it reinforces my desire to stay away from such stuff.

          1. >But in general, for something like the proposal by Jeremy above, instead of one byte, and four bytes, make it one string of characters, and four strings or a string four times as long, or something a bit more readable.

            I would pretty certainly have done that myself once I started tinkering with it. The only parts there’s any case for leaving in binary are the big blobs after the YAML stretches.

            1. You’re right. I pushed a change to the Go library to make it two ASCII-number delimited strings, and add newlines after the type and the JSON fields.

              I left both as length-delimited strings to A: keep the documentation on type names simple and B: because while you can spec a JSON blob to be on one line, I’m not sure all libraries support that, better to let it be length-specified. Plus this lets you turn on “Go ahead and fluff up the JSON with whitespace for debugging” if you want.

              1. >I pushed a change to the Go library to make it two ASCII-number delimited strings, and add newlines after the type and the JSON fields.

                What’s the name and URL? I might want to use this sometime.

            2. Actually, the met-data culprit is the World Meteorological Organization. BTW, they are also the organization which insists that their standard for longitude is 0 ≤ LON < 360, in contradiction with ISO Standard 6709, treaty obligations to which all the major players are signatory, and centuries of practice by everyone else.
            Plotting WMO-format data on a map is an absolute pain.

  6. It seems to me that this is all based on the false dichotomy of “fast, dense, unfriendly binary protocols” vs. “discoverable, easy to use JSON protocols”. The post even goes so far as to say these two goals are “diametrically opposed”.

    One huge problem with JSONifying the world is that you encourage the proliferation of “shotgun parsing”. Just because you can go from a JSON string to a Python dict in one line of code, it isn’t necessarily true that the result is a whole lot better (in terms of programmer productivity, security etc etc) than memcpying a fixed-size binary message over a packed C struct, much like you may have done 20 or 30 years ago. You still haven’t done any real parsing or validation.

    A FlatBuffers schema with long, friendly, discoverable field names is just more useful to me as a programmer than spec that says “OK, we’re using JSON and the messages look like this” – and means you get all the advantages of the efficient encoding, while still being able to round-trip to JSON.

    I also question the discoverability of a JSON spec where many of the keys are 2 characters long, and the wisdom of using JSON for a protocol which you probably want to be accessible to even the tiniest of microcontrollers.

    1. >It seems to me that this is all based is based on the false dichotomy of “fast, dense, unfriendly binary protocols” vs. “discoverable, easy to use JSON protocols”. The post even goes so far as to say these two goals are “diametrically opposed”.

      Mentioning JSON in that attempted summary is your first mistake. JSON happens to be handy here but it’s not essential to the argument.

      Imagine, say, a metaprotocol like JSON but without the requirement that you quote field tags, or write the outermost { }. That would do just as well and meet every desideratum I described. And I happen to know, because I’ve done it, that it’s possible to write for a metaprotocol in this class a very small lightweight parser in C that uses only static storage and can be embedded anywhere.

      1. > JSON happens to be handy here but it’s not essential to the argument.

        Right. Lately, I’ve been liking CBOR for its combination of binary efficiency and human discoverability. (Though I probably use enumerations in maps instead of just strings more than is strictly speaking necessary.) And even though CBOR is binary and hard to see, there are general-purpose easy-to-use tools to turn it into a variety of other formats that are easy to read (usually JSON).

    2. I make my living working in microcontroller embedded C.

      We use JSON to communicate with web stuff. The tradeoffs are just too favorable.

  7. I’m a big fan of ASCII, hopefully human readable, protocols on the wire and I accept all your points about the dangers of binary (especially packed) protocols.

    However, all is not rosy with self-describing ASCII protocols. They do separate the concerns of reading, parsing, deserialising and processing much more than binary protocols, but they give you very few guarantees of correctness above the parsing layer. A given packet can still cause semantic and logical errors in the code and the programmer must still be just as defensive here as they would have to be with a binary protocol.

    Unfortunately the narrative around these protocols is that they can’t go wrong in this way because people don’t appreciate the difference between the different protocol layers.

    JSON in particular is tricky with numbers. Lots of parsers assume that those decimal literals are floating point numbers (as they are in JavaScript) so there are a bunch of encodings that are not useful or canonical and it’s very difficult to work out what those are without doing a full decode, re-encode operation and comparing before with after.

    This means that

    (a) in practice you can’t just add precision because many parsers in many languages won’t do anything useful with it.

    (b) Unless you have a single, canonical serialisation for each possible value of your internal data structure, you leave yourself vulnerable to attacks on the cryptography that you use. Often, but certainly not nearly always, these are impractical but you really must do the extra work to verify that when you’re standardising things and that can be tricky to get right and expensive to get wrong. Simple things like leading zeros, UTF-8 canonicalisation and significant whitespace can spoil a cryptographic checksum at the receiver. Because these differences are not meaningful in JSON different libraries on different systems are allowed to encode in different ways. Therefore, you have to compose the different technology in your serialisation / deserialisation stack very carefully to make sure that everything works out from an interoperability perspective.
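
    A quick illustration of the whitespace problem alone (Python, purely illustrative): the same value, two legal serialisations, two different digests:

      import hashlib, json

      a = json.dumps({"t": 1.5}, separators=(",", ":"))   # '{"t":1.5}'
      b = json.dumps({"t": 1.5}, indent=1)                # same value, extra whitespace
      print(hashlib.sha256(a.encode()).digest() ==
            hashlib.sha256(b.encode()).digest())          # False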

    OTOH (and AFAIK), floats are the correct data type for the underlying GPS floats. They’re not the correct datatype for NTP epoch-style integers. For that, you’d have to specify an ASCII serialisation of a number to a string such as “1234” and then handle that explicitly in the higher layers of your deserialiser. Once you start doing this at the deserialisation layer, the hoops you have to jump through start to look kind of similar to the hoops you have to jump through with binary protocols at the framing layer.

    So, yes: human readable protocol encodings are good.

    But, they’re not a free lunch and you have to think of lots of things that most practitioners are not only not aware of but are actively encouraged to disregard because “those kinds of things only apply to binary protocols”.

    1. >But, they’re not a free lunch and you have to think of lots of things that most practitioners are not only not aware of but are actively encouraged to disregard because “those kinds of things only apply to binary protocols”.

      You’re not wrong. But I think the problems in this application are quite manageable.

      >(a) in practice you can’t just add precision because many parsers in many languages won’t do anything useful with it.

      Not really a blocker here. No implementation of JSON parsing is going to choke on large integral parts, and only NTP itself needs to care about extracting full precision from the fractional parts. Because I’ll have control of the reference implementation codebase at both server and client ends, I’ll be able to use a semi-custom parser that has been unit-tested to verify correctness. In fact I’ve already written and tested that parser.

      >(b) Unless you have a single, canonical serialisation for each possible value of your internal data structure

      That’s not actually difficult to arrange either. The data ontology can be reduced to non-negative integers and strings.

  8. One reason developers avoided self-describing wire formats for so long is because for a while the only widespread language that could be used to describe them was XML.

    Ugh.

      1. I have more or less this reaction to JSON. :-( Though it’s certainly an improvement over XML. XML is only nominally human-readable.

        I wish there were a good serialization format that covered lists, dictionaries, integers, reals, and strings, which didn’t require ubiquitous quoting, and was actually nice to read and write. That would cover some huge percentage of use cases, I think, and make my life much less painful.

        YAML comes closest of anything I’ve tried, but I’m told it’s a monster to implement.

        1. YAML tries very hard to be human-readable and human-writable, at the cost of there being More Than One Way To Do It. This makes YAML awkward to produce by a machine — it just can’t choose the “right” representation. Not without a lot of heuristics, anyway.

          Non-heuristic-driven machine-generated YAML devolves into JSON.

          1. It reminds me of that Stroustrup quote, “Within C++, there is a much smaller and cleaner language struggling to get out”. I feel that way about YAML.

            I suppose it proves that a human-friendly data format is *possible*, at least, which is nice.

        2. I’ve found TOML to satisfy those requirements quite nicely. It’s designed for human read/write first, data serialize/unserialize only later; my use cases for it are mostly programs of the form “write some autogenerated data, tweak it in a text editor, read it back in and process it”, but it’s also widely used for config files.

          1. I’ve never heard of TOML before. After a quick look, I like it as a file protocol. However TOML looks to be tricky as a wire protocol encoding because of the uncertainty of handling EOL and the fact that unless you use lots of {} you need EOLs.

            Having said that, the single line format with {} does look like a useful extension to JSON to allow for more types.

            Also I’m quite possibly missing something from reading the spec in 5 minutes

            1. >I’ve never heard of TOML before.

              I just added TOML to loccount’s recognition set. And, while I was at it, JSON, YAML, and INI.

  9. Seems like PNG might have gotten this mostly correct. The “type” fields are 32 bits, but they are also somewhat readable tags when interpreted as ASCII. Everything that’s variable length gets a length field, which is isomorphic to having a predictable end tag.

    In the little engineering I do, I just use JSON. I might use a subset of Dhall if I actually needed to send functions across the wire. If bit density ever really became a point of pressure (that can’t be relieved via snappy or some other streamable compression), we could design a “tight” protocol to be used just at those places where we need to relieve the pressure. That’s never been an issue.

    1. >Seems like PNG might have gotten this mostly correct.

      That’s because PNG took as its model the Amiga IFF format — hands down one of the most flexible binary formats ever devised.

    2. >Seems like PNG might have gotten this mostly correct

      Agreed. I almost mentioned this in the OP but decided that talking about storage formats would be out of scope.

      1. Rather than being out of scope, I say they are the same thing: a wire and a hard drive are both unavoidably serial, which is where all of the difficulties with them come from.

        1. >Rather than being out of scope, I say they are the same thing: a wire and a hard drive are both unavoidably serial, which is where all of the difficulties with them come from.

          It’s almost a fair cop. The difference is that random access to a storage file is cheap, and what you’ve seen is not irretrievably thrown away by falling out of the input buffer.

          1. > The difference is that random access to a storage file is cheap

            Says the person who hasn’t worked in the data storage industry! Non-central fallacy for the win!

            1. Well, compared to random access to a resource out on the Internet, I’m sure it is cheap. Compared to something in RAM, access to a storage file is incredibly expensive, but that’s not what he was talking about.

  10. >..it’s especially easy to underestimate the attention cost of having to do a major re-engineering job in the future because the design you’re sketching today is too rigid and didn’t leave you the option to avoid a disruption.
    >In the real world, aji usually beats arrogance – not every time, but that’s the way to bet.

    In this, as in most things, balance is key: How many super-over-engineered meta-frameworks with exactly one use-case implementation have you run across? Those are what happens when you let a junior dev ‘be flexible’ and solve the ‘general problem’ instead of the particular one. If the need is for a floor wax, does it need to be able to be a floor wax *and* a dessert topping? I think you’ll agree that as much as new grads need to be restrained from micro-optimizations, they also need to be restrained from inventing entire platforms to solve small problems.

    Maybe this difference is due to data vs code. Or maybe it’s more about the use-lifetime of the work in question, since specs are designed for a longer use-timeframe than code. I just wanted to point out that ‘aji over arrogance’ can be misapplied.

  11. > the production NTP load on a a national time authority for the most populous nation in Europe (excluding transcontinental Russia) wouldn’t come even close to maxing out my home broadband or even just one of the Raspberry Pi 3s on the windowsill above my desk. With all six of them and only a modest bandwidth increase I could probably come pretty close to servicing the Stratum 2 sites of the entire planet in a pinch, if only because time service demand per head is so much lower outside North America/Europe/Japan.

    Meanwhile, in other parts of my inbox, the NTP pool project is trying to figure out what to do with an estimated 10 million NTP requests per second from China, and individual server operators struggle to scale a single NTP server on a fast(er than a Pi) CPU to handle even 0.1% of that traffic (the best solution so far seems to be to install 1000 Raspberry Pis in China, but progress on that approach has been…slow). The juxtaposition of these two views of the NTP protocol amuses me.

    I don’t doubt you could serve the NTP pool and all the corporate LANs in US/EU/JP that do run a stratum 2 (and for some reason don’t run their own stratum 1), maybe even from a single Pi. There’s only a few tens of thousands of those, and the ones that run their own stratum 1 don’t need to bother your server. The per-capita NTP pool capacity in those regions is huge, utilization is below 1%. In other parts of the world, NTP service utilization is well over 100%, and a change in packet size would matter a lot.

    Also: At one point in the distant past it was important for NTP packets to all be the same length, to avoid adding variable (especially asymmetric) transmission or CPU processing delays. This was before CPUs had to busy-loop to wait for nanoseconds to go by.

    1. >The juxtaposition of these two views of the NTP protocol amuses me.

      Zounds. Somebody managed stratum fanout…poorly.

    2. I was thinking about this myself. I’m not at all sure the German stratum 1 server is the right thing to think about. I’d be much more interested in time.nist.gov, or time.microsoft.com, or time.apple.com.

      1. Indeed, from what I’ve seen, most OSes’ default NTP setups list the OS vendor’s NTP pool as the default NTP source.

    3. Just as ISPs normally provide a pair of DNS server IPs to clients along with their DHCP leases, they should also give a pair of NTP server domain names (which could actually be round-robin DNS pools). There is no reason for most machines to make an NTP query outside of the ISP’s network, if the ISP is able to satisfy those queries internally.

      1. In fact Microsoft OS’s do that by default in an Active Directory domain-joined environment.

        All domain-joined clients use their AD servers as their NTP servers by default. Only ADC’s by default look anywhere else.

        But it would make sense to specify that in DHCP. Relatively easy as well (at worst it’s an extra DHCP field, the biggest stumbling block is getting client OS’s to honour it)

      2. “There is no reason for most machines to make an NTP query outside of the ISP’s network, if the ISP is able to satisfy those queries internally.”

        Actually there is. What if you don’t trust the ISP? A lot of things (e.g. DNS TSIG, TLS certificate negotiation to name two) require accurate time on both ends to work. If I were a nasty ISP or country I might decide to break attempts to access things securely outside my ISP/country by providing an NTP time that was, say, 10 minutes in the future.

        1. The same applies for DNS, really. And a non-trivial number of folks already use nameservers other than the ones their ISP supplies. Those currently using NTP servers outside their ISP could keep doing so – the DHCP-provided ones would be defaults.

          I mean, unless the ISP is malicious and blocks requests to other machines, but they can already do that.

  12. It’s interesting to think over the encoding of protocol buffers after reading this article. Their goal seems to have been to make something that was binary-enough to dissuade people from making their own thing, while dealing with some of the worst problems that people tend to come up with in binary protocols.

    * Integers are 64-bit by default, and people seldom try to shrink them because they use variable-width encoding. If your actual numbers are small, they’ll be small on the wire (see the sketch after this list).

    * All fields are tagged with 64-bit field ids, and order doesn’t matter. Want to add a new field? You can, no sweat.

    * Unrecognized fields are ignored, so extensions to what you send on the wire won’t crash older software. If you modify and re-serialize a message with unrecognized fields, they’ll be passed through.

    * Strings use the One True Encoding, UTF-8. Compare with XML or JSON.

    * The grammar is LL(0) as god intended. This makes writing a correct parser surprisingly easy.
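
    For the curious, the varint trick behind the first point above is small enough to sketch in a few lines (illustrative Python, unsigned values only):

      def encode_varint(n):
          # Base-128 varint: 7 payload bits per byte, high bit set on all but the last.
          out = bytearray()
          while True:
              byte = n & 0x7F
              n >>= 7
              if n:
                  out.append(byte | 0x80)
              else:
                  out.append(byte)
                  return bytes(out)

      print(encode_varint(3).hex())      # '03'   -- one byte on the wire
      print(encode_varint(300).hex())    # 'ac02' -- two bytes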

  13. Flip-side argument which might apply in this case:
    Bit-packed protocols (or at least fixed-width ones) can be guaranteed to be processed in a fixed amount of time. This is a very nice property for real-time systems. Granted, the easier way these days to manage real-time requirements is just to throw 10x the required processing power at a problem and forget about the details. But provability does have nice properties on occasion.

    1. >Bit-packed protocols (or at least fixed-width ones) can be guaranteed to be processed in a fixed amount of time. This is a very nice property for real-time systems.

      Yeah, I’ll buy that counter-argument for this particular edge case.

    2. Yeah, I run into a lot of these in the industrial automation industry. Ranging from tiny devices like load cells that just constantly stream “packed” data as long as they have power (usually in a “protocol” that originated for RS-232 and simply got copied to a TCP/IP connection) to complex hard-realtime kinematic control systems that *have* to have their entire data set transferred without fail every X ms.

      A lot of this is based in very old cruft and backwards compatibility. But one of the odder cases, though, is the high-speed status data stream from the Universal Robots controller. It’s completely bit-packed, and the *only* way to parse it is to know in advance exactly which bytes in the stream are part of which char/word/double-word/etc. The controller is quite new, and has plenty of CPU grunt, so I don’t see any reason for *not* using a more discoverable, human-readable, easily-extensible format. I suspect whoever got assigned to write it was thinking “old school.”

      1. >It’s completely bit-packed, and the *only* way to parse it is to know in advance exactly which bytes in the stream are part of which char/word/double-word/etc. The controller is quite new, and has plenty of CPU grunt, so I don’t see any reason for *not* using a more discoverable, human-readable, easily-extensible format. I suspect whoever got assigned to write it was thinking “old school.”

        Yeah. A classic case of what I think of as “EE syndrome”.

        This kind of thing is why I tend to start ranting and cursing when fscking idiots try to sell me on the superiority of packed binary.

  14. In reading this post, I see a lot of similarity to your TAOUP section on config file formats. Seems like a lot of the good design rules are common to both file formats and wire protocols.

    1. >Seems like a lot of the good design rules are common to both file formats and wire protocols

      Agreed. As I said in response to an earlier comment I almost mentioned this in the OP but decided that talking about storage formats would be out of scope. I might add a note about it in the book version.

      1. Other than the topic of indexing, there’s not a terrible difference: Both are serialization formats, and it’s not at all uncommon for programs to not know whether the serialized form is going over the air or onto disk.

  15. I find this amusing: [1]

    The Rsync Protocol

    “A well-designed communications protocol has a number of characteristics.

    – Everything is sent in well defined packets with a header and an optional body or data payload.
    – In each packet’s header a type and/or command is specified.
    – Each packet has a definite length.

    In addition to these characteristics, protocols have varying degrees of statefulness, inter-packet independence, human readability, and the ability to reestablish a disconnected session.

    Rsync’s protocol has none of these good characteristics. The data is transferred as an unbroken stream of bytes. With the exception of the unmatched file-data, there are no length specifiers nor counts. Instead the meaning of each byte is dependent on its context as defined by the protocol level.”

    There are probably some reasons for that. But it has succeeded (whether intended or not) in dissuading almost everyone who might think to create anything wire-compatible with rsync. Pity.

    [1] https://rsync.samba.org/how-rsync-works.html

  16. Have you met CBOR / YANG?

    It’s sort of JSON / JSONSchema like thing but binary.

    The interesting thing about it, to me, is that it takes discoverability one step further, in that it comes with lots of cunning that in principle should allow you to automagically pull (or reuse) some well-known schema and have a well-known meaning ascribed to the fields.

    I work in the embedded tiny thing / very low bandwidth RF link space.

    1. >It’s sort of JSON / JSONSchema like thing but binary.

      I’m hostile to application protocols that I can’t eyeball, and I don’t work anywhere they’re a good enough design choice to overcome my hostility. So I haven’t touched these tools.

      1. A good compromise is a protocol that takes something human-readable like XML or JSON and runs it through a compressor like gzip or bzip. (If memory serves, OpenOffice formats are ZIPped XML.) All you have to do is run your capture through the decompressor and your Mk. 1 Eyeballs can be used, and the compression will almost always produce output nearly as small if not smaller than a binary format would.
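
        As a rough sketch of that compromise (the record fields here are invented for illustration; exact sizes will vary with the data):

            import gzip, json

            # A batch of repetitive JSON reports, the kind of stream where textual
            # redundancy is high and gzip earns its keep.
            reports = [{"device": "/dev/ttyUSB0", "lat": 40.0 + i * 1e-6,
                        "lon": -75.0 - i * 1e-6, "time": 1552580000 + i}
                       for i in range(100)]
            text = json.dumps(reports).encode("utf-8")
            packed = gzip.compress(text)

            print(len(text), len(packed))   # gzip shrinks the repetitive text substantially
            assert json.loads(gzip.decompress(packed)) == reports   # still eyeball-able after gunzip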

      2. Admittedly you have to interpose a cbor to XXX filter between you and the data… but even json I always end up interposing a json pretty printer like jq into the pipeline.

        The nice thing about schemas is they can become the source of agreement between parties, and can carry the unstated burden of default values.

  17. You say in the spec that you want to start with “+” and a packet type. Don’t do that. Make it a protocol-version indicator, and follow that with the packet type such as “+5:0”. That way you have already provided the way for NTPv6 to cleanly succeed NTPv4 when the time comes.

    1. >You say in the spec that you want to start with “+” and a packet type. Don’t do that. Make it a protocol-version indicator, and follow that with the packet type such as “+5:0”. That way you have already provided the way for NTPv6 to cleanly succeed NTPv4 when the time comes.

      Here’s a lesson I learned from the PNG chunk system: When you go to a tag-value format like this, “version” loses the meaning it has when the format is rigid and you can only make big-bang changes. You evolve the format by adding new fields and ceasing to generate old ones. No field ever changes semantics after it’s defined.
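
      For illustration, here is what that evolution style looks like in a tag-value format (field names are hypothetical): an old reader simply ignores tags it has never heard of, so a newer sender can add fields without a version bump.

          import json

          KNOWN_FIELDS = {"t", "precision"}   # all this (old) reader understands

          def read_packet(wire: str) -> dict:
              msg = json.loads(wire)
              # Unknown tags are silently ignored rather than treated as errors.
              return {k: v for k, v in msg.items() if k in KNOWN_FIELDS}

          old = read_packet('{"t": 1552580000, "precision": -20}')
          new = read_packet('{"t": 1552580000, "precision": -20, "leap_smear": true}')
          assert old == new   # the added field is invisible to the old reader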

      1. But “ceasing to generate” does not relieve implementations of the burden of being able to handle those old packet types the way that an actual version number does. Maybe that’s not that big of an issue for NTP.

        I suppose if you have high confidence NTPv6 is that far off, you(r successors) can use any non-numeric character after the “+” to indicate that break.

        1. Could you have something like a “deprecations” field, listing all fields that the implementation does not implement? I suppose there is the risk that in the far future this list could become arbitrarily long. I suppose it could be shortened by periodically defining groups of fields to be listed as a single deprecation item, but that might run afoul of Eric’s “don’t change semantics after definition” rule, and a sufficiently old implementation might not understand group deprecations defined after it was written. OTOH, you could deal with the latter issue by allowing an implementation to query another for the exact fields included in a deprecation group passed by the other implementation that the first did not understand. OTGH, I suppose this could allow for a DOS attack in which a malicious implementation could advertise a common deprecation group, and then, when asked for a definition of that group, could send back a definition including all fields used in the protocol, rendering the victim unable to interact with implementations advertising that deprecation group until the group definitions were reset.

        2. >But “ceasing to generate” does not relieve implementations of the burden of being able to handle those old packet types the way that an actual version number does. Maybe that’s not that big of an issue for NTP.

          Experience with the PNG chunk system suggests that while this is a theoretically major problem it’s practically very minor to nonexistent.

          I used to be one of the PNG maintainers, actually wrote the support for a couple of minor chunks, so I’ve had a close up view of this. And I expect NTP to have less churn than encodings of graphics.

  18. I’m the lead author of NTS and I designed the record layout for NTS-KE, which is the key establishment protocol that I’m not sure why you’re calling “unnamed”.

    I’ll organize this rebuttal into three parts. First I’ll address why JSON is a particularly terrible choice for a wire format no matter what the application, and how its flaws bite your NTPv5 proposal particularly hard. Second, I’ll rebut some of your general arguments against binary protocols. I’ll conclude by mentioning a few considerations specific to NTP and to NTS-KE.

    As for JSON, I really don’t have much to add to Parsing JSON Is a Minefield. In a discussion we had on Signal you minimized this article as only revealing implementation flaws, while maintaining that JSON itself is sound. Eric, over 50 different implementations were tested and no two of them recognized quite the same language. Do you think all of those implementers are at fault? The blame here rests almost entirely with its specification(s), mostly for being vague and contradictory but also for (clearly) making impractical demands such as using a number format incompatible with IEEE754.

    In your NTPv5 design, you’ve attempted to avoid some of these issues by prohibiting certain JSON features that trigger them. This makes the ill-specification problem worse, not better, by adding even more ill-specified requirements. Now to decide whether a message is syntactically valid, you must first recognize valid JSON, and then recognize the constructs you’ve prohibited so that you can reject those.

    The flaws in JSON don’t just come up in weird edge cases: if you did a direct translation of NTPv4 into JSON syntax, then implementations using different JSON libraries would often completely fail to interoperate. This is because, due to the mismatch between what numbers are representable on the wire and what numbers are representable in a floating point register, numbers can’t be relied upon to round-trip cleanly through parsing and printing. As a result,
    the value that a client parses out of an origin timestamp won’t reliably match what it put into the corresponding transmit timestamp, and thus the response will be rejected. Your NTPv5 proposal dodges this particular problem by dispensing with origin timestamps altogether, but that the viability of your message format is sensitive to such things still goes to show what a foundation of muck you’re building on.
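
    A quick illustration of that failure mode (the packing shown is a naive direct translation, exactly the thing being warned against):

        import json

        # A 64-bit NTPv4 timestamp: 32 bits of seconds, 32 bits of fraction.
        seconds, fraction = 3762000000, 0x89ABCDEF
        raw = (seconds << 32) | fraction

        # Naive translation: ship it as one JSON number of fractional seconds.
        wire = json.dumps(seconds + fraction / 2**32)
        parsed = json.loads(wire)

        # A double cannot hold all 64 bits, so the echoed value no longer
        # matches the original bit-for-bit.
        recovered = (int(parsed) << 32) | round((parsed % 1) * 2**32)
        print(raw == recovered)   # False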

    JSON’s lack of economy with message size is also a problem for NTP. NTP needs to run over UDP, and it can’t allow packets to fragment; trying to relax either of these requirements would completely trash timekeeping precision. (You can’t rely on protocols that depend on fragmentation to work even tolerably well; routers tend to deprioritize fragmented packets, firewalls tend to drop them, and they inherently increase loss and jitter.) That means that packets need to reliably be 1280 bytes or less, including IP and UDP headers. An NTPv4 packet with no extensions sent over IPv6 is 40+8+48=96 bytes. NTS extensions, in the common case where you’re using AES-128-SIV, cookies follow the recommended format, and only a single cookie is sent, add another 168 bytes (give or take a few depending on how padding works out), bringing the total to 264; still comfortable. But in some cases where several cookies need to be sent to recover from previous packet loss, we already end up skirting that 1280-byte threshold pretty closely. Having to base64-encode all our cookies and ciphertext would put us over it with some regularity.
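
    As a rough check of that budget (the 168-byte single-cookie figure is from the paragraph above; the per-cookie size below is an assumption made only so the multi-cookie case can be illustrated):

        import math

        HEADERS = 40 + 8            # IPv6 + UDP
        NTP_BASE = 48
        NTS_ONE_COOKIE = 168        # NTS extensions with one cookie, per the figure above
        EXTRA_COOKIE = 100          # assumed size of each additional binary cookie
        LIMIT = 1280                # the no-fragmentation ceiling

        def size(extra_cookies: int, base64: bool = False) -> int:
            payload = NTS_ONE_COOKIE + extra_cookies * EXTRA_COOKIE
            if base64:
                payload = 4 * math.ceil(payload / 3)   # +33% before any JSON field names or quoting
            return HEADERS + NTP_BASE + payload

        print(size(0), LIMIT)            # 264 vs 1280: the single-cookie case above
        print(size(7), size(7, True))    # 964 vs 1256: base64 alone eats most of the remaining
                                         # headroom, before any JSON framing is added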

    Moving on to your critiques of binary protocols, which can be summarized as emphasizing the value of human-readable and self-describing formats while de-emphasizing the value of compactness and the ability to write a performant parser.

    I agree that in many cases, message size and parsing speed will never cause a bottleneck, and that in those cases a format that optimizes for these qualities won’t buy you much. But the cases where they matter are a lot more common than you realize, and you’ve failed to
    realize that NTP is one of them, as I’ve just finished arguing. You’re in good company; your attitude was a trendy one in the ’90s and early ’00s, and today we’re still paying the cost of the mistakes that it led to: hence all the effort to switch to HTTP/2.0, for example.

    (Tangent: stop calling NTP and NTS-KE “packed”. They aren’t. Packing would mean replacing the fixed-width fields with variable-width ones using some sort of Huffman coding)

    Your advocacy for self-describing formats is a non sequitur with respect to the rest of your argument, since whether a format is binary or textual is entirely orthogonal to whether or not it is self-describing. CBOR is a self-describing binary format. A CSV file with no header line is textual but not self-describing. Debating self-describing formats is therefore a digression, but I’ll address them very briefly: I think they’re usually a mistake. While self-describing formats save you from needing a schema for parsing, all the information contained in a schema is still necessary for semantic validation. Non-self-describing formats such as Protobuf tend, out of necessity, to have good toolchains for generating parsers and validators from a schema. Some self-describing formats have such tools too (XML, dumpster fire though it is, at least has this much going for it), but too often the validation logic is instead something hand-written, ad-hoc, and bug-ridden. In code that consumes JSON, this manifests itself as lots of tedious checks to make sure that all the fields are of the type you expect them to be.

    Moving on to human-readability. Human-readability is essential, of course, for things that are actually intended to be directly read and written by humans. Programming languages need to be human-readable, because they directly serve as the interface between the programmer
    and the machine. But wire protocols are not for humans; they’re for communication from one machine to another. Occasionally, a programmer or network administrator needs to inspect what’s going over the wire or to craft messages by hand; while this is a minority use case, we
    agree that it clearly needs to be accommodated. Sending text over the wire, however, is a lazy and wasteful means to that end.

    Rendering wire messages into a format consumable by humans is the responsibility of your tooling. Protobufs are particularly good at this; their specification includes a canonical JSON encoding and their official implementation includes library calls for converting back and forth. This is perfect for when you’re working from a REPL: the programmer gets to work with text during the debugging session, but the endpoints in production never have to handle it. And in the case where you’re not at a REPL but rather analyzing a packet capture,
    there’s even a tool for generating Wireshark dissectors from Protobuf schemata.

    Binary protocols plus tooling for interfacing with humans is how the whole rest of the stack below the application layer has always worked, and it works so smoothly that you seldom have to think about the fact that what’s actually going over the wire is electrons, not a stream of bytes. And at several layers, making your wire protocol human-readable just isn’t an option. You’ve already conceded that the lowest layers (PHY and MAC) obviously can’t be, so I won’t harp on those. But the same goes for any layer involving cryptography, such as TLS. Ciphertext is not made human-readable by rendering it as a JSON string.

    Finally, a couple considerations specific to NTP and to NTS-KE. In arguing these final points I make no claim of generalizability to other protocols.

    By using anything other than a fixed-field representation for NTP, you require a complete redesign and increase in cost in a lot of embedded hardware that deals with it. You argue that even a Raspberry Pi can handle NTP at pretty high loads, and that this would continue to be
    true if you added the cost of JSON parsing. But the distinction between a Raspberry Pi and a high-end rack server is not interesting to me; those are both complete general-purpose computing platforms with maybe one order of magnitude difference in processing power. The interesting distinction is between a Raspberry Pi and a couple hundred GEs in a dusty corner of an FPGA, which is how clockmakers like Netnod actually do this stuff. Earlier drafts of NTS used PKCS#7 padding in one place to meet RFC 7822’s alignment requirements. We changed this to a counted-string representation because hardware makers complained that handling PKCS#7 would require too many gates. I haven’t asked them what having to handle JSON would do to them because it’s obvious it would be many orders of magnitude worse.

    I’m not defending NTP’s syntactic ambiguity on where extension fields end and the MAC field begins; nobody is. That was just an outright error on Dave Mills’ part and, whatever form NTPv5 takes, it will correct it. RFC 7822 fixes the ambiguity for the meantime. It does so in pretty much the least kludgy way possible without bumping the protocol version, but everybody agrees it’s still a kludge.

    As to the type-length-value (TLV) scheme used in NTS-KE: the working group settled on this unanimously a few years ago after some discussion. An earlier NTS proposal before I came on the scene used ASN.1; its own authors already regretted this choice by the time I proposed doing away with it and were enthusiastic about doing so. Nothing textual was ever considered, because there would be no benefit to it when most of what we’re passing around consists of cryptographic blobs. The choice that actually needed some consideration was between something dead-simple but inflexible and ad-hoc like what you see, versus something more complex but flexible and with good tooling available, like Protobuf. We concluded that the basic TLV scheme would meet all our likely needs, and that it was so simple that it would be easier for projects to write their own tooling for it than it would be to import ready-made tooling for anything more complex. TLV won’t scale if we ever want to extend NTS-KE with something that requires more complex data structures, and at that point I would advocate something Protobuf-shaped. But I’ve foreseen this, and that’s why I mandated ALPN: it gives us unlimited chances to take a mulligan on past design choices without breaking backward compatibility.
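
    For readers who haven’t seen one, a minimal sketch of a dead-simple TLV record stream of the kind being described (a 16-bit type word whose high bit marks a critical record, a 16-bit body length, then the body; consult the actual NTS-KE specification for the authoritative layout):

        import struct

        def encode_record(rec_type: int, body: bytes, critical: bool = False) -> bytes:
            # !HH = big-endian 16-bit type word, 16-bit body length
            return struct.pack("!HH", rec_type | (0x8000 if critical else 0), len(body)) + body

        def decode_records(stream: bytes):
            offset = 0
            while offset < len(stream):
                word, length = struct.unpack_from("!HH", stream, offset)
                yield bool(word & 0x8000), word & 0x7FFF, stream[offset + 4:offset + 4 + length]
                offset += 4 + length

        msg = encode_record(4, b"\x00\x0f", critical=True) + encode_record(5, b"opaque cookie blob")
        for critical, rec_type, body in decode_records(msg):
            print(critical, rec_type, body)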

    1. This guy absolutely took the article author to school. The author has clearly never worked in a real-world environment where constrained devices are at play. Preferring human-readability over space-efficiency in a protocol is the very first sign of the author’s ineligibility as a protocol designer.

      1. >This guy absolutely took the article author to school.

        I’ll be writing a detailed response within a few days. It will take a bit because one of Daniel’s arguments may be an actual showstopper; I need to do some detailed packet-size modeling to check. But most of his arguments mainly reveal a lack of understanding of the problem space. That’s OK; I wouldn’t do any better at designing a cryptosystem. And he’s at least making intelligent mistakes, not stupid ones.

        As for you, Some Guy, you get to tell me I’m disqualified as a protocol designer only after you have fielded an application protocol that has been as successful as GPSD’s over a decade of time and hundreds of millions of deployments.

        1. gpsd was originally written by Remco Treffkorn with Derrick Brashear, then maintained by Russell Nelson. It is now maintained by Eric S. Raymond.

          1. >It is now maintained by Eric S. Raymond.

            Neither Remco nor RUSS wrote the JSON reporting protocol. For good or ill, that was all me.

            Actually you’d be hard put to find any code in GPSD that predates me at this point – I reworked it all pretty thoroughly. Had to for a couple of reasons, including the driver system (it only did basic NMEA when I got my hands on it) and the automatic protocol/baud-rate detection. By the time those two big changes were done…well, even if I’d been trying to preserve the ancestral code there wouldn’t have been much left.

            1. > Neither Remco nor RUSS wrote the JSON reporting protocol. For good or ill, that was all me.

              That was for ill.

              As for me, I’m an active Linux kernel contributor. But unlike some people (cough,cough) here, I am not going to claim credit for its success.

              1. >But unlike some people (cough,cough) here, I am not going to claim credit for its success.

                What a very vivid imagination you have. I’m sure it will be a comfort to you in your declining years.

                1. Just retire. IQ falls pretty fast for some people as they age. Yours is obviously below the threshold of a useful programmer already.

        2. While I look forward to your follow-up article and hope to see this debate continue, one recommendation I would strongly make is to stop attempting to argue from authority. That is a basic logical fallacy which you committed in your original posting and strongly amplified in this response. It even comes across as hostile and reckless, weakening your overall argument.

          Using specific examples is a good counter to that, and you even gave a couple in your article. Relying instead on logical fallacies and casual dismissals such as “his arguments mainly reveal a lack of understanding of the problem space” is fallacious.

          I’d even specifically counter your arguments around the GPSD JSON protocol as a comparison: the GPSD JSON protocol is almost exclusively used for serving data to, if not a single client, then fewer than a dozen per instance of the server. I have personally designed wire formats that, while not nearly as *widely deployed*, had more significant scaling concerns per server instance.

          Now with that last paragraph, how do you feel? More angry? Less likely to take my position into deeper consideration? That’s a position of authority. You’re well respected and you’re better than that.

          Sincerely, Another “Some Guy”

          1. >Relying on logical fallacies and casual statements such as “his arguments mainly reveal a lack of understanding of the problem space” are fallacious.

            I don’t intend to rely on it. Response coming soon.

            And don’t mistake an argument from experience for an argument from authority. Not at all the same thing. I did use the word “authority”, but I did not mean by it what is meant in that classical logical fallacy.

    2. Daniel, your argument raises several separate issues all of which I will engage. I’m going to start with your first one, which happens also to be the easiest to dispose of and is the main reason I described you to another commenter as not understanding the problem space well.

      You wrote:

      >As for JSON, I really don’t have much to add to Parsing JSON Is a Minefield. In a discussion we had on Signal you minimized this article as only revealing implementation flaws, while maintaining that JSON itself is sound.

      Yes. In the real world, JSON itself is quite sound, even though it’s underspecified and many general-purpose JSON parsers have implementation bugs, which seriot.ch correctly pointed out.

      If that seems like a weird and crazy thing to say, it’s only because you don’t understand how JSON is actually deployed in production. I do, because I’ve done it myself and I know of many similar deployments.

      GPSD’s JSON protocol has been in the field for ten years now over hundreds of millions of deployments. In all that time, the number of defect reports we have had from the field due to seriot.ch’s edge cases is zero. We’ve never had even one. And I can promise you we never will.

      This is because GPSD uses a subset of theoretical JSON – I’ll call it “practical JSON” – that never touches a single one of those edge cases. I’m here to tell you that this is entirely typical – all the web services JSON protocols I’ve seen use about the same safe practical-JSON subset. No Unicode in strings. No “lonely values”. No scientific notation. No hex. No JSON null value. No generation of trailing commas. No duplicate keys. No exotic whitespace, not even tabs. No comments or other “common extensions”. No crazy deep nesting – four levels is the most I’ve ever seen deployed.

      The seriot.ch guy is, I think, somewhat justified in criticizing Crockford for underspecification. But he over-eggs his case by talking about “common extensions”. And it’s not an underspecification issue that (for example) some parsers fail on deep nesting; that’s just plain bad code and does not reflect on the soundness of either theoretical or practical JSON.

      In the real world, people using JSON as a metaprotocol simply do not go into the corners where underspecification is an issue. That’s what my experience is, anyway. I’m sure if you were to look hard enough, you could find a few fools who went there and face-planted. But that would hardly be an argument against using it in NTPv5, since I can do and in fact have done the design sketch entirely in practical, safe JSON.

      >Eric, over 50 different implementations were tested and no two of them recognized quite the same language. Do you think all of those implementers are at fault?

      No. Some of that blame rests on Crockford.

      On the other hand, I am prepared to cover any reasonable bet that the common language they *do* recognize includes everything GPSD does and everything I propose to do in the NTPv5 sketch.

      So every criticism the seriot.ch guy or you can make about underspecification of theoretical JSON is completely valid…and completely beside the point.

      A related point is that in both the GPSD and NTPv5 cases I have (or will have) effective control of both endpoints. In GPSD that’s because in general application library developers don’t speak JSON directly; instead, they link one of the client-side libraries supplied in the GPSD distribution. In NTP it will be because to a first and second and third approximation nobody speaks NTP packets except NTP itself; the only exceptions I’ve ever heard of are protocol monitors like wireshark.

      Use the same parser at both ends, unit-test the shit out of it, prove with a coverage analysis, and we’re done here.

      >In your NTPv5 design, you’ve attempted to avoid some of these issues by prohibiting certain JSON features that trigger them. This makes the ill-specification problem worse, not better, by adding even more ill-specified requirements. Now to decide whether a message is syntactically valid, you must first recognize valid JSON, and then recognize the constructs you’ve prohibited so that you can reject those.

      This is a very wrongheaded claim, which I refute by pointing you at the microjson code. It doesn’t recognize all theoretical JSON and then throw out weird stuff. It doesn’t have to. It recognizes a well-defined set of practical JSON and fails gracefully, throwing an error, on anything else.

      I say “subset” because my parser engine has two restrictions most practical-JSON implementations do not: (1) when you specify a parser every field must have an associated type, with literal values of the wrong type in an input throwing an error, and (2) arrays must be type-homogenous.

      This was required in order to eliminate any requirement for the parser to use dynamic storage and it is perfectly OK. There is no band of red-robed JSON inquisitors waiting to descend on me because I didn’t implement full theoretical JSON and then filter it as you seem to assume I have to; in fact the one thing seriot.ch actually demonstrates is that attempting that may be unwise.

      But again, this is real-world. There’s even a term of art for it; when people speak of implementing a JSON “profile”, what they mean is that they’ve selected a feature subset of JSON and associated semantics for a defined set of tags, rejecting anything else. GPSD has one JSON profile; my NTPv5 design sketch has a similar one, differing mainly in field semantics rather than JSON subset size.
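
      To make the idea concrete, a toy illustration of such a profile check (a hypothetical Python analogue of the approach, not microjson’s actual interface; field names are invented): every field has a declared type, and anything unknown or mistyped is an error rather than something to be accommodated.

          import json

          SCHEMA = {"class": str, "mode": int, "lat": float, "lon": float}

          def parse_profile(wire: str) -> dict:
              msg = json.loads(wire)
              out = {}
              for key, value in msg.items():
                  expected = SCHEMA.get(key)
                  if expected is None:
                      raise ValueError(f"unknown field {key!r}")
                  if isinstance(value, bool) or not isinstance(value, expected):
                      raise ValueError(f"field {key!r} must be {expected.__name__}")
                  out[key] = value
              return out

          print(parse_profile('{"class": "TPV", "mode": 3, "lat": 40.035, "lon": -75.52}'))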

      >The flaws in JSON don’t just come up in weird edge cases: if you did a direct translation of NTPv4 into JSON syntax, then implementations using different JSON libraries would often completely fail to interoperate. This is because, due to the mismatch between what numbers are representable on the wire and what numbers are representable in a floating point register, numbers can’t be relied upon to round-trip cleanly through parsing and printing.

      Which is why, if you have a concern about that, you scale your representation so you’re only passing integer literals over the wire. Duh! This is Protocol Design 101, so basic that I shouldn’t even have needed to say it, but if you persist in bashing straw men I shall have to persist in restating the obvious.

      In practice, GPSD uses decimal fractional integers because the highest precision it ever wants to pass is well above the FP fuzz limit. This won’t necessarily be true in the NTPv5 design; one of the items on my to-do list is to perform the basic numerical analysis required to check (and the answer is not obvious, because it depends on the maximum precision of clock sources). Whether I get a yes or no answer will affect the tactics of what the wire representation looks like, but not the strategy.

      >As a result, the value that a client parses out of an origin timestamp won’t reliably match what it put into the corresponding transmit timestamp, and thus the response will be rejected. Your NTPv5 proposal dodges this particular problem by dispensing with origin timestamps altogether, but that the viability of your message format is sensitive to such things still goes to show what a foundation of muck you’re building on.

      The only “foundation of muck” here is your bizarre assumption that I (or anyone else) will do obviously stupid things like failing to design a clean round-trippable profile in practical JSON just because a way to do it wrong exists in theoretical JSON.

      If I were to accuse you of planning such an elementary error in a cryptosystem design, you’d be insulted. And rightly so.

      I’ll address your concerns about scaling and other issues in later replies.

      1. >you scale your representation so you’re only passing integer literals over the wire.

        That’s not enough to save you. NTPv4 timestamps are 64 bits. In order for integers in JSON to round-trip with any reliability at all, they have to fit in a double-precision floating point, i.e., have magnitude less than 2**53. Even then there’s explicitly no guarantee; see RFC 8259, last two paragraphs of section 6.
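
        A quick way to see the boundary, using an explicit conversion through a double to stand in for a parser that stores JSON numbers that way:

            t64 = (3762000000 << 32) | 0x89ABCDEF   # a full-width 64-bit timestamp value
            assert float(t64) != t64                 # above 2**53: low-order bits get rounded away
            assert float(2**53 - 1) == 2**53 - 1     # at or below 2**53 - 1 the value survives exactly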

        >On the other hand, I am prepared to cover any reasonable bet that the common language they *do* recognize includes everything GPSD does and everything I propose to do in the NTPv5 sketch.

        Keep your money, but it took me all of five minutes skimming GPSD code to identify two counterexamples. The first one is mentioned in your documentation: if a message would exceed 1536 bytes, it’ll barf out truncated JSON which obviously no parser will accept. You go on to document that consumers need to implement a workaround for this.

        The second counterexample is due to a combination of two bugs. One is shallow and easily fixed but the other is deeper and really illustrates my point. Referencing the current HEAD of gpsd (commit 3d5e11f), on line 152 of gpsd_json.c you’re missing a call to json_stringify(). As a result, if a device path contains a quotation mark, a backslash, a control character, or any sequence of bytes that isn’t valid UTF-8, you’ll output something that compliant parsers should choke on. That’s the shallow bug. The deep bug is that, due to character set confusion, json_stringify() is incorrect and adding the missing call won’t be sufficient to fix your problem.

        Strings in JSON’s abstract syntax are strings of Unicode characters, encoded in the concrete syntax in UTF-8. There is no type in JSON for strings of bytes. The de facto standard workaround for this is to use base64 to encode byte strings as Unicode strings.

        UNIX path names, on the other hand, do not have a defined encoding; they’re just null-terminated byte strings. For the purpose of display to the user, they’re interpreted according to the system locale. For all other purposes, they’re binary data, not text.

        When json_stringify() encounters a high byte xy, it stringifies it as \u00xy. What you’re doing here, essentially, is interpreting the input as ISO8859-1 encoded, because ISO8859-1 codepoints correspond to the first 256 Unicode codepoints. So let’s say you call json_stringify() with a path name that contains 0xa7, which in ISO8859-1 is the section sign §. Your serialized JSON will then contain “\u00a7”, which a compliant JSON parser will understand as an escaped representation of U+00A7 § SECTION SIGN. But that same compliant implementation can re-serialize that character not as “\u00a7” but as the two bytes c2 a7, which is the UTF-8 encoding of that Unicode character. You’ve now lost the semantics of your original input, because c2 a7 is not the same path name as a7. Complete the round-trip by parsing this with microjson and stringifying it again and now you have “\u00c2\u00a7”, which represents two Unicode characters, Â§.
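
        The same round-trip can be reproduced with any compliant JSON library standing in for the other end (here Python’s json module; the path is invented):

            import json

            raw_path = b"/dev/disk/by-label/foo\xa7bar"     # a byte string, not Unicode text

            # The \u00xy-style stringification described above: bytes >= 0x80
            # get escaped as if they were Latin-1 characters.
            escaped = '"' + "".join(
                ch if ord(ch) < 0x80 else "\\u%04x" % ord(ch)
                for ch in raw_path.decode("latin-1")) + '"'

            reparsed = json.loads(escaped)                   # now a Unicode string containing U+00A7
            reserialized = json.dumps(reparsed, ensure_ascii=False).encode("utf-8")
            print(b"\xc2\xa7" in reserialized)               # True: the single byte a7 came back as c2 a7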

        I can’t point to any particular line of code that’s at fault for this bug, because what you have is an architectural flaw resulting from mental fuzziness as to what’s a Unicode string, what’s a string of arbitrary bytes, what’s a string of bytes that encode a Unicode string, and what’s a Unicode string that encodes a string of arbitrary bytes. UNIX path names are strings of arbitrary bytes. Strings as lexemes in a JSON AST are Unicode strings. JSON on the wire is a string of bytes that encode a Unicode string, using UTF-8 and/or \u notation. A base64-encoded blob is a Unicode string that encodes a string of arbitrary bytes. The comment in json_stringify about wishing you could use C-style escapes instead of \u is evidence of your confusion, because it makes no sense to be able to do that. C-style escapes escape bytes. JSON escapes escape Unicode characters.

        >There is no band of red-robed JSON inquisitors waiting to descend on me because I didn’t implement full theoretical JSON and then filter it as you seem to assume I have to

        You’re proposing changes to NTP and to NTS-KE, which are both standards-track IETF documents; so is JSON. In the IETF, those inquisitors literally exist and will in fact descend on you for what you’re describing. After a document gets through working group last call, before it becomes an RFC it has to be voted on by the IESG. When you normatively cite another standard and then go on to misinterpret it or diverge from it without clearly specifying the ways in which you’re doing so, that’s obvious grounds for a DISCUSS ballot. You could instead try specifying your protocol syntax de novo, producing something that superficially resembles JSON but doesn’t purport to be. It’s possible you could eventually get that through, but you’d better do a mighty good job writing your specification, and you’d better have a mighty good answer as to why it was necessary for you do that rather than following an existing standard.

        (Incidentally, I’m a member of the Security Directorate, which is a pool of reviewers who make recommendations to the IESG and help ease their workload. I guess that makes me a… magistrate inquisitor? calificador? Our chief weapon is pedantry, just pedantry. At any rate, I like your suggestion for new uniforms; I’ll bring it up at this month’s plenary.)

        >A related point is that in both the GPSD and NTPv5 cases I have (or will have) effective control of both endpoints. In GPSD that’s because in general application library developers don’t speak JSON directly; instead, they link one of the client-side libraries supplied in the GPSD distribution. In NTP it will be because to a first and second and third approximation nobody speaks NTP packets except NTP itself; the only exceptions I’ve ever heard of are protocol monitors like wireshark.

        >Use the same parser at both ends, unit-test the shit out of it, prove with a coverage analysis, and we’re done here.

        Ah ha! Now I see where you’re confused. It’s you who’s misunderstanding the problem space. GPSD is different from NTP and NTS-KE in a crucial way.

        There are four kinds of environment a wire protocol can operate in, requiring progressively greater levels of discipline in how you design it.

        Level 1: All endpoints are under your control and synchronously updated.

        This is the case, for example, when you have two goroutines communicating over a channel. This situation requires the least rigor. You don’t have to worry about versioning or backward compatibility, because it’s guaranteed that both endpoints are always running the same version. The only people who have to understand your protocol are the people working on your codebase, so on small projects you might be able to get away with failing to document your assumptions.

        Level 2: All endpoints are under your control but are asynchronously updated.

        This is the case when you’re distributing a mobile app which interacts with servers that you administer. It’s the case for GPSD iff all clients are using your provided library for their end of the protocol. At this level, you have to pay attention to backward compatibility, or else accept that when one endpoint updates, the other will have an outage until it does as well.

        Level 3: Only one endpoint is under your control.

        This is the case when you’re running a web service with a public REST API. It’s the case for GPSD if clients are doing their own parsing rather than using your library. At this level, it’s essential to write lucid documentation, because your protocol needs to be understood by people who aren’t part of your development team. However, the existence of a single reference implementation for the side of the protocol under your control provides you with some slack. If your documentation is unclear, people can look to your reference implementation to understand what you meant, and if your reference implementation has bugs, then people can implement workarounds without worrying that the workaround will cause some other implementation to choke.

        Level 4: Neither endpoint is under your control.

        This is the situation for NTP, NTS, and everything else under the IETF’s purview. We are writing standards, which must be of sufficient clarity that two implementations which have never been tested against each other can be expected to interoperate. So many NTP implementations exist that nobody knows how many there are, let alone tests against them all as part of their development cycle. Your specification must be clear enough to stand by itself without a reference implementation to fall back on.

        This is a very high bar. In order to hold ourselves to it during the development of NTS, Martin Langer and I created independent PoC implementations, deliberately avoiding any communication while doing so. The morning of the IETF hackathon was when we saw each other’s code for the first time, and we did interoperability testing. The test was mostly successful; Martin had misread one piece of the spec. When I pointed out the misreading, he changed his code and then we got successful interop. Then, in the next draft, we clarified that paragraph.

        The mistake you’re making is taking the discipline and habits you’ve learned from your experience working at levels 2 and 3, and trying to get by with them in a level 4 environment. Here, they’re insufficient.

        1. >That’s not enough to save you. NTPv4 timestamps are 64 bits. In order for integers in JSON to round-trip with any reliability at all, they have to fit in a double-precision floating point, i.e., have magnitude less than 2**53.

          Yeah, I already thought of that. When we go to 64-bit seconds counters plus 32 fractional bits, things that look like single data representations in JSON are going to have to unpack into two distinct fields of a struct. There’s no actual problem here, and the parsing isn’t difficult, but it will be a bit deceptive to people who think they can look at the JSON and make direct inferences about the type ontology inside the code.
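
          Concretely, something like this (field names are invented for illustration): the seconds counter and the fraction travel as two separate integer literals, each staying far below the 2**53 limit for any realistic date.

              import json

              def to_wire(seconds: int, fraction: int) -> str:
                  # Two integer fields instead of one 64-bit (or 96-bit) number.
                  return json.dumps({"sec": seconds, "frac": fraction})

              def from_wire(wire: str):
                  msg = json.loads(wire)
                  return msg["sec"], msg["frac"]

              assert from_wire(to_wire(3762000000, 0x89ABCDEF)) == (3762000000, 0x89ABCDEF)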

          The general point – which is that any protocol designer not utterly incompetent is going to scale to avoid FP fuzz and roundoff problems – still stands. It’s a good example of why I have trouble taking the attempted takedown at seriot.ch seriously – dude is in the business of turning molehills into mountains in order to make himself look clever and I seldom have much patience for that. There are far too many real problems needing solutions.

          >The first one is mentioned in your documentation: if a message would exceed 1536 bytes, it’ll barf out truncated JSON which obviously no parser will accept.

          You’re confusing levels here. This isn’t a problem with the GPSD JSON metaprotocol itself, it’s an implementation limit in GPSD ultimately derived from the house rule against using malloc inside gpsd. Which I documented just because I’m obsessive about documenting limits – I tried to construct a scenario in which GPSD would emit a message that long and couldn’t come even close.

          And “This is invalid, I’ll reject it” is a perfectly reasonable thing for a client-side parser to do, if that limit were ever reached in practice. Behavior is well-defined over the whole range of possible sentences in the protocol and the receive-side code has been demonstrated to be quite bulletproof in practice.

          To make your point, you’d have to demonstrate that GPSD can emit JSON that actually collides with one of the seriot.ch edge cases and can thus cause a bug (not just a clean reject of invalid JSON) on the receiver side. Good luck with that.

          >As a result, if a device path contains a quotation mark, a backslash, a control character, or any sequence of bytes that isn’t valid UTF-8, you’ll output something that compliant parsers should choke on.

          You’re missing a crucial bit of domain information that closes off that whole category of bugs. The domain of these paths is not general Unix file paths. It’s device names under /dev. There is no scope for character-set confusion, control characters, or even quote marks here because Unix kernels simply never do that in /dev paths, for a whole bunch of reasons including that it would risk difficult-to-detect breakage in administrative shellscripts. That’s why json_stringify() isn’t called there – we can in fact assume this is plain ASCII with no funny business. Since you brought it up, though, I’ll document that assumption.

          >You could instead try specifying your protocol syntax de novo, producing something that superficially resembles JSON but doesn’t purport to be.

          Or I could follow normal practice in describing JSON profiles, which is to say “RFC 8259 with features Unicode and blah blah blah excluded”. I hope you’re not going to tell me that sort of restriction is a divergence for which I would be sentenced to…the comfy chair.

          >Our chief weapon is pedantry, just pedantry.

          As you have amply demonstrated in your reply.

          >Your specification must be clear enough to stand by itself without a reference implementation to fall back on.

          I’m well aware of this requirement. I haven’t yet been a named author in a non-joke RFC – the one time that was offered I turned it down because I felt I hadn’t contributed sufficiently to earn it – but I’ve helped prepare a couple. I think you’ll find I am quite up to the challenge.

          1. >You’re missing a crucial bit of domain information that closes off that whole category of bugs. The domain of these paths is not general Unix file paths. It’s device names under /dev. There is no scope for character-set confusion, control characters, or even quote marks here because Unix kernels simply never do that in /dev paths, for a whole bunch of reasons including that it would risk difficult-to-detect breakage in administrative shellscripts.

            I have a device file with a backslash in its name on the system I’m typing this on, and I didn’t have to do anything at all funny to put it there. udevd produces these names in its default configuration.

            [dfranke@pathfinder:~]$ ls -l /dev/disk/by-label/
            total 0
            lrwxrwxrwx 1 root root 9 Nov 18 14:31 'ASRock\x20SupportCD' -> ../../sr0
            lrwxrwxrwx 1 root root 11 Mar 14 16:50 CCCOMA_X64FRE_EN-US_DV9 -> ../../loop0

            The config files which ship with udevd on my distro do avoid generating any non-ASCII characters, but this is just configuration; I could change it and the kernel wouldn’t interfere one bit.

            >Or I could follow normal practice in describing JSON profiles, which is to say “RFC 8259 with features Unicode and blah blah blah excluded”. I hope you’re not going to tell me that sort of restriction is a divergence for which I would be sentenced to…the comfy chair.

            You could do it, but you’d have to be a lot more thorough about it. Look at RFC 7493, which defines just such a profile. The IESG solemnized it upon its second hearing, once it abjured its incontinent treatment of Unicode surrogates. Even so, it still suffers the problem I warned about: by introducing the profile as a diff against full JSON, anyone seeking to implement it must first correctly understand everything in the JSON standard, and then correctly understand everything in the diff. I can’t comment on implementers’ track record of success on that score, because I can’t seem to locate any implementations at all.

            1. >I have a device file with a backslash in its name on the system I’m typing this on, and I didn’t have to do anything at all funny to put it there.

              Please. Mounting a thumbdrive with a FAT32 volume on it certainly qualifies as funny. And not within the domain of paths GPSD will ever emit, for very obvious reasons.

              >The IESG solemnized it upon its second hearing, once it abjured its incontinent treatment of Unicode surrogates.

              Wise. I like UTF-8, but anybody designing a JSON profile who doesn’t restrict it at least as tightly as saying “non-ASCII only permitted in the values of specific named fields X, Y, and Z” is asking for trouble. Not a mistake any competent protocol designer would make.

              That FAT-32 path was a good joke, but from the points you didn’t reply to I take it we’ve now disposed of the idea that a properly specified JSON profile is necessarily unsound in any way. I will proceed to your other issues in my next reply.

              1. >That FAT-32 path was a good joke, but from the points you didn’t reply to I take it we’ve now disposed of the idea that a properly specified JSON profile is necessarily unsound in any way.

                Necessarily unsound? No, of course not. The language consisting of the singleton string “{}” is a perfectly sound JSON profile. But if you want to specify a useful metaformat with sufficient rigor and clarity that you can reasonably hope that most of the implementations that come out of it will be interoperable and secure, you’ll have an easier time starting from scratch than trying to use JSON as a starting point.

                As for the device path, first of all, I didn’t stick it there just now to prove my point; it’s a driver CD that’s been sitting in my DVD drive for months. But this is getting ridiculous. We started this subthread with you asserting that the GPSD’s output was a sublanguage of what’s recognized by any reasonable JSON parser. I pointed out two instances where that’s false. You then asserted that those cases can never happen, because the operating system would prevent it. I then showed you evidence that the operating system does no such thing. You have now retreated to the position of, “Plugging in a thumb drive? Who would ever do THAT?”. Your protests are starting to sound less like the Inquisition sketch and more like the Dead Parrot sketch, and if we continue this much longer you’re going to end up sounding like the Argument sketch.

                At any rate, I’m less than impressed with your LANGSEC discipline.

                1. >We started this subthread with you asserting that the GPSD’s output was a sublanguage of what’s recognized by any reasonable JSON parser.

                  Daniel, I don’t think you’ve falsified that. The fact is that GPSD, as it is implemented right now, only produces JSON that leads to a well-defined result, in exactly the LANGSEC sense that you can’t produce an unintended computation with it (clean failure on the receive side is not a weird machine). Yes, you could argue that it might fail in that way if its platforms behaved in a different way than they actually do, but they don’t – the serial-device part of the /dev tree is not going to go to non-ASCII names anytime soon if ever, and if it did I’d have plenty of warning. Pedantry is one thing, and it’s useful in moderate doses; ignoring actual real-world constraints is another. You have gone over that line here and are doing something that is no longer engineering.

                  >you’ll have an easier time starting from scratch than trying to use JSON as a starting point.

                  As a separate point, I might do that anyway. The profile I have in mind for NTPv5 is small enough that writing a self-contained BNF for it would not be a burden on an RFC. I could specify a subset and remark as a non-normative observation that it is intended to resemble JSON well enough for JSON libraries to handle it.

                  Or I could strip the syntax down further. I speculated about this on the NTPsec list – drop the quotes off of fieldnames, maybe even change the comma separators to spaces. Because JSON qua JSON isn’t the real point here; a protocol that is discoverable, self-describing, and extensible is the point.

      2. So, your article boldly declares “JSON is fine”, but then your code/specification responds with “JSON is not fine, use this not-JSON that is similar enough for JSON parsers to read correctly”. Meanwhile, if anybody else implements a client, they now either have to assume that it will only ever connect to a benevolent server, or tolerate the full range of troublesome input that their chosen general-purpose library passes on, if it doesn’t abort early on valid JSON but invalid not-JSON the way your own library does.

        This is not an article about general protocol design, it’s about protocol design when you have a single server and client implementation to care about, and third-party clients are tolerated but not truly expected. But the article as written never seems to realize that, nor acknowledge that it’s not actually talking about JSON in a way that the audience is expected to recognize.

      1. ASN.1 is extremely complex and bugs in parsers for it are historically a huge source of vulnerabilities. A whole lot of this could be mitigated with better tooling, but the open source tooling ecosystem for ASN.1 is pretty impoverished.

  19. >If it surprised you that a credit-card-sized hobby computer could supply Stratum 1 service for a major First-World country, you are one of them.

    Eh, to the extent (minor) any of that surprised me, none of it was about protocols … it was about how little S1 load there actually was.

    (Mostly, I assume, because almost everyone is using either the MS time service, Apple’s time service, or whatever their phone uses, which might be the cell network.

    WTF connects to a central stratum 1 time source anyway, in 2019?

    Not a lot of people/things.)

  20. > because Unix kernels simply never do that in /dev paths

    UNIX kernels do not control paths in /dev. UNIX admins do.
    You don’t know what you’ll stumble over in /dev. Neither do I.

    Some Unices do hand control of some of /dev over to the kernel via one sort or another of devfs; this does not invalidate the point that there are widely deployed systems that do let admins put whatever they feel like in /dev.

  21. Isn’t the fixed size of the NTP packet an important feature of the protocol? If you have dynamic and unknown parsing time for packets, you will lose precision in setting your local clock. This has low impact if you have a high-speed CPU, but may have a devastating impact in an embedded device with a low clock frequency. The slowest embedded device I have built did 2000 instructions per second (a PIC12 microcontroller). The application didn’t need precision time, but it would have been perfectly viable to build an application that did.

  22. I have a feeling that the NTP protocol was designed to fit a small UDP packet because its designers were aiming to minimize network latency, which is important when synchronizing clocks.

    If the message has to be sent in multiple packets, or the complexities of TCP are involved, then we get higher latency.
