GPSD-NG: A Case Study in Application Protocol Evolution

I’ve been doing some serious redesign work on GPSD recently. I had planned to do a blog posting about lessons learned, but the result grew enough in length and structure to turn into an actual technical paper. You can read it here; comments and criticism will be welcomed.

Note: everything described in the paper has already been implemented in gpsd. There’s work still to be done; for those of you familiar with the software, I still need to do equivalents of the old-protocol commands B C J N R Z $. I do not expect these to pose any significant difficulties.

232 comments

  1. Yet another typo: “woek”.

    Don’t worry ESR, we’ll run out of nits to pick eventually.

  2. Just a side note: I think you could say that you are emitting YAML, a human-readable data serialization format that is a superset of JSON (so clients may use a YAML-parsing library instead of a JSON-parsing library), although you won’t need the full power and more advanced features of YAML. Perhaps there is some small YAML Tiny parsing library that you could have used…

  3. The cool thing about JSON is that it’s really easy to parse in Python, now that the json module (derived from simplejson) is part of the standard library (since 2.6). In fact, the syntax of JSON should look familiar to you because it’s almost exactly the same as the syntax for a Python dictionary.

    This means that gpsd-ng clients can be written quite easily in Python, of course. Which makes me very happy, because Python is my very favorite language, mostly because it lets me write code the way I think about code.

    @Jakub: You could probably do a full-blown YAML parser in Python by subclassing the json.JSONEncoder and json.JSONDecoder classes, but you’re right that you wouldn’t need to do that for the JSON stuff emitted by gpsd-ng.
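
    Here’s a minimal sketch of what such a client could look like in Python. The field names ("class", "lat", "lon") and the assumption that watcher mode is already switched on are illustrative guesses on my part, not the gpsd-ng spec:

    import json
    import socket

    # Connect to the daemon on its usual port (2947, mentioned elsewhere in
    # this thread) and decode one JSON object per line.
    session = socket.create_connection(('localhost', 2947))
    for line in session.makefile('r'):
        report = json.loads(line)
        if report.get('class') == 'TPV':
            print('%f, %f' % (report['lat'], report['lon']))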

  4. BTW, this is an example of how mind-bogglingly simple JSON is to parse in Python. For those unaware of JSON, the following code opens a JSON configuration file:

    import json
    import syslog

    config_file = open('pyro_dtdns.json', 'r')
    config = json.load(config_file)
    syslog.openlog('pyro_dtdns', syslog.LOG_PID, getattr(syslog, config['Logging']['facility']))

    The configuration file itself looks like this:

    {
        "router": {
            "driver": "wrt54g",
            "address": "192.168.1.1",
            "password": "gibson",
            "username": ""
        },
        "DtDNS": {
            "password": "1ms031337!",
            "domains": [
                { "name": "tuxedo.darktech.org", "password": "" },
                { "name": "cotim.flnet.org", "password": "" },
                { "name": "they.gotgeeks.com", "password": "" }
            ]
        },
        "Logging": {
            "facility": "LOG_USER",
            "levels": ["LOG_EMERG",
                       "LOG_ALERT",
                       "LOG_CRIT",
                       "LOG_ERR",
                       "LOG_WARNING",
                       "LOG_NOTICE",
                       "LOG_INFO"]
        }
    }

    (passwords have been changed to protect the guilty)

    (One of these days I’ll get around to actually cleaning up this code so I can post it somewhere.)

  5. “YAML is not a subset of JSON. Act as if it is at your own peril.”

    Ah….t’other way around….he said that JSON was a subset of YAML, actually:

    “YAML (of which JSON is essentially a subset) has a following as well….”

  6. ESR….I have an old (~7-10 years) Garmin eSeries GPS device kicking around. It is rather pathetic by modern device standards, but its GPS capabilities are still good. I had been wondering for some time if it could be tethered to a serial port on some netbookish device as a portable mapper (or whatever). Would gpsd give me a shot at accomplishing this?

  7. I found your trade-offs unusual. At the beginning of your article you were concerned because you were running out of single-letter commands. The solution seemed perhaps obvious — use more than one letter. However, the concern seemed to be to reduce network cost. Regardless of your later conclusions, you could readily have saved far more bandwidth by being less verbose in your output, allowing you a more sane input command stream.

    Also, your choice of JSON is interesting. AFAIK JSON is primarily designed for web services communication between javascript-based applications, and is more of a patch on top of the weak facilities of javascript in general. It is not at all clear to me what advantage JSON has over a far more standard and equivalent protocol, namely XML. Since this is pretty non-complex data, tab-separated fields on a text line would seem even more appropriate, in accordance with the great Unix traditions of pipes and filters. But I am far from an expert on these things.

    However, I think there is a broader point worth mentioning here, and it might be why you chose a non-pipe format, namely that one of the great weaknesses of the pipe and filter paradigm in Unix has always been the inability to deal with structured data. XML and a rich XML tool set would offer the ability to do this effectively. Throwing JSON in the mix just adds combinatorial explosion to that tool set.

    As a second thought, I think I agree with your conclusion that network and processor cycles are less scarce than they were. However, one of the great laments of the hacker community is the so-called bloatware from Microsoft. They seem to have followed your thoughts, but did so ten years ago. I might add that a practical look at the realities today is quite surprising. Computers are vastly more powerful than they were a decade ago, yet I still find that they run sluggish and slow, and don’t seem much more useful. I think part of the reason is the cumulative crud from application developers who think throwing processor cycles and memory at the problem is a good solution. But I don’t really know. I haven’t done any such benchmarks.

  8. I will dig out the device and experiment, directing all discoveries/discussion to the mailing list. Thanks :)

  9. @JessicaBoxer: You still seem to be operating under the assumption that Web tech and software tech in general are still two separate domains. This is no longer the case: desktop apps are adopting Web tech and vice-versa. We are seeing an unprecedented level of convergence these days. Think about CUPS and GNOME CUPS integration, for instance. It’s all HTTP!

  10. >It is not at all clear to me what advantage JSON has over a far more standard and equivalent protocol, namely XML

    JSON’s type ontology is better fitted to what I wanted. And I could not have built an XML parser in 310 lines.

  11. >This guy did it in around 300?

    And he says “I don’t need any support for parent/child relationships or other hierarchical information.”. I did. I was able to parse all of JSON in 310 lines. You ain’t going to tell me he got equivalent capability out of XML in 310 lines; I know XML and that is not possible. Not even for me, though I am exceptionally good at writing tight and economical parsing code (from lots of practice).

    XML is great for document-like markups – I use Docbook-XML and like it. But for structured data interchange it sucks pretty badly.

  12. You may enjoy the Hessian binary metaprotocol then; I plan to use it wherever I can. I can’t stand XML and can’t think of a single case where an open binary format wouldn’t be better. YAML/JSON are better than XML, but I think open binary formats will come back in a big way, as there’s almost always a need for their economy (think how much power you could save over a year by using a more efficient format), and I see no reason for all-text formats.

  13. XML is great for document-like markups – I use Docbook-XML and like it. But for structured data interchange it sucks pretty badly.

    I actually agree with you here. XML sucks because it’s too verbose, requires validation, and has limited or no support for data types. If you want data types, you have to write a whole minilanguage just to pull it off. If I wanted to do that, I’d just write a minilanguage and skip the XML stuff. :)

  14. >I think open binary formats will come back in a big way

    Let’s all devoutly hope not.

    I argued in The Art of Unix Programming that binary application protocols are a false economy. I still believe that. The tiny gains in overhead are swamped by the downstream costs incurred because such formats are difficult to read by eyeball, and require special (bug-prone) tools to even be auditable by humans.

    If you must send binary blobs through an application protocol, it’s better to leave the protocol itself textual and ASCII-armor just the blobs.
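
    A minimal sketch of that pattern in Python: the envelope stays readable JSON and only the opaque payload gets armored (the field names are invented for illustration):

    import base64
    import json

    blob = open('payload.bin', 'rb').read()
    # The wire format is still plain text you can eyeball, grep, or type back by hand:
    message = json.dumps({'class': 'BLOB', 'data': base64.b64encode(blob).decode('ascii')})
    # Receiving side just reverses the armoring:
    payload = base64.b64decode(json.loads(message)['data'])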

  15. > I actually agree with you here. XML sucks because it’s too verbose,
    > requires validation, and has limited or no support for data types.

    I can write an XML parser in one line (by reusing someone else’s code.) Aren’t you guys all into free and open source in part for this very reason? Another point worth making here is this: ESR is no doubt right about accepting minor bloatware for the increased understandability and flexibility they provide, as he pointed out you can use the huge gains in capacity of newer computer systems to bury the cost in the statistical noise.

    However, there is one place where bloatware is becoming a bigger problem, and that is mental bloatware. Which is to say, many different ways of doing exactly the same thing, each with subtle differences and incompatibilities. I understand the argument that a carpenter has more than one screwdriver; however, screwdrivers are simple tools. Microsoft tortures their development community with this stuff. Every year it is a new framework to do basically the same thing. Unix is, or was, the very antithesis of this: a simple set of tools that are hyper compatible (due to a shared suboptimal communication standard), that could be combined together in amazing new ways. It is this suboptimal communication standard that underlies much of the modern Internet. SMTP, NNTP, POP, HTTP are all designed around a simple text-based communication scheme, primarily due to their genesis in the pipe and tty world of Unix. I’m a child of the modern era, but from what I read of the early days of Unix, the attitude was that if you couldn’t communicate with a server process via telnet, then it was considered a poor design.

  16. >I can write an XML parser in one line (by reusing someone else’s code.)

    Sure. But in this case that wouldn’t have done. Remember, one of my requirements is “no malloc!”. Go ahead, try to find an XML parser that unpacks to fixed-extent structures only. Knock yourself out.

    >Every year it is a new framework to do basically the same thing. Unix is, or was, the very antithesis of this: a simple set of tools that are hyper compatible

    Yes. In the context of application-protocol design, this is an argument for building on top of a small set of metaprotocols that are each compact, elegant, and widely used (so they’ll have large developer communities, leading to stable and robust toolsets). I think the set {JSON, YAML, XML} pretty much fills the bill, here.

    >I’m a child of the modern era, but from what I read of the early days of Unix, the attitude was that if you couldn’t communicate with a server process via telnet, then it was considered a poor design.

    You’re quite right. I was there for those early days, before the AT&T divestiture. That attitude is still very much alive in the latter-day Unix world. The way I test GPSD-NG is…by opening a telnet session to port 2947 and typing at it. This is why the thought of “open binary protocols coming back” disturbs and disgusts me.

  17. > I argued in The Art of Unix Programming that binary application protocols are a false
    > economy. I still believe that. The tiny gains in overhead are swamped by the
    > downstream costs incurred because such formats are difficult to read by eyeball,
    > and require special (bug-prone) tools to even be auditable by humans.

    Depends on what you’re economizing for: you’re economizing for programmer comfort over all else, and I think that’s the wrong tradeoff to make. Take HTML, the most widely used text protocol, how much bandwidth and cpu-time could have been saved over the years if it had been an open binary format? Yes, you’re right that programmers would have to use standard converter libraries to audit the binary format, but what percentage of the time is it even necessary to dig that deep and so what? It’s not like everybody just reads XML raw anyway; most use tools to make that easier too. You can’t get away from buggy tools just by using a text format; practically nobody wants to read or parse a raw text format either. I think binary formats disgust people who draw the wrong conclusion from the past, when they were stuck reverse-engineering proprietary, kludgy binary formats. If a binary format is open, it will already have converter libraries written for it and you get all the efficiency that it provides. Maybe the efficiency of the protocol isn’t a big deal on GPS devices, but it certainly is in many other places on the internet.

  18. >Take HTML, the most widely used text protocol

    Yes, let’s. HTML can be modified with any editor, not requiring specialized tools. That allows the people who are writing it (not just programmers, but people like, say, my wife) to use tools they are already familiar with. That is a huge win that far outweighs the putative efficiency losses relative to a binary format, and goes far to explain the web’s rapid adoption.

    I don’t think you could have chosen a worse case to rest your argument on if you had tried – about HTML you’re not just wrong, you’re point-at-the-fool-and-laugh wrong. I’d suggest you try again with a protocol that only programmers care about, but don’t bother; binary application protocols still crash-land because programmers can’t read them with a Mark I eyeball. It’s not “comfort” per se we’re talking about here, it’s long-term maintainability and relentlessly cumulative friction costs.

  19. >Yes, you’re right that programmers would have to use standard converter libraries to audit the binary format, but what percentage of the time is it even necessary to dig that deep and so what?

    Debugging networked applications is difficult enough without having an opaque binary protocol that you need special tools just to debug.

    One of the nice things about a simple HTTP server is that if something isn’t working right, I can throw a sniffer on the wire and simply read the plaintext HTTP streams going back and forth.

    Good luck doing this with a binary protocol: for that, you need a specialized stream analyzer just to figure out what’s going wrong. With a plaintext protocol, I can spot obvious problems in seconds.

  20. esr, if you think anybody other than a programmer can edit raw HTML in a text editor or Word, I have to point at your foolishness and laugh. XD It is only momentary comfort you’re talking about; it wouldn’t be hard to run a binary protocol through “catb |more” on a unix command line rather than just “more.” As for maintainability and friction, I don’t see any real advantages.

    Morgan, running the sniffer output through a stream analyzer is not a big deal, as I’ve said. To optimize a protocol for this rare event – how many people debug HTTP servers? – rather than network usage and lag just shows how lazy, short-sighted, and selfish programmers can be. ;)

  21. Take HTML, the most widely used text protocol, how much bandwidth and cpu-time could have been saved over the years if it had been an open binary format?

    Next-to-none, I’d think; packet overhead, etc. I mean, if you actually care about that, the easy answer is Content-Encoding: gzip. No reason to couple the higher density you’re talking about to protocol details unless it wins you a lot, and I don’t see why it does.
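
    If anyone wants to check what that buys on a given page, it’s a three-liner (page.html is whatever local file you want to measure; zlib’s deflate is close enough to gzip for a size comparison):

    import zlib

    html = open('page.html', 'rb').read()
    print('%d bytes raw, %d bytes deflated' % (len(html), len(zlib.compress(html, 9))))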

  22. esr says:
    > Yes, let’s. HTML can be modified with any editor, not requiring specialized tools.

    Maybe the HTML you write; however, most HTML in the wild is extremely hard to read, especially since a large percentage of it isn’t even syntactically correct. Throw in some javascript and even most programmers need a program editor and some reformatting. Part of this is because big web farms try to compress the text into a pseudo-binary format by using meaningless names and variables and so forth, part of it is because most HTML is not written as HTML (but with a tool), and part of it is because HTML writers quite often horribly lack the skills required of good software development.

    However, I will grant you, it is easier to read than binary, though not easier to read than binary with the right tool (which would undoubtedly have been built in as part of the browser were the standard format not text.)

    Morgan Greywolf:
    > One of the nice things about a simple HTTP server is that if something isn’t working
    > right, I can throw a sniffer on the wire and simply read the plaintext HTTP streams
    > going back and forth.

    Would you advocate TCP/IP header packets going to a text based format too? IP addresses spelled out rather than encoded in four bytes? TCP port numbers converted to TCP text descriptions? Fact is that packet sniffers do the decoding for you usually, and would probably also have done so for standardized formats like binary HTML had that been the standard from the beginning.

    BTW, I am not actually saying that binary formats are preferable or better, just that Ajay’s original statement is probably not “point-at-the-fool-and-laugh” wrong.

  23. >esr, if you think anybody other than a programmer can edit raw HTML in a text editor or Word, I have to point at your foolishness and laugh.

    Proof by counterexample. My wife, the lawyer. Built a costume-history site by hand, using a couple different editors.

    I know half a dozen non-programmers who have done similar things.

    What do you suppose the early non-programmer web authors did before crap like Dreamweaver? How do you think the web became enough of a popular medium to justify the investment in those tools?

    *plonk*.

  24. >Would you advocate TCP/IP header packets going to a text based format too?

    I wouldn’t. That’s far enough down the stack, and sufficiently high volume, that the efficiency gains from binary are worth it.

    >BTW, I am not actually saying that binary formats are preferable or better, just that Ajay’s original statement is probably not “point-at-the-fool-and-laugh” wrong.

    His original statement was merely wrong, it was the attempt to support it using HTML that put him in point-at-the-fool-and-laugh territory.

  25. Mike, the gzip answer is a glib one that I’ve gotten from others too: it’s wrong for two reasons. One is that a general compression algorithm like gzip cannot possibly compete with a designed binary format for compression. I’m unsure of the exact overhead from using general compression instead of a binary format, maybe it’s 50%, maybe it’s 2-500%, but it’s non-negligible. The second is that I’d estimate that less than 5% of HTML is actually compressed in practice, whereas by baking the efficiency into the format one enforces it.

    Jessica, always nice to hear a voice of reason among the fools, who’re really repeating received “wisdom” rather than thinking. I’d be interested to hear why you think open binary formats may not be a clear win, particularly coming from someone whose mind is actually open to discussion.

    esr, some small group of people learns to use the monster that is Photoshop too, your HTML-editing buddies are hardly relevant. As you note, most people use dreamweaver-type tools and you’re arguing from a leaky abstraction. I’m sure HTML was helped in the earlier days by being a text format that any of the techie types who were building websites back then could edit, as opposed to some closed binary protocol that they had to rely on a company to provide buggy converters for. However, text formats have never been worthwhile for end users, for whom a text protocol is only marginally less alien than a binary one. My point is that an open binary format would have done as well as HTML, because one could have chosen one of the multitude of converters that would have then competed to implement it better. So, TCP is worth it but HTML isn’t? That’s where you go wrong, HTML is a perfect example because of how much traffic it has. Your derision is funny considering you are the fool in this discussion.

  26. >I’d be interested to hear why you think open binary formats may not be a clear win,

    For the reasons ESR and Morgan are putting forward — it is clearer and the additional cost is probably not large. (Haven’t measured though so I don’t know.) And also because open binary formats often become closed binary formats, or open binary formats with proprietary extensions.

    FWIW, text is a binary format — last I checked ASCII was represented with bytes. The question is more one of how the structure is encoded: explicitly or implicitly. It is also a trade-off of availability of tools versus capability of tools. vi/notepad is capable of editing any text file and is readily available; however, its capabilities are low in regards to editing HTML. Dreamweaver is much less readily available, but is much more capable in that regard.

  27. FWIW, text is a binary format — last I checked ASCII was represented with bytes. The question is more one of how the structure is encoded: explicitly or implicitly. It is also a trade-off of availability of tools versus capability of tools. vi/notepad is capable of editing any text file and is readily available; however, its capabilities are low in regards to editing HTML. Dreamweaver is much less readily available, but is much more capable in that regard.

    @JessicaBoxer: The 1990s called and they want their argument back. The vast majority of Web development today is done using tools like CMSes or even more dynamic technologies like Django and Ruby on Rails. Nobody is writing static HTML anymore. HTML snippets and templates are coded by hand and the final product is assembled by the underlying technology(ies). Most Web pages are automatically generated by something. Web 2.0 is here, and it’s taken over. Why do you think ESR is using a WordPress blog as opposed to manually updating his Web site like he used to? Because it’s more efficient.

    Anyway, the counterexamples of TCP/IP packets are simply ludicrous. You’re arguing about something much, much lower in the network stack. I’ll assume you understand something about the OSI network model and how TCP/IP-based technologies work in general. GPSD-NG is way up layer 7; IP is down around layer 3. You might as well argue that Ethernet frames should be text-based and that an RFC 1149-compliant transport should be implemented using stone tablets attached with steel straps.

    I know a guy who’s a networking genius. He can look at a raw hex dump of a stream of IP packets and tell you exactly what is going on. He thinks packet analyzers are for wimps. But we don’t all have ‘M-x butterfly’ working on our Emacs installations just yet.

    ‘Programmer comfort’ directly translates to fewer bugs, easier debugging, and increased programmer productivity. With most people running 100 Mbit or even Gigabit Ethernet on their home LANs, who gives a rat’s ass about a few percentage points’ worth of ‘wasted’ bandwidth? It won’t make any difference to most end users, and that’s what counts.

    Oh, and you asked who is debugging an HTTP server? Deep-dive debugging of performance and scalability issues of N-tier networked applications without access to a single line of source is just one of the things I’ve done for a living. :) (You probably don’t want to know.)

  28. At work recently, I was using XML to specify some metadata about a structure. The third time I found myself declaring that “choices” takes a list of “choice” elements and uses the CDATA of the tag as the value, stripping out whitespace (with various different things for “choices” and “choice”) is when I got pissed off and replaced the whole shebang (and tons more unwritten code) with “decode_json($text)”. In this context, this was the right choice.

    XML is still pretty good at mixed-text markup, IMHO, but there’s a place where you just want data, and JSON is pretty good at that, especially if you use a strict, compliant parser.

    Also, yeah, I accidentally got the order backwards in my first comment, but JSON is not a subset of YAML, and my warning holds true: Use it as such at your own peril. Both formats may look loosey-goosey, but that doesn’t mean they should be treated that way. (But then, being bitten by (or even writing) sub-par parsers seems to be part of every developer’s learning experience. If you think the quality of your parser or your output doesn’t matter… well, I promise you, you’ll learn better someday.)

  29. if you think anybody other than a programmer can edit raw HTML in a text editor or Word, I have to point at your foolishness and laugh.

    i may be a poor example, but i couldn’t program my way out of a wet paper sack, and i hack HTML (lightly) all the time.

  30. if you think anybody other than a programmer can edit raw HTML in a text editor or Word, I have to point at your foolishness and laugh.

    This is demonstrably not true: people (including me, just then) enter HTML even in managed systems like WordPress. I was told by someone who ploughed through the AOL search logs that were released onto the Net that one of the few glimmers of hope lay in just how many thousands of people were learning HTML in order to improve their MySpace pages. And that’s not forgetting the real driving force in propagating the knowledge of how to use and expand the Web: View Source.

  31. I know a guy who’s a networking genius. He can look at a raw hex dump of a stream of IP packets and tell you exactly what is going on. He thinks packet analyzers are for wimps. But we don’t all have ‘M-x butterfly’ working on our Emacs installations just yet.

    Fixed in Emacs 23.

  32. I’m sure HTML was helped in the earlier days by being a text format that any of the techie types who were building websites back then

    Oh, this is one of the moments I actually wish the web of 1995 were still around, just to show how utterly wrong you are.

  33. Jessica, clearer to whom? As I’ve said, there is no real cost in clarity; it’s really just another manifestation of programmers’ unthinking preference for text formats over all else: use a hammer and everything should be a nail. ;) As for binary formats becoming closed or having proprietary extensions, I see no reason that’s more likely than for text formats, where a huge amount of documents over HTTP are still sent in the proprietary binary format of Microsoft Word. As for the trivial point about text being a binary format, clearly it is, it’s just a question of whether we should encode all data with that higher-level character set or something lower-level like binary where possible. While the availability of tools plays into it, I think the real issue is the capability of users, the vast majority of whom scream and run from a text format like HTML also, if it isn’t already decoded by a GUI tool like Dreamweaver.

    morgan, try to stay on the subject. Most people don’t use rails, only programmers do and like we said, they could use a binary converter instead. For the power users that Jessica and I are talking about, GUI tools like Dreamweaver are essential, those users won’t dig any deeper than that. Just cuz TCP is much lower in the stack doesn’t imply that the top of the stack doesn’t matter, particularly when so much traffic goes over HTML. We don’t send raw image or video data either, which both take up a much larger share of traffic, because it would be stupid not to compress that data with a designed format that saves bandwidth. It’s time to do the same with the remaining laggards. The programmer comfort argument is a red herring because you can easily run the binary format through a converter that will output text for you if you’d like: it’s only an issue of the programmer typing a single extra command. Gigabit home LANs are irrelevant when the pipe from the internet is on average 2 Mbps and web pages take 3-5 seconds to load, particularly when they’re now slowing down webapps as a result. Debugging scalable webapps is not the same as debugging HTTP servers, but using a binary converter is not hard and only one in a million will have to.

    Danny, there will always be a small technical minority that wants to learn new tech, better that they use GUI tools like Dreamweaver than have to learn arcane tags and formatting rules like the HTML box model. And an open binary format would be as explorable, using a GUI tool, as an open text format like HTML. Anyway, I think View Source is highly overrated, what matters is openness.

    Mike, if you had an actual argument to make, you’d make it, rather than saying I’m wrong and not being able to state a single reason. ;)

  34. Morgan said:
    >The 1990s called and they want their argument back…. Nobody is writing
    > static HTML anymore.

    So you are arguing that people don’t edit raw HTML anymore, but use higher level tools? So, now you are making the argument that Ajay and I were making?

    >Why do you think ESR is using a WordPress blog as opposed to manually
    > updating his Web site like he used to? Because it’s more efficient.

    Not so for the redoubtable Cathy Raymond, who, in addition to being objectively hot, occasionally swinging a broadsword or two, and training cats to swing from the trapeze, is apparently also a whiz with the HTML. She sounds like she must keep old Eric on his toes. I think I’d like to have a cup of tea with that gal!!

    > Anyway, the counterexamples of TCP/IP packets are simply ludicrous. You’re
    > arguing about something much, much lower in the network stack.

    What difference does that make? If it is useful information, it is useful no matter where it is in the network stack. I’d suggest that IP addresses and port numbers are among the most common things people look at when sniffing a network. Traffic analysis anyone? More importantly, you missed the basic thrust of my argument (no doubt due to my poor writing skills rather than your oft-demonstrated perceptiveness), which was that packet sniffing software often interprets the contents of the packet to give a higher level view. This is very often true of TCP/IP, where the packets can readily be displayed in a nice tabular format rather than raw bytes. Why not also for binary-coded formats such as our imagined binary HTML?

    > ‘Programmer comfort’ directly translates to fewer bugs,…

    But the subject under discussion was the, apparently very competent, but nonetheless non programming, non debugging, Cathy Raymond. So this seems a little OT to me.

  35. I am using JSON extensively now, it really is a Swiss Army knife. I’ve designed two protocols, one for facilitating a distant fork on a single system image cluster, the other for provisioning virtual machines. I wanted to use some sort of known standard, so I forced myself to remain in the confines of what XMLRPC could give me.

    JSON gave me the ability to expand things so they conveyed what I needed, made it simple to transmit entire dictionaries and made it easy for the next person who has to maintain what I wrote. I completely agree with you, given its availability – JSON is a good logical choice for the first building block when designing any new protocol.
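
    For flavor, the shape of a request in that style can be as simple as this (the operation and argument names are invented for illustration, not my actual protocol):

    import json

    request = json.dumps({'op': 'provision', 'vm': {'name': 'web01', 'ram_mb': 512}})
    # ...ship it over the wire; the other end just does:
    params = json.loads(request)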

    It really paid off for me when someone from my office wrote a perl/client module to work with my service independently, not having to ask a single question.

    As others have said, the choice of parser matters quite a bit. So, someone trying to write scripts around my service might have difficulty, but the same would hold true if someone was using C with a buggy C library. The time and frustration saved by using JSON in the first place more than pays for that possible corner case.

  36. As for binary formats becoming closed or having proprietary extensions, I see no reason that’s more likely than for text formats, where a huge amount of documents over HTTP are still sent in the proprietary binary format of Microsoft Word.

    I don’t see how that’s relevant. Web pages written using Microsoft Word are HTML. And Microsoft Word is no longer a binary format, anyhow. It’s XML.

    For the power users that Jessica and I are talking about, GUI tools like Dreamweaver are essential, those users won’t dig any deeper than that.

    And those users aren’t doing professional Web site development. Real professionals doing enterprise-class Web development aren’t using tools like Dreamweaver. Maybe mom and pop shops are, but the guys who are doing these for a living aren’t using Dreamweaver. Trust me.

    Just cuz TCP is much lower in the stack doesn’t imply that the top of the stack doesn’t matter, particularly when so much traffic goes over HTML.

    But it does imply that it doesn’t matter as much. Everything is encapsulated into TCP, IP or UDP. There’s a lot more going on down at that level. HTTP is a tiny fraction of what is transmitted over the Internet.

    Gigabit home LANs are irrelevant when the pipe from the internet is on average 2 Mbps and web pages take 3-5 seconds to load, particularly when they’re now slowing down webapps as a result.

    2 Mbps? Where do you live? Iowa? I’ve got a 20 Mbps pipe over DOCSIS cable. And most of the rest of the world? The Japanese and much of Western Europe have 1 Gbps lines as a bare minimum. The U.S. is way behind thanks to the telcos and their powerful lobbies.

    Debugging scalable webapps is not the same as debugging HTTP servers, but using a binary converter is not hard and only one in a million will have to.

    Every scalable webapp has an HTTP server in the stack somewhere.

    Look, Ajay, you’re not going to convince anyone here that binary formats are anything but a false economy. I’m in 100% agreement with esr because he’s right. You might think binary works, but it’s obvious to me that you’ve never written any significant multiplatform applications. You do understand that different platforms, for example, represent floating point numbers in different ways? Understand when I say platform, I mean OS, processor, programming language, any virtual machines that are involved, etc. Do you understand that 90%+ of what gets transmitted over the Internet is already plaintext? And, finally, do you understand that plaintext is far more compressible than binary data? And that compression algorithms in general are mature, well-established and quite stable, and, as a result, are faster now than they’ve ever been?

    You sound like me when I was 15.

  37. >Not so for the redoubtable Cathy Raymond, who, in addition to being objectively hot, occasionally swinging a broadsword or two, and training cats to swing from the trapeze, is apparently also a whiz with the HTML. She sounds like she must keep old Eric on his toes. I think I’d like to have a cup of tea with that gal!!

    Objectively hot, check. Swinging a broadsword or two, check – though she’s not very good at full-on Florentine-style two-sword yet, she’s practicing and improving. Whiz with the HTML, check (well, by non-programmer standards, anyway). Cats to swing from trapeze, no – you must have her confused with that Russian girl from the Moscow Cats Theater, who is not nearly so hot and is to the best of my knowledge unable to swing a broadsword at all (but you never know).

  38. morgan, the binary DOC format is relevant because I was pointing out how much data is still sent using such a proprietary binary format, though admittedly that is a step removed from her point about how open binary formats might turn closed (which I don’t see happening much and is a governance issue, not a technical issue). Most people don’t use OOXML, they stick with DOC, to the extent where I’ve heard of executives painstakingly implementing processes to keep their beloved DOC around. As for web professionals, my understanding is that the graphic designers who come up with most of the formatting actually use GUI tools, with the programmers sometimes needing to jump in at the HTML level. The fact that even the professionals are mostly using GUI tools only buttresses our point.

    I think you meant to say that HTML is a tiny fraction of internet traffic, as that’s what I referred to, but I’ll address both: HTTP is probably at least 50% of internet traffic by now, because of the great rise in video, while I’d estimate HTML at around 5%, which is still big. The median download rate in the US is 2-3 Mbps and what really matters is throughput, which I guarantee on a per-user basis is much lower than the 20 Mbps you claim to enjoy along with the Europeans and the Japanese. I live in Phoenix, AZ, where I have a 1.2 Mbps/800 kbps connection, which I don’t bother upgrading because it’s all I need. Anyway, my point was that internet uplinks are much slower than your home LAN and more importantly that the bloat of HTML files causes perceptible application latency for the end user. Yes, the US is still way behind those countries in broadband, for the reasons you state and that esr laid out before, but if we don’t have to take their socialist systems also, I’ll take that tradeoff. ;) All scalable webapps have HTTP servers but my point is that debugging at the HTTP level is almost never necessary.

    I may not be able to convince you that open binary formats are necessary, because you’ve already made up your mind, but I think I’ve made a much better case for them than you two have against. If designing significant multiplatform applications is so important, you’d presumably be able to make your trivial point about differing floating point representations have some semblance of a connection to the open binary format discussion, you can’t. As my links show, around 5% of internet traffic is probably plaintext, mostly text embedded in HTML. If you really think it’s 90+%, no wonder you’re so off-base! ;) As for plaintext being far more compressible, surely you realize that’s why I’m suggesting replacing HTML tags with a designed binary format? Compression algorithms are great, but they will never match a designed binary format (check out the png test). If I sound like you at 15, clearly you’ve lost a step since then. ;)


  39. Look, Ajay, you’re not going to convince anyone here that binary formats are anything but a false economy. I’m in 100% agreement with esr because he’s right. You might think binary works, but it’s obvious to me that you’ve never written any significant multiplatform applications. You do understand that different platforms, for example, represent floating point numbers in different ways? Understand when I say platform, I mean OS, processor, programming language, any virtual machines that are involved, etc.

    What’s more, different platforms even use different byte ordering: little-endian and big-endian. Binary network protocols have to have this ordering specified in the protocol, so you usually need marshalling and unmarshalling anyway.
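
    A toy illustration of that marshalling step in Python, packing the same 32-bit value both ways:

    import binascii
    import struct

    n = 0x12345678
    print(binascii.hexlify(struct.pack('>I', n)))  # big-endian:    12345678
    print(binascii.hexlify(struct.pack('<I', n)))  # little-endian: 78563412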

    Besides, there is (fortunately) a trend toward text-based protocols and formats: XML-RPC, SOAP, and XMPP (Jabber). Office application suites such as MS Office are moving from binary formats to (compressed) XML (well, in the OOXML/OXML case, XML with binary inserts), possible thanks to the fact that current computers have enough CPU power and memory. SVG for vector graphics. Even binary formats such as MP3 (ID3), OGG, JPEG (EXIF) and PNG carry their metadata in text form.

  40. @Jessica: to be fair, Cathy Raymond can’t help but pick up a few tech skills from Eric, who is obviously a very skilled hacker. As an example, my wife’s desktop and laptop computers both run Ubuntu Linux, and she knows enough to install packages from Synaptic and even understands enough about SSH tunneling to be able to VNC into her desktop from her laptop and download files from it when she’s at school in Sarasota. She installs her own security updates. And just the other day, postfix crashed on my mail server due to /var filling up, so she deleted stuff in /var/tmp and restarted it. That’s no small feat for someone who, just 4.5 years ago, was running a zombified Windows XP machine. But I’m under no delusion that she would be able to do these things if she hadn’t picked up at least some knowledge from me.

    OTOH, one could argue that being a lawyer (and being married to hacker like Eric), Cathy Raymond is probably well above-average on the intelligence scale to begin with, as is my wife. And as such, it could be argued that either could learn anything they really wanted to. (Geeks are often very discerning about their women!)

  43. Morgan writes:
    > Look, Ajay, you’re not going to convince anyone here that binary
    > formats are anything but a false economy. I’m in 100% agreement
    > with esr because he’s right.

    “Well sonny, you think you young uns are so smart wit yer new fancy talk and high falutin’ ideas. You need to respect yer elders, and pull up yer pants, and get a haircut.”

    > You might think binary works, but it’s obvious to me that you’ve
    > never written any significant multiplatform applications.

    “Ye think yer so mart, but yer just wet behind the ears. And turn down that music, its too loud, and go clean your room too.”

    > You do understand that different platforms, for example, represent
    > floating point numbers in different ways?

    How commonly are FP numbers transmitted? Ever hear of IEEE 754 standardized representations? Are you aware that binary format specifications define the representation of such things?

    > Understand when I say platform, I mean OS, processor, programming language,
    > any virtual machines that are involved, etc.

    Once again, there are many binary formats used on the Internet, including TCP/IP itself. Ever hear of NFS, SMB, VoIP, streaming video, BitTorrent, SSH and so forth? There are challenges, for example endian-ness in IP headers, but the specification simply defines the format, and the problem is dealt with. Apparently, in respect to TCP/IP, binary formats work perfectly well. I am sure you are aware of RFC 2507, “IP Header Compression”, which specifies a way to shave a few bytes off the binary format of IP, TCP and UDP headers using various custom compression schemes on the already binary formats of TCP/IP and UDP/IP. I’m not a network person, but AFAIK these are quite commonly deployed. So apparently, the writers of this RFC think that saving a few bytes with added complexity and reduced transparency is worthwhile.

    > Do you understand that 90%+ of what gets transmitted over the Internet
    > is already plaintext?

    “That is the way it has always been, you young whippersnapper. Don’t bring me any of your new fangled ideas.”

    > And, finally, do you understand that plaintext is far more compressible than binary data?

    And that is relevant why? You do understand that the reason why plain text is more compressible is because, being longer and more redundant, it has much lower entropy per byte? You do understand that a well-written uncompressed binary format will almost always be shorter than the equivalent plain text compressed with the very best generalized compression algorithms?

    I don’t mean to be disrespectful to you Morgan, I have often found your comments and discussions very interesting and perceptive. But your arguments here against Ajay are not at all convincing. My gut tends to agree with you, but, frankly, I haven’t really seen a compelling argument against what Ajay suggests here.

  44. >How commonly are FP numbers transmitted? Ever hear of IEEE 754 standardized representations? Are you aware that binary format specifications define the representation of such things?

    I can tell you this: in my GPS- and AIS-related work, I run across a lot of binary protocols with slots for transmission of float numbers – AIS packets, vendor binary reporting protocols like SiRF’s for GPSes, the truly bizarre bitstream protocol shipped by GPS satellites and RTCM2 stations, etc. I’m a domain expert on these protocols and I tell you that not a one of them uses IEEE754 – not even the really recent ones like RTCM3 that are tolerably well designed in other respects. These fields are invariably shipped as integers with a divisor you’re supposed to apply on the client side.
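
    To make that concrete, the client-side decode of such a field looks roughly like this (the layout and scale factor are invented for illustration, not taken from any particular protocol):

    import struct

    # Hypothetical four-byte field: latitude shipped as a big-endian signed
    # 32-bit integer in units of 1e-7 degrees.
    raw = b'\x16\x1e\x6f\x80'
    (scaled,) = struct.unpack('>i', raw)
    print(scaled / 1e7)   # the client applies the divisor itself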

    I can also tell you this: endianness remains a significant pain in the ass with these protocols. Most of them have plumped for byte-oriented twos-complement big-endian, but there are exceptions. Like Zodiac chipsets, which are word-oriented twos-complement little-endian. Hell, we’re lucky everybody uses twos-complement! Then there’s the really weird stuff out there, like the packed sixbit representation for ASCII used in AIS message fields. And Unicode? What’s Unicode?

    In theory, these binary protocols could have been designed with best practices for the domain: IEEE754 floats, uniform big-endian integral representations, UTF-8 for strings. In practice, it never happens. Anybody with enough of a clue about the software side to do all these things right knows better than to specify a binary applications protocol rather than a textual one in the first place, so they don’t. The binary applications protocols we actually get are the ones that wind up being written by hapless EEs and helplessly-adrift hardware designers.

    Consequently, they suck. Really badly. That suckage is much of the reason gpsd exists.

  45. @Jessica: Most of the binary protocols you highlight are binary by necessity. NFS and SMB are binary because NFS and SMB have always been binary and need to continue to be binary for backwards compatibility. Performance is a bit more critical in enterprise-class file systems, but then, compare WebDAV. With VoIP and streaming video we’re talking about a LOT of data going back and forth, as in the case of NFS and SMB. GPSD, which is what we’re talking about here, doesn’t transmit that much data. It just doesn’t.

    IP header compression was very necessary back in the days we were connecting with dialup modems, but these days at least most of us are running fat pipes with 24/7 connectivity. How many cablemodem users are running with IP header compression turned on? I’ll bet almost none.

    And that is relevant why? You do understand that the reason why plain text is more compressible is because, being longer and more redundant, it has much lower entropy per byte? You do understand that a well-written uncompressed binary format will almost always be shorter than the equivalent plain text compressed with the very best generalized compression algorithms?

    But is it enough to make a difference? All that matters is end-user experience. Why does Windows have the biggest market share in the desktop space? Because I can assure you that it has nothing to do with resource efficiency or technical excellence. End user experience is what matters. A difference that makes no difference is no difference.

  46. @Jessica: BTW, you aren’t the first person to accuse me of being a grumpy old man, young lady.

  47. ESR says:
    > In theory, these binary protocols could have been designed with
    > best practices for the domain:

    So your argument is that because some binary protocols are badly designed, by necessity all binary protocols are inferior? What is the intrinsic property of being binary that makes them necessarily bad?

    Morgan says:
    >But is it enough to make a difference?

    It sure does if you run a web farm. If all you run is web traffic, you could easily reduce your bandwidth requirement by 50% with a binary representation of HTML. That would make a big difference, yes.

    Further, with a binary protocol we would not have to endure the massive mess that HTML is today. The amount of garbage HTML, and the consequent slop factor built into every HTML rendering engine, makes the job of web designers much more difficult. Imagine if you could simply test on FF and be confident that it would work on all the other browsers. How much programmer energy would that save? The sloppiness of HTML is in part due to the nature of HTML as a text format. NFS and SSH implementations don’t have this problem. (Though, evidently GPS systems do.)

  48. >The amount of garbage HTML, and the consequent slop factor built into every HTML rendering engine, makes the job of web designers much more difficult. Imagine if you could simply test on FF and be confident that it would work on all the other browsers. How much programmer energy would that save? The sloppiness of HTML is in part due to the nature of HTML as a text format.

    Though my historical knowledge (heh) of that period is rather shaky, I believe a lot of the garbage HTML is because different (proprietary) vendors kept adding new features to try to increase their market share. Do you really think a binary format would have prevented that?

    >NFS and SSH implementations don’t have this problem. (Though, evidently GPS systems do.)

    Aren’t the major implementations of NFS and SSH open source? (As opposed to GPS or HTML.) Would that make a difference, maybe?

  49. >So your argument is that because some binary protocols are badly designed, by necessity all binary protocols are inferior? What is the intrinsic property of being binary that makes them necessarily bad?

    No. I’m saying that the systems architects who (like, say, me) are capable of consistently applying microlevel best practices like IEEE754 to binary applications protocols also know they’re a stupid idea for higher-level reasons involving development friction costs and long-term maintainability. Thus, we don’t do it, and the binary application protocols the world actually gets are designed by the clueless.

    The “intrinsic property of being binary that makes them necessarily bad” is that they can’t be read with a Mark One Eyeball and viewed or generated with general-purpose editing tools. This is such a serious disadvantage for long-term maintainability of the code surrounding them that binary protocols can really only be justified in two circumstances: (1) low levels of the network stack, where maximum efficiency is super-important and you don’t yet have any notion of textuality to build on anyway, and (2) protocols that are trivial wrappers around large binary blobs like images, so trying to come up with textual designs won’t gain you significant readability.

  50. >Also, the site seems to be down.

    If you mean the GPSD site…annoyingly, yes. It went down about two days ago. The berlios.de project-hosting site has a bad tendency to get flaky every August when the European vacation month starts. I think we’ve been hit again this time.

    UPDATE: Came back up early on 3 Aug.

  51. Ajay are you freakin crazy? I first learned HTML in SCHOOL! Do you think they would have taught an open binary format to school kids? HTML is the best way to prep non programmers to learn full power programming languages. If it had been an open binary format nobody BUT seriously good coders would even bother with it.

  52. Imagine if you could simply test on FF and be confident that it would work on all the other browsers. How much programmer energy would that save? The sloppiness of HTML is in part due to the nature of HTML as a text format. NFS and SSH implementations don’t have this problem.

    You’ve never sysadmin’d a mixed-platform Unix environment then, I take it. There are sufficient differences between the NFS implementations from HP, Sun, IBM, SGI, DEC, Hummingbird NFS (on Windows), SCO, BSD, Linux kernel, etc., to cause major headaches, particularly in regard to performance and stability. Some of these differences are more pronounced when implemented with NIS or LDAP and automount. Some standardization is (supposedly) occurring with NFSv3 and NFSv4, but a good sysadmin will know how to work around the differences in these filesystem stacks. In particular, this background on the history of NFS highlights exactly the same problems that ESR keeps talking about in regards to binary protocols, such as endian issues.

    Also, SSH implementations did have this problem at one point. See this history of SSH, which describes the split between the open source version and the proprietary versions developed by SSH Communications.

  53. Yet another thing to think of in binary vs text protocols (and formats): in most cases maintainability is more important than optimization.

  54. >Yet another thing to think of in binary vs text protocols (and formats): in most cases maintainability is more important than optimization.

    In my judgment this is not just “yet another point”; it’s the pivot of the whole argument.

  55. Ajay:

    First of all, a lot of HTML is in fact text anyway – most of an average page is text or Javascript or graphics files or the like, not markup – so if you care about compression you need to do it at another layer anyway (and we’re back to mod_gzip), and the point of using an unreadable markup format is lost. Markup isn’t much of the HTML by volume.

    Second, what you probably want to convert to binary is HTTP, not HTML; HTTP overhead is as much as HTML in most cases. That would still be wrong, but less obviously silly.

    All scalable webapps have HTTP servers but my point is that debugging at the HTTP level is almost never necessary.

    Actually, it’s necessary all the goddamn time, which is why I have a job. Most of the time the insanity is lost in the noise of massive capacity overkill, but you can find real gems out there in most applications. 5 KB cookies attached to dozens of 20-byte images! 80 different javascript subroutines added by 80 different reference inclusions, thus triggering 80 new connections (due to strange proxy configuration) over SSL, thus an overloaded host CPU! The fun, it never ends… and in all those applications, I can’t think of a case where “bandwidth used by markup” was nontrivial.

  56. Anyway, my point was that internet uplinks are much slower than your home LAN and more importantly that the bloat of HTML files causes perceptible application latency for the end user.

    And, no, this is just wrong. HTML markup is not producing noticeable latency on a 2 Mbps download pipe. Sure, video may be slow, but turning HTML into sendmail.cf doesn’t address that. If you’re actually talking about HTML, most often the delay is due to network latency and/or threading issues and/or server delay, not bandwidth.

  57. Back-of-the-envelope analysis:

    Currently, this page is 90 KB or so of HTML, of which about 50 KB is human-addressed text (not counting included graphics, etc; just the main text). Offhand it looks like about half the actual markup is text (mostly hyperlink addresses), but let’s assume none of it is and you achieve 100% compression on the HTML markup. That cuts 40 KB, which on a 2 Mbps download link saves us… almost 0.16 seconds. And remember, gzip will give you >80% on both the markup and text, for a whopping 0.3 second return.

    In short, it looks irrelevant unless you’re hosting a major site or your users are on dialup – which is why almost nobody can be bothered to configure for mod gzip, which would work better in almost every case anyway.
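    (ed note: for readers who want to redo the arithmetic with their own page size and link speed, here is a minimal sketch in Python; the 90 KB page, the 40 KB of markup, the ~80% gzip ratio, and the 2 Mbps link are simply the figures from the comment above.)

    # Back-of-the-envelope: seconds saved on the wire by shrinking the payload.
    def seconds_saved(bytes_removed, link_mbps):
        return (bytes_removed * 8) / (link_mbps * 1e6)

    print(seconds_saved(40 * 1024, 2))   # drop all 40 KB of markup: ~0.16 s
    print(seconds_saved(72 * 1024, 2))   # gzip the whole 90 KB at ~80%: ~0.3 s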

  58. And remember, gzip will give you >80% on both the markup and text, for a whopping 0.3 second return.

    Also bear in mind that that 0.3-second saving cost you something in CPU cycles. So unless you’re running a big server farm with CPU cycles to spare you’ve gained close to nothing in terms of scalability or robustness.

    Both CPU cycles and bandwidth are plenty cheap. So why bother?

  59. Alright, I think I’ll just avoid the long comment queue and split this one up into three parts. First part –

    esr, judging binary protocols by GPS/AIS protocols is not much of a sample. I don’t think the fact that different protocols use different endianness qualifies as much of a pain in the ass, it’s a trivial fix. I actually don’t mind experimentation in formats, such as not using unicode; there’s no reason to acquiesce to the “standards” all the time, there are sometimes compelling reasons not to. Yes, I suppose it’s true that all the software engineers have been brainwashed into thinking that text application protocols are always better, thank god we have EEs like me to set them straight! ;) As Jessica noted, pointing out bad implementations of binary protocols doesn’t damn the concept: if that were enough, xml would clearly damn all text formats by itself! :) Simply repeating the claim that text formats are more maintainable without ever backing that up isn’t much of an argument. There’s no difference between running XSLT over XML or a binary converter over a binary format, I see no maintainability advantage.

    Morgan, WebDav just looks like another example of hammer-meet-nail, where HTTP/XML are stupidly repurposed just because they’re so popular. We’re not talking about gpsd here. I simply brought up the ancillary topic of open binary formats and noted early on that efficiency might not be a concern on GPS devices but it certainly is elsewhere. Yes, HTML makes a difference to the end user, as I’ll go into more later. Using Windows as an example is such bullshit, unless you really want bad tech to proliferate because of a vendor monopoly. As for your CPU cycles argument, yes, most HTML is served from big server farms :) and my back of the envelope calculations show that the CPU cycles for compression cost 1-2 orders of magnitude less than the bandwidth savings they provide. At the scale of the internet, where HTML is approximately 5% of traffic, compression is clearly cost-effective, trading cheap CPU cycles for relatively much more expensive bandwidth.

  60. Second part –

    Tom, you’re right that binary formats don’t stop vendors from adding their own extensions to differentiate themselves, but I think Jessica is right that text lets in a lot more slop that has to be accounted for by web browsers, that wouldn’t be there for a binary format.

    Russ, somebody really needs to teach you programmers how to argue: this isn’t math or a black swan situation, where the existence of a single black swan proves the all white swan theory wrong, though clearly you programmers are prone to such “searching for your keys under the light” arguments. ;) I could equally well come up with some badly designed image format that simply changes the first bit in the image and claim that designed formats don’t work, that’s just idiotic. The point is that someone who knows what they’re doing can always design a binary format that packs specific kinds of data better than general-purpose compression and it’s often worth the effort to do so.

    FLMKane, I’m sorry that your school was dumb enough to think learning HTML was worthwhile. The whole point about a binary format is that learning formats is worthless, you just need to learn the GUI tools, just as most kids who bother with the web now learn Dreamweaver, not HTML. If you’re so ignorant that you think a markup language like HTML is a good prep for programming, I dunno what to tell you.

  61. Third part –

    Mike, yes, if an open binary format enforced text compression alone, that alone would be a big win. I see no reason to do it at another layer and leave it up to the whims of server admins. As for the markup, as you point out with this page, it’s still a significant percentage that can mostly be eliminated by a binary format. I am unaware of the overhead in HTTP, so I was in fact talking about HTML, but there’s no reason HTTP can’t be replaced by a binary protocol with text compression even more easily. ;) As for debugging at the HTTP level, strictly speaking you’re only referring to the headers and filesizes, not the HTTP stream, but then HTTP is such a trivial protocol. All the cookie and connection examples you cite can equally well be handled by a browser tool that shows you all that info, the way the Selenium IDE is used for other purposes in Firefox; there’s no real reason to get it from the raw HTTP stream. If you can’t think of a case where bandwidth used by markup was nontrivial, it’s clear to me you weren’t looking for it. ;) Yes, I’m talking about latency from HTML bloat, which has always been my precise claim. If you actually think this page loads in the theoretical .36 seconds of your calculation, you clearly haven’t used the web much. ;) It usually takes at least 2-3 seconds for me to load the full page, probably because the download takes a while for the bitrate to ramp up, as you’re no doubt aware. If you’re downloading a large file like a video file, the rampup can be a small portion of the overall download time, but unfortunately HTML files seem to hit the ramp in a bad spot, right when it’s getting good. Cutting this page down to 20 KB through a binary format and enforced text compression could mean the difference between a half-second load time and a 2-3 second load time, which is huge in application UI latency, as you should know. I suspect mod_gzip is mostly unused because of ignorance, nothing else, but the point is it is mostly unused and that wouldn’t be an option with an open binary format.

  62. >Simply repeating the claim that text formats are more maintainable without ever backing that up isn’t much of an argument. There’s no difference between running XSLT over XML or a binary converter over a binary format, I see no maintainability advantage.

    The advantage is that you can use the same tools (emacs/vi, nc, etc.) for everything, without needing any special-purpose software for each protocol. Special-purpose software can disappear, but there will always be text-manipulating tools available. Also, if you’re using special-purpose software to read/edit an opaque format, you don’t know precisely what it’s doing; you don’t have complete control of your output, which makes your software vulnerable to bugs that were, in fact, introduced by the editing software you’re using.

  63. Tom Dickson-Hunt says:
    >The advantage is that you can use the same tools (emacs/vi, nc, etc.)
    > for everything, without needing any special-purpose software for each protocol.

    This is a circular argument. Text protocol is ubiquitous because text processing tools are ubiquitous. And text processing tools are ubiquitous because (amongst other reasons) text protocols are ubiquitous. It is perfectly conceivable to define a simple meta grammar for binary protocols, and a specific grammar for instances of those binary protocols, then have binary protocol editor tools (perhaps called emacs) that could edit them based on that grammar. Network protocol analyzers do this all the time. If someone had written such an RFC with a number less than 1000, that might be our world today. In fact, HTML has just such a grammar, which is, unsurprisingly, a text protocol.

    I guess though, Darwin dictated otherwise. Just as evolution gets stuck in a hole sometimes, so too, I think, did Jon Postel get stuck here. But what do I know; according to Morgan, I am just a young whippersnapper anyway.
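    (ed note: a minimal sketch, in Python, of the grammar-driven approach Jessica describes; the record layout and field names are invented purely for illustration, and a real editor would also need the reverse packing step.)

    import struct

    # A made-up "grammar" for one record of a hypothetical binary protocol:
    # (field name, struct format code).
    GRAMMAR = [
        ("version",  "B"),    # unsigned 8-bit
        ("msg_type", "B"),
        ("lat_e7",   ">i"),   # signed 32-bit, big-endian, degrees * 1e7
        ("lon_e7",   ">i"),
        ("speed_cm", ">H"),   # unsigned 16-bit, cm/s
    ]

    def decode(blob):
        # Walk the grammar and pull typed fields out of the byte string,
        # the way a protocol analyzer's dissector does.
        fields, offset = {}, 0
        for name, fmt in GRAMMAR:
            (fields[name],) = struct.unpack_from(fmt, blob, offset)
            offset += struct.calcsize(fmt)
        return fields

    sample = struct.pack(">BBiiH", 1, 2, 407127000, -740059000, 1500)
    print(decode(sample))

    A generic editor built on this idea would present the decoded fields for editing and re-pack them with struct.pack against the same grammar.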

  64. >This is a circular argument. Text protocol is ubiquitous because text processing tools are ubiquitous. And text processing tools are ubiquitous because (amongst other reasons) text protocols are ubiquitous. It is perfectly conceivable to define a simple meta grammar for binary protocols, and a specific grammar for instances of those binary protocols, then have binary protocol editor tools (perhaps called emacs) that could edit them based on that grammar. Network protocol analyzers do this all the time. If someone had written such an RFC with a number less than 1000, that might be our world today. In fact, HTML has just such a grammar, which is, unsurprisingly, a text protocol.

    Perhaps that’s true. Text, though, at the beginning had the advantage that there were already strong tools for working with it, because it’s what program source code is written in, and more generally it’s the usual medium of human exchange on computers. It’s easier to teach computers to read text than to teach people to read binary–the computer doesn’t really care what it reads–and optimizing for computer convenience over human convenience makes no sense in the age of multiple-gigahertz computers on every desktop. Also, especially for HTML–Web pages are almost all text anyway, content-wise, so wrapping that in opaque binary markup makes no sense, just as writing a text format for storing images/music makes no sense.

  65. Ajay, somebody needs to teach your kind how to argue. When you say “all”, you really *do* mean “all”. Or at least I do. Now, in the real world, when people set out to design binary protocols, they *don’t* always get it right. You can’t claim that they do, or that they will, because the evidence is such that they don’t. A little more logic, please, and a little less faith.

  66. Ajay says:

    FLMKane, I’m sorry that your school was dumb enough to think learning HTML was worthwhile. The whole point about a binary format is that learning formats is worthless, you just need to learn the GUI tools, just as most kids who bother with the web now learn Dreamweaver, not HTML. If you’re so ignorant that you think a markup language like HTML is a good prep for programming, I dunno what to tell you.

    FLMKane says:

    What are you gonna say next? The Linux kernel should be rewritten in assembly? Emacs should be implemented in machine code? Python should have been too?

    First of all, if you think HTML should have been a binary format, you’re not in any position to be calling anyone else dumb.

    Secondly, if HTML was a binary format, the Web as we know it would never have existed.

    Third, most kids who learn Dreamweaver without knowing HTML DO NOT WANT TO KNOW what’s actually going on. But there are many who do. If HTML were a binary format they would have absolutely no idea what the hell is going on, UNLESS they were great hackers in the first place.

    Fourth, the guys who coded Dreamweaver in the first place definitely had an easier time because they did not have to bother with a binary format.

    As for HTML being a good prep for programming…ESR has a great HOWTO on how to become a hacker. That should explain that point well enough if you read the FAQ

  67. BTW I live in a place with horrible Internet speed(10-20 kbps). If I never experience any TRULY irksome speed problems except with Youtube.

  68. Sorry I made a typo.

    BTW I live in a place with horrible Internet speed(10-20 kbps). I never experience any TRULY irksome speed problems except with Youtube.

  69. FLMKane:

    What are you gonna say next? The Linux kernel should be rewritten in assembly? Emacs should be implemented in machine code? Python should have been too?

    In the early days of the Linux kernel, a large portion of it was written in x86 assembler! Linus and his merry band of kernel hackers saw fit to rewrite virtually all of it in C so that it could be ported to other architectures. The result is that the kernel now runs on everything from wireless routers to IBM z-Series mainframes.

    Secondly, if HTML were a binary format, the Web as we know it would never have existed.

    (ed note: a few grammatical fixes)

    Absolutely correct. In fact, plenty of binary (and plaintext) hypertext formats existed before Sir Tim Berners-Lee came along with HTML. In fact, I remember thinking that Web browsers were kind of like Emacs’ ‘M-x info’ or Hypercard rather than the other way around. Why did WWW win?

    As for HTML being a good prep for programming…ESR has a great HOWTO on how to become a hacker. That should explain that point well enough if you read the FAQ

    I don’t think HTML is good prep for programming, but then again, I learned to program in Pascal first, and then C, before HTML was a glimmer in Tim Berners-Lee’s eye. It’s a markup language. Markup is a completely different concept than programming.

    IMHO, if you want to learn to code, start with Python. You could do much, much worse.

    Russell Nelson:

    Now, in the real world, when people set out to design binary protocols, they *don’t* always get it right. You can’t claim that they do, or that they will, because the evidence is such that they don’t.

    Absolutely right. Everyone arguing for binary protocols here is probably too young to remember the mess of protocol incompatibilities in the 1980s. Does anyone else here remember the numerous, incompatible implementations of Xmodem, Ymodem and Zmodem?

    Ajay:

    I suspect mod_gzip is mostly unused because of ignorance, nothing else, but the point is it is mostly unused and that wouldn’t be an option with an open binary format.

    Sure. Because you clearly know more than those ignorant professional webmasters and systems administrators out there. And gosh, darn, all us old programmers are just lazy and stupid, what with our plaintext protocols and refusal to use special-purpose tools.

  70. First, gpsd.berlios.de must be badly configured: it shows me an internal error “while trying to use an ErrorDocument to handle the request”.

    Second, I’m intrigued about how any engineer could design an equivalent of HTML as a binary format. I saw someone referring to Bittorrent as a binary protocol; well, I guess you haven’t ever programmed a .torrent decoder. It is a text protocol. It uses HTTP for file transfers and plain text for torrents, and I think it could not have been developed otherwise.

    How would you implement a binary format equivalent to HTML? Let’s see, you could identify each tag using one-byte codes, and then some bytes to indicate size. But how many bytes? One is not enough. Two maybe, even three or four, but then you would only save one byte or two for each tag (with b, i, em tags, this saving is negligible). And this format would not scale. But, OK, let’s say you don’t care about this.

    Then we go to the tag attributes, which could be in the same format. But where do you put them? After the tag? Then you could misinterpret them as text, because not everyone uses ASCII. Then again, you could use one or two bytes more to indicate how many attributes there are. So you actually waste bytes, and time writing compilers.

    But let’s assume you do it for the laughs (if you are seriously thinking about doing it at this point, send your brain to the factory and have it replaced). How would you put tags between text? Checking if the next character belongs to the encoding used? You can’t, if you are using Unicode. Or CJK. Or even ASCII, although you could restrict your format more and only allow 128 tags (using the most significant bit to indicate whether it is a tag or a character). When Microsoft introduces extensions, you get collisions between tag identifiers. And all that stuff just to save, how much, 10%? Can’t you use LZMA, which is very good at compressing text? Or even GZip? Damn, I can’t believe what I’m hearing. Have any of you ever designed a binary format?
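    (ed note: to make the trade-off Miguel describes concrete, here is a minimal sketch in Python of the kind of length-prefixed encoding being debated; the tag table and framing are invented for illustration, and length-prefixing the text runs is one way around the “is the next byte a tag or a character?” problem he raises.)

    import struct

    TAG_CODES = {"p": 0x01, "b": 0x02, "a": 0x03}   # hypothetical one-byte tag codes
    TEXT = 0x00                                      # marker for a run of character data

    def emit_text(s):
        data = s.encode("utf-8")
        return struct.pack(">BH", TEXT, len(data)) + data

    def emit_tag(name, children):
        body = b"".join(children)
        # one byte of tag code, two bytes of big-endian length, then the body
        return struct.pack(">BH", TAG_CODES[name], len(body)) + body

    # <p>hello <b>world</b></p>
    record = emit_tag("p", [emit_text("hello "), emit_tag("b", [emit_text("world")])])
    print(len(record), "bytes, versus", len("<p>hello <b>world</b></p>"), "bytes of markup")

    On this fragment the binary form comes to 23 bytes against 25 bytes of markup, which is exactly Miguel’s point that the saving on short tags is negligible.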

  71. Have any of you ever designed a binary format?

    Guilty as charged. It was a record format for keeping track of the last N callers to a T.A.G. BBS, similar to what last(1) does on Unix; it was a bit like /var/log/wtmp.

  72. Miguel says:
    >Have any of you ever designed a binary format?

    If you really can’t imagine how to encode HTML in a more structured format then I suggest you read more code than you obviously do.

    Just FWIW, I notice a couple of comments concerned with the fact that binary protocols don’t get the design right first time, as an argument against them. And that doesn’t apply to text formats exactly why? It is quite an ironic comment given the context, since HTML is currently at, what? Version 5. HTML parsers are filled with heuristics to deal with the variability of semi-valid uses of the protocol. Different browsers implement the protocol differently, and most don’t even implement all of the protocol given in the standard. For sure the original designers were smart to use a flexible format that allowed for the evolution of the protocol, but anyone designing a good binary format includes that level of flexibility in there too.

    Frankly, I tend to agree that in many cases a text readable format does offer certain advantages, but I find some commenters’ knee jerk dismissal of Ajay’s perfectly reasonable argument rather disturbing.

  73. Frankly, I tend to agree that in many cases a text readable format does offer certain advantages, but I find some commenters’ knee jerk dismissal of Ajay’s perfectly reasonable argument rather disturbing.

    The best arguments against binary formats and for textual formats have already been laid out here and in Chapter 5 of The Art of Unix Programming.

    If you haven’t read it, Jessica and Ajay, then I suggest doing so.

  74. Ajay:

    It usually takes at least 2-3 seconds for me to load the full page, probably because the download takes a while for the bitrate to ramp up, as you’re no doubt aware.

    Eh, I think the ramp-up is sudden (I mean, we’re talking about a couple dozen packets here!), it’s just that it takes a long time to actually *start*. You have to consider

    a) Server delay (retrieving data, spawning CGI processes, blah blah).
    b) Rendering time on client
    c) Network latency delay.

    C is easy to underestimate. Hard to be sure from work (I’d have to try at home) because I’m behind a proxy and don’t want to go through the shenanigans to trace from there, but remember that starting an HTTP download takes something like 5 one-way transmissions – that can easily take longer than the actual download. (Add SSL and you get some CPU time and another couple of round-trips). Even if you’re using persistent HTTP connections it’s a minimum of 2X the one-way latency to download anything, which is often longer than the actual bytes would take to stick into the pipe. It looks like A & C are the dominant factors here. And contrary to most people’s intuition, latency is almost always more important than bandwidth for interactive applications.

    As a side note, thanks to readable formats I can wonder WTF my browser is asking for “http://esr.ibiblio.org/MSOffice/cltreq.asp?UL=1&ACT=4&BUILD=6551&STRMVER=4&CAPREQ=0” ? Bizarre; that file doesn’t exist… wonder if that’s some kind of strange IEism?

  75. Tom, since when is using the same text formatting tools for everything a law of nature? Besides, I’m not really arguing against text-formatting tools, only that we shouldn’t send the text over the wire also. I don’t think you and others appreciate this, so let me give you an example. Suppose I came up with a binary HTML format that replaced all HTML tags with binary and that compressed all remaining textual data using standard compression. In order to read such a format on the command-line, all you’d have to do is type “catb foo.bhtml|more”, rather than “more foo.html”. Both commands would produce the exact same output, but the former would save lots of bandwidth when the file is sent over the network. For you to keep saying this is a big deal for programmers to do, when billions of dollars are wasted every year on unnecessary bandwidth, is just lazy and self-indulgent. I understand not wanting to add another layer of software and libraries into the stack, but if you’re doing so with most complex text formats anyway, such as with HTML or XML, there is no problem with doing it in binary instead. As for software bugs, those are equally likely from an XML editor as from a binary editor; as long as the format’s open, programmers can compete to produce better tools for you to use. We’re not optimizing computer convenience over human convenience; we’ve noticed that the cost of bandwidth and laggy interfaces is far more than the minimal cost of making programmers type an extra command in, and are making that beneficial tradeoff. HTML pages are not mostly non-tag text; please read the above comments, as you seem not to have noticed that your arguments were already addressed.
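    (ed note: the “catb” tool above is hypothetical, but the idea is easy to picture; here is a minimal sketch in Python that assumes, purely for illustration, a .bhtml file that is nothing more than zlib-compressed markup. A format that also replaced tags with binary codes would additionally need a tag table, like the toy one sketched earlier, to map codes back to names.)

    #!/usr/bin/env python
    # Hypothetical "catb": expand a made-up .bhtml file to readable markup on stdout,
    # so it can be piped through more exactly as described: catb foo.bhtml | more
    import sys, zlib

    with open(sys.argv[1], "rb") as f:
        sys.stdout.write(zlib.decompress(f.read()).decode("utf-8"))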

    Russ, perhaps you need to learn to read first, I didn’t use the word “all,” I said “designed,” meaning somebody with a brain designed it. A bad protocol design will die and good ones will proliferate, that is why HTML will die. Funny how you create a strawman argument that I said “all binary formats are better” and then proceed to argue with that, perfectly demonstrating the “searching for your keys under the light” argumentation I mentioned. :)

  76. Jakub, your Google link has no data on web-authoring tools but qualitatively suggests that GUI editors like GoLive and Word are very popular, so if your intent was to support my assertion that GUI editors are widely used, thanks. ;)

    FLMKane, Linux will not survive because it uses the GPL and cannot be relicensed according to my hybrid license, Emacs has never had much usage compared to vi, and Python pales compared to ruby usage, care to name another technology loser? I’m not arguing they should have been built in assembly either, that’s like saying you should write files by hand in binary 1’s and 0’s. I’m saying they should have been written in C, while you all are saying the equivalent to “they should have been written in a VM language like Java,” just as Jython is Python implemented on the JVM. Funny that you think HTML so obviously should’ve been text considering you also think it was a good introduction to programming. :D There is NO POINT in KNOWING WHAT’S GOING ON at the format level, just use a GUI tool. I don’t much care how hard it is to code a GUI tool, but you’re seriously deluded if you think the slop in HTML is easier to handle than most binary formats. If your internet is truly so slow (56k modem?) and you don’t think that’s a problem, clearly this entire discussion is lost on you.

    Morgan, care to name some of that former plenitude of binary hypertext formats? I’m curious what came before HTML and whether they were open. There are a multitude of reasons why any particular tech like MS Windows wins: the worst thing you can do is assume it won because it’s “perfect” and then assume all its attributes are golden, cargo-cult style. If ignorance isn’t the reason most webmasters/sysadmins don’t compress their text, perhaps you’d like to posit another reason. When I ran a site serving 5-10 TBs/month of mp3s, I compressed the HTML files even though they were a negligible fraction of traffic, because I dislike waste and because it was as simple as adding a line or two to the lighttpd configuration file. As for you old programmers, I think you exhibit the classic tendencies of old people: an unwillingness to admit that times have changed and that the new environment, such as the massive traffic of an inefficient format like HTML, necessitates different solutions, and a snide dismissal of tools that you didn’t grow up using. Thanks for the links to esr’s other writing- I hadn’t read those before- but they only cemented my notion that the main reason for text protocols is the UNIX hackers’ unerring love for the command-line and text editors. I actually think it made sense in earlier times of experimentation, when the UNIX designers were first figuring out how to structure a computer OS working off of text terminals or when Berners-Lee was first prototyping HTML. However, that should have been tightened up into a binary format by HTML 2 or 3, definitely by HTML 4. Just as Windows NT-based OS’s now use a binary hive format for its registry, as opposed to the text configuration files mentioned in esr’s book, the trend is towards binary for efficiency reasons.

  77. Miguel, I’ll punt on your request to discuss exactly what the structure of a binary HTML would look like, I’m uninterested in going into the technical details right now. Suffice to say your mooted implementation is simplistic and anyone who knows what they’re doing could do much better than that, as Jessica suggests. Better to “waste” $100k writing a compiler than hundreds of millions of dollars on wasted bandwidth. Not only am I seriously thinking of doing it myself, I’ve previously written up some posts on how to replace HTML with a better thin client format, that is binary as a fundamental feature. I’ve never designed a binary format, but if you’re really so ignorant that you don’t know that a designed binary format will usually beat the pants off of general-purpose compression, you really need to read more.

    Jessica, it’s interesting that you mention the idea of a binary format language because I had thought the same would arise, I guess great minds think alike. ;) I found one a little while back, EBML, that is used by the popular Matroska container format, but I’m skeptical about how they ape XML with their hierarchical structure. Nevertheless, I think more general binary format languages will help standardize binary formats, by creating a metagrammar similar to what you propose, mitigating some of the bad design decisions that esr suggests by providing better defaults. Given that you’re the commenter here who I most respect, I’d be curious to know what you think of my other ideas, which you can post on those blogs if it’s off-topic here.

    Mike, the precise reasons for the ramp don’t matter as much as that the ramp exists and that it would be mitigated by the smaller filesizes of a binary format. The alternate reasons you suggest wouldn’t cause the average 4 second load time for this webpage that I see, so bandwidth is always an issue.

  78. >Linux will not survive because it uses the GPL and cannot be relicensed according to my hybrid license

    I’m not even going to comment.

    >Emacs has never had much usage compared to vi, and Python pales compared to ruby usage

    Care to cite some sources?

    Looking up ESR’s language-use stats in The Art of Unix Programming, Python is a respectable language, while Ruby isn’t listed. Granted these are seven years old; does anyone have any more recent data?

  79. >A bad protocol design will die and good ones will proliferate, that is why HTML will die.

    You’d think there would have been plenty of time for that already; why hasn’t it happened yet?

  80. Linux will not survive because it uses the GPL and cannot be relicensed according to my hybrid license, Emacs has never had much usage compared to vi, and Python pales compared to ruby usage, care to name another technology loser?

    oh, i can’t WAIT to see where this leads.

  81. > There is NO POINT in KNOWING WHAT’S GOING ON

    Hm… I always thought that there’s no point in NOT knowing what’s going on. There IS a point in knowing the format level of the web and having it as accessible as possible.

    As an example, I can ask questions and answer them immediately without any complex special-purpose tools.
    Like, say, if I want to know how many comments you’ve made on this page I can use one line of Python:

    >>> urlopen('http://esr.ibiblio.org/?p=1179').read().decode('utf-8').count('Ajay')
    6

    I can embed this line on my site using an HTML template engine and it would always display the correct result.

    I’ve just shown you at least 2 points; I don’t see any valid arguments for your “NO POINT” statement.
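    (ed note: for anyone who wants to run the same check, here is a self-contained spelling of Ivan’s one-liner using Python 3’s urllib.request; the URL is the one from the comment.)

    from urllib.request import urlopen

    page = urlopen("http://esr.ibiblio.org/?p=1179").read().decode("utf-8")
    print(page.count("Ajay"))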

  82. Ajay I should clarify my first statement. HTML is good as a PREP FOR programming, not an INTRODUCTION TO! Get it?

  83. Ajay: you are not arguing in the pursuit of truth. You said “One is that a general compression algorithm like gzip cannot possibly compete with a designed binary format for compression.” I pointed to a single example of a designed binary format for compression disproving your assertion. That’s all that is necessary for you to be in error — just one. Rather than admit that you are wrong, you continue to blather.

    *plonk*.

  84. FLMKane, I’m sorry that your school was dumb enough to think learning HTML was worthwhile. The whole point about a binary format is that learning formats is worthless, you just need to learn the GUI tools, just as most kids who bother with the web now learn Dreamweaver, not HTML. If you’re so ignorant that you think a markup language like HTML is a good prep for programming, I dunno what to tell you.

    More and more, they learn Flash. Flash is taking over the internet. Why? Because it’s easy for designers to work with. You know, that group of people who create things visually, who are sorely lacking in the OSS community? All other things being equal, the presentation format that is more friendly to designers beats the presentation format that is more friendly to programmers. And for the Web that means proprietary, binary Flash.

  85. > All other things being equal, the presentation format that is more friendly to designers beats the presentation format that is more friendly to programmers. And for the Web that means proprietary, binary Flash.

    Modern web designers are pretty comfortable with HTML/CSS AFAIK. Most of them convert Photoshop mockups directly to HTML/CSS. I don’t see Flash taking over, I see more and more designers learning HTML, CSS and Javascript instead. Hell, even some print designers are starting to use Scribus and automate some tedious tasks with Python scripts.

    I see proprietary GUI apps and formats slowly but surely dying in all areas, I see completely nontechnical people like kids and musicians understand and use HTML. I bet that in 10-20 years every single literate person could write a simple script. Then all these proprietary formats and applications will die.

  86. I think that HTML as a text-based format should be purely used for documentation and text-heavy content.

    For web applications that increasingly use more interactivity, we need a better protocol than HTTP and a more application-oriented language than HTML, which can be used by the client-side and server-side seamlessly, statefully and securely.

  87. Just as Windows NT-based OS’s now use a binary hive format for its registry, as opposed to the text configuration files mentioned in esr’s book, the trend is towards binary for efficiency reasons.

    @Ajay, you couldn’t have chosen a worse example than the MS Windows registry as an argument for binary formats! Bwahahaha…

    Mind you, I am not for abusing XML as configuration format either.

  88. Ajay:

    Linux will not survive because it uses the GPL and cannot be relicensed according to my hybrid license

    Troll much?

    Emacs has never had much usage compared to vi, and Python pales compared to ruby usage, care to name another technology loser?

    [citation needed] Ever read any source code, Ajay? In the world of open source, I see far more Emacs modelines than vi[m] directives. You’re probably right that vi is used by more systems administrators (vi comes standard on every Unix on the planet, while Emacs is an optional package usually), but more programmers probably use Emacs as opposed to vi[m], including, if I’m not mistaken, our humble host, ESR.

    As far as Python vs. Ruby: a quick search on Sourceforge shows 4,906 projects matching the keyword ‘Python’ vs. 804 packages matching the keyword ‘Ruby.’ *shrug*

    Morgan, care to name some of that former plenitude of binary hypertext formats?

    I already named two different hypertext formats other than HTML, although I think TBL’s WWW slightly predates texinfo (GNU info). The other is HyperCard, which goes all the way back to Macintosh System 6, released in 1987. HyperCard, unlike info, is binary. There’s also KMS, which goes back to 1981, along with its predecessor at Carnegie-Mellon going all the way back to the 1970s called ZOG. There was also a hypertext system for MS-DOS computers back in the 1980s (which used binary data files), but I can’t think of the name of it right now.

    That enough for ya?

    BTW–HTML is (originally) a subset of another markup language called SGML, which was not invented by TBL.

    esr’s other writing [TAOUP]…only cemented my notion that the main reason for text protocols is the UNIX hackers’ unerring love for the command-line and text editors….Just as Windows NT-based OS’s now use a binary hive format for its registry, as opposed to the text configuration files mentioned in esr’s book, the trend is towards binary for efficiency reasons.

    That’s funny because apparently not even Microsoft agrees with you on that. And they have a vested interest in ensuring that proprietary formats stay proprietary, which means binary formats are far more useful for their purposes.

    As Russell Nelson says, you’re not arguing in the pursuit of truth, you’re simply talking out your arse.

  89. Troll much?

    No, you’re just educated stupid. Hybrid licensing stems directly from 4 simultaneous 24-hour days in one earth rotation. -1 * -1 = +1 is stupid and educated evil.

  90. Ajay,

    I’m beginning to suspect this is a troll, and a truly diabolical one…

    And, yes, the reason for the “ramp-up” matters. The time required for me to run a “HTTP GET foo.gif” from my work desk to one of our servers in London, assuming the server is lightning-fast, is about

    (one-way latency x 5) + Object size/bandwidth
    Or
    0.4 seconds + .001 second/kilobyte

    Obviously, the server is not infinitely fast, so add another few milliseconds of constant at minimum. Time required for a download is not proportional to size.

    Now, binaryness notwithstanding, I do agree that the constant attempts to use HTTP as a terminal emulation protocol are strange and wrong – it wasn’t designed for that, and we need something better. It’ll be interesting to see if they manage to wrangle that into an upgrade or whether Flash or Java (ha!) or .Net takes over that space.
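    (ed note: plugging the page from the earlier back-of-the-envelope exchange into Mike’s rule of thumb makes the point concrete; the 90 KB and 20 KB figures come from the thread above, and the 80 ms one-way latency is simply what makes his constant come out to 0.4 seconds.)

    # Mike's rule of thumb: time ~= 5 * one_way_latency + size_kb * 0.001
    def load_time(size_kb, one_way_latency_s=0.08):
        return 5 * one_way_latency_s + size_kb * 0.001

    print(load_time(90))   # ~0.49 s for the full 90 KB page
    print(load_time(20))   # ~0.42 s if the page were squeezed to 20 KB

    Under those assumptions, shrinking the payload moves the total by well under a tenth of a second; the fixed latency term dominates.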

  91. @Mike:

    (one-way latency x 5) + Object size/bandwidth
    Or
    0.4 seconds + .001 second/kilobyte

    You forgot to add in the calculation for the DNS lookup. Even if you have a local DNS, there’s always a chance that the host you need resolved isn’t in the cache.

    I do agree that the constant attempts to use HTTP as a terminal emulation protocol are strange and wrong

    Are you referring to AJAX, as in Google Maps or Gmail, or do you mean something else entirely? HTML 5 should obviate the need for AJAX and Flash, and maybe even client-side Java.

  92. @Morgan

    Oh, good point about the DNS. (I had some real fun with that once a while back; a UNIX DB application slowed to a crawl because worm-infected Windows PCs were hammering the DNS servers…!) And ARP, for that matter. Not to mention proxies and load-balancers and similar…

    Most of the AJAX seems reasonably advanced, if still a kludge; I was thinking more of the older but still quite prevalent (in my experience) PHP or ASP straight HTML-based applications that pretend to maintain state via cookies or worse… I honestly don’t know enough about HTML5 to say if it solves the problem, although people at least seem to clearly recognize it now.

  93. I can’t believe I am still participating in this argument, but it disturbs me to see such weak arguments go unchallenged.

    Ivan wrote:
    >As an example, I can ask questions and answer them immediately
    > without any complex special-purpose tools. Like, say, if I want to
    > know how many comments you’ve made on this page I can use
    >one line of Python:
    >>> urlopen('http://esr.ibiblio.org/?p=1179').read().decode('utf-8').count('Ajay')

    Were a binary format HTML common, couldn’t you do:
    >>> urlopen('http://esr.ibiblio.org/?p=1179').read().decode('Ajays-binary-format').count('Ajay')

    Why is that harder?

    Russell Nelson says:
    >You said “One is that a general compression algorithm
    > like gzip cannot possibly compete with a designed binary
    > format for compression.” I pointed to a single example
    > of a designed binary format for compression disproving
    > your assertion.

    Really, is this a serious argument, Russell? You think that because there exists one exception to the rule that somehow this is an invalid rule. Perhaps he mildly overstated the case, but in the majority of cases, especially cases where the binary format is subject to any significant review, his point is almost certainly correct, outliers notwithstanding. Your point is simply a gotcha, worthy of a politician.

    However, Ajay, please, stop with the craziness. Python less used than Ruby? Linux is going to die because of the GPL? Emacs hasn’t had much use compared to vi? There is a difference between wrong and crazy wrong.

  94. A single example is all it takes to effectively contradict ‘cannot possibly,’ Jessica.

  95. a quick search on Sourceforge shows 4,906 projects matching the keyword ‘Python’ vs. 804 packages

    Sourceforge is little used by Ruby programmers. Rubyforge and, lately, GitHub are the favored spaces.

  96. Y’know the cacophonous sound that gulls make when flocking onto a landfill? That’s the sound I have in mind when I read a thread full of squabbling geeks ;)

    Emacs sucks. Linux is insecure. Pi == 3. VB rulez. ESR looks great in a tuxedo.

  97. Morgan says:
    >A single example is all it takes to effectively contradict ‘cannot possibly,’

    So the two of you are ragging on the guy because his wording wasn’t precise enough? Give me a break. The English language isn’t that precise. Have you heard of rhetoric and hyperbole? Are you comfortable with every word you speak being taken with mathematical precision? Really, the context clearly indicates the point, which is that custom design formats are better in all cases than general compression. Any reasonable reader would allow for the possibility of some strange outliers. Fighting over a slight exaggeration saves you the trouble of dealing with the substantial point. Like I said, it comes across to me as little less than a disingenuous gotcha. I suppose it depends on what the meaning of “is,” is.

  98. custom design formats are better in all cases than general compression

    Not only not in all cases, but rarely if ever. A “custom design format”, if I understand the term correctly, will just be more terse tokens – gzip or the like will absolutely crush that because it can make the encoding context-dependent (note that gzip can typically compress binary executables by >30%, and the binary network packet headers we were discussing earlier also compress quite well; there are a number of mechanisms that do just this in specialized circumstances).

    Now, if instead we’re talking about a binary format that includes a mandatory specialized compression algorithm, fine; but at that point why not keep the original markup human-readable and let the compressor deal with it anyway? And now we’re back to mod gzip…
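    (ed note: anyone who wants numbers rather than assertions can measure this on any local file in a few lines; zlib uses the same DEFLATE algorithm as gzip and mod_gzip, so the ratios are comparable. The file name is a placeholder.)

    import zlib

    with open("somepage.html", "rb") as f:   # placeholder file name
        raw = f.read()
    packed = zlib.compress(raw, 9)
    print("%d -> %d bytes (%.0f%% saved)"
          % (len(raw), len(packed), 100.0 * (1 - float(len(packed)) / len(raw))))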

  99. Really, the context clearly indicates the point, which is that custom design formats are better in all cases than general compression. Any reasonable reader would allow for the possibility of some strange outliers.

    RFC 1035 is hardly “strange”. Whether it’s an outlier or not depends. Text compresses very well and very quickly. Generally I find that HTML, for example, compresses, on average, 60-80% using gzip. Are you telling me that you can design a binary HTML-like format that saves that much? Even if you consider a gzip-compressed binary format, what are you going to save? 5%?

  100. > 60-80% using gzip. Are you telling me that you can design a binary HTML-like format that saves that much?

    Sure, I think that can be greatly improved with careful design. For example, in the past I have written binary protocols that represented large text blocks with one byte encoded for every original ten ascii bytes.

    However, I am disinclined to try to design such a protocol here or even enter into much of a discussion of how, since it is probably a several month project with a lot of qualitative and quantitative analysis to do right. Additionally, it is an artificially unfair exercise since it would be done in the context of the modern web rather than the web as it developed. Typical HTML back then and typical bandwidth back then were significantly different from now, and so the constraints on the design would have been different. Furthermore, designing a binary protocol for a page layout system that evolved as text is rather an unfair challenge to start with. Obviously, if the constraints on the binary design were different the capabilities and definitions of what HTML is would have been different too (e.g., I can think of reasons why removing the h5 tag might be beneficial. Would that make much difference?)

    It is also worth pointing out that your suggestion doesn’t really correspond with reality. For sure using gzip would make things better, but it is rarely used. Why? Simply because you can’t rely on it working, (or couldn’t at one stage in history anyway). A binary protocol would reduce traffic in reality rather than just in theory.

    Does the extra bandwidth really matter? Does your suggested 5% really matter? Apparently to some high volume sites it does. They feel the need to do very artificial bandwidth optimizations by changing IDs to single letters, or using abbreviated paths, or using intractably compressed javascript.

  101. Tom, dumb ideas like HTML proliferate till someone comes up with better ones. I think my new thin client idea is better and will beat HTML by making better technological choices, such as using an open binary format.

    Ivan, Jess beat me to exactly what I would have said about your scripting example, plus there will be better APIs so that you don’t have to scrape the visual representation format for data. Clearly you’re drinking the open source kool-aid with your statements about proprietary software dying off, I suggest you read my hybrid licensing ideas as that’s the only way open source gets anywhere. Btw, that second link of yours is fantastic, I recommend that pythonide link to everyone.

    Russ, funny that you accuse me of not pursuing truth when you continue to make such dumb arguments, that since a designed format exists that does worse than general compression somehow that matters. There’s always an arms race between general compression algorithms and designed binary formats, as we come up with better ways of packing data. As I’ve pointed out from actual benchmarks though, an old designed format like png is still the baseline for performance, that general compression algorithms cannot match because of the specialized information a designed format can take advantage of.

  102. Jeff, yeah, unfortunately the result of the open source community’s groupthink is that a closed, inefficient runtime like flash dominates the web. Any feedback on my new thin client ideas? I couldn’t make head nor tail of your trolling comment.

    hari, thanks for finally stating the obvious about the need for a better platform for internet apps, but I must quibble with you about keeping HTML around for documents. Ultimately, displaying information in something approximating a document is an internet application in itself and we can come up with all kinds of interactive ways to improve that experience, for example the NYT reader in Silverlight. That’s why I don’t think there’s even any vestigial use for HTML.

    Jakub, I wasn’t saying the registry was a paragon of design, only that the binary format was a good choice. However, as you point out, XML for config files is a horrible choice, that seems to dominate in open source.

    Morgan, programmers have other bloated editors like Eclipse to use nowadays while vi still dominates among sysadmins, which is where I picked it up. The binary hypertext formats you keep naming seem never to have been internet-aware, so I’m not sure how you claim them as HTML competitors. Yes, I know HTML was based off of SGML and I don’t really blame Berners-Lee for going with what he knew for the prototype. However, the fact that such markup languages have now been credited with the web’s success just goes to show the stupid, cargo-cult reasoning that most people use. I see, so Microsoft disagrees with me about the registry binary format because they used a text format for Word, that was rammed down their throat by the ODF crowd and govts that insisted on an equivalent open format? Microsoft makes dumb decisions all the time; you need to argue from technical principles, not from who’s making decisions about a barely used format right now. If you have to ask me exactly how much bandwidth a binary format would save, you clearly haven’t been reading the discussion so far, as I already addressed that with Mike. Funny that you accuse me of talking out of my ass and echo Russ’s claim that I’m not pursuing truth, when the two of you are the ones doing exactly that with your ridiculous arguments. Next you’ll say, “All Cretans are liars and I’m a Cretan.” ;)

  103. Mike, funny how you imply I’m a troll and then show an equation that clearly demonstrates that bandwidth is the dominant factor for a page like this, with its 4 second load time on average. I’m glad you at least admit that the web stack is horrible for interactivity. I don’t think any of those mooted alternatives will win out however, as VM languages like those are slow and have huge attack surfaces, as I mention in my blog.

    Jessica, what precisely was crazily wrong about those claims? Ruby has been more popular recently because of rails, and vi is used more because of sysadmins, as I said above. As for linux dying cuz of the GPL, that’s a prediction which you cannot possibly contradict as wrong when you don’t know how it will actually turn out, and I’m confident it will come to pass. :) As for your explaining the english language to Russ and morgan, perfectly done but I don’t think it’ll get you anywhere. ;)

    Mike, what precisely is “context-dependent” about HTML tags? A designed binary format wins because it can use specialized knowledge about the structure and tags of HTML documents that general compression cannot possibly compete with. The instruction sets used in binary executables are not designed for compression above all else, a binary thin client format for network applications would emphasize compression far more.

  104. Ivan, yes, I was aware that binary XML efforts existed. Thank you for demonstrating with that linked chart that a designed binary format like EXI will always beat gzip. However, it’s new to me that gzip will sometimes fail so badly that it will make an XML file bigger! However, I’m highly skeptical about applying the hierarchical data model of XML, EXI, or EBML to all data, I think a more general binary format language will be useful someday. For now, the correct next step is more open binary formats.

  105. Jakub, I wasn’t saying the registry was a paragon of design, only that the binary format was a good choice. However, as you point out, XML for config files is a horrible choice, that seems to dominate in open source.

    @Ajay, how can you say that a binary format for storing configuration is a good idea while giving the Windows registry as an example? It is a very good counterexample showing that the maintainability of text formats (like INI files, rc-style UNIX config files with comments, or YAML) is more important than the alleged performance of a binary format. Performance that does not exist. There are problems with binary formats such as silent corruption of the registry, or (in the registry’s case) badly duplicating the filesystem’s job.

    And no, the fact that XML config files are or were used widely by GNOME doesn’t mean that they dominate in open source.

    Also, designing a binary format or binary protocol that is extendable and interoperable (very important for network formats and protocols) is f**ing hard to do right. Add to that the fact that binary formats/protocols are harder to maintain and debug, and it follows that binary formats should be used only where they are absolutely required.

  106. “Tom, dumb ideas like HTML proliferate till someone comes up with better ones.”

    That’s one reason; there are other reasons, such as lack of business reasons, product availability, vendor support, compatibility, maturity, costs (e.g. re-training), resistance to change (for good and bad reasons), etc.

    “I think my new thin client idea is better and will beat HTML by making better technological choices, such as using an open binary format.”

    If you think that by making better technological choices, people will somehow switch over to your proposal, I think you’re going to be disappointed. If you look at some of the reasons I give for why “dumb” ideas proliferate, it’s worth noting that none of them are to do with technology options.

    Your idea may well be better, but, and don’t take this the wrong way, the vast majority of the world simply *do not care*.

    They want something that solves a problem they have, or allows them to do things; I’ve looked over your links and I’m not really sure what problem you’re trying to solve — as far as I can tell, you think HTML / HTTP generally sucks and you propose a radical re-working of it all, just so it can be better technology wise. I hate to burst your bubble, but until you can provide a compelling reason for this, and one that can’t be done with the existing infrastructure (and people think is worth actually doing, and isn’t too onerous), this is a total non-starter.

  107. I couldn’t make head nor tail of your trolling comment.

    It was a reference to Time Cube, a very famous crank web site. No offense, but while your hybrid license ideas are interesting, strong statements of the form “Linux will fail because it doesn’t adopt my hybrid license” make you sound crankish.

    Also, designing a binary format or binary protocol that is extendable and interoperable (very important for network formats and protocols) is f**ing hard to do right. Add to that the fact that binary formats/protocols are harder to maintain and debug, and it follows that binary formats should be used only where they are absolutely required.

    Once again, the Amiga was 20 years ahead of its time. The IFF format was extendable and interoperable. And it was built right into Amiga OS. The Amiga OS also has a feature that no other commercial OS that I’m aware of has today: the concept of “data types”, which enables a program to register a parser and serializer for any binary file format it produces for use OS-wide, by any application. The common widespread use of binary file formats is pretty much a solved problem, especially to anyone who isn’t afraid of a hex editor (Amigans are used to working very close to the hardware).

  108. Jeff, yeah, unfortunately the result of the open source community’s groupthink is that a closed, inefficient runtime like flash dominates the web. Any feedback on my new thin client ideas?

    Ideas are just air till they’re implemented. Implement this and we’ll see how good it really is.

  109. @Ajay:

    Morgan, programmers have other bloated editors like Eclipse to use nowadays while vi still dominates among sysadmins, which is where I picked it up.

    I have news for you: compared to bloatware like Eclipse and even GEdit, Emacs is lightweight. In fact, Emacs and Vim (the most common vi variant found on Linux systems) have very similar memory footprints. If you want something more lightweight than Vim or Emacs, you have to look to nano or the venerable joe. Or ed, which is, of course, the standard text editor and the true path to Nirvana. Not that I frown on the use of Bram Moolenaar’s fabulous text editor: I do use it on occasion for odd editing tasks now and again. But not for programming. But we should leave the vi vs. emacs holy wars now, because I think I see ESR glowering at us over there, and he has guns!

    The binary hypertext formats you keep naming seem never to have been internet-aware, so I’m not sure how you claim them as HTML competitors.

    Wrong. Also, there’s HyperCard on the Web. And, PythonCard apps can be Internet-aware since Python itself is Internet aware (well, at least the library is :). HyperCard itself still comes pre-installed on every Mac.

    I see, so Microsoft disagrees with me about the registry binary format because they used a text format for Word, that was rammed down their throat by the ODF crowd and govts that insisted on an equivalent open format?

    Oh. So they were ‘forced’ into OOXML. That’s funny. If they were ‘forced’ to use XML, then why does Microsoft have several XML APIs, such as ADO.NET and MSXML? Why are these APIs used in Microsoft’s own applications? Also, considering XML formats were first incorporated into Excel in 2000, and by Office 2003 all the major applications had XML flavors for their document formats, that means Microsoft’s first use of XML formats in Microsoft Office predates ODF. By a lot.

    No, Ajay, I’m afraid you’re just simply wrong again.

    The Windows registry is really something very old and is not new at all. It goes back to Windows 3.1, and was created in response to complaints that INI files were scattered often all over the system, making system maintenance a nightmare. Now, the registry contains everything, including rot, making system maintenance a nightmare. That’s just so much better isn’t it?

    @Ivan: thanks for the binary XML links. Cool stuff.

  110. I love PythonCard’s motto. I believe it’s a famous Alan Kay quote.

    Speaking of which, Squeak makes for a beautiful thin-client environment. Shame nobody uses it.

  111. Mike, funny how you imply I’m a troll and then show an equation that clearly demonstrates that bandwidth is the dominant factor for a page like this, with its 4 second load time on average.

    Er, did you read the formula? This page is 90 KB, so if you have a 2 Mb connection, you can save a grand total of almost 0.4 seconds, 10% of that, by upgrading to infinite bandwidth. I wouldn’t call that a dominant factor.

    A compressor is more efficient than a binary format as such because it can modify the tokens and syntax used based on the specific file being compressed, rather than having to select for efficiency over a weighted average of all files.

    But here’s a better answer – if you really think your binary HTML-equivalent markup format can do better than general compression, go win The Hutter Prize for best compression of a large chunk of Wikipedia and pick up yourself some cash. Strangely enough, all the existing winners are using standard, if CPU-intensive, compression techniques…

  112. Mike Earl says:
    >A compressor is more efficient than a binary format as
    >such because it can modify the tokens and syntax used
    >based on the specific file being compressed, rather than
    >having to select for efficiency over a weighted average of all files.

    Several points:

    1. A binary format actually knows what the data means, and can compress at a semantic rather than lexical level. (Excuse if you will the anthropomorphism.)

    2. Binary formats can be compressed using a standard compressor, though generally with less success, because the binary format has already squeezed out a lot of the entropy.

    3. A compressed format *IS* a binary format, with all the disadvantages you guys have been ragging on for the last few days. You can’t load up a zip file in vi last time I checked.

    4. Since you guys are picking at tiny details of statements, let me take issue with your statement:
    “A compressor is more efficient than a binary format”
    If so, why don’t we store images in this format:
    Pixel(0,0) = RGB(123,45,67)
    Pixel(0,1) = RGB(222,111,0)
    etc.
    Then gzip them. Would a compressor be more efficient than a binary format in that case? After all, it would have the advantage of all that photo-specific context, right?
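
    Out of curiosity, here is a rough Python sketch of that exact experiment (the image is synthetic and tiny, so treat the numbers as illustrative rather than conclusive):

    import struct, zlib, random

    random.seed(42)
    width, height = 64, 64
    palette = [(123, 45, 67), (222, 111, 0), (255, 255, 255)]
    pixels = [random.choice(palette) for _ in range(width * height)]

    # textual encoding: one "Pixel(x,y) = RGB(r,g,b)" line per pixel
    text = "".join("Pixel(%d,%d) = RGB(%d,%d,%d)\n" % (i % width, i // width, r, g, b)
                   for i, (r, g, b) in enumerate(pixels))

    # binary encoding: raw 3 bytes per pixel
    raw = b"".join(struct.pack("BBB", r, g, b) for r, g, b in pixels)

    for label, blob in (("text", text.encode("ascii")), ("raw", raw)):
        print("%4s: %6d bytes plain, %6d bytes gzipped" % (label, len(blob), len(zlib.compress(blob, 9))))

    Whichever way those numbers land, a real image format like PNG goes further still by filtering rows before compressing, which is exactly the kind of knowledge about the data that a generic compressor does not have.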

  113. 1) Well, once you’re doing semantic analysis, I’d call that compression, but that’s… er, semantics?

    2) Which means that once you’re using a compressor (which is a win, if you do care about compression), there is little to no gain by starting with a non-human-readable format.

    3) Sure, but why not keep that at a different abstraction level; that removes many of the disadvantages and lets you punt compression in cases where it doesn’t matter (e.g., typical web usage). But, yes, “vim” is the usual Linux vi-compatible editor, and it does indeed open gzip files.

    4) If by binary format you mean raw 3-byte encoding, odds aren't bad gzip would actually beat it depending on the image (certain color combinations would appear over and over); I'd have to do the test. Obviously, audio/visual data is an extreme special case of highly compressible data, especially if you allow lossy compression… there's no question that a specialized compression scheme wins big in this case. More to the point, textual representations of image data *aren't* human-readable, so nobody would do that anyway.

  114. Without commenting on Eric’s protocol, since he knows best in this domain, I find the suggestion that text formats have some kind of winning quality in general laughable. Both binary and textual formats are examples of the same thing: languages with some semantics. They require exactly the same process to handle. Unfortunately, textual protocols have opened the door to hand-hacking and construction of protocol code by amateurs who ‘think’ the protocol is easy to work with because it looks inviting. If you seriously think expanding everything out into tokens and whitespace somehow makes a protocol easier to work with, you are kidding yourself.

    Interesting that anyone should bring up HTML as an example of a ‘good’ textual format. Even ignoring that design decision, HTML is a horrible design for anything but static, hyperlinked documents. Now people want to build applications on top of it? What a joke. The only reason such a sloppy design can be afforded at this level is because all the levels below it are built in a manner that is not a complete impediment to speed. If you think binary protocols are hard to parse, you are dumb, period.

  115. Actually, how well would CHM (compiled html) do against HTML? Anyone pondered its efficiency, were it transmitted over the web and not used just for help files? I wonder why this example hasn’t yet appeared.

  116. Jakub, the Windows registry can be badly designed overall and still use good choices like a binary format, just as the Web can be very successful and yet use a bad choice like a text format. Since most people now edit their system configuration through GUI tools and there is a lot more data stored there, I think switching to binary as NT did for its registry makes a lot of sense. One can always provide command-line tools to access the same binary format for old-fashioned coders such as those here. ;) All data can be corrupted, doesn’t matter if it’s binary or a text file that you modified in vi simultaneous to some other user modifying it using a GUI tool. XML isn’t just used in GNOME but in Java and many other places it shouldn’t be, which for me is everywhere as I think it has no use. :) As Jeff points out, designing binary formats that are extendable and interopable is not a fundamental constraint of the format but simply something the designers had to consider ahead of time. People have this wrong impression about binary formats because they often think of something like the Internet Protocol, where it’s very hard to replace all the hardware and software in routers that implement IP. However, application protocols are on the desktop, where it’s much easier to swap in new software, as I’ve already noted on my blog. Of course I agree that binary should only be used where it’s required, we simply disagree about where that is: I think that’s most places while you think it’s very few.

    xyz, most of the business reasons you suggest don’t really apply to HTML or are very minimal factors. Obviously other factors matter but technical superiority is often primary, just because it can sometimes lose out to other factors, like a vendor monopoly, doesn’t change that. The world doesn’t care that the technology is superior, they care that they can get their real work done faster and easier, which will be the case with a new thin client. If you read my blog and are unsure what I’m trying to solve, you clearly didn’t read it very carefully as I precisely lay out the problems before suggesting solutions. It really doesn’t matter if you or others like you think it’s a non-starter, it’s a project that doesn’t take much resources as a small company could do it, therefore somebody will, hopefully me, and it will kill off HTML.

  117. Jeff, I make strong statements about my hybrid license killing off Linux, because I’m sufficiently versed in what’s happened with licensing in the past and why to be confident that this will happen. Closed-source code can usually be relicensed as hybrid-licensed code whereas Linux cannot, as Linus has noted it cannot even be relicensed according to GPL v3 since they don’t track all the contributors. This means less dogmatically licensed OSs like FreeBSD and Solaris will prosper while Linux will disappear. That’s my prediction, let’s see how soon it happens. ;) Interesting info about Amiga’s IFF format and the other guy who’s trying a crude hybrid license himself, thanks for that. As for implementation, one has to have good ideas to implement, so you have to be able to reason about what ideas are worthwhile. I think these ideas are worthwhile, but I asked you what you thought as you’re a smart guy who might be able to provide some feedback. If you can’t reason about these ideas, that’s fine, but I suspect you simply don’t want to try. :)

    Ivan, if you read my blog, you'd know that "new" refers to the thin client, not to the idea. The art of engineering is deciding what selection of often pre-existing ideas to implement; it's rare that you have to come up with a completely new idea on your own, and even if you do, it's often a rediscovery. I give Atkinson credit for his popular hypertext implementation, although the ideas preceded him too, going back to Ted Nelson's Project Xanadu in the 60s, and I'm sure others claim even earlier ancestry for the basic ideas. However, Atkinson himself has admitted that he made a big mistake by not making Hypercard internet-aware, so he certainly didn't have it all figured out.

  118. Morgan, funny how you changed my simple statement about vi’s relative usage to your favorite bikeshed topic about what’s better, which has little to do with my market-share assertion. Also funny how whenever you say I’m wrong, you turn out to be wrong. Your first link simply shows that Hypercard tried to provide TCP/IP functionality through its scripting runtime after the Web had taken off in Dec. 1992. That would be similar to the web having no concept of global urls or hyperlinks, using local links like Hypercard instead, and then supplementing it later with TCP/IP functionality in Javascript, hardly a workable notion. What’s particularly notable is that that page says that “With the scripts described in this article it would probably be possible to create WAIS, Gopher, or World Wide Web clients.” So, the Web was sufficiently popular by then that they weren’t really thinking about becoming a competitor, they wanted people to simply create a web browser on their stack. As my link above shows, Atkinson himself regrets that Hypercard missed the boat on the internet. As for your other two links, those are runtimes that were initiated in the last decade, so they don’t qualify as early HTML competitors either. Since you cannot name a single early binary competitor to HTML, I think it’s safe to assume you were talking out of your ass. :) As for Microsoft, they use XML because they simply follow the technical crowd in many cases, it’s clearly a stupid decision to use XML for almost anything. Again, I have to explain to you that Microsoft doing something neither means it’s good nor bad, that judgment is based on technical considerations, all their doing it means is that someone thought it was a good idea for whatever reason. Funny how you claim I’m wrong again when the fact that Microsoft used XML formats contradicts nothing that I said. As for the windows registry, already addressed above.

    Mike, if that formula’s correct, why does it take 4 secs to load this page? It’s because of the ramp time for bandwidth that I’ve referenced, where the effective bandwidth for short downloads is much lower as a result. Your subsequent calculation of 10% for bandwidth assumes some other magical factor that’s eating up 75% of the load time, it’d be funny to hear what you think that is. Precisely how do HTML tokens and syntax vary so much from file to file that a general-purpose compressor would do better? I don’t think you understand how compression works. HTML has a structure that a designer of a binary format can use to pack the data far better than a general purpose compressor like gzip can: that’s why EXI handily beats gzipped XML. As for the Hutter Prize, EXI alone would compress that data better as the test data is XML-encoded. However, it’s not strange that nobody takes that approach as the rationale for the prize is Natural Language Processing, which means they’d likely throw out any approach that simply used a binary format as they would like the algorithm to work for plain text too, not simply for XML. No doubt the contestants understand this while you don’t. I don’t think you understood what Jessica meant by semantic analysis, it’s done once in the design of a binary format, never again. If you think a compressor can beat a binary format that was designed for compression, you clearly haven’t seen the benchmarks I linked to above, that show PNG wins to this day. If you think compression doesn’t matter on the web, this whole discussion is lost on you. The whole point of a binary format is that it can use specialized information about the structure of the data in its design, that a general-purpose compressor like gzip cannot possibly match.

    Adriano, I was unfamiliar with CHM but if it’s simply using text compressed with a general-purpose compression algorithm, as it appears to be, it’s not a designed binary format as we’re discussing.

  119. > 1) Well, once you’re doing semantic analysis, I’d call that compression, but that’s… er, semantics?

    I think you miss the point, if you know the meaning of the data, you can readily rearrange it in forms that are more likely to be useful. For example (using [] for elements because I don’t know how to do it with angle brackets in this comment.)
    [div style='color:white; text-align:left; border-color: black; border-width: 1; border-style:thin;']Div1[/div]
    [div style='color:#ffffff; border:black 1 thin;']Div2[/div]

    can be rearranged to take advantage of the fact that the styles are identical save the text alignment, and encoded in much less space. A dictionary compressor would not have enough information to know that, for example White is the same as #ffffff, or that “border-color: black; border-width: 1; border-style:thin” is the same as “border:black 1 thin”.

    That is a very simple example; extended more broadly, it is very powerful. That is what I mean by semantic style compression.
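
    For what it's worth, here is a toy Python pre-pass along those lines; the equivalence table is hand-rolled and obviously nowhere near a real CSS canonicalizer, but it shows the shape of the idea: normalize equivalent spellings first, then hand the result to an ordinary compressor.

    import zlib

    # tiny hypothetical equivalence table; a real format would know all of CSS
    CANONICAL = {
        "color:white": "color:#ffffff",
        "border-color: black; border-width: 1; border-style:thin": "border:black 1 thin",
    }

    def normalize(markup):
        # rewrite semantically equivalent spellings into one canonical form
        for variant, canon in CANONICAL.items():
            markup = markup.replace(variant, canon)
        return markup

    doc = ("<div style='color:white; text-align:left; "
           "border-color: black; border-width: 1; border-style:thin;'>Div1</div>\n"
           "<div style='color:#ffffff; border:black 1 thin;'>Div2</div>\n")

    plain = zlib.compress(doc.encode("ascii"), 9)
    canon = zlib.compress(normalize(doc).encode("ascii"), 9)
    print("gzip alone: %d bytes, normalize-then-gzip: %d bytes" % (len(plain), len(canon)))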

    >2) Which means that once you’re using a compressor
    > (which is a win, if you do care about compression), there
    > is little to no gain by starting with a non-human-readable format.

    No, I knew you were going to miss the point of this when I re-read it. The point is that a general compressor gains less once the redundancy has already been squeezed out, but systemically, binary coding followed by compression is better than text encoding followed by compression.

    > 3) Sure, but why not keep that at a different abstraction level;

    What does that mean? You have defined your abstraction stack in a way that is tilted toward your answer. Text files are no less abstract than binary encodings; the only difference is the tools placed on top of them. The ecosystem is currently skewed to text files, but there is no reason why an ecosystem could not be skewed toward binary files.

    > that removes many of the disadvantages and lets you punt
    > compression in cases where it doesn’t matter (e.g., typical
    > web usage).

    Which is true regardless of the base encoding.

    > But, yes, “vim” is the usual Linux vi-compatible editor, and it does indeed open gzip files.

    And would load binary files were that the ecosystem.

    > 4) If by binary format you mean raw 3-byte encoding,

    Nope, I mean an encoding that takes advantage of semantic understanding of the underlying data. Great example of what I was talking about above. JPEG is a very good example: it is lossy because the encoder knows enough about what the data means to be able to throw away bits that don't really matter.

    >Text files are no less abstract than binary encodings; the only difference is the tools placed on top of them. The ecosystem is currently skewed to text files, but there is no reason why an ecosystem could not be skewed toward binary files.

    Text files are binary encodings. They are binary encodings of…drumroll please…text. I know this is nitpicking, but using ‘text files’ and ‘binary encodings’ as separate entities too often will skew your thinking. The debate is not text vs. binary; the debate is between more-structured and less-structured binary. Also, less nitpicky, no, the ecosystem could not be skewed towards binary files as it now is towards text files, because AFAICT what you’re talking about is a separate designed binary format for each application; hence, an application like vi would have to have a separate plugin for each type of file that it wanted to edit. As it is now, you can edit any text file with any text editor, no plugins or decoders required, and it’ll work. It’s a strong case of ‘if it ain’t broke, don’t fix it’–especially not with a ‘solution’ that introduces a whole new set of problems.

    >Jakub, the Windows registry can be badly designed overall and still use good choices like a binary format, just as the Web can be very successful and yet use a bad choice like a text format.

    As far as I know, your argument was that text formats are wasteful of bandwidth. What relevance does the Windows registry have? Unless you’re planning on recommending that .xrc or .ini files be replaced with binary to save disk space.

  121. Tom, try to keep up, your trivial point about text being binary at root was already addressed by Jessica and me 5 days ago. As for asserting that it would be a big problem for vi to support multiple open binary formats: not particularly, if those formats were mostly based on common binary format languages (for example, the EBML case I already gave), and not even much of a problem without that. Your web browser supports a multitude of binary video formats, no reason vi can't do the same. The point all along has been: the "problems" you claim cost much less than the efficiency problems we're solving. As for the Windows registry, binary is again more efficient for latency reasons. It annoys me that every time I compile a FreeBSD port on my machine, each build has to constantly look up the same header files and other tools, no reason all that can't be cached in a binary lookup somewhere on the system that monitors the necessary files. The point is that getting away from the text antecedents of many OS internals has great advantages in this day when almost nobody uses text. As I said previously however, no reason we can't keep command-line options around for you old fogies too. ;)

  122. As for implementation, one has to have good ideas to implement, so you have to be able to reason about what ideas are worthwhile. I think these ideas are worthwhile, but I asked you what you thought as you’re a smart guy who might be able to provide some feedback. If you can’t reason about these ideas, that’s fine, but I suspect you simply don’t want to try. :)

    No. The internet runs on consensus and working code. There are a lot of people with ideas that they could spend forever discussing on mailing lists and the like but nothing beats a working prototype or proof-of-concept as a starting point. Do you think Linus would have gotten all the attention if he were like “Hey, I’ve got these great ideas for a kernel”? No. He got the barest minimum going and slapped it up on Usenet, and then people continued the “discussion” by contributing changes.

    Open source is a wonderful thing, time capsulized or no.

  123. Jeff, I agree that you have to implement at some point but I don’t have the resources to do so myself right now. Therefore, I talk about the ideas for now and see what others think. Unfortunately, I have found that most cannot reason at all, therefore I get silly dismissals like xyz above. I had hoped you would be able to reason about it, but like most you seem unwilling or unable.

  124. Tom Dickson-Hunt Says:
    >Also, less nitpicky, no, the ecosystem could not be skewed towards binary files as it now is towards text files,

    I suggest you try “man -s 4 magic”, then “man -s 4 terminfo”. Then consider how the principles underlying the two together could readily overcome the issue you raise. Were Unix to have taken a binary file approach, I would imagine something like this would be the way to go.

    >As it is now, you can edit any text file with any text editor, no plugins or decoders required,

    In cosmology they call this the anthropic principle.

    > It’s a strong case of ‘if it ain’t broke, don’t fix it’–especially not with a ’solution’
    > that introduces a whole new set of problems.

    I am not suggesting fixing it, all I am saying is that a binary file style is certainly conceivable, would be better in a number of significant ways, and, of course would be worse in a number of significant ways. Dismissing it with a guffaw is disappointingly unimaginative.

    As I said, evolution sometimes gets stuck in a suboptimal hole.

  125. Also funny how whenever you say I’m wrong, you turn out to be wrong. Your first link simply shows that Hypercard tried to provide TCP/IP functionality through its scripting runtime after the Web had taken off in Dec. 1992.

    Moron. TCP/IP is the Internet, or at least it's the duct tape that binds the Net together. The Internet != WWW. There was an Internet long before the Web was a gleam in Tim Berners-Lee's eyes. Anyway, obviously HyperCard missed the whole Internet thing early on, but you said that HyperCard was never Internet aware. I simply contradicted your wildly inaccurate assertion.

    Again, I have to explain to you that Microsoft doing something neither means it’s good nor bad, that judgment is based on technical considerations, all their doing it means is that someone thought it was a good idea for whatever reason. Funny how you claim I’m wrong again when the fact that Microsoft used XML formats contradicts nothing that I said.

    1) The registry was not chosen to be a binary format for any sort of efficiencies or performance gains. 2) I never said Microsoft doing something is good or bad. You pointed to the registry as an example, and I simply pointed out that the people that invented that registry don’t believe your statement that binary formats are the future. That makes you pretty much alone in the world.

    This means less dogmatically licensed OSs like FreeBSD and Solaris will prosper while Linux will disappear. That's my prediction, let's see how soon it happens. ;)

    Ok, Theo. Whatever. You still sound crankish.

  126. morgan, yes, we all know TCP/IP is the internet: the point that seems to have flown over your head is that sneaking TCP/IP in through the scripting runtime that late in the game didn't make Hypercard much of a WWW competitor. Try not to descend to name-calling, for your own sake, as it just underlines that you've lost the argument and only have epithets left to throw. If the initially text registry was not changed to binary for efficiency reasons, you'd presumably be able to give the real reason, but you don't. Hilarious how you say Microsoft doing something doesn't mean anything, then proceed to tell me I'm wrong about binary formats because Microsoft's using them less. XD Also funny how you think that the people that moved the registry to binary are the same people who used XML formats in Office. :) Microsoft is a huge company, with all kinds of dumb decisions coming from the almost 100k people there. Considering that most data sent in MS document formats is still sent with binary formats, I believe that makes you pretty much alone in the world. As for whether I'm a crank, I've detailed the theory and observations behind that licensing claim elsewhere on this blog. Fools always think prophets are cranks when they make their predictions. Time will tell who is what, but I claim you're the fool. ;)

  127. Ajay, what the hell is really your problem? You are smart enough to understand what you're talking about (to an extent). However, you keep on making dumb statements (HTML should be an open binary protocol). You keep calling people names, keep making snide comments, and try to promote the (generally) marginal benefits of binary formats… to people who have lived through them, AND AT LEAST ONE PERSON WHO DESIGNED them.

    In short you are posting like a troll. Jessica Boxer is doing a better job of supporting binary protocols than you are, simply because your posting style is VERY trollish.

  128. If the initially text registry

    [citation needed]

    Also funny how you think that the people that moved the registry to binary are the same people who used XML formats in Office.

    I've followed every move Microsoft have made for almost 20 years. I've read numerous books about Microsoft's history, development methods, and general worldview. In my opinion, you have no idea what you're talking about. From what I've seen, they've had a major strategy shift that started about 1998, right around the time the Halloween Documents were released. They've pursued this strategy more aggressively ever since, along with the semi-related strategy of killing Google. Microsoft may be a company of 100k people, but they follow exactly one master strategy and that is the only reason they have had as much success as they've had. I'm not saying I agree with them about anything. I agree with ESR, they are on their way down. Windows 7 will do nothing to save them.

    Considering that most data sent in MS document formats is still sent with binary formats

    [citation needed]

    Fools always think prophets are cranks when they make their predictions.

    *sigh* Look, Ajay, Linux’s (and *BSD’s) major markets are servers, embedded systems, and mobile devices (including netbooks). Corporate customers don’t care about licenses or ideals. It’s like ESR said more than 10 years ago: shut up and show them the code. Linux wins because it supports the most hardware, has the biggest corporate backing, and, most importantly, is developed using the Bazaar model. Finally, consumers don’t give a rat’s ass about anything but functionality.

    All I can say about binary formats is this Ajay: Been there, done that, didn’t even get a lousy T-shirt.

  129. Jeff, I agree that you have to implement at some point but I don’t have the resources to do so myself right now. Therefore, I talk about the ideas for now and see what others think.

    Tim Berners-Lee created HTML and HTTP and his example program, WWW, all by himself on a single NeXT workstation. Surely if binary formats are just as simple to work with, you could design and implement your binary version yourself with no trouble at all. Or does “I don’t have the resources to do so myself” equate to “I don’t know anything about network programming, not even how to open a socket, and I’m just talking out my ass?”

  130. Jeff, I agree that you have to implement at some point but I don’t have the resources to do so myself right now.

    Yes you do. You’ve a warm computer and an internet connection. Free compilers exist; you can go get one if you haven’t one already.

    Therefore, I talk about the ideas for now and see what others think. Unfortunately, I have found that most cannot reason at all, therefore I get silly dismissals like xyz above. I had hoped you would be able to reason about it, but like most you seem unwilling or unable.

    I can reason just fine. Impugning someone’s logic skills just because they want you to show your work is a crank tactic. You’re really not making yourself look good here.

    I’ve seen too many over-ambitious projects — things like TUNES and Imari Stevenson’s entire collected body of work — wherein the author (typically a lone individual who thought he was Special) wanted to masturbate over his Wonderful Ideas rather than, you know, getting something done.

    Are you serious about this? Do you really want to succeed? Then roll up your sleeves and start doing the work. Otherwise, you’re just another “idea guy” with absolutely nothing to show and nothing to contribute.

  131. @Ajay: Oh, BTW–your link to your earlier discussion about hybrid licensing proves that you are a crank (or maybe worse). You were unable to sell your hybrid licensing to the one guy in the open source tribe who is most likely to be convinced that it’s a good idea (just ask anybody), instead spewing FUD and doom and gloom scenarios.

  132. @Morgan – to be fair, somebody linked to a site earlier (hardcoded.net), where the guy has just adopted this form of hybrid licensing. Of course, it all has that ‘new car’ smell to it, so we should see how his business pans out.

  133. Ajay, this thread is giving me a headache…..so I’ll be brief.

    Take a leaf from Jessica’s book. You may have good, arguable points to make regarding your ‘hybrid’ license and binary protocols, but your presentation of them online is so jarring in its arrogance that you risk being a terrible ambassador for such ideas – toxic, even. By contrast, Jessica has presented argument in your general defense that is engaging.

    Food for thought, perhaps.

  134. WordPress has developed a discerning appetite for my postings…..apparently some are more delicious to eat than others ;)

  135. “xyz, most of the business reasons you suggest don’t really apply to HTML or are very minimal factors.”

    Really? So having every browser maker re-implement their existing (working) software, re-test it all, re-package, change docs, websites, etc. is minimal? Have every company that makes software that embeds one of said browsers do the same to use the new version? Have everyone who uses a browser (or software that has a browser embedded), upgrade to the new version? Have every web page and server code changed over to your new format?

    You are honestly trying to say that a total redesign of how HTML works is *minimal*?

    Not to mention that whilst all this is on-going, the old standard will need to be supported. NB: I am presuming you think there would need to be a transition phase (you can't *possibly* be thinking of some big-bang switchover).

    And all for what? A technically superior version of HTML but with pretty much the same functionality? What motivation is there for people to expend resources on doing that?

    “Obviously other factors matter but technical superiority is often primary,”

    Most businesses (and people) I encounter seem to think differently. Technical issues are important, but I can't think offhand of a situation where the technical superiority of something has been the primary issue; my experience is that they're interested in things that work and provide the functionality they need, and it could be written in COBOL for all they care. Yours may be different; perhaps you could detail some experiences you've had where technical superiority has been the primary factor?

    “The world doesn’t care that the technology is superior, they care that they can get their real work done faster and easier, which will be the case with a new thin client.”

    Yes, they care about that, but it’s not the only consideration, and just because they can get work done faster and easier, *doesn’t mean they’ll switch*.

    “If you read my blog and are unsure what I’m trying to solve, you clearly didn’t read it very carefully as I precisely lay out the problems before suggesting solutions.”

    I should have been clearer — I'm unsure what problem *that most of the world cares about* you're trying to solve:

    Better GUI elements — solved / solvable; CSS, skinning, Flash, Silverlight. I don’t see what it is that HTML lacks, or couldn’t be extended to support.

    Sessions — solved; you say “most sessions on the web are not very responsive” which I find surprising, please could you provide some examples?

    Binary encoding — solvable (but nobody thinks it’s a big enough problem to solve); you say “almost nobody edits HTML at the text level so there is no use for markup”, could you provide some evidence of this?

    A GUI for design, not code — solved; get Dreamweaver or similar.

    No standards process — solvable in certain scenarios; you seem to think that if a binary standard is set by some organisation that everyone will keep to it, whilst I think they won’t, both intentionally and unintentionally — I mean, every released web server, mail server, DNS client, TCP / IP stack all implemented the standard correctly, right? Right.

    “It really doesn’t matter if you or others like you think it’s a non-starter, it’s a project that doesn’t take much resources as a small company could do it, therefore somebody will, hopefully me, and it will kill off HTML.”

    It's not creating it that's the problem — how do you plan to convince the rest of the world to take this up? You seem to think that technical superiority is the key factor, whilst I think you couldn't be more wrong.

    At the end of the day, if you want to make such radical changes, you need to provide a compelling reason for people to change — something I see no evidence of in your web pages and postings.

  136. I realize it’s off topic, so I’ll just mention it this once on this thread.

    You have joined the “digital underground” (pardon the awful pun) in Iran. Any plans for something similar here to combat Barack Obama’s crackdown on dissent, including a “snitch file” for turning in political opponents and bussing in thugs to attack protesters at town hall meetings?

  137. @Ken – I imagine there’s not really much need for it here. As reprehensible as obama’s tactics are becoming, they still don’t rise to the threat level of the Iranian regime. Perhaps you could establish sites that have no possible way to trace contributors – no IP logs etc….?

  138. You have joined the “digital underground” (pardon the awful pun) in Iran.

    He’ll drink up all the Hennessey you got on your shelf.

  139. xyz said:

    At the end of the day, if you want to make such radical changes, you need to provide a compelling reason for people to change — something I see no evidence of in your web pages and postings.

    Bravo, xyz. That’s it exactly, Ajay. If you think an open binary HTML/HTTP replacement would take the world by storm, prove it. Write one. If and when this binary HTML/HTTP replacement comes to pass, if it’s significantly faster and provides significantly more functionality than HTML/HTTP as it exists today in its various implementations as servers and clients and such, the whole world will say “Ajay was right!” and you will be deified in accordance with grand Internet tradition, alongside such names as Tim Berners-Lee, Jon Postel, Eric Allman, and others.

    The Internet was built by a meritocracy; if you want to change it, step right up and take your swing.

  140. One of the advantages of HTML being a text format is 'View Source' / 'View Selection Source', making it possible to see "how it was done". This would be difficult with an HTML-replacement binary format.

    Besides, at the start HTML was for text with some formatting and hyperlinks, so it was natural to take an existing and proven meta-standard for markup used in publishing, namely SGML, simplify it, take the relevant fragments from it, and make it a standard rather than a meta-standard. Evolution, not revolution.

  141. # Morgan Greywolf Says:
    >if it’s significantly faster and provides significantly more functionality
    > than HTML/HTTP as it exists today … you will be deified…
    > The Internet was built by a meritocracy; if you want to change
    > it, step right up and take your swing.

    Four letters Morgan: IPv6.

    Nonetheless, I do basically agree with you. Implement something, write an RFC, submit it for peer review, set up a demo network for people to try, and see what happens. If you succeed, Cisco will hire you with a very large salary, and Morgan will personally come tune your web server for you.

  142. One of the advantages of HTML being a text format is 'View Source' / 'View Selection Source', making it possible to see "how it was done". This would be difficult with an HTML-replacement binary format.

    WordPerfect’s ‘reveal codes’ command? :-P *ducking*

    Four letters Morgan: IPv6.

    *hangs head in disgust*

    Jessica: You don’t want to get me started on that. Trust me. But that’s a good example of an outlier.

  143. Jessica, when someone says “cannot possibly” they must mean what they say, that it is an iron law. Iron laws are brittle. You only need one exception to prove them wrong. I expect there are more examples, but I only need one. Why bother looking for it, when Ajay doesn’t accept that he’s making a crank argument? More examples wouldn’t convince him.

  144. I have updated the paper with a section on "Paths not taken" summarizing the arguments against XML and packed binary protocols.

  145. FLMKane, funny how all your trolling arguments actually apply to you. :)

    Morgan, if you think MS has a single master strategy that is slavishly followed, you’ve clearly been deranged by drinking the anti-MS kool-aid for too long. ;) Heh, if you think my hybrid license argument has anything to do with ideals, it clearly flew over your head. I simply note that both closed and open source software have been successful at different things, the former made Gates and Ellison billionaires while the latter has had some successful projects, and that perhaps the best solution is a blend of the two. In that, I’m just a pragmatic engineer trying to come up with a better solution practically, which is funny considering your admonishment of ideals, as though I buy the free software jihad or “all source will be open” BS. Consumers and corporations will care about hybrid licensing because it will produce the best software, that’s the claim that matters. As for implementing a thin client, Berners-Lee had funding for his project, I don’t. Knowing how to code is not the issue, coding is simple, I just don’t have the time and resources for a small-to-medium sized project like this right now, but I will someday soon. :) As for not being able to convince esr in that other thread, clearly he’s become deranged by open source and cannot or will not reason about any alternative. If you thought esr’s arguments there were compelling, even when the author of his one big counterexample emailed that esr was wrong, the arguments there either flew over your head or you will say anything to contradict me.

    Jeff, I know how much time this takes, it’s not something you do on your own. When you refuse to reason on a topic, calling for empirical evidence before you’ll say anything, you call your own ability to reason into question: I merely verbalized it. I already agreed that you have to implement at some point, but criticizing me for not having the resources to just go off and do this on my own right now is silly.

  146. Dan, if I’m confident in my ideas and the predictions that I make based off them and you consider that arrogance, clearly without being able to make a single argument contradicting me, I don’t much care how you perceive it.

    xyz, dunno what browser makers or embedders have to do with anything. As for webapps, only the views need to change, not all the server code. As for reasons, I suppose you consider companies that use AJAX or Flash or Silverlight for richer user experiences to be fools? Why do that, what are they really gaining, right? If you think technical superiority and things that work and are functional are two different things, I’m not even sure what you’re asking. I see, so they don’t care about working faster and easier but they want something that works and is functional, got it. ;) If you think the bloat of Flash, Silverlight, etc, successfully solves the GUI problem, we clearly have different definitions of “solution” in mind. ;) As for sessions, every time it takes 3-5 seconds to load a webpage, whether at a random site or in a webapp, that’s an unacceptable user experience and optimizing for user experience is a great need right now. As for HTML GUI editors, read the preceding comments in this thread. If you’d actually read my post about replacing the standards process, you’d know that I don’t care if companies add new features on their own, I think that’s a good thing. The reason you see no compelling reasons for a new thin client is because you don’t want to, what I get from you is that you see no compelling reason for any change anywhere. ;)

    Russ, again with your dumb distortion. Funny that you say I’m making a crank argument considering both you and esr are considered cranks by most people, with your anarchocapitalist and “all source code will/must be open” views, yet you throw that crank term around so lightly yourselves. One would hope for some perceptiveness or self-awareness instead, but then those qualities are always hard-won and you two clearly haven’t put in the effort.

  147. “Remember, one of my requirements is “no malloc!””

    Interesting. By ‘no malloc’ you mean some specific implementation of malloc or you mean “no dynamic memory allocation” ?

    If it's the latter, is it simply because memory management is hard and takes time and generates bugs? Or is there something specific to gpsd here?

  148. On a completely unrelated note, I also notice that open-quote marks and close-quote marks are changed in-place in comments. Is this a feature specific to your blog or part of WordPress?

    ESR says: WordPress does it automatically, I think. I didn't do anything to request it in my blog configuration.

  149. Ajay, I didn't say you were a troll, I said your style of posting was trollish. Second, I'm aware of what my posting style can be like at times. But hell, this is a flame war (don't deny it).

  150. There is yet another thing that you have to take care of if you design a binary protocol (besides byte-order and floating-point representation).

    In binary formats it is very easy to go for fixed-width records. Let's take the example of a UNIX-epoch timestamp: if you use 32 bits, you will have problems in the (now quite close) year 2038.

    In text formats, numeric values are of essentially unbounded length and unlimited precision.
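
    A quick Python illustration of that trap, using the standard struct module (the ">" prefix also pins the byte order, the other issue mentioned above):

    import struct, calendar

    # the first second that no longer fits in a signed 32-bit field: 2038-01-19 03:14:08 UTC
    t = calendar.timegm((2038, 1, 19, 3, 14, 8, 0, 0, 0))

    print(struct.pack(">q", t))      # 64-bit big-endian field: fine
    try:
        struct.pack(">i", t)         # 32-bit signed field: overflows
    except struct.error as e:
        print("32-bit field: %s" % e)

    print(str(t))                    # the decimal text form just grows another digit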

  151. “xyz, dunno what browser makers or embedders have to do with anything.”

    You want to make radical changes to HTML / HTTP — it affects makers because they have to implement it; it affects embedders because they use what the makers produce.

    “As for webapps, only the views need to change, not all the server code.”

    Not all of it maybe, but it will all need re-testing, and also checking in case it generates any HTML or uses HTTP in any way. Even if it’s just the views, it’s still every single web page, so it’s still absurdly unfeasible.

    “As for reasons, I suppose you consider companies that use AJAX or Flash or Silverlight for richer user experiences to be fools? Why do that, what are they really gaining, right?”

    Not at all, but that’s completely different to what you’re suggesting — using AJAX / Flash / Silverlight is taking advantage of *things that already exist* to provide a richer user experience, whilst you’re talking about radically re-working HTTP / HTML.

    “If you think technical superiority and things that work and are functional are two different things, I’m not even sure what you’re asking.”

    They are different. Users don’t care about the technical aspects, and people creating software care only as much as they impact on the software’s requirements (e.g. cost of maintenance, performance, extensibility). Do you pick the applications you use based on the language they were written in and how they were implemented?

    “I see, so they don’t care about working faster and easier but they want something that works and is functional, got it. ;)”

    They care, but only if they’ve actually got a problem with their current software, it’s enough of a problem for them to actually want to spend time changing how they work and the new version actually provides enough of a benefit to be worth it. Your proposals have a ridiculously high cost to them for little to no benefit, hardly a winning proposition.

    “If you think the bloat of Flash, Silverlight, etc, successfully solves the GUI problem, we clearly have different definitions of “solution” in mind. ;)”

    Depends how you define “success” and “solve”. So far, I’ve yet to see you say what specific problems HTML doesn’t solve that can’t be solved through a combination of HTML, CSS, Flash, etc. On what grounds do you think these technologies don’t solve the problem, what is your proposal? NB: Your article on graphical elements needed by a thin client is odd as you say that HTML provides 70% of what’s necessary and then, rather than just extend it, you instead propose incredibly costly fundamental changes.

    “As for sessions, every time it takes 3-5 seconds to load a webpage, whether at a random site or in a webapp, that’s an unacceptable user experience and optimizing for user experience is a great need right now.”

    And how have you determined that this was caused by sessions? I am highly doubtful — in all my experience of website development, I’ve never encountered a situation where poor user experience was down to sessions, it has usually been poor hardware, lack of bandwidth, inefficient code, non-optimised databases, etc.

    “The reason you see no compelling reasons for a new thin client is because you don’t want to”

    I think what you mean is I don’t agree with the reasons you’ve given.

    “what I get from you is that you see no compelling reason for any change anywhere. ;)”

    Not true; change in business and software is a good thing — it tells you people are using your business / software.

    The difference is you’re proposing radical changes to HTML / HTTP when the same thing is either already available or could be achieved with the existing technology / infrastructure (i.e. much, much easier), with the only real benefit being “technical superiority” (depends on your point of view of course). That, to my mind (and, I suspect, the vast majority of the world), is a terrible tradeoff.

  152. Jeff, I know how much time this takes, it’s not something you do on your own. When you refuse to reason on a topic, calling for empirical evidence before you’ll say anything, you call your own ability to reason into question: I merely verbalized it. I already agreed that you have to implement at some point, but criticizing me for not having the resources to just go off and do this on my own right now is silly.

    Again, it didn't stop Tim Berners-Lee from booting up his NeXTstation and doing the work. Even crazy, fanatical RMS, squatting in an office at MIT, put his time and effort where his mouth was by writing a whole bunch of free software under the GPL and thus spreading the word about his crazy ideas, backed up with real working free software. And then Linus joined in and it sort of took off from there.

    Look, genius is 1% inspiration and 99% perspiration. You want credit for the former without having to do the latter. You want people to take your ideas seriously without having developed them at all. Therefore, you fail at invention, having provided us with nothing but empty air to work with. That’s my reasoning.

  153. @Ajay

    As for not being able to convince esr in that other thread, clearly he’s become deranged by open source and cannot or will not reason about any alternative. If you thought esr’s arguments there were compelling, even when the author of his one big counterexample emailed that esr was wrong,

    Maybe I missed something, but I don’t get the impression that esr was taking your arguments too seriously, and therefore wasn’t trying very hard to counter them.

    That being said, hybrid licensing already exists beyond the examples that were shown, and you’re obviously just not paying attention because some of these examples are highly controversial and have caused much consternation in the community. But my primary example is the Linux kernel itself.

    *pauses while those still reading twist their faces in confusion*

    The Linux kernel uses a sort of ‘defacto’ hybrid license. Ever heard of this driver, etc.? And the many other drivers in the kernel that contain large sections of binary blobs, some of them even GPL themselves?

    Basically, I say it's a 'defacto' hybrid license mostly because of Linus' well-publicized attitude towards binary blob-type violations of the GPL: "I don't really care." It isn't codified, and Linus has said that he doesn't like binary blobs, but he is basically resigned to doing nothing about those who use binary blob kernel modules.

    Contrast this with the FSF’s rather, errm … militant stance with regard to binary blob drivers in the Linux kernel.

    (My own opinion is rather irrelevant as the kernel isn’t called Morgux for good reason. But I have 2 desktops and a laptop at home with NVidia cards in them running the proprietary drivers simply because they work. :)

    If I were you, I'd do a bit more research as to whether 'hybrid licensing' is merely a solution in search of a problem.

  154. If I were you, I'd do a bit more research as to whether 'hybrid licensing' is merely a solution in search of a problem.

    The problem is that pure open source contributes to the global body of source code and knowledge (good) at the expense of scuppering any chance at monetizing the software product (bad). You have to realize that programming code represents valuable intellectual property that can make or break a business. It’s not all commodity shit like Windows. In the professional audio world, having a fast algorithm that the other guys don’t, that lets you do, say, effects processing on 24-bit 96kHz samples with less than 1 ms latency, may very well be your business. It will allow you to sell synths or effects boards at the kinds of prices it takes to stay alive in the high-margin, low-volume pro audio market. The leverage gained by such information asymmetry is precisely what drives innovation in fields like this. If we treat this code as RMS and ESR do, saying it must be free and it’s wrong or counterproductive to keep it proprietary, we’re threatening the bottom line of some of the most advanced companies in the field today. Whole industries would simply dry up and disappear if they took that approach. Hybrid licensing represents a compromise that serves the interests of both parties. It’s worth looking into.

    When Ajay says hybrid licensing, he is referring specifically to his time-release open source scheme, not the patchwork of GPL and binary blob licenses in the Linux kernel. A similar scheme was used by id Software for their game engines for many years without much controversy.

  155. “Dan, if I’m confident in my ideas and the predictions that I make based off them and you consider that arrogance, clearly without being able to make a single argument contradicting me, I don’t much care how you perceive it.”

    Fair enough. You make your own bed, as they say.

  156. In the professional audio world, having a fast algorithm that the other guys don’t, that lets you do, say, effects processing on 24-bit 96kHz samples with less than 1 ms latency, may very well be your business.

    *snip*

    Hybrid licensing represents a compromise that serves the interests of both parties. It’s worth looking into.

    Exactly! This is where the ‘patchwork’ model adopted by the Linux kernel fits in.

    Okay, let's say you are the lead developer on a project like Audacity, and a company approaches you about a plugin for your software that can be used in combination with something like JACK and your hardware effects processor to do those 24-bit 96 kHz samples in under 1 ms of latency, but the code is a trade secret.

    If your project is licensed under a BSD-like license, there’s already no problem: they can simply bundle the software and their proprietary plugin, and nobody will care. No need for hybrid licensing here.

    If your project is licensed under a GPL-like license, you can tell them to write a GPL shim and install their compiled code as a binary blob, a la the Linux kernel.

    Unless you are RMS or one of his freedom fighters at the FSF, I don’t think there’s a huge problem with either situation.

  157. >Interesting. By ‘no malloc’ you mean some specific implementation of malloc or you mean “no dynamic memory allocation” ?

    No dynamic memory allocation. It’s too hard to get right for use in a long-running service daemon, at least when you’re doing it by hand.

  158. >The problem is that pure open source contributes to the global body of source code and knowledge (good) at the expense of scuppering any chance at monetizing the software product (bad).

    I described nine different ways to monetize open source a decade ago. At least seven of them are in execution now. Do at least try to keep up.

  159. >Tim Berners-Lee created HTML and HTTP and his example program, WWW, all by himself on a single NeXT workstation. Surely if binary formats are just as simple to work with, you could design and implement your binary version yourself with no trouble at all.

    This is true. When I was much younger and didn’t know any better, I implemented a far more complicated binary persistence protocol solo. It turned out to be a bad idea for reasons unrelated to my ability to code it.

  160. ESR writes:
    > No dynamic memory allocation. It’s too hard to get right for use
    > in a long-running service daemon, at least when you’re doing it
    > by hand.

    As a Windows programmer I can tell you that simply isn’t true. It is easy to deal with memory leaks in your server programs. The solution? Frequent reboots.

    :-)

  161. A while ago Miguel wrote:

    I saw someone referring to Bittorrent as a binary protocol; well, I guess you haven’t ever programmed a .torrent decoder. It is a text protocol. It uses HTTP for file transfers and plain text for torrents, and I think it could not have been developed otherwise.

    Wrong.
    There’s a number of formats involved in the operation of bittorrent, so let’s disentangle this.
    First, there's .torrent metafiles. Those use a format called bencoding. This format is quite similar to JSON in what it can encode, but uses length-prefixes to delimit lists, strings and maps. Strings are written to file without any transformation, including any binary contents like NUL bytes. The actual metadata is text; but any non-textual strings embedded will turn the resulting file non-text, and in any case it's a lot of work to parse or write the format for a human; you really want format-specific tools to do that for you. Whether you want to call this a 'text format' is up to you, but it certainly doesn't have the strengths typical of such formats (easy read- and writability without specialized tool support) espoused by ESR here.
    Secondly, bittorrent clients talk to trackers. The standard and original format for those communications is indeed HTTP based; part of the transmitted data is again bencoded. For efficiency, many bittorrent clients and trackers also support a UDP protocol. This tracker protocol is purely binary.
    Thirdly, bittorrent clients transfer data to each other. They do this over a purely binary protocol whose low-level design has nothing to do with the previously mentioned ones. This is spoken by all bittorrent clients (though many extensions are only supported by some), and makes up the bulk of bt traffic.
    Fourthly, there is a DHT protocol for clients to tell each other about peers they know. This protocol is built on bencoding.

    So, to summarize: Every bittorrent client speaks at least one pure binary protocol. Many speak two. Every common not-purely-binary protocol and format involved in bittorrent uses bencoding, and while bencoding is arguably a text format, it’s not significantly easier for humans to read or write without tool support than binary ones.

  162. First, there’s .torrent metafiles. Those use a format called bencoding. This format is quite similar to JSON in what it can encode, but uses length-prefixes to delimit lists, strings and maps.

    Apologies, I got that slightly wrong: length-prefixes are used to delimit strings, and only strings. This doesn’t impact my larger point: you’re still not likely to want to write .torrent files by hand (or for that matter read them, unless you know they’re syntactically valid and none of the strings contain bencoded data themselves).
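
    For readers who have never looked inside a .torrent, here is a minimal bencoder sketch in Python (a hypothetical helper, not taken from any real client); it shows both the near-JSON data model and the length-prefixed strings just described:

    def bencode(obj):
        # strings: <length>:<bytes>; integers: i<n>e; lists: l...e; dicts: d...e
        if isinstance(obj, bytes):
            return str(len(obj)).encode("ascii") + b":" + obj
        if isinstance(obj, str):
            return bencode(obj.encode("utf-8"))
        if isinstance(obj, int):
            return b"i" + str(obj).encode("ascii") + b"e"
        if isinstance(obj, list):
            return b"l" + b"".join(bencode(x) for x in obj) + b"e"
        if isinstance(obj, dict):
            # per the spec, dictionary keys are strings emitted in sorted order
            return b"d" + b"".join(bencode(k) + bencode(v) for k, v in sorted(obj.items())) + b"e"
        raise TypeError("cannot bencode %r" % type(obj))

    # a toy, torrent-like structure; real metafiles also carry a binary 'pieces' string
    print(bencode({"announce": "http://tracker.example/announce",
                   "info": {"name": "example.txt", "length": 42}}))

    Note how quickly the output stops looking like something you would read or write by hand, which is the point being made above.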

  163. You have to realize that programming code represents valuable intellectual property that can make or break a business. … The leverage gained by such information asymmetry is precisely what drives innovation in fields like this. If we treat this code as RMS and ESR do, saying it must be free and it’s wrong or counterproductive to keep it proprietary, we’re threatening the bottom line of some of the most advanced companies in the field today. Whole industries would simply dry up and disappear if they took that approach.

    “When the rent from secret bits is higher than the return from open source, it makes economic sense to be closed-source. When the return from open source is higher than the rent from secret bits, it makes sense to go open source.” — ESR

  164. @strongpoint: I was going to dig out that quote from one of ESR’s essays to counterpoint that “as RMS and ESR do” comment but you beat me to it. :)

    It should be apparent to anybody who has read anything ESR and RMS have written about open source software that each has a very different, if somewhat overlapping, worldview. Both agree that open source software is the right thing, but for RMS it is a lifelong idealistic crusade that all software should be open source (he would use the term "free" and argue they aren't the same thing), while for ESR it is the right thing for most software, both from an economic-theory standpoint and as a practical development methodology.

  165. “No dynamic memory allocation. It’s too hard to get right for use in a long-running service daemon, at least when you’re doing it by hand.”

    I understand your rationale here (although I do not experience such difficulties), but I was wondering: if everything in gpsd is statically allocated, how sensitive/brittle is it to external stresses that may cause it to breach ‘realtime’ constraints?

  166. >how sensitive/brittle is it to external stresses that may cause it to breach ‘realtime’ constraints?

    Not at all. I get away with the no-dynamic-allocation policy because all the input information gpsd ever needs to handle comes in packetized chunks of a known maximum size. All it needs to be able to do is buffer one of those plus an O(1) amount of static state information.

  167. Jeff Read Wrote:
    > You have to realize that programming code represents
    > valuable intellectual property that can make or break a
    > business…. The leverage gained by such information
    > asymmetry is precisely what drives innovation in fields like this.

    Jeff, you are only looking at one side of the equation, you are forgetting the opportunity cost. For sure, keeping programs secret does provide profits that encourage innovation. However, keeping programs secret also prevents the spread of ideas, and places rent on ideas, which clearly decreases innovation.

    This is almost an identical argument to patents: give people a restricted right to an idea, and people will innovate to get the rent payments, however, doing so also significantly decreases innovation because the ideas are locked up in a box. At least with patents you are supposed to make a public description of your ideas.

  168. JessicaBoxer, do you read The Freeman? A few months back they had an article pointing out a counterexample to the idea that patents give an incentive to innovate. I blogged about it on opensource.org. Steam engines improved faster when patents weren’t used.

  169. > No dynamic memory allocation. It’s too hard to get right for use in a long-running service daemon, at least when you’re doing it by hand.

    Look at djb’s qmail-send. It uses dynamic memory allocation, doesn’t have memory leaks, and runs for a long time.

    A lot of people get grumpy about djb’s personality, but he writes interesting code.

    while for ESR it is the right thing for most software, both from an economic-theory standpoint and as a practical development methodology.

    I know. Hence, “wrong or counterproductive”. I suspect that the rent from secret bits is higher than the return from open source for a much broader spectrum of software than ESR does; otherwise, Microsoft really would have been dead and buried by now.

  171. Look at djb’s qmail-send. It uses dynamic memory allocation, doesn’t have memory leaks, and runs for a long time.

    Unless you’re the Pai Mei of memory management discipline, it’s still risky. Avoiding malloc(3) is good for a number of other reasons too: it’s slow, and can lead to memory fragmentation compared to using a fixed-size hunk of memory for everything. If you can get away with doing the latter, it really is smart.

  172. @Russ: I think if you were djb, you’d be a bit quirky, too. But the guy is super smart and his code is very tight and very clean.

  173. I suspect that the rent from secret bits is higher than the return from open source for a much broader spectrum of software than ESR does; otherwise, Microsoft really would have been dead and buried by now.

    Hmph. Microsoft had the benefit of being in the right place at the right time. Back before GNU and the FSF and Unix System V, when most hackers were still pounding away on PDP-10s and 11s and micros hadn’t yet caught the attention of most, Gary Kildall went flying, so IBM talked to Paul Allen and Bill Gates (who were already selling them BASIC), who told them “sure, we have an operating system!” The next day, they were negotiating the purchase of something called QDOS from a guy named Tim Paterson at Seattle Computer Products and the rest is history. Microsoft rode IBM’s coattails. “Nobody got fired for buying IBM” became “Nobody got fired for buying Microsoft.”

    Microsoft learned to be evil from IBM so they learned from the best. *shrug*

  174. Russell Nelson Wrote:
    >JessicaBoxer, do you read The Freeman?

    No I don’t. However, if my comment was unclear, I am a very strong advocate of limiting, or preferably eliminating, all patent laws including legacy patents. I think I have commented on this a number of times before on this blog. As I said in a few comments about medical care, medical patents are undoubtedly responsible for millions of deaths every year, and for hundreds of millions of lives that could have been saved but for the opportunities lost to patent holders.

    Patents are an immense drag on innovation, even back in James Watt’s days when innovation was much slower. The only reason the economy doesn’t fold up and die under the burden of patents is that the system is run by the government and court system meaning that its destructive powers are ameliorated by the spectacular inefficiency and ineffectiveness of governments.

    To see what I mean: consider patent trolls. What exactly is a “patent troll”? It is simply a mechanism to introduce some free market efficiencies into the patent system, and look at the damage they do. Spectacularly effective companies brought to their knees by small-time lawyers, with stupid, ridiculously broad, unimplemented, unused claims of ownership.

    The regular patent system is in reality one big, slow, inefficient patent troll itself.

  175. ““When the rent from secret bits is higher than the return from open source, it makes economic sense to be closed-source. When the return from open source is higher than the rent from secret bits, it makes sense to go open source.””

    When a tautology is true, it is a true tautology. Thank you very much.

    Let’s focus on something that actually predicts something useful: under what personal, cultural, technological, economic or otherwise conditions would/will the returns from OS be more than the rents from secrecy for most typical kinds of software?

  176. Let’s focus on something that actually predicts something useful: under what personal, cultural, technological, economic or otherwise conditions would/will the returns from OS be more than the rents from secrecy for most typical kinds of software?

    Does the promotion of computing and/or programming education and/or contribution to culture count for anything? Personal enrichment? Howsabout it’s just plain fun?

    Or…how about closed source development is in trouble:

    The problem with the proprietary, closed-source way of doing software is that, increasingly, the brainwashing isn’t working anymore. The costs from normal software error rates are exceeding the tolerance of even the most thoroughly acquiescent customers (one visible symptom of this trend is the exponential trend of increase in email Trojan horses and website cracks). Partly this is because the cost per error rises as the customers come to rely more on their software. Partly it’s because the bugginess of closed-source software is getting progressively worse in absolute terms of errors per line of code.

    and that Microsoft’s problems with Vista are actually a manifestation of the phenomenon that ESR alludes to in Open Minds, Open Source?

    Finally, I think this chapter in The Magic Cauldron makes a pretty good argument overall from a business standpoint. For example, Web server customers needed a highly-reliable and well-performing Web server in the 1990s, so they collectively wrote one known today as Apache.

  177. Morgan wrote:
    > Web server customers needed a highly-reliable and
    > well-performing Web server in the 1990s, so they
    > collectively wrote one known today as Apache.

    It is worth pointing out that there are, broadly speaking, two different reasons software is created: to sell the software or to use the software. If your purpose is to make money from software, selling it shrink-wrapped in the store is a pretty good route to go. There are other models, but charging a price and keeping the source secret is a pretty simple business model, with a very low marginal cost. (BTW, to be pedantic, it is not the bits that are secret; it is the higher-order meaning of the bits that is secret, to obscure the method of operation and to make modification extremely difficult.)

    If your goal is to use software, though, generally an open source model offers a number of significant advantages. The huge challenge there is the fear that, should you use open source software, you will not be able to rely on it. Big open source projects have sufficient activity and review that you can; hence Apache, gcc, and Linux are reliable choices. However, many open source projects have low contribution and activity, and as such can’t really be relied on. Of course you have the source — you can work it out yourself. But not if you don’t have the skills to do so. If you have to hire the skills, you end up in a place that is scary for many people: managing a software development project.

    To put it another way, big open source projects are diamonds for software users. Small open source projects are not. It is much better to blow $299 on Quickbooks than risk your finances on some low activity open source project. And if you are using Quickbooks, I guess you need to use Windows. And if you are using Windows, your sysadmins only know how to run IIS, and so forth.

    From what I see, and I am no expert, and I doubt I will find too many friends of this opinion on this blog, open source software can provide 80% of the needs of 80% of businesses. However, the ecosystem realities of the remaining 20% force everyone back to a whole company proprietary system.

    > and that Microsoft’s problems with Vista are actually a
    > manifestation of the phenomenon that ESR alludes to in
    > Open Minds, Open Source?

    The problem with Vista was simply that it wasn’t very compelling. It was a fine product, relatively stable, and sufficiently backward compatible, and it looked pretty. However, the benefits over XP were small enough that there was little reason to upgrade. This is a huge problem at Microsoft IMHO. All their revenue comes from operating systems and Office. Yet they are deeply mature products that frankly don’t need any new features. Office 2007 is a perfect example: they added no new functional features at all (practically speaking); they just completely rearranged the GUI to make it look pretty and make it hard for experienced users to learn.
    The problem with Microsoft is that they have run out of steam, and new ideas. Or at least, they are so scared of regulators that they won’t bring out any new revenue generating products. What did they spend the 2000s doing? Writing .NET and C#. Both compelling products for programmers, but hardly revenue generation opportunities.

  178. Exactly! This is where the ‘patchwork’ model adopted by the Linux kernel fits in.

    A better implementation of the patchwork model: Mac OS X, which is currently trouncing every other desktop Unix in terms of market and mind share, even among developers. There’s a reason for this.

  179. However many open source projects have low contribution and activity, and as such can’t really be relied on.

    Whether you can rely on an open source project with low contribution and activity depends on what it is. Simple applications like GUI front-ends to mature, stable command-line or daemon backends tend to be easily maintained, even if they are not maintained by the command-line or daemon program’s developers. Furthermore, simple tools, like, say, a particular implementation of sed or grep, tend not to need very much maintenance at all.

    It is much better to blow $299 on Quickbooks than risk your finances on some low activity open source project. And if you are using Quickbooks, I guess you need to use Windows. And if you are using Windows, your sysadmins only know how to run IIS, and so forth.

    I wouldn’t call GNUCash a “low activity” project.

    What did they spend the 2000s doing? Writing .NET and C#. Both compelling products for programmers, but hardly revenue generation opportunities.

    You, like so many geeks, fail to understand the Microsoft mindset. .NET and C# are about paranoia. Microsoft observed that much new development work, especially Web applications, was being done in dynamic languages like Java, Perl, Ruby and Python (think LAMP, J2EE, Rails, Zope, Django). What these languages and frameworks have in common is that they are not tied to a specific platform. I can write code in Java or Python and it will run anywhere Java and Python run.

    That’s a problem because Microsoft wants to make sure that there are plenty of applications that run exclusively on Windows, because the one thing that they learned from Apple is that customers adopt an application, not a platform. .NET and C# are all about getting dynamic languages and a development framework that works only (or only well) on Microsoft platforms. (Microsoft did not count on Mono or Moonlight, but it’s worth noting that as of now, most .NET applications won’t run under Mono).

  180. The problem with Microsoft is that they have run out of steam, and new ideas. Or at least, they are so scared of regulators that they won’t bring out any new revenue generating products. What did they spend the 2000s doing? Writing .NET and C#. Both compelling products for programmers, but hardly revenue generation opportunities.

    That’s been Microsoft’s target all along, really: Developers! Developers! Developers! Developers! Provide an easy migration path for developers and the user base inevitably follows. Before you pipe in with “…but Linux has great developer tools!” — no, it doesn’t. Show me the Unix tool that lets you automate an Excel sheet to generate quarterly earnings reports from revenue and expense data kept on an SQL server. It’s the day-to-day business developer that Microsoft has historically targeted, not hackers. It has proven a phenomenally successful strategy.

    All the indicators are pointing to a phenomenal release for Windows 7. It’ll be the next best thing to Snow Leopard, and run great on a netbook.

  181. Show me the Unix tool that lets you automate an Excel sheet to generate quarterly earnings reports from revenue and expense data kept on an SQL server.

    You’re looking for JasperReports.

  182. Morgan Greywolf writes:
    >I wouldn’t call GNUCash a “low activity” project.

    No, probably not, but I sure wouldn’t run my business off it either. I was not familiar with the product so I had a quick look at the FAQ. It is very disorganized, telling me in one place how to calculate GST (which I presume is an Australian thing), and in another place how to comply with Norwegian business standards. I didn’t find anything about how to run, for example, my US payroll system, or where to download tax tables. I presume this feature is missing. It seems a pretty big omission to me. Perhaps I judge it unfairly, but frankly I don’t care — I only want to run my AR, AP, and payroll, not save the world from evil Intuit.

    >You, like so many geeks, fail to understand the Microsoft mindset.

    That is a pretty big conclusion to draw from very little evidence.

    > .NET and C# are about paranoia.

    I find this an ironic statement, given the bristling paranoia in your following comments.

    > Microsoft observed that much new development work, especially
    > Web applications, was being done … LAMP [etc.]… not tied to a specific
    > platform. … Microsoft wants to make sure that there are plenty of applications
    > that run exclusively on Windows

    Morgan, that is not paranoia, it is called good business. It is also not an accurate history of the reasons Microsoft adopted .NET. ASP.NET was certainly not the driving force at all.

    > Microsoft did not count on Mono or Moonlight, but it’s worth noting that as
    > of now, most .NET applications won’t run under Mono).

    I am not an expert on Mono; however, I know quite a bit about ASP.NET. Of all applications written using these Microsoft tools, web sites are the simplest, and most likely to translate easily to a Mono platform because they do not, generally speaking, use most of the hyper-Microsoft-specific graphical and platform libraries. From what I understand there is an Apache module to run Mono, so it would be an interesting experience to see how well it works. Anything to save us from the evil that is perl.

    I have plenty of problems with the way Microsoft does business, but it is spectacularly ironic for members of a community who write things like “micro$oft is evil” to call Microsoft paranoid. (Though undoubtedly they are paranoid, or is it paranoia if they really are out to get you?)

  183. Jeff Read Says:
    > All the indicators are pointing to a phenomenal release for
    > Windows 7.

    I believe you are mistaken. I think Windows 7 will do a little better than Vista, but not a lot. There has already been a whispering campaign against it since the RTM version.

    However, frankly they deserve it. Windows 7 is just more of the same. Microsoft really has blown it the past five years. Look, they have a highly controlled environment in terms of the CLR, and complete control over the operating system code. If they had applied their 50,000 programmers to the task, what they could be bringing out is an operating system that ran very effectively on multiple cores, and more importantly could automatically run CLR applications on multiple cores. That would have been a killer application that would have knocked Linux on its heels for ten years. But what do we get instead? Pretty graphics! An improved calculator program! Fancy desktop icons! What a waste of a quarter of a million programmer-years.

  184. It’s faster and less bulky than Vista. That’s all most people care about where I live. People here have less money, BUT ‘software piracy’ is rampant to the point where the price set by Microsoft is a non-issue. We just crack and copy.

  185. > Microsoft observed that much new development work, especially
    > Web applications, was being done … LAMP [etc.]… not tied to a specific
    > platform. … Microsoft wants to make sure that there are plenty of applications
    > that run exclusively on Windows

    Morgan, that is not paranoia, it is called good business. It is also not an accurate history of the reasons Microsoft adopted .NET. ASP.NET was certainly not the driving force at all.

    I see what you’re saying, but I’m also implying things like Java and Swing, Python and PyGTK or PyQt, Ruby and GTK or Qt, etc. The driving force behind .NET I think was distributed applications, client/server stuff, and, to a lesser extent, desktop apps. I’m not trying to imply that ASP.NET was the driving force, though I can see why you thought that. (My bad — I meant to include Web and networked apps…. really, all programming these days is Web programming, though. [1])

    [1] This is not a difficult argument to make, but it takes time. :)

    Finally, I don’t think “micro$oft is teh ev1l!” but I do know that their business strategy historically can be summed up like this: In order for Microsoft to win, they don’t need to be the best, they just need to make sure that everyone else loses and that their product is “good enough.” I call it the McSoftware model. I just think that their entire business model is outdated and outmoded and they are grasping at straws at this point trying to stay in business.

    Anything to save us from the evil that is perl.

    Don’t you know anything? That’s what Python is for! :-P

  186. Morgan Greywolf wrote:
    > In order for Microsoft to win, they don’t need to be the best, they just need to make sure
    > that everyone else loses and that their product is “good enough.”

    This is a curiously engineering mindset. It is interesting from the point of view of Maslow’s hierarchy of needs. An engineer needs to make a certain amount of money to live, but once he passes a certain point money is less of an issue, doing something worthwhile and good takes over.

    However, businesses are run differently. The purpose of all (or at least most) businesses is to get a good return on investors’ capital. If a product is more than “good enough” then you have spent money on things that are not necessary, and are not doing your investors a favor. I think though by “good enough” you really mean “not good enough.” And of course it is the fiduciary duty of every company to try to kill their competitors (as long as they do so within the rules.)

    >I call it the McSoftware model.

    You are aware that McDonalds is probably the most successful restaurant chain in history, right?

    > I just think that their entire business model is outdated and outmoded

    I think there is some truth to that, though I think you greatly overstate the case.

    > and they are grasping
    > at straws at this point trying to stay in business.

    I suggest you look at their financials if you really believe that.

  187. I am aware that most people here are not making the dogmatic argument that text files are always better and open source software is always better, but it sure sounds like it. I was unaware that binary database files, binary machine code and Java .class files were such bad design decisions. I am chagrined that I spent so much time using C compilers to produce object files in an inferior binary format! If only I could have applied my Mark I eyeball more easily….

    As regards open source software, as I read the history of computing, we have always had both open and closed source software. Closed source software companies are not disappearing with the speed I would expect if the open source business model were so superior. Empirically, it really looks as if closed source is the right way to go for a very large amount of economic activity. I’ve read Eric’s writing on open source and agree, you can make money on it. But, empirically speaking, closed source appears to have significant economic advantages over open source for many software companies. I don’t know if hybrid licenses are the technology that will change that. I suspect that there are some deeply human reasons we haven’t all switched to open source. One might be the utterly human desire to own the fruits of one’s labor. Just a hunch. If my suspicion that these deeply human reasons exist is true, then the ultimate conversion to open source will fail for the same reasons that the effort to convert everyone in Russia to the new Soviet man did.

    Yours,
    Tom

  188. This is a curiously engineering mindset.

    Duh. I’m a systems engineer.

    It is interesting from the point of view of Maslow’s hierarchy of needs. An engineer needs to make a certain amount of money to live, but once he passes a certain point money is less of an issue, doing something worthwhile and good takes over.

    That’s closer to Herzberg’s two-factor theory, but then again, Maslow’s hierarchy maps neatly onto Herzberg’s two-factor theory. Anyway, it’s clear that, like me, you’re a business major and this explains your worldview considerably. (My bachelor’s degree is in Business/Information Systems)

    And of course it is the fiduciary duty of every company to try to kill their competitors (as long as they do so within the rules.)

    Emphasis mine. Yes. Note that Microsoft has been convicted of violating the rules that are designed to keep markets competitive[1]. It could also be argued, however, that locking out competitors with closed protocols and closed specs violates unwritten rules of scientific cooperation necessary for advancement of the art. OTOH, it is this stubborn clinging to closed systems that has doomed Microsoft. You can only capture secrecy rent up until the point that your “good enough” software has competitors that are also “good enough” but are using open specs and open sources.

    You are aware that McDonalds is probably the most successful restaurant chain in history, right?

    Yes. But they sell a product that almost everyone agrees is substandard. And they’ve lost considerable standing and market share in recent years to competitors who are putting out a better product at a similar price point. What does that have to say for Microsoft, whose primary competitors these days are putting out a free product?

    I suggest you look at their financials if you really believe that.

    Yes, I’m overstating my case somewhat; however, I’m doing so to point out Microsoft’s mindset, because this is what the culture within Microsoft really believes.

  189. [1] Whether you believe those rules are correct or not are outside the scope of my argument.

  190. I am aware that most people here are not making the dogmatic argument that text files are always better and open source software is always better, but it sure sounds like it. I was unaware that binary database files, binary machine code and Java .class files were such bad design decisions. I am chagrined that I spent so much time using C compilers to produce object files in an inferior binary format! If only I could have applied my Mark I eyeball more easily….

    The discussion is actually about textual vs. binary protocols, but people who are not cognizant of the difference between a protocol and a file format keep steering it towards file formats. There are valid reasons to use binary file formats; databases are a very good example: things like XML databases exist, but binary databases almost always win for performance reasons.

  191. Morgan Greywolf wrote:
    > Emphasis mine. Yes. Note that Microsoft has been convicted of violating
    > the rules that are designed to keep markets competitive[1].

    Yes indeed. And that is what we really need: judges that know nothing about business, running businesses. Good gravy, next thing we will have community organizers running car companies or hospitals. I know that is a crazy idea, but the thin end of the wedge sometimes becomes thicker.

    > It could also be argued, however, that locking out competitors with closed
    > protocols and closed specs violates unwritten rules of scientific cooperation
    > necessary for advancement of the art.

    Right, that is normal practice in our society. That is why we have a patent and copyright system. Obviously, the very existence of patents makes it clear that people don’t want to freely share these sorts of things.

    You want them to make their secret protocols public why? So you can write software that competes with Microsoft? There is a name for that behavior: it is called “breach of fiduciary duty.”

    Rather than secrecy rent, would you rather Microsoft followed the more traditional route and charged patent rent? At least with secrecy rent you have the option to reverse engineer.

    >> You are aware that McDonalds is probably the most successful restaurant chain in history, right?
    > Yes. But they sell a product that almost everyone agrees is substandard.

    And your basis for this claim is what? I don’t agree that everyone agrees with that at all. True, I personally very rarely eat there; however, what people do with their pocketbooks is a far more accurate reflection of what they actually believe than what they say with their mouths. Actions, as they say, speak louder than words. Perhaps it is substandard from the nutritional point of view, however, it is superior in a number of other ways (taste, convenience, familiarity, access). So your statement, unqualified, is almost certainly wrong.

    This is applicable to the discussion in hand. Perhaps Linux and its friends are superior from an engineering point of view (though I certainly don’t stipulate to that claim); however, apparently, as people vote with their dollars, they are less acceptable than Microsoft products by other measures, and apparently those measures are more important to people.

    However, I agree things are not heading in Microsoft’s direction right now, and the reasons, I think, are more due to the beating they have taken in the courts than in the competitive markets. However, any open source advocates who think Microsoft is down and out are in danger of greatly underestimating them. 50,000 programmers and tens of billions of dollars in cash can change things remarkably quickly.

    I might add this: even if your old bugbear were dead, you have a new vampire to worry about in Cupertino. Undoubtedly the personal computer of the future is going to either be, or be deeply tied to, the phone, and if you think Gates was a control-freak, hard-nosed, cut-my-grandmother’s-throat businessman, let me introduce you to a man named Steven.

  192. Morgan Greywolf says:
    > people who are not cognizant of the difference between a protocol
    > and a file format keep steering it towards file formats. …
    > binary databases almost always win for performance reasons.

    The thing about file formats is that they tend to end up on the wire. Would you recommend text based network protocols to communicate with your binary database? (Input and output of course.)

  193. I happened to notice this thread today, and have to chuckle at the side argument on EMACS vs. vi. Once you’ve saved a file in either editor (or for that matter nano, ex, ed, sed, awk, perl, etc.) it is irrelevant what application wrote it. All that matters is that the file is properly formed for whatever application will parse it.

    The OpenOffice developers wrestled with the issue of text bloat vs. binary formats, and adopted the idea that the native format for their applications would be XML plus embedded objects in well-defined binary formats like JPEG, stored as members of a PKZIP archive. (Yes, gzip would be more efficient at the compression, but they wanted something to create archives of multiple member files, which PKZIP already had, along with compatible tools for all target platforms.) The beauty of this design is that one need not have any particular software to work with the format. As an example, my employer has gone through several name changes (including being sold to a different parent company) in my tenure. If I have some OpenOffice documents that mention the company name, it is trivial to write a shell script that iterates through them, using unzip to extract all the members of the archive to a work directory, echo the commands for a global search/replace and a save to ex, and zip everything back together to replace the OO file, all without ever having to use any program that “knows” anything about a binary structure.
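
    In Python the same trick takes only a few lines with the zipfile module (the file and company names below are made up; reusing each original ZipInfo also preserves the stored, uncompressed “mimetype” member that a conformant ODF archive keeps first):

    import zipfile

    def rename_company(path, old, new):
        # Read every member, substitute in the XML members, write a fresh archive.
        with zipfile.ZipFile(path) as src:
            members = [(info, src.read(info.filename)) for info in src.infolist()]
        with zipfile.ZipFile(path, "w") as dst:
            for info, data in members:
                if info.filename.endswith(".xml"):
                    data = data.replace(old.encode("utf-8"), new.encode("utf-8"))
                dst.writestr(info, data)   # reusing info keeps each member's compression

    # rename_company("report.odt", "Old Widgets Inc.", "New Widgets LLC")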

    The win from using text-based formats, whether they be termcap/passwd-style, XML, JSON, or whatever, is this sort of flexibility. There are off-the-shelf tools that parse each of them, and when you have a special need (like ESR’s reluctance to use a JSON toolchain that makes malloc calls in a daemon) you can roll your own quickly enough (while allowing clients that need to communicate with gpsd to use malloc if the programmer wishes, under the assumption that a memory leak in the client isn’t as dangerous as in a daemon). You don’t need a specialized tool to manage a format that nothing else understands.

    If you can write an application-aware compressor that beats gzip by at least 10%, then maybe it’s worth implementing it. I suspect there are few real-world cases where that would be true.

  194. Morgan Greywolf writes:
    >And they’ve lost considerable standing and market share in recent
    > years to competitors who are putting out a better product at
    > a similar price point.

    Oh, and BTW, I don’t have access to the market share numbers that you apparently do, but according to Google, shares of McDonalds are up 650% from 1990 to the beginning of 2008, compared to the S&P which is up 320% over the same period. (The story is roughly the same since then, but I omitted that data due to the craziness of the market over the past 18 months.) If McDonalds had had my money over that time, I think I’d be pretty happy, notwithstanding the dreadful performance you seem to attribute to them above.

  195. The Monster writes:
    > You don’t need a specialized tool to manage a format that nothing else understands.

    You are incorrect. Emacs, vi, ex, sed ARE all specialized tools designed to work with a particular special format — text. The fact they come pre-installed with Linux is a historic consequence of the early choices to standardize on that particular specialized format; there is nothing intrinsic about their “generality”, except that the cultural context of Unix has been to minimize your generality to within the bailiwick of these tools.

    Had a binary format been chosen, with some sort of meta format to describe them, as I have discussed here before, the general tools would work with that meta format just as effectively and smoothly.

    Text format is a local minimum in the evolution of Unix, there is nothing intrinsically special about it. (Ask a Chinese person if you doubt that fact.)

    Yes, I’m overstating my case somewhat; however, I’m doing so to point out Microsoft’s mindset, because this is what the culture within Microsoft really believes.

    Cue propaganda-style footage with Microsoft, along with the noble Windows-powered U.S. of A., being overrun by the communist Penguin Hordes. And a flashing caption saying “THIS IS WHAT THE CULTURE WITHIN MICROSOFT ACTUALLY BELIEVES”. :)

    The discussion is actually about textual vs. binary protocols, but people who are not cognizant of the difference between a protocol and a file format keep steering it towards file formats.

    Yes, but many of the same engineering concerns apply. A well-designed binary format, whether you are talking about a disk-based format or a wire protocol, occupies a sweet spot wherein it is both compact and easy for the machine to read and parse. This is an optimal area that text protocols can’t possibly touch.
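
    For concreteness, a back-of-the-envelope size comparison in Python (the field layout is invented for the example):

    import json, struct

    # A made-up fix record: latitude, longitude, speed as doubles, mode as a byte.
    fix = {"lat": 40.035093, "lon": -75.519748, "speed": 1.3, "mode": 3}

    as_json = json.dumps(fix).encode("utf-8")                    # ~62 bytes
    as_packed = struct.pack(">dddB", fix["lat"], fix["lon"],
                            fix["speed"], fix["mode"])           # exactly 25 bytes

    Gzip narrows the gap, of course, but it costs CPU on both ends and the result is no longer readable either.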

  197. @Jessica:
    You are incorrect. Emacs, vi, ex, sed ARE all specialized tools designed to work with a particular special format — text.

    Is that the best you have? Quibbling over whether text is a “special” or a “general” format?

    Yes, you need a computer to translate 01000001 into the letter “A”. Yes, it would be a different pattern of bits in EBCDIC from the one we use in ASCII, but a text editor written to work with ASCII can easily use a translator to convert to/from that abomination. If you insist that text is a special format, then all I can say to you is 05 FF C4 00 2F 11 00 02.

    Even Chinese isn’t a big deal; UTF-8 encodings would allow an AIS report showing the ship name in any arbitrary language to be handled flawlessly by gpsd’s clients, because they don’t have to understand any particular value for an object, only the names of the objects. An application that sees an AIS report on a ship named “深圳” instead of “Shenzhen” might show whatever code page is defined for 0x80-FF to represent the individual bytes making up the UTF-8 for each character, but it would pick up from the MMSI that it’s a PRC vessel, which would explain the inability to read the name.
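
    A quick illustration in Python (the field names here are invented, not gpsd’s actual AIS schema):

    import json

    # A made-up AIS-style report; the client treats "shipname" as an opaque string.
    report = {"type": "AIS", "mmsi": 412345678, "shipname": "深圳"}

    wire = json.dumps(report, ensure_ascii=False).encode("utf-8")
    back = json.loads(wire.decode("utf-8"))
    assert back["shipname"] == "深圳"   # round-trips intact, no code-page games needed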

    Of course, the air traffic people long ago rendered your “Chinese” argument irrelevant in their context, by requiring that all ATC conversations take place in English. There’s a lesson to be learned there, I think. HTML uses words that look like English for the things that matter to the protocol (but not for the text itself) just like GPSD uses abbreviations that make sense to English speakers for the names that matter to it (but not for those values like a ship name that are allowed to be any string) because most computer programmers writing code that talks to Unix daemons know at least a limited English vocabulary.

  198. BTW, Google uses binary protocols/formats internally.
    Benchmarks show that “the performance of Protocol Buffers depends heavily on the scenario including the platform, programming language and complexity of the data structure; performing between 15 times slower and 5 times faster than JSON”.

  199. >Benchmarks show that “the performance of Protocol Buffers depends heavily on the scenario including the platform, programming language and complexity of the data structure; performing between 15 times slower and 5 times faster than JSON”.

    Yes. Their use case may even really justify a binary protocol, though I remain a bit skeptical about that. Depends on the volume; it certainly wouldn’t in gpsd, as the volume of data passed over my client-server pipes is so low that even a 15-fold performance increase might not reduce latency measurably. The tradeoffs change if the data volume approaches saturating the available bandwidth.

  200. BTW, Google uses binary protocols/formats internally.

    Yes. And if I were writing code to run within an architecture like Google’s, I would be too. In fact, I’m currently designing and prototyping an object-oriented distributed computing architecture that may (or may not) use Protocol Buffers; I’m not fully settled on it just yet. The whole thing will use smart objects that pass messages back and forth. I may do both JSON and Protocol Buffers and see which works better in a variety of test cases, which will include debugging scenarios.

  201. The tradeoffs change if the data volume approaches saturating the available bandwidth.

    Well-said, ESR. I think this is the key in Google’s case. Their volume is ginormous compared to a daemon designed to communicate with one or more GPS devices. It’s not like someone’s going to have millions of GPS trackers hooked up to a PC simultaneously or using GPSD to serve data for a single GPS unit to millions of people …. (hmmm…geolocation-aware microblogging?)

  202. Before anyone notices my last parenthetical comment: there are obviously better tools for geolocation-aware microblogging…like say, a Web server. :)

  203. Upthread, someone was making the case that binary protocols would help decrease user interface latency. Note that customer facing expert users (I’m thinking of people who answer phones for a living) really benefit from fast response times. So do gamers. If Google is trying to battle Microsoft for market share using cloud computing, fast response times will be key to some markets. Of course this does not apply well to GPSD. Or for perusing hyper-linked academic papers over the internet. But people do seem to keep wanting to use HTTP and HTML to serve customer facing expert users and gamers, not just academic papers and blogging.

    I haven’t seen any convincing arguments that HTTP and HTML provide short user interface latency. But my own experience with using the web is that they don’t. I am aware that this may all be due to poorly designed web sites, but when I’m playing some kind of first person shooter, I want the designer to wring out every speed advantage possible.

    Yours,
    Tom

  204. The Monster writes:
    > but a text editor written to work with ASCII can easily use
    > a translator to convert to/from that abomination.

    Thanks for making my point so succinctly. Great example regarding EBCDIC.

    > Of course, the air traffic people long ago rendered your “Chinese” argument
    > irrelevant in their context, by requiring that all ATC conversations take place
    > in English.

    Couldn’t agree more. It is way past time those damned foreigners learned to speak English.

  205. @Morgan
    > I’m currently designing and prototyping an object-oriented distributed computing architecture

    Can you tell more? Sounds very interesting to me.

  206. >Thanks for making my point so succinctly. Great example regarding EBCDIC.

    No one uses EBCDIC. As far as I know, it’s used on only a handful of OSs, all of which together have minuscule market share.

    Question for you: Your hypothetical open binary metaprotocol–what would it look like? My thought is that any metaprotocol that is set up so that you don’t just write protocols on top of it, like you do with text (in which case why not use text?), would be either so general that you can’t get any performance gains with it over text, or optimized for one specific application. Text compressed with gzip has the advantage that it’s completely general, and while perhaps some specially designed binary protocol could beat it in some specific situation, I don’t think a binary metaprotocol could, at all.

  207. Here’s a high level stab at a hypothetical open binary metaprotocol. Replace text XML tags with a two byte big-endian unsigned integer. Add a four byte big-endian length. Reserve one tag for namespace. The minimal tag length for a single character XML tag is seven bytes if there is any data. You have already beat it by a byte. Oh, and my open binary metaprotocol can be configured to use gzip too. BTW, gzipped data is remarkably incomprehensible to the Mark I eyeball. An open binary metaprotocol which can fetch the dictionaries which go with its namespaces would be very easy to translate into text via a standard and quite simple program.
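
    A minimal Python sketch of that tag-length-value framing (the tag numbers are made up for the example):

    import struct

    NAMESPACE, LAT, LON = 0x0000, 0x0101, 0x0102   # hypothetical tag registry

    def emit(tag, payload):
        # 2-byte big-endian tag, 4-byte big-endian length, then the payload
        return struct.pack(">HI", tag, len(payload)) + payload

    def parse(buf):
        offset = 0
        while offset < len(buf):
            tag, length = struct.unpack_from(">HI", buf, offset)
            offset += 6
            yield tag, buf[offset:offset + length]
            offset += length

    message = (emit(NAMESPACE, b"urn:example:fix") +
               emit(LAT, struct.pack(">d", 40.035093)) +
               emit(LON, struct.pack(">d", -75.519748)))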

    However, such a protocol would not be good for a really fast UI. A protocol for that purpose would need to be Huffman encoded for its application. The application in that case is actually rapid encoding and decoding of keyboard, mouse and other user interface device events. It could still be relatively general purpose, though, in that multiple programs could use the same event encodings.

    Yours,
    Tom

  208. > open binary metaprotocol–what would it look like?

    I guess like this or this.

    Both are completely general and accessible from any modern programming language. There’s nothing hypothetical about them; both are widely used in production environments. The first, for example, powers the internals of sourceforge.net and the second powers all Google’s RPCs.

  209. Tom Dickson-Hunt says:
    >Question for you: Your hypothetical open binary metaprotocol–what would it look like?

    Interesting question, Tom. I am sure I don’t really know; I am sure something like that would be organic rather than designed by committee. I am also sure there would be a few painful transitions as it grew organically. However, these have been endured many times quite successfully before. (Consider for example the transition of the structures of filesystems, or the structures of binary files.)

    As I said earlier, it would be very unfair to design it today, because, in the alternative history I suggest, the starting point would be very different than today. To execute a design today with full knowledge of the sorts of applications we use, the capabilities of networks and computers and disks, and the wisdom of 20/20 hindsight seems both an unfair and pointless exercise. More importantly, the design of a format of this type would not only affect files, but would affect everything else too. Were such a binary file format specified, we would undoubtedly have great tools to work with it, but would they resemble sed, awk, or bash? Probably not. The tools would be designed around the needs of the file format, just as they have been designed around the needs of text files. Regular expressions, for example, are basically a requirement because of the need to cram format structure into an unstructured medium. We might still have reg-ex, but it would undoubtedly be quite different.

    However, FWIW, I imagine it would be some sort of typed grammar that described the data as a sequence of typed symbols, and some sort of meta data for those symbols.

  210. @Ivan:

    Can you tell more? Sounds very interesting to me.

    (Been a little bit busy or I would have replied sooner)

    Sooner or later I’ll post details on my shiny new, empty blog, but the basic idea is that it’s built around “live” objects that will pass messages back and forth. An object can provide certain services and reside anywhere on the network, and can receive messages passed to it from another object (say, a job queue object, which can also live anywhere) that tell it what to do and where to send the results of that job (another object that can receive the results). Objects can be connected together, kinda/sorta similar in concept to Unix pipes, one emitting an event message and another one waiting for that event message, etc. The whole thing is similar to GTK in that it’ll use an event-driven model with callbacks, etc. I’m even considering the objects being based around GObject. The whole thing should ideally be platform and language agnostic (eventually) and will support a wide range of object types, even custom objects that can be built from scratch.

    It’s more of a distributed computing framework and infrastructure than anything, but it aims to be able to accomplish whatever the implementer needs with whatever hardware, platforms, and languages the implementer needs.

    FWIW, I’m prototyping in Python and plan to implement other languages later.
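
    To make that a bit more concrete, here is the flavor of it in toy form; every name here is invented and it is nothing like the real prototype:

    class LiveObject:
        # Toy "live" object: registers callbacks for named events and emits to peers.
        def __init__(self, name):
            self.name = name
            self.handlers = {}
            self.peers = []

        def on(self, event, callback):
            self.handlers.setdefault(event, []).append(callback)

        def connect(self, other):
            self.peers.append(other)

        def emit(self, event, payload):
            for peer in self.peers:
                for callback in peer.handlers.get(event, []):
                    callback(self.name, payload)

    # Pipe-like hookup: a job source feeding a worker feeding a sink.
    source, worker, sink = LiveObject("source"), LiveObject("worker"), LiveObject("sink")
    source.connect(worker)
    worker.connect(sink)
    worker.on("job", lambda sender, n: worker.emit("result", n * n))
    sink.on("result", lambda sender, n: print("result from", sender, "=", n))
    source.emit("job", 7)   # prints: result from worker = 49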

    I’ve got a long way to go, but participation, including just ideas and criticism, is always appreciated. :)

  211. OT: @Morgan – like your blog about espresso grinders/machines. I’m a low-end guy myself (even if I had the money I couldn’t bring myself to spend that kinda dough), and appreciate your mod for the Capresso. I’m still producing wonderful espresso from a pot, but your technique has to be spot-on ;)

  212. The Amiga OS also has a feature that no other commercial OS that I’m aware of has today: the concept of “data types”, which enables a program to register a parser and serializer for any binary file format it produces for use OS-wide, by any application.

    Actually I take this back: Windows gets most of the way there with the Component Object Model, the development of which will be regarded in computing history as a milestone for enabling ubiquitous automation on the desktop. That the open source community still hasn’t gotten this right is yet another way Linux is broken compared to Windows.

  213. >Component Object Model, the development of which will be regarded in computing history as a milestone for enabling ubiquitous automation on the desktop.

    Probably not. I know two people who have worked with COM, and they both say it’s hideous. Overcomplicated, poorly documented, buggy in itself, and a fertile cause of application bugs — in short, pretty much what history would have led one to expect from Microsoft.

  214. Probably not. I know two people who have worked with COM, and they both say it’s hideous. Overcomplicated, poorly documented, buggy in itself, and a fertile cause of application bugs — in short, pretty much what history would have led one to expect from Microsoft.

    True. It really is a really sloppily done knockoff of what NEXTSTEP and OS/2 (whose object desktop, to complete the circle, was designed by former Amiga engineers!) promised us. But what alternative mechanism does Linux offer in its stead? D-Bus? (snicker!) KParts? Bonobo? The object models local to Mozilla or OpenOffice? There are several competing ones, which cruft up /usr/lib, don’t really interoperate, and don’t make in-process, out-of-process, and remote-machine calls accessible via an identical interface. COM is a single standard used by virtually all Windows developers. It provides developers a single interface for e.g., automating a workflow based on app Foo and integrating it with app Bar over in sales. The only reliable interface Linux has is the usual Unix pipes and IPC mechanisms, which would be a wonderful way to work except no one actually works that way.

    Many of the issues with COM could be papered over with a development environment that masked them (that’s why Visual Basic was so popular), and were later corrected with .NET, which has yielded among other boons the PowerShell, which is quite simply a better Unix shell than any Unix shell.
