Finally, one-line endianness detection in the C preprocessor

In 30 years of C programming, I thought I’d seen everything. Well, every bizarre trick you could pull with the C preprocessor, anyway. I was wrong. Contemplate this:

#include <stdint.h>

#define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)

That is magnificently awful. Or awfully magnificent, I'm not sure which. And it pulls off a combination of qualities I've never seen before:

  • Actually portable (well, assuming you have C99 stdint.h, which is a pretty safe assumption in 2013).
  • Doesn't require runtime code.
  • Doesn't allocate storage, not even constant storage.
  • One line, no auxiliary definitions required.
  • Readily comprehensible by inspection.

Every previous endianness detector I've seen failed one or more of these tests and annoyed me in so doing.
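
If you want to watch it work, a minimal test program looks like this:

#include <stdint.h>
#include <stdio.h>

#define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)

int main(void)
{
    /* "\0\xff" is the byte pair 0x00 0xff in memory; read as a uint16_t
       it is 0x00ff (< 0x100) on a big-endian host and 0xff00 on a
       little-endian one */
    printf("%s-endian\n", IS_BIG_ENDIAN ? "big" : "little");
    return 0;
}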

In GPSD it's replacing this mess:

/*
   __BIG_ENDIAN__ and __LITTLE_ENDIAN__ are defined in some gcc versions
  only, probably depending on the architecture. Try to use endian.h if
  the gcc way fails - endian.h also does not seem to be available on all
  platforms.
*/
#ifdef __BIG_ENDIAN__
#define WORDS_BIGENDIAN 1
#else /* __BIG_ENDIAN__ */
#ifdef __LITTLE_ENDIAN__
#undef WORDS_BIGENDIAN
#else
#ifdef BSD
#include <sys/endian.h>
#else
#include <endian.h>
#endif
#if __BYTE_ORDER == __BIG_ENDIAN
#define WORDS_BIGENDIAN 1
#elif __BYTE_ORDER == __LITTLE_ENDIAN
#undef WORDS_BIGENDIAN
#else
#error "unable to determine endianness!"
#endif /* __BYTE_ORDER */
#endif /* __LITTLE_ENDIAN__ */
#endif /* __BIG_ENDIAN__ */

And that, my friends, is progress.

UPDATE: I was wrong: I thought the preprocessor would do all these operations, but it turns out this macro does expand to a small amount of code. It’s still pretty neat, though.

192 comments

  1. Now, I don’t know if I’d call that “readily comprehensible”…. I’ve been banging out embedded C code, well, since before C99, and I’m stunned to learn that the preprocessor will honor that particular incantation without a runtime check. But it is pretty wicked, in both the ‘evil’ and ’80s senses of the term.

    One of my regular job interview questions involves endianness. I’m frustrated by how often candidates misunderstand it. Occasionally, one of them will try to figure out a detector on the fly (I guess they never heard of ntoh?) and universally get it wrong.

    For what it’s worth, the best answers on my interview question are generally endian-independent. Because if you’re worrying about endianness, you probably *ought* to be worrying about alignment and integer width, too, and it’s just easier to handle all those things a byte at a time. (Disclaimer: I’ve never looked at the GPSD code, but I can imagine it has a good reason for endian-dependence.)

    1. >(Disclaimer: I’ve never looked at the GPSD code, but I can imagine it has a good reason for endian-dependence.)

      Not really, but at least there’s very little of it. All the code except the RTCM2 driver is endianness-clean – but the RTCM2 code predates when I wrote my set of systematic endian-independent bit-field macros and it takes a … different … approach.

      No, I’m not going to tell you. Go take a look. Prepare to be amazed and horrified.

  2. I’m guessing that the reason it took so long is that most programmers (including me) were trained that #define is for constants, not code, and thus it is hard to think of using it to produce executable code. I haven’t done programming in C++ in twelve years and I can figure this out. It certainly isn’t // You are not expected to understand this stuff.

  3. > Prepare to be amazed and horrified.

    Wait… Did they really…? No, they wouldn’t… OH, MERCIFUL GOD!

    I feel bad critiquing open-source code that someone clearly spent a lot of time on, but I shudder to think that a novice might come across that code and emulate the approach. Adding insult to injury: struct bit fields? I thought ANSI C left the ordering of bit fields unspecified, even for a given target endianness… I could be wrong. But I did get a kick out of the compiler compatibility comments.
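
    Checking the standard: C99 6.7.2.1 does leave the order of allocation of bit-fields within a unit implementation-defined, so a sketch like the following can print different answers under different compilers even on two hosts with identical byte order:

    #include <stdio.h>
    #include <string.h>

    union first_octet {                /* illustration only */
        struct {
            unsigned version : 4;      /* whether this lands in the high or */
            unsigned ihl     : 4;      /* low bits of the unit is up to the compiler */
        } f;
        unsigned char raw[sizeof(unsigned)];
    };

    int main(void)
    {
        union first_octet u;
        memset(&u, 0, sizeof u);
        u.raw[0] = 0x45;               /* first byte of a typical IPv4 header */
        printf("version=%u ihl=%u\n", u.f.version, u.f.ihl);
        return 0;
    }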

    I feel obliged to confess that, early in my career, I built an entire network protocol on top of structs with bitfields. That’s when a mentor (ahem) showed me the error of my ways, something I’ve appreciated ever since. Now, the first thing I do on a new project is lay down a set of endian-clean bit- and byte-field inline routines, then build a protocol parser/packer library.

  4. That is a jewel of a hack.

    But yeah, the Right Thing is to unpack external formats into, or pack them out of, structs one byte at a time. That doesn’t stop C hackers from snarfing stuff into a buffer and casting it as a struct, byteswapping as necessary; the local style of even some code I otherwise deeply respect (I’m looking at you, NetBSD) is to do this. Blech.

  5. Terry,

    If you are working in C++, you are supposed to avoid use of the C preprocessor to the maximum extent possible. Constants should be variables declared const; and code should be inline functions and templates.

  6. @Casey Barker: Now, I don’t know if I’d call that “readily comprehensible”…. I’ve been banging out embedded C code, well, since before C99, and I’m stunned to learn that the preprocessor will honor that particular incantation without a runtime check.

    Well, you are rightfully stunned, because the preprocessor cannot handle this:

    $ cat test.c
    #include <stdint.h>

    #define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)

    int foo() {
    return IS_BIG_ENDIAN;
    }
    $ gcc -O3 -S test.c -o-
            .file  "test.c"
            .section        .rodata
    .LC0:
            .string ""
            .string "\377"
            .text
            .p2align 4,,15
            .globl  foo
            .type  foo, @function
    foo:
    .LFB0:
            .cfi_startproc
            xorl    %eax, %eax
            cmpw    $255, .LC0(%rip)
            setbe  %al
            ret
            .cfi_endproc
    .LFE0:
            .size  foo, .-foo
            .ident  "GCC: (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3"
            .section        .note.GNU-stack,"",@progbits

    That's the second time in the last week that Eric published a "sensation" which was half-cooked. What happened? Previously he was somewhat more distrustful.

    Yes, this hack is interesting, but as we can see it's not even eliminated with -O3, so you can easily see that it DOES require runtime code and it DOES allocate storage.

  7. > Doesn’t allocate storage, not even constant storage.

    To be pedantic, it does allocate a constant string.

  8. > Would the use of Unicode + a language like Arabic or Chinese screw it up?

    No, plain strings are always 8-bit. Unicode strings in C++11 are written u"like this", and are a different type.

  9. Another advantage is that it’s a compile-time error on systems where stdint.h does not define uint16_t (which will tend to mostly overlap with systems where endianness isn’t meaningful at all, such as where CHAR_BIT != 8).

  10. A disadvantage (sorry for the double post, I clicked submit too early) is that it will silently misdetect the PDP-11 as little-endian. Though I don’t know that even 2.11BSD can run gpsd anyway, so it’s probably not an issue, and if you’re only working with 16-bit fields it’s definitely not an issue, except for the abstract case where a system doesn’t neatly fit into the “big-endian” or “little-endian” categories.

  12. You should rather use this, which usually does not generate any code at all with GCC even with -O1:

    #include <stdint.h>

    static inline int is_big_endian(void) {
      const uint16_t endianness = 256;
      return *(const uint8_t *)&endianness;
    }
    

    Since it is declared as static inline, you can safely put that in a header file.

    Here is the generated code on ARM Thumb in little-endian mode (-O1 -fomit-frame-pointer, when not inlined):

    is_big_endian:
      movs r0, #0
      bx lr
    

    and in big-endian mode:

    is_big_endian:
      movs r0, #1
      bx lr
    

    You’ll easily see in the assembler output of the following code:

    #include <stdio.h>
    
    int main() {
      printf(is_big_endian() ? "big endian\n" : "little endian\n");
      return 0;
    }
    

    that no test takes place and that only the appropriate string gets incorporated in your code.

  13. Casting a char* to uint16_t* and then dereferencing it is portable now? On ARM I wouldn’t want to guarantee that.

  14. None of these can be used to make compile-time decisions in the preprocessor, which the very messy autoconf-style endian headers and macros can do. You can’t write “#if IS_BIG_ENDIAN” for example:

    $ cat foo.c 
    #include <stdint.h>
    #include <stdio.h>

    #define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)
    
    int main(int argc, char **argv)
    {
    #if IS_BIG_ENDIAN
            printf("Hello world!\n");
    #else
            printf("!dlrow olleH\n");
    #endif
            return 0;
    }
    $ gcc -std=c99 foo.c
    foo.c: In function ‘main’:
    foo.c:8:5: error: operator '*' has no left operand
    

    So at best you get a C++ acolyte’s usage of the C toolchain, where we pretend the C preprocessor doesn’t exist (and pretend #include does something other than merely paste source files together), and we expect the compiler to optimize away all our constant integer expressions (with recent compilers, our constant pointer and string expressions too) until just before it’s time to emit assembly code.

    If you already presume a recent C compiler, you might as well use C++ and do it with templates or something. (Ducking.)

  15. Just curious…why do we need this? You compile a program for a specific machine, whose endianness is known. Why not a simple defined constant? What burning itch does this thing soothe?

  16. Only recently, I came across this microcontroller that uses middle-endian for 32bit values, i.e. it stores 0x3210 as “2301” or something like that. It was some 32bit MCU, maybe Microchip’s MIPS implementation, or the AVR32? I can’t remember at the moment.

    In any case, it made me realize, it’s shortsighted trying to be clever about this when trying to write portable code. There are only ugly and slow solutions. I prefer the slow ones in this case, since they’re easier to understand, and maybe the compiler writers figured out a way to optimize it for the specific architecture.

  17. @Carston

    Casting a char* to uint16_t* and then dereferencing it is portable now? On ARM I wouldn’t want to guarantee that.

    Indeed, it is not guaranteed by the standard:

    A pointer to void shall have the same representation and alignment requirements as a pointer to a character type. Similarly, pointers to qualified or unqualified versions of compatible types shall have the same representation and alignment requirements. All pointers to structure types shall have the same representation and alignment requirements as each other. All pointers to union types shall have the same representation and alignment requirements as each other. Pointers to other types need not have the same representation or alignment requirements.

    Still dodgy but safer is the guy who suggested casting a uint16_t * to a uint8_t * (since the alignment requirements are probably stiffer as types become wider), though there’s nothing in the standard that guarantees it will work. The only truly portable solution is to do as you’ve done elsewhere and unpack everything using arithmetic.

  18. @LS:

    Just curious…why do we need this? You compile a program for a specific machine, whose endianness is known. Why not a simple defined constant?

    Yes, you compile for a specific machine architecture. But your original program (in the more abstract sense of the functions you want the computer to perform) may need to be portable across several architectures. As examples, Linux, GCC, or (for this specific case) GPSD all need to perform identically when run on different architectures; to make this happen from the same source code tricks like this are quite helpful.

  19. I don’t think that’s actually portable, because there’s no requirement that the string literal be suitably aligned for a uint16_t access. (And strictly speaking, the existence of uint16_t isn’t required, so it won’t work on a machine which has no 16-bit types at all, and some such do exist.)

  20. I finally found the quote I was looking for a few posts back. It is, of course, cosmically off topic here, but this post seems to be winding down and it is a wonderful quote.

    From “The Restaurant at the End of the Universe” – page 163 in the paperback edition – Ford Prefect is speaking to Arthur Dent…

    ‘Garden of Eden. Tree. Apple. That bit, remember?’

    ‘Yes of course I do.’

    ‘Your God person puts an apple tree in the middle of a garden and says, do what you like guys, oh, but don’t eat the apple. Surprise surprise, they eat it and he leaps out from behind a bush shouting “Gotcha”. It wouldn’t have made any difference if they hadn’t eaten it.’

    ‘Why not?’

    ‘Because if you’re dealing with somebody who has the sort of mentality which likes leaving hats on the pavement with bricks under them, you know perfectly well they won’t give up. They’ll get you in the end.’

  21. @LS
    > Just curious…why do we need this?

    I’m with you LS. If you absolutely have to know this it should come from the compile context and have some sort of enforcement mechanism (Like #error if the constant isn’t defined.)
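
    For instance, with a hypothetical HOST_BIG_ENDIAN that the build system must set, made 0-or-1 rather than defined-or-undefined so that forgetting it is distinguishable from meaning little-endian:

    /* in one central header; the build must pass -DHOST_BIG_ENDIAN=0 or =1 */
    #ifndef HOST_BIG_ENDIAN
    #error "HOST_BIG_ENDIAN must be defined to 0 or 1 by the build system"
    #endif
    #if HOST_BIG_ENDIAN != 0 && HOST_BIG_ENDIAN != 1
    #error "HOST_BIG_ENDIAN must be exactly 0 or 1"
    #endif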

    I’d say a good rule is that if you need to know this you are doing something wrong. Of course, like all rules, it isn’t true at the margins, and low-level system software and some of the “no dynamic allocation” stuff like ESR is doing in GPSD might require it too (though I should add that I think that is an interesting architectural decision — not interesting as in “I think it is wrong” but interesting as in “makes me think a little about the trade offs in architecture.”)

    But programmers should be thinking higher up than that stuff. Computer cycles and memory bytes are generally cheap enough that abstractions should hide this. It means we can trade off more functionality for slightly less performance and that is almost always a good trade when most processors idle 99.99% of the time.

    (And for the smarty pants that want to point out that you need to know this stuff to build the abstractions — yes, that would be the “on the margin” stuff above.)

  22. @Jessica
    “If you absolutely have to know this it should come from the compile context and have some sort of enforcement mechanism”

    In a perfect world, the language would have constants like __ENDIAN_BIG__ that are guaranteed to be supported by every compiler.

    In that perfect world, Little-Leaguers slide into home plate and their sparkling-white uniforms gather nary a speck of dust. Also, there are babbling brooks of chocolate milk, and candy trees, none of which ever cause fat or cavities.

    Here in the real world, the best we can do is put all of the weirdness in one place and let it define the constants we wish were defined for us.

  23. @Jessica

    (And for the smarty pants that want to point out that you need to know this stuff to build the abstractions — yes, that would be the “on the margin” stuff above.)

    The problem is that “this stuff” really exists at the ISA level, not the C level. You get no guarantees from the standard that this sort of code will work. Compiler implementers can rely on the guarantees given by the ISA; they’re not casting pointers around, as no such thing exists in the ISA. It’s not clear that casting pointers is any faster than using arithmetic anyway.

  24. @The Monster
    > In a perfect world, the language would have constants like __ENDIAN_BIG__

    Your cheesy sarcasm notwithstanding, in a perfect world programmers would never think about endian-ness. Perhaps in a slightly less perfect world compilers would provide functions to transform native representations into known endian formats. In the absence of utopia, the best thing to do is to force the right person to make the decision. Embedding it in code means that nobody has to think about it, and the compiler makes a guess. Clever hacks like the one in the OP usually work; sometimes they don’t (as has been pointed out in the comments).

    Force the programmer building the program to decide, and make the program break hard if he doesn’t.

    BTW it is another reason I am not a proponent of Postel’s law. Better to be picky in what you accept so that everything you accept has a well defined semantic rather than guess. Better to break, quick and early, if you don’t know, than have some untested code path execute. Though in some contexts, such as XML, JSON, and TCP packets, some liberality in input is a good thing.

    I was particularly struck by the guy above who talked about how he kicks interviewees out the door if they don’t have a mastery of endian-ness. To me, if I ever interviewed with him, he wouldn’t have to kick me out the door, I’d leave. Any interviewer who asks about such minutiae during a programmer interview doesn’t know what he is doing, and represents a business that apparently has its focus on the wrong things in programming.

    Maybe if he is interviewing people to write code generators for compilers though, I’d give him a listen…

    1. >Any interviewer who asks about [endianness] during a programmer interview doesn’t know what he is doing

      Once again, Fluffy Girl, the cultural difference between you and most of the hackers who hang out here shines through.

      We’re mostly what you would probably think of as systems programmers, and we tend to retain that mindset and culture even when we’re writing stuff further up the stack. In our culture, an interviewer asking an endianness-related question would seem completely reasonable. It’s your disgust with the question that looks alien.

      And that’s no criticism of you – in your context, your reflexes are no doubt valid. Just bear in mind that they can seem odd and detached from reality to people whose daily work is heavily involved with low-level data formats, network protocols, and cross-compilation to embedded systems. That’s our world – and even when it isn’t, we understand programming through expertise acquired there.

  25. This reminds me of some code I saw about 20+ years ago.

    I worked on a team writing an internal application that was required to use code in an internal library written by another team. The library had functions like Begin() and End(), with multiple versions of the start function for different cases, like BeginWithString(char *str) and BeginWithInt(int z). I don’t remember the actual names or arguments, and they aren’t relevant to this story.

    We were given some simple example code to show how to use these. So I sat down to write a multi-case something like this:

    int z=0;
    /* value could be set here */
    if (z == 0) {
        Begin();
    } else {
        BeginWithInt(z);
    }

    only to watch it fail to compile. And the compile errors made no sense.

    So I looked more closely at the header file instead of the example code I had been provided. There I discovered that the begin functions were actually C preprocessor macros, and each begin function contained a “{” that was matched to a “}” in the end function.

    That has to be the worst use of the C preprocessor that I’ve ever seen. It was made even worse by making the macros look like they were C functions.

  26. @esr

    C hackers tend to greatly overestimate how “low-level” their work really is. Which would be fine if they’d stick to assembler, but they end up writing broken C that happens to work on whatever compiler build they’re working with. Half the time they get it wrong anyway. The hack in this post is a great example – depending on how the memory is allocated it will be slower than bit shifting on a platform where the OS has to trap unaligned memory accesses (and outright break elsewhere).

  27. …or, to put it another way:

    If you are tempted to write a macro based on typographical errors in the C Standard….please think twice…and then DON’T! The next revision of The Standard is likely to break it anyway.

  28. It’s not FluffyGirl’s fault that she was trained in the school of Java, where endianness is disguised by the runtime.

    I weep for an entire generation who know not of endian issues, never mind the deeper cosmologies of floating point arithmetic, the enormous amount of work done by a runtime or OS kernel, and memory access (especially latency and cache-line effects).

  29. “Any interviewer who asks about [endianness] during a programmer interview doesn’t know what he is doing…”

    An interviewer who asked me endian-related questions (back when I was doing software engineering) would have quickly found out two things:

    1) I fully understand the concept of endianness, and can explain it clearly and even (at the time) give examples of both big- and little-endian processors.

    2) I have no clue how to write code that deals with the nuances of endianness.

    Which should come as no surprise, since most of my engineering career was writing back-office systems for enterprises back before tools like SAP existed. Fluffy Girl probably lives in the same application-driven world that I did. Worrying about the compatibility of CORBA implementations on specific compiler/processor architectures was about as low-level as I ever needed to get.

  30. I hope no one minds – particularly our host – I would like to insert a little refreshment break here…

    I just got to what I consider the best line in the H2G2 trilogy, in the fourth book – So long and thanks for all the fish – on page 41 in the Pan paperback edition…

    Arthur watched it go, as stunned as a man might be who, having believed himself to be totally blind for five years, suddenly discovers that he had merely been wearing too large a hat.

  31. > 1) I fully understand the concept of endianness
    >
    > 2) I have no clue how to write code that deals with the nuances of endianness.

    Either:

    a) you don’t program well, or
    b) you don’t really understand endianness. Perhaps you are “book smart”, and can regurgitate answers for a test, without real depth of knowledge.

  32. @esr
    >We’re mostly what you would probably think of as systems programmers, and we tend to retain that mindset and culture even when we’re writing stuff further up the stack. In our culture,

    So let me say this — in my opinion perhaps the most important contribution of Unix to the world of programming is that it was the first important operating system written almost entirely in a high level language. Which is to say, in my view the most important innovation of Unix is that “system programmers” had to think a lot less about these sorts of things.

    System programmers should KNOW about these things; they should, however, eschew situations in which thinking about them is necessary. I have written a lot of machine code. I know how the high-level code I write translates into machine code. But I avoid thinking about it, focusing on higher-level abstractions, except in the 1% of situations where it is necessary.

    Why? Because programming large systems is mostly about managing billions of interacting details. And details are best managed by layered abstraction. Abstractions leak, and you need to plug the leaks occasionally. But it is better to choose better abstractions than to be constantly sticking your finger in the dike (with all due respect to Winter.)

    1. >System programmers should KNOW about these things; they should, however, eschew situations in which thinking about them is necessary.

      A fond and pious hope which, alas, remains futile.

      I, for example, unavoidably have to know and worry about endianness issues because of the binary wire protocols that a lot of GPSes use. I can’t “eschew” them; I have to deal with the data coming in. That means knowing that sometimes numbers are big-endian, sometimes they’re little-endian, and I have to do the right thing to convert them to ints or whatever.

      You might say “Oh, that’s just GPSes”, but in fact similar issues are pretty much unavoidable any time you’re dealing with a hardware interface. Or a binary data serialization such as a graphics file – did you know that the GIF standard uses binary numeric fields and doesn’t specify endianness? Or a network wire protocol that isn’t purely textual.

      I do my best to hide these issues from my users. GPSD clients get nice clean JSON in which endianness is not an issue because the numeric representations are textual. But somebody has to deal with GPSes at that level, and that somebody is me. Most of the hackers on this blog code in places where ground truth is similar.

      To us, your protest that we should “eschew” such situations smacks of wishful thinking nearly on a level with the Obama administration’s “make it so!” attitude about building websites. It can’t all be glossy surfaces. Somebody has to do the down-and-dirty work. That somebody is us.

  33. @LeRoy
    > I weep for an entire generation who know not of endian issues, never mind the deeper cosmologies of floating point arithmetic,

    Why? I have many friends who are programmers, and I have been working in programming for a good while. I read StackOverflow regularly. Out of that pretty broad range of experience I have NEVER heard someone complain about an intractable bug that originated in their lack of knowledge of endian-ness or floating point representations. I have heard of a lot of intractable and difficult bugs, but never a single one that pertains to that which apparently causes you lacrimation. Which isn’t to say they don’t exist, just that, based on this sampling, they are extremely rare in the broad programming population.

    Dry your eyes my friend. The world is not coming to an end. This “generation” is standing on the shoulders of the giants who have solved these problems and are thinking about unsolved problems.

  34. esr: Yes, but almost all programmers at almost all times can eschew such things now, because someone else (like you!) has written a library which (as Fluffy Girl pointed out most usefully) abstracts it all away.

    I’ve had to talk directly to hardware or mangle Legacy Binary Formats, and I despise it…

    (Godspeed you for taking the hit with GPSD.)

  35. >[…] it will be slower than bit shifting on a platform where the OS has to trap unaligned memory accesses (and out right break elsewhere).

    > … or, to put it another way: If you are tempted to write a macro based on typographical errors in the C Standard….please think twice…and then DON’T! The next revision of The Standard is likely to break it anyway.

    reminds me of :

    One of the long-term lessons of the Unix experience is that macros tend to create more problems than they solve. Modern language and minilanguage designs have moved away from them.

    http://www.catb.org/esr/writings/taoup/html/ch08s03.html#macroexpansion

  36. @esr
    > It can’t all be glossy surfaces. Somebody has to do the down-and-dirty work. That somebody is us.

    Indeed, and the first question I ask in such situations is: how can I deal with this annoying detail one time, and hide it under a glossy surface. In your case, for example, I would most likely be thinking about some sort of language to enable the description of the incoming formats, so that I dealt with that particular splinter in my palm one single time in the parsing engine.

    When you are building a new house for your pigs, it is usually a good idea to put down some boards so that you don’t keep sinking into the mud.

    DRY applies particularly strongly when you are slopping about in pig poop.

  37. So, at the risk of sounding naive – why exactly is it that the POSIX functions {ntoh,hton}[ls], possibly combined with an extra byte swap function (or an analogous set of functions, call it “htols” etc) in case the wire format is little-endian, aren’t good enough for GPSD’s needs? Is it just a matter of performance? Is the performance difference (vs always doing a loop, calling a no-op htons function inside it on big-endian systems) that much?

    Because otherwise I can’t see the argument for having a macro instead of using these functions.

  38. @Fluffy Girl

    Indeed, and the first question I ask in such situations is: how can I deal with this annoying detail one time, and hide it under a glossy surface. In your case, for example, I would most likely be thinking about some sort of language to enable the description of the incoming formats, so that I dealt with that particular splinter in my palm one single time in the parsing engine.

    I think two things are getting conflated in the discussion. One is doing bit-wise operations, the other is relying on particular representations of data types in memory. Relying on the endianness of the host is an example of the latter. What Eric is doing in the OP is the latter, but the conversation seems to have stealthily shifted to the former. Unpacking data formats doesn’t require knowledge of the host byte ordering (or type punning), but it does require knowledge of the byte ordering of the format.

  39. As I’ve maintained before — both here and in previous employment — the Right Way to handle binary integer formats is, usually, to consume them one byte at a time, shifting the bytes into the appropriate place in the CPU word and then ORing them together. By applying this policy, most endian issues simply vanish: regardless of the host CPU architecture, if the format is big-endian it will be treated as such by the program, and if it is little-endian it will be treated as such. Pretend that variables in memory are things whose size you know but whose layout in RAM you don’t. “But Jeff,” you say, “isn’t the advantage of C and C++ that you know and can control the layout of things in memory?” No, not entirely; the C standard makes some guarantees but not others and leaves enough implementation-dependent or undefined that you are really relying on your assumptions about the compiler. If your code’s assumptions about memory layout are in your code and not your compiler’s code, life will be much easier for you.
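
    Concretely, here is a sketch of that policy for one big-endian 32-bit field; the same shape works for any width and either byte order:

    #include <stdint.h>

    /* consume four bytes MSB-first; correct on any host, aligned or not */
    static uint32_t get_be32(const unsigned char *p)
    {
        return ((uint32_t)p[0] << 24) |
               ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |
                (uint32_t)p[3];
    }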

  40. Came back a few days later to see that Fluffy Girl is dissin’ my interview question.

    I don’t take offense, of course. Everyone works at some layer of abstraction. They see those operating a layer below as heathens, and those above as twiddling with tinkertoys. Those assembly programmers are a bunch of old crackpots, amiright?

    I assert that, if your job involves code that cracks network protocols, you’d better have the minutiae of endianness and alignment down cold. Same goes for folks writing device drivers, compilers, FPGAs, etc. That’s the job I do, and it’s in the interview I give because it’s the job the CANDIDATE would have to do.

    And we’re not talking about some obscure, old-school 80’s protocol or something, either. My day job involves “Openflow” (yes, that new-fangled SDN stuff all the cool kids are raving about) and the wire protocol is defined IN THE STANDARD by a bunch of #@*&^ing C structs. No kidding!

    >Unpacking data formats doesn’t require knowledge of the host byte ordering (or type punning), but it does require knowledge of the byte ordering of the format.

    If the data formats involve multi-byte fields, and you wish to unpack them, you’d better know the endianness of both the formats and the host. Or you need to use a preprocessor macro to hide it. Or you need to do it in an endian-agnostic way. But the point is, you need to KNOW that you need to do one of those things.

    So now, please let me tell you about my (*)interview question: I give the candidate a piece of code, and I ask them to tell me why it doesn’t always work. (Like several of my questions, it’s directly lifted from a former coworker’s production code.) This code parses an IP header by fetching *(uint32 *)ptr directly out of the frame buffer. (The coworker wrote it for PPC, and asked for my help when it didn’t work on ARM.)

    If you, as a candidate, can tell me “Hey, doesn’t this code have a portability issue?” you’re already a leg up. I don’t even so much care if you get the details wrong. I accept all manner of corrections, from rebuffering and ntoh(), to checking __BIG_ENDIAN__, to (if you’re really smart) pulling it out in an endian- and alignment-agnostic manner. The question is really, do you know *when* endianness is important, and do you have the slightest idea of how to deal with it? If so, you pass. And to be clear, it’s one of my warm-ups. Most folks either figure it out inside of 3 minutes, or I spend a moment to explain, and we move on. I’d swag that 80% of my candidates at least sheepishly ask “is it something to do with byte ordering?” and that’s enough to tell me that they’ve cracked protocols before.

    So I hope, Fluffy Girl, that with this additional context, you might approve of my question. But even if you don’t, I still can’t hire folks who don’t know how to deal with endianness because they wouldn’t be able to do the job. That’s life down here in the bit-plumber trade.

    * My company really cares about interview questions “leaking” online and bans them if they do. Well, I don’t care. If you’re reading this, and it encourages you to brush up on portability concerns before you walk into my interview, bravo! You will brighten my day, and I will love you for it.

  41. > I think two things are getting conflated in the discussion. One is doing bit-wise operations, the other is relying on particular representations of data types in memory. Relying on the endianness of the host is an example of the latter. What Eric is doing in the OP is the latter, but the conversation seems to have stealthily shifted to the former.

    It’s because GPSD uses the latter for what I can only understand as a performance optimization for the former (for example, one driver has an “end_write” function that writes an array of shorts in big-endian, and it detects that the system is big-endian and makes it a #define for write… something this new macro cannot actually do, incidentally), and he has confused that for it actually being necessary to accomplish it at all.

    A platform-agnostic definition of end_write follows:

    int end_write(int fd, uint16_t *buf, size_t nbyte) {
    	uint16_t buf2[BUFSIZ];
    	size_t limit = nbyte / 2;
    	if(limit > BUFSIZ) limit = BUFSIZ;
    	for(size_t i=0; i < limit; i++) {
    		buf2[i] = htons(buf[i]);
    	}
    	return write(fd, buf2, nbyte);
    }
    

    Works on big-endian and little-endian systems, no #defines or preprocessor tricks necessary. It’s possible to write an endian-agnostic htons, too:

    uint16_t htons(uint16_t hw) {
        union { uint16_t w; uint8_t b[2]; } u;
        u.b[0] = hw >> 8;
        u.b[1] = hw & 0xFF;
        return u.w;
    }
    

    Making the former an alias to write and the latter a no-op on big-endian systems is an optimization, not a necessity.

    I will note that this runs up against the fact that this kind of type punning isn’t technically allowed by the standard, but only because of a poorly-chosen interface. POSIX’s fault, not mine. You could instead define this function:

    void copy_to_bigendian_16(uint8_t *ptr, uint16_t value) {
        ptr[0] = value >> 8;
        ptr[1] = value & 0xFF;
    }
    
    1. >for example, one driver has an “end_write” function that writes an array of shorts in big-endian, and it detects that the system is big-endian and makes it a #define for write… something this new macro cannot actually do, incidentally), and he has confused that for it actually being necessary to accomplish it at all.

      You’re looking at old code. The repository version uses putle16(buf, n*2, data[n]), as it should. That corner was so dusty and disused that I hadn’t actually looked at it since I wrote the group of endianness-independent macros of which putle16 is one, until quite recently when I went looking for endianness dependencies to remove.

      There were exactly two. Now there’s just one, but it’s going to be a real bastard to get rid of, sigh.

      To see how this sort of thing is done right, interested parties should look at bits.h and bits.c in the GPSD source distribution. I’ve thought seriously about breaking these functions, and their regression tests, out into a small project that ships on its own.

  42. er, crap, that last line should be write(fd, buf2, limit*2);

    “2” could be spelled as sizeof(uint16_t) in both places if you don’t like magic numbers. It’s true that this will silently drop a byte if given an odd byte count, but I’m not convinced that what the real end_write function does in that case is sensible either. And the caller should be checking the return value for short writes anyway.

  43. > But the point is, you need to KNOW that you need to do one of those things.

    Or you could code to a paradigm where it is impossible to do things in a way that is not endian-agnostic, and be correct without having to think about it because there is no valid way to do the incorrect thing. If you stick to operations the standard actually permits, the C standard provides such a paradigm.

  44. A couple retractions…

    I was mistaken about the end_write function – it actually converts to little-endian and then writes. The function definition I provided is therefore incorrect, but could be salvaged by using the same shifting-and-masking concept I mentioned for htons (but with & 0xFF first and >> 8 second, for little-endian) in either the inner loop itself or a “htols” function.
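
    In code, that salvage would look something like this (hypothetical name, mirroring the copy_to_bigendian_16 above):

    #include <stdint.h>

    /* store a host-order uint16_t as a little-endian byte pair */
    void copy_to_littleendian_16(uint8_t *ptr, uint16_t value) {
        ptr[0] = value & 0xFF;  /* low byte first */
        ptr[1] = value >> 8;
    }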

    > If you stick to operations the standard actually permits, the C standard provides such a paradigm.

    I meant “almost”, of course. fwrite and fread are a gap in this paradigm.

  45. @Casey Barker
    > I assert that, if your job involves code that cracks network protocols, you’d better have the minutiae of endianness and alignment down cold.

    So, were you on my coding team, my question to you would be “why are you writing code to crack open IP packets when that code has been written approximately one million times before? And when that code has been tested and retested on a thousand platforms?”

    The foundation for standing on the shoulders of giants is DRY.

    My contention is not that endian-ness isn’t a significant issue, but rather that it is an issue that should mostly have been abstracted away except on the extreme margins.

    So, were you on my coding team, my question to you would be “why are you writing code to crack open IP packets when that code has been written approximately one million times before? And when that code has been tested and retested on a thousand platforms?”

    Because somebody had to write that code the first time, and occasionally make it do new tricks. The world did not stop inventing new network protocols and features in 1984.

    So to answer your question, this particular bit of code was part of an ad-hoc wireless mesh stack for an embedded board, and we were sniffing IP headers on the fly to handle some ARP issues. (Broadcast floods are expensive on a mesh.) Not especially glamorous, but in embedded land, you get whatever core your price and power budget affords. Our target didn’t have the RAM to fit an entire IP stack, so we rolled just the parts we needed. Yes, my first step was to build an “IP header cracking” library and isolate the portability concerns to it, but alas, my coworker’s effort was not as enlightened.

    I still use that as my example today because most candidates know what an IP header is, so it’s a good proxy for network protocols in general. I don’t want to throw them for a loop by asking them about a protocol they’ve never seen.

    Or you could code to a paradigm where it is impossible to do things in a way that is not endian-agnostic, and be correct without having to think about it because there is no valid way to do the incorrect thing.

    Yes! You absolutely could do that. The best candidates do. The point of my interview question is, do you KNOW that you need to? Do you know the difference between endian-dependent and endian-agnostic? If so, you pass.

    1. >So to answer your question, this particular bit of code was part of an ad-hoc wireless mesh stack for an embedded board, and we were sniffing IP headers on the fly to handle some ARP issues.

      Welcome to our world, Fluffy Girl. This is what’s normal for us, the way Java and web dev and “business logic” are for you. Actually, by comparison with what Casey is describing, even most of my code is relatively far off the metal.

      If you bear in mind that this is home space for the open-source/Unix-geek/hacker-neckbeard types who hang out here – that this is what shapes our culture and our perception of the skills programmers need to have – I think much will be clearer to you in the future.

  47. @esr
    > Welcome to our world, Fluffy Girl.

    Thanks!!! So nice to be welcomed! However…

    > This is what’s normal for us,

    Sure, but that doesn’t mean there isn’t a lot to learn from people who do it differently. The original comment was about pulling IP addresses out of TCP packets. Unless I am mistaken, the IP src and dest addresses are pretty much in the same place regardless of what layers you toss on top, because IP has this lovely layered protocol that simply begs for a layered abstraction. So, again, why exactly are we writing stuff for parsing that part of an IP packet when it has been done a million times before? Do we also write code to parse the TCP bit too? Did we remember to write in the code to deal with compression, and other variations? There is a huge amount of NIH going on AFAICS.

    I agree there is a hacker culture that relishes such things, and if you do them for fun, all the better. But if your goal is doing it right reliably then I suggest reusing widely tested mechanisms with nice shiny surfaces would be ideal.

    I will grant you that embedded is rather different, partly because embedded processors are often simpler. But the whole thing reminds me of people who want to rewrite the “inner loop” of their code in assembler to “make it faster.” Sorry, that ship sailed fifteen years ago. Compilers generate better code than humans, because compilers can much better handle things like multi-instruction pipelines, superscalar architectures and multilevel cache locality than a human ever could.

    That isn’t true for simple processors, but it is for anything you’d find in a desktop or laptop. And of course, there are always exceptions on the margin.

    >the way Java and web dev and “business logic” are for you.

    Hey, that is a pretty low blow. I never wrote a Java program in my life. (Though I have written lots of javascript, and that is about as horrible as it gets.)

    1. >But if your goal is doing it right reliably then I suggest reusing widely tested mechanisms with nice shiny surfaces would be ideal.

      Sure, if they existed. We operate where they don’t exist – well, not in the way you think they ought to, anyway. That’s what Casey and I have been trying to explain.

      Your belief that widely tested mechanisms necessarily exist to be reused rests on our work in creating and maintaining those mechanisms. You live in a surround generated by the fact that we’ve done a pretty good job – not flawless, but enough to give you an illusory sense that the lower-level problems are trivial or solved. Example: last time you looked at the shiny surface of a GIF, and every time before that, you relied on code that I wrote to turn file-format bits into pixels – code that necessarily dealt at the byte level with the little-endianness of numbers in the GIF data.

      I still have to tweak that code occasionally after more than twenty years. Your belief that when we maintain and patch that kind of infrastructure we must be succumbing to some sort of macho NIH syndrome is…at best, touchingly naive.

  48. @LeRoy
    > There appear to be nearly 5,000 questions in the StackOverflow floating-point category

    But almost none of them deal with floating point representations — because that problem has mostly been solved.

  49. @esr
    > Sure, if they existed. We operate where they don’t exist

    Really? You operate where there is no piece of C code that can be compiled to parse an IP packet header? If that isn’t what you mean (since it is the subject under discussion) I’m confused as to what you are talking about.

    Yes indeed, we stand on the shoulders of giants. But I don’t need to know how to dope a piece of silicon to be able to write a useful computer program; in a sense, there is an argument to be made that I SHOULDN’T, because civilization is founded on specialization. I sure wouldn’t be impressed if I were quizzed on the process in an interview, even though I have some familiarity with the process.

    Thanks for the gif library, btw. I use it all the time.

    1. >Really? You operate where there is no piece of C code that can be compiled to parse an IP packet header?

      Of course there are such pieces; we have our reusable libraries too. In fact, one way to understand the historical genesis of what is today called “open source” is as a consequence of systems programmers discovering that their job is intractable if most of the code like “parse an IP packet header” is inaccessible to them in proprietary silos. We’re actually more obsessive about re-use than you guys up there in application-land, because we have to be.

      Trust me, as bad as you think the consequences of violating DRY are (and you’re not wrong), they’re worse down here. Why do you think one of the five laws of the hacker mindset is “No problem should ever have to be solved twice”? When we write things like that, do you suppose we don’t mean them?

      But integration and task-specific logic still have to be written, and in our world they are intertwined with the kinds of problems like representation endianness that you think ought to be neatly solved and boxed – but aren’t. Casey gave you a good example which, unfortunately, you lacked the context to understand; another one is GPSD, which as I pointed out previously must grapple with endianness in GPS wire protocols.

  50. > But almost none of them deal with floating point representations — because that problem has mostly been solved.

    No, it was standardized. IEEE 754 was primarily driven by Intel, and since 90% of the world runs on Intel processors, (or did until the rise of the Smartphone gave ARM an, um, shot in the arm, and even then ARM implements a (ahem) similar 754-compliant FPU), you’ve not seen what your elders had to deal with.

    and… I count several issues with FP representation in the first page of results on StackOverflow.

  51. @R. Duke
    > you’ve not seen what your elders had to deal with.

    I don’t pee in an outdoor toilet either. Didn’t you get the whole Newtonian reference above?

    And before you judge what I have and have not seen, you might want to know a little about my experience. I have written Z80 assembler on an embedded medical board subject to FDA regulation, and debugged it with an oscilloscope — it kept crashing because the ISR took too many processor cycles.

  52. I sure wouldn’t be impressed if I were quizzed on [silicon doping] in an interview, even though I have some familiarity with the process.

    This strikes me like a house painter looking down her nose at interview questions about concrete, because every house she ever painted already had a foundation.

    So, again, why exactly are we writing stuff for parsing that part of an IP packet when it has been done a million times before?

    Again, we had a parser. But some programmers don’t understand why/when they need to call it. I’m trying to filter for those folks so I either don’t hire them, or know to re-educate them.

    You see, at some level, I think we actually agree here: Decision logic shouldn’t pull words out of the frame buffer; wire protocols should hide behind a parser. Agreed? So if you express disdain at the thought of grappling with endianness in decision logic, fantastic! Don’t you want to work with people who hold a similar disdain? Then you should really use my question. :)

    Except that I have to go one step further and make sure they could write a parser if they had to, because in this job, the parser code hasn’t been “done a million times.” The protocol is a moving target, and a small but important part of the job is to write the parser. There is no plank. So occasionally, I have to chop trees, split logs, and plane boards. I’m not even talking about tiny cores anymore; this parser runs in Linux userspace. (And once we’ve parsed the protocol, we set hardware registers over a bus. Do you suppose endianness might be relevant there?)

    I think endianness is like a secret handshake handed down to apprentices in the systems programmer guild, except we teach it to everyone in undergrad, and just count on the application programmers to promptly forget it.

    Anyhow, it makes me happy that so many programmers can assume, as you do, that these planks always magically exist and function in a modern system. Countless systems builders have elevated your thinking, and that’s a good thing. But those systems don’t stand still, so we still need people building and maintaining them, and yes, those people still must occasionally grapple with endianness.

  53. I’m sceptical of the need to ever detect endian-ness. If you always do this sort of thing, it will always work:

    unsigned char buf[4];
    unsigned long value;

    buf[0] = value & 0xff;
    buf[1] = (value >> 8) & 0xff;
    buf[2] = (value >> 16) & 0xff;
    buf[3] = (value >> 24) & 0xff;
    write(fd, buf, 4);

    I can just about imagine detecting endian-ness for some sort of speed optimisation. What other purpose is there?

  54. > There is a huge amount of NIH going on AFAICS.

    The core problem is C.

    The C standard library is… sparse, compared to that of any other modern language. Even including POSIX, there’s a lot that’s just not there that any language you’ve worked with (I’m assuming mostly C# and/or Java or the like) provides. So you’re faced with the choice of taking a third-party dependency, pasting in some code someone else wrote and freely licensed, or writing your own stuff. Options A and B can be hard to justify, particularly if the existing solutions available are low-quality.

  55. > I can just about imagine detecting endian-ness for some sort of speed optimisation. What other purpose is there?

    There’s not. And I think ESR recognizes this; this is why he’s been eliminating endian-dependencies in GPSD, and why the macros in bits.h do work in an endian-independent way basically the way you mentioned. I think he ran across this code, thought (incorrectly, as it turns out, since the preprocessor’s inability to evaluate it means you can’t conditionally declare structs based on it) it’d be a way to simplify the one piece of GPSD that doesn’t do the right thing, and got excited and posted it here. It’s at least led to a reasonable discussion of the issues.

    > another one is GPSD, which as I pointed out previously must grapple with endianness in GPS *wire* protocols.
    (emphasis mine)

    The problem that’s driving this discussion (in which most people seem to be in violent agreement at this point) is it does not need to grapple with endianness in CPUs, which was the subject of this blog post. You’ve said yourself, it only does so in exactly one place, which is only there because “it’s going to be a real bastard to get rid of”. Once that last one is eliminated, the macro that is the subject of this blog post, or anything like it, will have no place in GPSD.

  57. > There’s not. And I think ESR recognizes this, this is why he’s been eliminating endian-dependencies in GPSD

    Thanks for the explanation, Random832. Glad I’m not missing anything.

  58. @Casey Barker
    >This strikes me like a house painter looking down her nose

    That is a curious analysis. I am not looking down my nose at anyone, certainly not the freaking geniuses who create silicon chips, a process I think is akin to black magic even though I grasp the general principles, nor the people who build the lowest level device abstraction layers. I believe “giants” was the word I used about them.

    > Again, we had a parser. But some programmers don’t understand
    > why/when they need to call it.

    Ah, well don’t hire those guys! Curiously, though, if that is your intent, what you are actually looking for is for them to agree with me!!! Which is to say, to know and agree that these details should be abstracted away to a shiny surface.

    But let me put it this way: I can remember the basic principles underlying how to do a quicksort. It’d be a challenge for me to sit down and write one without cracking a book (wait, did I really say that? I mean “without googling”) but I don’t have to. I just call the appropriate library. Tony Hoare and the library writers thought about all the minutiae so that I didn’t have to.

    I think it is naive to clutter one’s mind with the solution to problems that have already been solved, taking up space that could be filled with new problems to solve — except insofar as the old problems are a source of ideas to solve new problems.

    What I think is ironic about this whole discussion is that the IP addresses in an IPv4 header aren’t even an integer, they are an ordered group of four octets. It is char[4] not int, which is to say the whole discussion of endian-ness as it pertains to IP addresses is a discussion about a special way to hack it out rather than a real discussion about endian-ness at all. And it is particularly relevant given that all these “on the fly” random hacks are now broken with IPv6, which, of course, you can’t store in a standard int on most architectures.

  59. > What I think is ironic about this whole discussion is that the IP addresses in an IPv4 header aren’t even an integer, they are an ordered group of four octets.

    In the early literature, and in POSIX struct in_addr (via in_addr_t which is an alias for uint32_t), it’s an integer. Thinking of it as a group of four octets comes from working with higher-level languages that do this (I’m guessing you also think of UUIDs as a group of 16 octets rather than the “1 long, 2 shorts, 8 bytes” structure they’re technically defined as).

  60. But anyway, there’s certainly no shortage of things that are semantically multibyte words (for example, the length field, or the TCP/UDP port number) in network protocol headers, so focusing on “addresses should be analyzed as a tuple of bytes, and IPv4 is obsolete anyway” misses the larger issue.

  61. Dear Fluffy Girl,

    This just in: http://possiblywrong.wordpress.com/2013/11/15/floating-point-equality-its-worse-than-you-think/

    I suppose that it’s OK, because “that problem has mostly been solved.”

    (Not)

    > What I think is ironic about this whole discussion is that the IP addresses in an IPv4 header aren’t even an integer, they are an ordered group of four octets. It is char[4] not int,

    Expressly incorrect. IP addresses are an unsigned 32-bit quantity. Your ‘char[4]’ is wrong beyond measure. To see why, consider CIDR. (Google for it, or check StackOverflow if you don’t know what CIDR is.)

    > And it is particularly relevant given that all these “on the fly” random hacks are now broken with IPv6, which of course, you can’t store in the standard int on most architectures.

    IPv6 uses a 128-bit address. The AS/400 virtual instruction set defines all pointers as 128-bit. This gets translated to the hardware’s real instruction set as required, allowing the underlying hardware to change without needing to recompile the software.

    The ASICs, TCAMs and other flow state tables that actually forward IP datagrams on the Internet all use 128-bit native quantities when dealing with IPv6.

    You may want to obtain more clue before spouting off your worldview here. There are real hackers about, and they will explain the error of your ways without regard for your feelings.

  62. Ah, well don’t hire those guys! Curiously, though, if that is your intent, what you are actually looking for is for them to agree with me!!! Which is to say, to know and agree that these details should be abstracted away to a shiny surface.

    I don’t think anyone is disagreeing with you, Jessica. But the thing is, systems-level software — which provides the abstractions — has to be maintained, just like the software you write. Once the job is done, it’s not done everywhere and for all time. It’s yet another ongoing software artifact with its own life cycle. Know that you sleep soundly in your bed because rough men stand ready in the night to make sure you don’t have to deal with endianness issues, flaky protocols, or subtle pointer bugs.

  63. R. Duke: CIDR doesn’t really imply that – you can apply masks to a byte array just as easily as to a single numeric quantity – loops were invented before IP addresses. It’s a fact that IPv4 addresses are defined as a single 32-bit value, but that doesn’t mean there was a good reason for it. And struct in6_addr does indeed hold a byte array instead of a single value.

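    For what it’s worth, masking a byte-array address to a CIDR prefix really is just a short loop. A minimal sketch (the function name is hypothetical):

        /* Zero the host bits of addr, keeping the first prefixlen bits. */
        static void apply_prefix(unsigned char addr[4], int prefixlen)
        {
            for (int i = 0; i < 4; i++) {
                int keep = prefixlen - 8 * i;  /* prefix bits left for this octet */
                if (keep >= 8)
                    continue;                  /* octet is entirely prefix */
                else if (keep <= 0)
                    addr[i] = 0;               /* octet is entirely host bits */
                else
                    addr[i] &= (unsigned char)(0xFFu << (8 - keep));
            }
        }
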
    Also, I’m pretty sure you can’t work with arbitrary values in an AS/400 pointer register… The AS/400 is often specifically called out in “weird C architectures” lists as an example of a system where pointer arithmetic can cause a crash at the time of the operation rather than when an invalid pointer is later dereferenced, and as an illustration of why the C standard makes bad pointer arithmetic undefined behavior.

  64. Expressly incorrect. IP addresses are an unsigned 32-bit quantity. Your ‘char[4]’ is wrong beyond measure. To see why, consider CIDR. (Google for it, or check StackOverflow if you don’t know what CIDR is.)

    It is more useful to consider IPv4 addresses as a unique abstract data type over which certain operations (such as bitwise operators for CIDR) are possible. The ADT would admit either an integer or a char[4] implementation with no change in the interface.

    To be fair, this is very difficult to do in C, which is why IPv4 addresses are uint32_ts in C. It’s much easier in C++, and doing this kind of thing is central to program design in Ada.

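    A minimal sketch of such an interface in C, with a hypothetical ipv4_addr type; the point is that callers never learn whether the representation is a uint32_t or an unsigned char[4]:

        #include <stdbool.h>
        #include <stdint.h>

        typedef struct {
            uint32_t rep;   /* could equally be unsigned char rep[4] */
        } ipv4_addr;

        /* Keep the first prefixlen bits, zero the rest. */
        static ipv4_addr ipv4_mask(ipv4_addr a, int prefixlen)
        {
            a.rep &= prefixlen ? 0xFFFFFFFFu << (32 - prefixlen) : 0;
            return a;
        }

        /* CIDR membership test; only this module sees the representation. */
        static bool ipv4_same_subnet(ipv4_addr a, ipv4_addr b, int prefixlen)
        {
            return ipv4_mask(a, prefixlen).rep == ipv4_mask(b, prefixlen).rep;
        }
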
    By the way, we’re still waiting on “C++ Considered Harmful”. Eric, if your partner won’t collaborate, why not seek his authorization to finish the essay off yourself?

  65. @R. Duke
    > Dear Fluffy Girl,

    I don’t know who you are, but I find your excessive hostility unnecessary, unprovoked, and a little offensive. I’ll be happy to address your comments if you can express them in a less belligerent manner.

  66. @Jeff Read
    >systems-level software — which provides the abstractions — has to be maintained,

    Yes, I believe I said from the very beginning — on the margins.

    > Know that you sleep soundly in your bed because rough men stand ready

    Thanks Jeff, that is very comforting. And you too my friend, sleep soundly knowing that your banking and retail web sites will stay up and running looking pretty and being functional, because fluffy girls stand ready to optimize their database queries and fiddle their jQuery to keep the wheels of commerce going.

  67. So can the people on the side supporting it explain just exactly why anyone who is at a high enough level to be coding in C (even in a kernel) rather than assembly should have to worry about the endianness of the CPU they are running on (rather than that of a disk or wire data format) in non-performance-critical code? Apart from the unfortunate implementation of the RTCM2 driver, GPSD doesn’t. Not even the parts in bits.[ch] that all the rest rely on for dealing with the wire formats.

    I mean, it can’t possibly have escaped you that there is a difference and that the CPU endianness dependency is a big part of what we’re objecting to.

  68. Ah, well don’t hire those guys! Curiously, though, if that is your intent, what you are actually looking for is for them to agree with me!!! Which is to say, to know and agree that these details should be abstracted away to a shiny surface.

    Fluffy, the only possible explanation for your response is that you stopped reading and started replying after my first 3 sentences, because beyond that, I said I wanted candidates to agree with you. So I’ll try to make this next sentence count:

    How, pray tell, do you avoid hiring “those guys” without asking a pertinent question in an interview?

    And now that Fluffy has stormed off to pen her reply, I shall align two choice quotes for the peanut gallery’s amusement:

    Any interviewer who asks about such minutia during a programmer interview doesn’t know what he is doing…

    I am not looking down my nose on anyone

    Sorry. I’ve tried to avoid snark, but this has turned silly. I fear Fluffy had a regrettable gut reaction and is now entrenched and flailing to defend it. There’s really no need.

  69. Actually, my argument applies just as well in principle even for coding in assembly, except that it is more likely to be performance critical, more likely to be excessively tedious to write endian-agnostic code, and if you’re coding in assembly you already know all the details of your architecture and your code isn’t supposed to be portable to a different one anyway. Though it might be a fun exercise to do it for an architecture that supports both byte orders, such as SPARC64, PowerPC, or ARM.

  70. @Casey Barker
    > How, pray tell, do you avoid hiring “those guys” without asking a
    > pertinent question in an interview?

    The question one asks is not about endian-ness but about abstraction. Don’t hire the ones who are blabbing on about endian-ness; hire the ones who have a visceral reaction against even discussing such a thing.

    Casey to Beard Stroker: How do you deal with the different way integers are laid out in memory vis-a-vis the predefined order in a data structure like an IP packet?

    Beard Stroker: Hmmmh, interesting this phenomenon called endian-ness, has a historic background we need to discuss. On certain types of architecture the low order byte comes first in an integer representation… bla bla bla…. the C standard inadequately describes …. bla bla bla…. consider the fact that some computers use 30 bit words… bla bla bla… one must also factor in the potential for quantum computers …. bla bla bla … and sometimes we receive packets encoded with Token Ring … bla bla bla….

    Casey to Jessica: Same question…

    Jessica: Ewwww! Don’t do that! Use a library!

    In this interview you should hire Jessica. I like her, she is smart, she is pretty, AND she even smells nice.

  71. Thanks Jeff, that is very comforting. And you too my friend, sleep soundly knowing that your banking and retail web sites will stay up and running looking pretty and being functional, because fluffy girls stand ready to optimize their database queries and fiddle their jQuery to keep the wheels of commerce going.

    My bank’s web site is a difficult-to-use pile of J2EE shit. It looks pretty, though.

    The flip side of your argument is not that applications-level developers are useless or any less “studly” than systems developers; it’s basically “every Marine a rifleman”. The USMC expects every Marine to be up on their basic infantry tactics, even if they never actually serve as infantry, because you just never know when you may make contact with the enemy and have to either fight or die.

    For most of us programmers (who sleep soundly in our beds thanks to the Marines and other armed forces), ideally we wouldn’t have to worry about things like quicksort implementations or endianness issues. But it helps to stay brushed up on the basics because abstractions are leaky, and you never know when you might run into a bug in your abstraction layer or, worse yet, someone else’s abstraction layer on their remote server. As an app developer you don’t want to be called on to fix bugs of this sort, but unless you have a working knowledge of the basics of systems-level programming, you may not be knowledgeable enough to even determine that the bug isn’t your responsibility.

    Anyway, didn’t you mention optimizing database queries? Shouldn’t the Entity Framework be abstracting away the database for you so you don’t have to think about it? If you think it silly of me to expect a library like Entity Framework to simply “make the problem go away”, then QED.

  72. The question one asks is not about endian-ness but about abstraction. Don’t hire the ones who is blabbing on about endian-ness, hire the ones who have a visceral reaction against even discussing such a thing.

    The thing is that even the most grizzled beard-stroker is cognizant of the pertinent abstractions over IP packets — TCP, UDP, and the sockets layer — and won’t hesitate to reach for them if they do the job.

    If you’re low-level enough to be munging IP packets directly, it’s generally assumed that you’re systems-savvy enough to deal with things like endian issues as well.

    And sometimes beard-strokers do come up with better solutions by eschewing abstractions and solving the problem that’s right in front of them.

  73. @Jeff Read
    > If you think it silly of me to expect a library like Entity Framework to simply “make the problem go away”, then QED.

    Ah, touché. As it happens, I rarely use that tool. However, SQL is usually pretty good as it comes, especially if you use various automated tools to determine the choke points and then follow their advice. In truth, excessive tweaking of SQL is rarely productive, and often counterproductive.

  74. Ah, touché. As it happens, I rarely use that tool. However, SQL is usually pretty good as it comes, especially if you use various automated tools to determine the choke points and then follow their advice. In truth, excessive tweaking of SQL is rarely productive, and often counterproductive.

    SQL is a nice abstraction — being based on relational algebra, which is one of those jewels in the realm of abstraction that hide all sorts of implementation details, presenting just the details of data retrieval that data seekers are interested in.

    But Oracle consultants make enough money to buy expensive suits and Rolex watches by understanding what Oracle the database does at a low level to any given SQL query and being able to fine-tune it appropriately.

  75. > I know how to code in assembly. I also have the good sense not to.

    That’s nice. And as for my question, on whether there’s a good reason everyone coding in C shouldn’t use code that looks like bits.c and bits.h, rather than code that looks like #define IS_BIGENDIAN (some clever trick) or #if WORDS_BIGENDIAN (one thing) #else (other thing) – any thoughts on that? Though it wasn’t really directed at you per se.

    1. >And as for my question, on whether there’s a good reason everyone coding in C shouldn’t use code that looks like bits.c and bits.h, rather than code that looks like #define IS_BIGENDIAN (some clever trick) or #if WORDS_BIGENDIAN (one thing) #else (other thing) – any thoughts on that?

      You can deduce my position from my code.

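      For the curious, that style looks roughly like the following. This is a sketch in the spirit of bits.h, not the actual GPSD source; the helper name getbeu32 is illustrative:

          #include <stdint.h>

          /* Fetch a big-endian (wire-order) 32-bit quantity from a buffer,
             one byte at a time. Host endianness never enters into it. */
          static uint32_t getbeu32(const unsigned char *buf, int off)
          {
              return ((uint32_t)buf[off] << 24) |
                     ((uint32_t)buf[off + 1] << 16) |
                     ((uint32_t)buf[off + 2] << 8) |
                     (uint32_t)buf[off + 3];
          }
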
  76. Geek non-sequitur:

    Anyone recommend decent LED bulbs?

    I’m getting tired of CFLs taking time to fire up but do like the power savings because I can’t seem to get the kids to turn off lights reliably.

  77. @esr
    > You can deduce my position from my code.

    BTW, although I don’t live in your world, I have often thought of something that would help y’all with that portability thing.

    I’d suggest that gcc have a new option added: gcc -whacky. What this option does is take the areas of the C specification which are implementation-specific and deliberately make extremely perverse choices. For example: 5-byte integers with 10 bits each and odd endian-ness, or strange alignments on structs, etc. Basically, take everything implementation-specific and choose the strangest legal option possible.

    You might want to have four or five choice sets, gcc -whacky[1-4] (you can’t do random, because you need to link with standard libraries). So you can stress-test portability by compiling your code with all the various “whacky” options and running your regression tests to detect possible portability problems.

    Just a thought. Implementation is left as an exercise for the reader.

  78. @Jessica –

    … a new option added gcc -whacky. What this option does is that it takes the areas of the C specification which are implementation specific ….

    Well, there’s a mechanism which gives this effect, possibly in a more useful fashion. The Debian Porter Boxes are a collection of a variety of architectures; it is possible to get guest access to these machines with a reasonable justification. Our host has talked about this before, and points out that he has used them to “smoke out” portability problems in the GPSD code.

    Yes, you can’t do it on just one box. I would submit that hacking GCC to do your (useful but) “whacky” things would be quite a bit more effort than just getting tight with the Debian team and accessing the porterboxen.

  79. @John D. Bell
    >Well, there’s a mechanism which gives this effect, possibly in a more useful fashion. The Debian Porter Boxes are a collection of a variety of architectures;

    Interesting. Although useful it isn’t quite the same in a really important way.

    I have a friend who, in the context of software dev, talks about “normalizing the unlikely.” That is to say, making highly unlikely events more normal in the experience of the programmer so that (s)he handles them better.

    At least 50% of most code in my experience is there to deal with unusual or aberrant events, and a lot of that code never gets tested, and certainly not tested regularly. In my experience an EXCELLENT unit test suite hits 85% code coverage (though it depends a lot on the nature of the program — gpsd and gcc are the types of program that can probably get much higher because they don’t deal with capricious users.)

    The purpose of the -whacky option is to “normalize the unlikely”: to bring unlikely events to the forefront of developers’ minds through experienced failures, so that they do a better job of coding for them. To be most effective it has to be part of the standard build cycle, or at least the nightly build/CI cycle. So shipping it off somewhere, while useful, is less than optimal: it makes the unlikely less normalized.

    My favorite example of normalizing the unlikely is Netflix’s Simian Army. For example, Chaos Monkey is a program they run continuously against their production server farm; it randomly kills servers. The purpose is to normalize the unlikely event of failed servers so that their coders code for it properly and ensure that users see continuous operation even on a crash-prone server farm. And they do this on their PRODUCTION servers — now that is hard core.

    http://www.codinghorror.com/blog/2011/04/working-with-the-chaos-monkey.html

    1. >gpsd and gcc are the types of program that can probably get much higher because they don’t deal with capricious users.

      No, we deal with capricious hardware and protocol designers instead. I am not sure this is an improvement; some of the stupid I have seen coming up a wire at me would make your hair fall out.

  80. Jessica, what you are proposing is a cross-compiler back end to a deliberately pathological CPU architecture, not just a command-line flag.

    I’m not saying it wouldn’t be useful, but I am saying it would require more tooling — for example, a software implementation of the bizarre architecture so that programs could be run-tested on it, and of course separate compilers and linkers for it.

    That said, it reminds me a bit of the MIX architecture in The Art of Computer Programming. In introducing this pretend architecture, Knuth states that some MIX machines are binary machines and some are decimal machines, with different effective word sizes between the two — and then insists that you write your program to run identically regardless of which type of machine it is.

  81. Just FYI: I was reading this article via RSS and it didn’t make any sense at all, until I realized that the last part of the expression was missing; something in the processing chain failed to properly escape the less-than sign.

    I love computers. ;)

  82. @Jeff Read
    > Jessica, what you are proposing is a cross-compiler back end to a deliberately pathological CPU architecture,

    That isn’t necessarily true. It is, for example, perfectly possible to use big-endian integers on a little-endian architecture; it is just that the code generation back end is different, and the result would be considerably slower and bulkier. Similarly, you could certainly run five-byte integers with ten bits each on an Intel architecture if you really wanted to, changing only the back-end code generator.

    BTW, this is part of a broader philosophy that I have, namely that the only path you know works is one that has been tested. It is why I do not agree with Postel’s law, not in a general sense anyway. If you are liberal in what you accept, and attempt to interpret an unexpected input into a form that is expected, then you are guessing at the caller’s intent, and have consequently not properly exercised the code path. It is better to break than to guess, which is to say there is no shame in saying “I don’t know.”

    The classic example of this would be HTML. Huge amounts of HTML code are utter intractable crap, and one of the main reasons is that Postel’s law was applied liberally in early browsers, which attempted to render the crap rather than saying “I don’t know.” If the browsers had errored out, a little extra effort would have been forced on the HTML writers and we would not have to swim through the dreadful pile of doo doo that is the average web site.

  83. @Random832:

    So can the people on the side supporting it explain just exactly why anyone who is at a high enough level to be coding in C (even in a kernel) rather than assembly should have to worry about the endianness of the CPU they are running on (rather than that of a disk or wire data format) in non-performance-critical code?

    Most processors have hardware-specific registers. Because they _are_ hardware specific, you usually don’t need code that works with them on multiple different processors, but programmers communicating with those registers often have to be quite cognizant of endianness issues.

    Other than that, maybe not so much. On the other hand, that’s partly because you used a lot of qualifications in your question. While “premature optimizations are the root of all evil,” I think that some optimizations that are done before you know you need them are not necessarily premature.

    Sometimes, when you’re coding libraries that will have a lot of users (and a kernel is, after all, at least partly a specialized library) it can pay to go ahead and do some cheap optimizations, even before you have a known performance issue.

    Once you have a performance issue, of course, everything winds up on the table. So if you have a record that needs to be sorted by a 16 bit type field concatenated with a 16 bit subtype field, and your code runs on a 32 bit processor, it certainly makes sense to place these fields adjacent to each other, align them, and order them appropriately so that a single comparison can sort on both fields.

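    A minimal sketch of that layout trick, reusing the post’s WORDS_BIGENDIAN macro; the record type and names here are hypothetical:

        #include <stdint.h>
        #include <string.h>

        /* Field order is chosen per byte order so that one aligned 32-bit
           load yields type in the high half and subtype in the low half. */
        struct rec {
        #ifdef WORDS_BIGENDIAN
            uint16_t type;
            uint16_t subtype;
        #else
            uint16_t subtype;
            uint16_t type;
        #endif
            /* ... payload ... */
        };

        /* qsort-style comparator: one 32-bit compare sorts on both fields. */
        static int rec_cmp(const void *pa, const void *pb)
        {
            uint32_t ka, kb;
            memcpy(&ka, pa, sizeof ka);   /* memcpy sidesteps strict aliasing */
            memcpy(&kb, pb, sizeof kb);
            return (ka > kb) - (ka < kb);
        }
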
    Performance is one area where being able to operate well at any level of abstraction comes in handy. At the top, you might be able to come up with an entirely different way of doing things. At the bottom, it might be useful to know various bits of arcana such as the machine’s cache line size.

  84. Performance is one area where being able to operate well at any level of abstraction comes in handy. At the top, you might be able to come up with an entirely different way of doing things. At the bottom, it might be useful to know various bits of arcana such as the machine’s cache line size.

    There’s actually one language that does this really, really well — C++. Only C++ actually delivers on the promise of allowing you fine-grained control over the things the hardware is sensitive about, while simultaneously providing powerful high-level abstractions in the same language that enable the programmer to not even have to think about the micro-optimizations in many cases. (Adding support for SMP vectorization to an existing C++ application is often a matter of a drop-in replacement for the STL library, for instance.)

    So many software applications require both performance and high levels of abstraction to manage all the complexity that C++ is pretty much the only choice for a broad spectrum of the computing space. C++ is the Chance-Vought F4U Corsair of programming languages: ridiculously high performance and extremely versatile, but so powerful that having anyone but an experienced and disciplined pilot at the yoke will only result in tragedy. Some commentators have confused this property with being the Corvair of programming languages — “unsafe at any speed” — but then again they’re not writing a physics simulation, embedded control software for a robot or military vehicle, high-performance system software, or a triple-A game.

  85. @Jeff Read
    > There’s actually one language that does this really, really well — C++.

    The problem with C++ is that the language itself is so damn complicated, and so filled with arcane legacy stuff that it is an extremely hard tool to use; I know, I have written a LOT of C++ code.

    Now perhaps your testosterone is kicking in: “Damn fluffy girls can’t handle the big chair.” But regardless of whether you can pee your name in the snow, higher levels of complexity in the programming language mean higher levels of bugs. It is as simple as that. Higher levels of bugs mean either considerably extended timelines, lower-quality software, or a focus at too low a level.

    Using a more tractable language than C++ means that you can think about higher level abstractions, which is to say you can get more of the top line optimizations which are far more powerful, and need to focus less on low level details.

    Furthermore, C++ is an anti-code-bloat language, but in many cases code bloat is a good thing — it is a trade-off of cheap stuff (RAM, for example) against expensive stuff (programmer time). Which is to say, your claim that a physics simulation should run in a language like C++ is not one I would agree with. Better to think about physics than endian-ness, and if necessary buy a couple of extra VMs from AWS to make up for any imagined performance hit.

    Automatic garbage collection is definitely a challenge and an issue when you are dealing with lower-level code, but when you manually collect your garbage you exacerbate the problem above; and again, the problem can often be addressed with more memory and better control over the GC.

    By no means am I saying that C++ doesn’t have any uses. When you are close to the metal, in kernels and high-performance embedded work, it probably is the language of choice. And BTW, most embedded programs are very simple and certainly don’t need C++. But in many categories of software it is a poor choice. Insofar as it is a good choice (such as the aforementioned SMP), it is better to encapsulate that part in a closed box and do all the hard stuff in a higher-level, more tractable language.

  86. Now perhaps your testosterone is kicking in, “Damn fluffy girls can’t handle the big chair”.

    Hey, check your privilege! You’ll be hearing from the Babbage Initiative about this!

    I kid of course, but the assumption that I would trivialize your criticisms of C++ on grounds of lack of studliness is invalid. You’ve actually used the damn thing, which many a brogrammer wouldn’t even countenance in this era of “instant startup, just add Node.js”. And it is worthy of criticism — there’s a reason why I compared it to one of the most dangerous planes to fly in the WWII Allied fleet.

    That said, the two nearest candidates to displace it — Rust and Ada — do not nearly match it in raw speed, though Ada’s good-enough performance and safety-first design philosophy make it arguably a better choice in highly critical, lives-are-on-the-line applications like avionics. Ada code is also much easier to maintain; aside from the much stricter type-safety features in Ada, the source code is just much easier to read than the punctuation tangle that template-heavy C++ tends to become. (And if you’re writing C++ and not using templates everywhere, you’re doing it wrong.) But when every CPU cycle counts — and at scale it does more often than you might think — it is literally impossible to beat C++ except with raw assembly.

    So yes, C++ sucks when it comes to safety, tractability and even readability and you have to really actively work to avoid self-foot-shooting with it. My original point was there’s nothing that can really replace it in its (quite broad) niche in spite of this. Everything else is still about a base-2 order of magnitude too slow at the very least.

  87. Jessica,

    “The road to perdition is paved with compatibility hacks” —The Guile Reference Manual, paraphrased

    Postel’s law is one of those Unix design nuggets of wisdom — alongside “small programs are beautiful” and “plain-text ALL THE THINGS!” — that since the 1970s have been widely discredited. Those Buddhas have been killed — bereft of life, they rest in peace, they’ve joined the choir invisible, etc. Not even Linux is really a Unix system any more, not in the old-school sense.

    The fact is that by being strict about what you accept, you can treat your input as structured data, not a plain bag of text that has to be massaged with ad-hoc agglutinations of parsing and unparsing tools. Windows PowerShell applies this principle to the command line of old, yielding something quite a bit more powerful and useful than the traditional Unix userland.

  88. I often compare C++ to a sharp knife. If you earned your totin’ chip, and you follow the rules you learned, it’s actually safer than a dull knife.

    It’s not, however, a screwdriver, a drill, a hammer, a saw, or a water bucket. It is not a clay sculpting tool. It is most particularly not a drywall knife.

  89. However, there’s value to deciding when and how to be strict. Python uses indentation levels for semantic blocks, so it has to be somewhat stricter about whitespace than most languages, but whatever the style guide says, it doesn’t force you to use four spaces per indent level. It doesn’t even care about indenting at all when the lines belong to something in nested parentheses [such as an argument list or a tuple/list/dict literal].

    On default settings it even allows tabs, though it forbids cases where there are tabs on one line and spaces on the next in the same positions, because the meaning of such lines can change depending on how many spaces a tab counts for.

    A literal rejection of “be liberal in what you accept” would say that nothing should ever allow unnecessary whitespace. I’ve seen many stories (on sites such as The Daily WTF) of “XML” parsers that are so ‘strict’ as to only accept attributes delimited by double quotes (i.e. not single quotes), or choke on <empty/> element syntax, or require elements to be in a certain order even in a format that does not actually demand that order, or require attributes to be in a certain order even though I don’t think it’s even possible for a DTD to define an order attributes must appear in – all matching up with the subset of XML that the generator they tested with happens to generate.

    There is clearly value to being liberal about what you accept, so long as you’re well-defined about it.

  90. @Random832:

    > xml” parsers that are so ‘strict’…

    That’s not strict. That’s incomplete. Which may look superficially the same, but isn’t.

    FWIW, in some cases incompleteness is fine, when you’re building a special tool. Although, to echo Jessica’s sentiment about libraries, it’s difficult for me to imagine a scenario today where one would be required to roll one’s own XML parser. If you’re doing mainstream computing, there are parsers available for whatever language you’re using. If you’re doing embedded on something with enough constraints to rule out an off-the-shelf XML parser, why are you wasting time and space on XML at all?

  91. > That’s not strict. That’s incomplete. Which may look superficially the same, but isn’t.

    They are complete. The language they parse just isn’t XML. A literal interpretation of “do not be liberal in what you accept” would consider that language to be superior to XML.

  92. Alternately, imagine a system where /etc/fstab were required to separate fields by exactly one space, rather than an arbitrary amount of whitespace.

  93. @Random832:

    > They are complete. The language they parse just isn’t XML.

    OK, but then it’s not an XML parser, and it’s false advertising if someone labels it as such.

    > A literal interpretation of “do not be liberal in what you accept” would consider that language to be superior to XML.

    “Be liberal in what you accept” is about the parser, not the language. And most XML parsers are already pretty good about accepting input iff that input conforms to valid XML.

  94. I often compare C++ to a sharp knife. If you earned your totin’ chip, and you follow the rules you learned, it’s actually safer than a dull knife.

    A much more apt analogy is with a construction material that gives you a deadly cancer that takes 20 years to develop. If you injure yourself with a knife the feedback is instant. Not so with C.

  95. > “Be liberal in what you accept” is about the parser, not the language.

    I’m not sure the distinction between a parser and the language it accepts has any real-world relevance. Certainly two parsers that differ in what they accept are in fact parsers for different languages, even if those languages are related and have the same name.

  96. They are complete. The language they parse just isn’t XML. A literal interpretation of “do not be liberal in what you accept” would consider that language to be superior to XML.

    No, that’s not complete but broken, since they can’t parse XML that meets the XML spec. The language cannot be considered superior to XML, since there is no definition of it.

    In any case, you are missing the point by arguing the extremes. In my opinion both positions are valid in different cases. My rule of thumb is that user-facing APIs tend toward the more fluffy side (i.e. REST) while system-to-system APIs tend toward the stricter side (SOAP, CORBA, etc.).

  97. @Random832:

    > I’m not sure the distinction between a parser and the language it accepts has any real-world relevance.

    Sure it does.

    > Certainly two parsers that differ in what they accept are in fact parsers for different languages, even if those languages are related and have the same name.

    If a C compiler doesn’t compile valid C properly, and the compiler author refuses to fix it because he thinks that corner case of the C language isn’t worth adding to his compiler, then he needs to rename his compiler to be “small/mini/tiny/incompleat/whatever” C.

    But even if he renames his compiler to make it clear it’s not C, he also should make it noisily reject programs that it won’t give the same semantic meaning to as any “real” C compiler would, lest he face significant opprobrium and ridicule.

  98. Patrick Maupin: “But even if he renames his compiler to make it clear it’s not C, he also should make it noisily reject programs that it won’t give the same semantic meaning to as any “real” C compiler would, lest he face significant opprobrium and ridicule.”

    Funny, weren’t we just talking about C++?

    int x = 1//**/2
    ;

    At least, the above worked back in the pre-C99 days. I’m sure there’s some way to get modern compilers to do something similar.

  99. Alternately, imagine a system where /etc/fstab were required to separate fields by exactly one space, rather than an arbitrary amount of whitespace.

    That would arguably be more consistent and easier to parse.

    But why do we even need an /etc/fstab at all? It’s yet another 1970s relic that needs to be done away with. It hails from an era where a system’s disk configuration was mostly static, and you always knew which disk was connected to which port. These days, particularly on desktop machines where plugging in USB sticks or SD cards is a common occurrence, no such guarantees can be made — and we’ve seen no end of hacks and workarounds for the crufty old Unixisms that keep us from moving forward.

  100. @Random832:

    > Funny, weren’t we just talking about C++?

    Yes, and there’s a difference between a standards body deciding to migrate a language standard, and a compiler vendor deciding to do so on its own.

    A compiler vendor deciding to do so on its own usually involves adding new features (being liberal in what they accept), not arbitrarily deciding to not support defined language features, and almost always comes with a kill switch to disable any extensions or modifications.

    Your contrived snippet, which operates differently on pre-1999 C, is extremely unlikely to occur in real life, but you knew that. FWIW, gcc will return 0 for that snippet if you use -ansi (with a .c extension) or -std=c90, or 1 if you use -std=c99.

    So, yes, it is impossible to rev a standard without creating edge cases that will work differently than with the previous version. That doesn’t obviate the need for a compiler vendor to explain which standard they are adhering to and then adhere to it, and to explain any discrepancies in excruciating detail.

  101. A compiler vendor deciding to do so on its own usually involves adding new features (being liberal in what they accept), not arbitrarily deciding to not support defined language features,

    I’ve heard stories about most compilers that suggest the latter is pretty prevalent, and I’ve got a personal grudge against gcc for that specific reason (“volatile register”).

  102. and we’ve seen no end of hacks and workarounds for the crufty old Unixisms that keep us from moving forward.

    I dunno…OSX seems to work fairly well. Doesn’t feel very crufty but honestly the only time I drop to shell is because of some svn or git oddity that I need to fiddle with by hand or some ports package I need so I don’t have to deal with linux.

  103. @Christopher Smith:

    IIRC, the C standard only allows “register” on automatic variables, so it’s an extension to allow it on globals.

  104. @Christopher Smith:

    And even then, I don’t think a compiler is actually required to honor “register” — it’s more of a hint.

    So you can be annoyed with GCC for not doing what you want, but I don’t think this makes it noncompliant.

  105. I dunno…OSX seems to work fairly well. Doesn’t feel very crufty but honestly the only time I drop to shell is because of some svn or git oddity that I need to fiddle with by hand or some ports package I need so I don’t have to deal with linux.

    From what I can tell, Mac OS X doesn’t use /etc/fstab by default. Linux still does, and it offers your choice of mounting daemons to listen for external drive connects and mount them in semi-standard places.

    Although, when systemd finally wins, this will probably be rolled into systemd.

  106. @Patrick: I read through the C spec to figure it out, and “register” is permitted as a modifier on any variable, auto or otherwise. The gcc toolchain for AVR has an extension (all embedded C compilers do) to hard-bind a variable to a specific register, but apparently a badly-designed internal pipeline for variable tracking throws away the struct with the “volatile” flag for register variables before it gets to the optimizer, which then feels free to do the usual eliding of checks. It’s actually objectively incorrect handling, because it preserves the (technically optional) register handling while disregarding the volatile handling. This has been raised as a bug and marked WONTFIX because it’s too much trouble to fix the pipeline.

  107. @Jeff Read:

    Postel’s law has been discredited? Really? So when a browser encounters a non-compliant Web page it just gives up? MUAs just throw a fit because multiple, subtly incompatible MTAs do things a little differently? (Careful, our host wrote an entire series of books about what is essentially an MUA.)

    An example of a program that does not conform to Postel’s law is Gedit, which throws its hands up if it encounters a single text character that doesn’t seem to match the encoding it thinks the text file is in. You can’t view the file, you can’t edit the file, you can’t do anything to it. That’s a big reason why Vim and Emacs continue to hold such prominent positions in hackerdom while Gedit is often snickered at as wannabe fare, despite having a similar feature set.

    Are you really that clueless or are you trolling?

    1. >Are you really that clueless or are you trolling?

      Any sufficiently advanced level of cluelessness becomes indistinguishable from trolling.

      Jeff Read demonstrates the truth of this adage-which-I-just-made-up on a regular basis.

  108. @Morgan Greywolf

    Postel’s law has been discredited? Really? So when a browser encounters a non-compliant Web page it just gives up?

    The lax parsing in web browsers is a fucking disaster. And your argument is ridiculous; just because something persists doesn’t mean it’s good. Since most computer types are idiots you would expect much of what’s survived in the ecosystem to be idiotic. Indeed, that is the case; see C, Unix, XML, JavaScript, …

  109. @Christopher Smith:

    I read through the C spec to figure it out, and “register” is permitted as a modifier on any variable, auto or otherwise.

    I don’t think so. It could appear that way if you just read section 6.7.1 (which mentions also, as I said, that exactly what the compiler does with “register” is implementation-defined in any case.)

    However, section 6.9.1 says “The storage-class specifiers auto and register shall not appear in the declaration specifiers in an external declaration.”

    What is an “external declaration”? It doesn’t seem to be the same as an extern definition. If I read the spec right (a big “if”, to be sure), a “translation-unit” consists of one or more “external-declarations”, where an “external-declaration” is a “declaration” or a “function-definition”.

    So to me, 6.9.1 says that “register” could only occur inside a function definition. And 6.7.1 explicitly says that “At most, one storage-class specifier may be given in the declaration specifiers in a declaration” which would seem to explicitly rule out being able to declare something both “static” and “register”, so that’s not a workaround, either.

    (Actually, disallowing “static register” would disallow register globals even if the meaning of “external declaration” was “non-static”, rather than “top level”.)

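    A tiny illustration of that reading (a sketch only; what a compiler does with the “register” hint is implementation-defined in any case):

        #include <stdio.h>

        /* register int r; */   /* 6.9.1: "register" shall not appear in an
                                   external declaration, so uncommenting this
                                   is a constraint violation */

        int main(void)
        {
            register int ok = 42;  /* block scope: permitted, but only a hint */
            printf("%d\n", ok);    /* fine; taking &ok would not be */
            return 0;
        }
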
    The gcc toolchain for AVR has an extension (all embedded C compilers do) to hard-bind a variable to a specific register, but apparently a badly-designed internal pipeline for variable tracking throws away the struct with the “volatile” flag for register variables before it gets to the optimizer, which then feels free to do the usual eliding of checks.

    Sounds like an enhancement to me, if it works on globals.

    It’s actually objectively incorrect handling, because it preserves the (technically optional) register handling while disregarding the volatile handling. This has been raised as a bug and marked WONTFIX because it’s too much trouble to fix the pipeline.

    GCC definitely tries to do a lot more than the standard, and in fact (at least at one point) was one of the drivers helping to push the standard, so I can see how this annoyance could be considered a bug some would like to fix. OTOH, I have to believe that if the standard actually required this behavior, you’d get a lot more people on-board.

    I would agree that if GCC is going to allow “register” in a global variable, and not do the right thing for “volatile register” then that should at least be a warning, but it appears to me that the allowance of “register” in that setting is not mandated by the specification, so there is no spec-mandated ability to correctly translate “volatile register” into “volatile” for a global variable.

  110. Postel’s law has been discredited? Really? So when a browser encounters a non-compliant Web page it just gives up?

    Obviously that’s not the case now, which is why browsers are such miserable accretions of layer upon layer of legacy cruft.

    Jessica addressed this. If browsers had just given up when encountering non-compliant Web pages it would have forced HTML authors to write better markup. It would have also spurred the development of better authoring tools early on. We have powerful machines and access to oodles of open source now; there is no reason to be afraid of special tools to handle specialized formats. It’s the 21st century and we can do better than loosey-goosey, vaguely-specified text formats and generic text-munging tools.

    Like I said, not even Linux really adheres to the old Unix philosophy anymore. There’s a reason why D-Bus — a binary, structured-data protocol — is curbstomping most other forms of IPC in the open-source ecosystem.

    MUAs just throw a fit because multiple, subtly incompatible MTAs do things a little differently?

    The answer to the depressing panoply of mail-handling software and the finicky configuration thereof that has been adopted by the corporate world has been to standardize on one MUA and one MTA — Outlook and Exchange. This has simplified many an admin’s life compared to dealing with the combinatoric explosion of MUAs, MTAs, and MDAs that prevails in Unix-land.

  111. @Morgan:
    >An example of a program that does not conform to Postel’s law is Gedit, which throws its hands up if it encounters a single text character that doesn’t seem to match the encoding it thinks the text file is in. You can’t view the file, you can’t edit the file, you can’t do anything to it. That’s a big reason why Vim and Emacs continue to hold such prominent positions in hackerdom while Gedit is often snickered at as wannabe fare, despite having a similar feature set.

    As of Gedit 3.4.1, Gedit no longer does this. Previously it *almost* did the right thing. If it couldn’t determine the encoding, it would inform the user and ask the user to select an encoding. So far, it was doing exactly the right thing. The problem is that if any characters didn’t match the selected encoding, it would display the encoding selection dialog again, instead of just displaying the file in the encoding the user asked for.

    The weird thing is that it wasn’t *just* an issue of finding characters that didn’t fit the encoding. Sometimes it would happily display a file with a ton of non-printable characters in it, and sometimes it would choke on just one.

    @Jeff:
    >We have powerful machines and access to oodles of open source now; there is no reason to be afraid of special tools to handle specialized formats. It’s the 21st century and we can do better than loosey-goosey, vaguely-specified text formats and generic text-munging tools.

    The “powerful machines” bit works against what you’re trying to argue (at least the text part, not the “vaguely specified” part): Binary formats are best when you’re operating under tight memory and processing power constraints.

    Now, I’m willing to agree with you a fair bit on the value of tightly specified formats, but that’s orthogonal to human readability. Ambiguities in binary formats can and have bitten people in the hindparts before. For example, contrast the ways that Motorola and AMD handled unimplemented address bits on the 68k and x86_64, respectively. Motorola had the 68k ignore unimplemented address bits, so programmers used them as flags, which caused trouble when later processors actually implemented those bits. AMD had the x86_64 throw an exception when trying to access an address where the unimplemented bits aren’t either all zero or all one.

  112. From what I can tell, Mac OS X doesn’t use /etc/fstab by default. Linux still does, and it offers your choice of mounting daemons to listen for external drive connects and mount them in semi-standard places.

    LOL…out of curiosity I looked at the /etc/fstab on my machine. This is the contents:

    IGNORE THIS FILE.
    This file does nothing, contains no useful data, and might go away in
    future releases. Do not depend on this file or its contents.

  113. The “powerful machines” bit works against what you’re trying to argue (at least the text part, not the “vaguely specified” part): Binary formats are best when you’re operating under tight memory and processing power constraints.

    “Vaguely specified” tends to happen in an environment where humans are expected to produce the data without tool assistance, the human propensity for mistakes being compensated for with smarter and more forgiving readers. So the Unix tradition of “text files that can be read with the Mark I eyeball and edited with generic text editors” has a lot to do with the ad-hoc-ness of many formats and protocols.

  114. Postel’s law has been discredited? Really?

    See RFC 3117. “Discredited” is probably overly strong, but it’s also not a “law” without detractors or significant limitations.

    “Counter-intuitively, Postel’s robustness principle (“be conservative in what you send, liberal in what you accept”) often leads to deployment problems. Why? When a new implementation is initially fielded, it is likely that it will encounter only a subset of existing implementations. If those implementations follow the robustness principle, then errors in the new implementation will likely go undetected. The new implementation then sees some, but not widespread deployment. This process repeats for several new implementations. Eventually, the not-quite-correct implementations run into other implementations that are less liberal than the initial set of implementations. The reader should be able to figure out what happens next.”

    As I said, when you look at system-to-system ICDs they tend toward not being very lenient, partly for these reasons. When you look at programmer-facing APIs there’s a lot more leniency, simply because things move more quickly. If your app or service falls over too easily, it’s too brittle to keep up with the leading edge of the market.

    That’s a big reason why Vim and Emacs continue to hold such prominent positions in hackerdom…

    The big reason is that many old hackers are reactionary…especially against things that make computing less arcane and their knowledge obsolescent.

  115. The “powerful machines” bit works against what you’re trying to argue (at least the text part, not the “vaguely specified” part): Binary formats are best when you’re operating under tight memory and processing power constraints.

    If you work with large amounts of data then you’re almost always operating under tight memory and processing-power constraints, to say nothing of how much time it takes to load a huge text file vs. a small binary one off even an SSD or RAID.

    I was doing something with global weather data. That’s not something you want to deal with in text and much less XML.

    As powerful as our machines get we still manage to write ever more complex and demanding software to tax them. And it’s not (just) because of bloat or abstraction but because we solve ever harder problems.

  116. That’s a big reason why Vim and Emacs continue to hold such prominent positions in hackerdom while Gedit is often snickered at as wannabe fare, despite having a similar feature set.

    And there’s a whole new set of hackers who snicker at Vim and Emacs as neckbeard fare, and swear by Sublime Text or Eclipse. Intelligent people want easy-to-use systems, too, you know.

    Although really — compared to Visual Studio, the state of editors and development tools on Linux is still at the stone-knives-and-bearskins phase. The debugger alone makes it worth the purchase price, being far superior to anything the open source world has developed, especially in terms of integration with the IDE. Put even an extremely smart Windows game developer in front of gdb and they’ll give you a stink-eye like, “what are you, a sadist?”

  117. @Jeff Read
    >Although really — compared to Visual Studio, the state of editors and development tools on Linux is still at the stone-knives-and-bearskins phase.

    It has been a while since I used Unix dev tools, but I did have the misfortune of using XCode fairly recently, and I have used Eclipse a little (on Windows). It really is like going back four versions of Visual Studio. Especially so when Visual Studio is enhanced with tools like CodeRush or Resharper, along with myriad other amazing tools (mostly beer-free).

    Of course, part of the reason why it is hard to do this in Eclipse, XCode, Emacs, etc. is that C and its children C++ and Objective-C are just so amazingly intractable that it is hard to provide good-quality tools, because on-the-fly code analysis is extremely hard to do. Consequently, for example, the types of code refactoring you can do are very limited, and automatic mocking of classes for IoC or DI is a really hard problem to solve. And of course Eclipse is written in Java. Enough said on that one.

    But it is also possible that I am super out of date. Visual Studio support for Javascript used to suck too, but it is much better now (though Javascript being perhaps the only language in common use worse than C, the tool support is still pretty limited).

  118. @nht:
    >As powerful as our machines get we still manage to write ever more complex and demanding software to tax them. And it’s not (just) because of bloat or abstraction but because we solve ever harder problems.

    Yes, but for a given difficulty level, human-readable formats become more advantageous as machines improve, not less. Jeff’s position of “since we have more powerful machines, let’s move stuff that used to be plaintext into binary formats” is nonsense, even if the general position of “binary formats have advantages A, B, and C, so let’s move stuff out of plaintext formats into binary” is sensible (I’ll let other people argue that one out). If machines N years ago could handle the use of a text format for job X, then they could have handled job X with a binary format at least N years ago (and probably N+5 or N+10 years ago).

  119. @Jon Brase
    > Yes, but for a given difficulty level, human-readable formats become more advantageous as machines improve, not less.

    The only reason that “text” formats are more human-readable is that Unix hackers have spent the past thirty years developing tools to make them readable. After all, text is just a particular kind of binary encoding.

    Why, for example, does cat display a letter ‘A’ for the binary number 0100 0001? Why does it start displaying letters on a new line after the binary number 0000 1010? It is just a special encoding that has been built into tools for years, and the assumptions are hidden from most people by a shared cultural convention. Look at tools like cut or sh, which layer on a further special encoding in which certain groups of binary numbers (collectively called white space) act as field separators.

    For the most part it is a bit of an ugly hack to force structured data into that format, though it works really well for certain domains of data.

    In an alternative history, the data streams could have been encoded differently, with some sort of encoding definition used by the tools to display the “binary” data in a human-readable form. Imagine that “more” read a description of the data stream from /etc/encoding, selected by an initial magic number. Now extend that to other tools.

    In fact this isn’t just a theory. Unicode and the alternative UTF encodings demanded that this assumed cultural convention about encodings be radically changed.

    Which is to say, you are assuming your conclusion.

  120. As I’ve been following this thread, I’ve also been thinking a bit about Postel’s Law.

    I had to look it up to make sure it was what I thought it was, and one of the first things I ran into was “Postel’s Law must always be obeyed”. The post gave the example of how an ampersand in a company name could unexpectedly break computer-generated XML, and so you had better make sure your system could handle it, or else someone else’s system will.

    I’ve had my own run-in with this, where I was attempting to parse broken XML. I tried fixing the XML so we could process the document, and found myself writing my own XMLish parser, because I kept on running into errors that contradicted each other. In the end, we had to talk to the vendor to send us well-formed XML, and request bug fixes when we ran into a broken XML file.

    So, while there may be some truth in the idea that Postel’s Law must always be obeyed, I agree with the sentiment expressed: life becomes so much simpler when you decide to “break” it, and just demand that things conform to the spec, or ignore them!

    Granted, that’s not always possible…but I would nonetheless go so far as to propose that Postel’s Law must always fail: there’s only so much “liberality” you can accept before you throw up your hands and say “Enough! Just give me something I can understand!” And you can see this even in HTML, where there’s a proliferation of script tags that do one thing for Firefox, another for Safari, another for IE 10, and another for IE 8, and so forth–because each browser is liberal in its own way, and each browser has its own bugs that need their own workarounds because they won’t be fixed any time soon…

    So I would go so far as to say that Postel’s Law is more like breaking the speed limit: go ahead, everyone’s doing it, but you’ll be pulled over every once in a while, and every so often you’re going to crash–and perhaps die–because you were going a little /too/ fast!

  121. Fluffy Girl: The only reason that “text” formats are more human-readable is that Unix hackers have spent the past thirty years developing tools to make them readable. After all, text is just a particular kind of binary encoding.

    I spent a couple of months helping to write a SIP parser that was “merely” standards-compliant. As we delved into the standard, I couldn’t help but notice that a portion of ASCII (the first 32 characters, the control codes) seemed to be specifically designed to do what we were doing: marking where headers started and ended, separating data, indicating where a transmission ended, and so forth. It’s as if, back when ASCII was designed, the creators anticipated data being sent back and forth! Well, actually, that’s what it was designed for, and it’s a pity that we’ve allowed these characters to fall out of use. We don’t have to resort to C structures to create “binary” data: we could just sprinkle readable text with a handful of control characters, and it would be both readable and easily parsable!

  122. We don’t have to resort to C structures to create “binary” data: we could just sprinkle readable text with a handful of control characters, and it would be both readable and easily parsable!

    That would be the worst of both worlds, not the best: You’ve still got to iterate over it one byte at a time, just like plain text, and now to display it to a human you’ve also got to have special support in a text editor or viewing program (at the very least, something like cat -v). Most text editors provide something, though without color highlighting you can’t tell whether you’ve got a byte 31 (unit separator) or a literal “^_” or “<1F>” or “\027” or whatever else your editor wants to show you for it, unless it’s _also_ going to escape every caret or less-than or backslash.

    If you’re going to assign a special meaning to some bytes in your ASCII format, those might as well be bytes that a human can look at and figure out when not in possession of a program that has those special meanings. And that is most likely why they fell out of use.

  123. *a literal caret-underscore or lessthan-1-F-greaterthan or backslash-0-3-7

    (The last one was mistyped with a 2 instead of a 3 in my previous post. And that’s some quality blog software, removing a backslash-zero even though it has no significance whatsoever in HTML.)

  124. Fluffy Girl: The reasons for ASCII text formats is more than just “convention.” This post on endianness is just a single example of many reasons why ASCII text formats came to be.

    We might live today in a virtual monoculture so dominated by compatibility with the Intel x86 that even the ARM architecture now has the ability to switch endianness on the fly and Apple is using Intel chips on their PCs, but this did not use to be the case. We not only had to deal with endian issues, but also consider that we used to have not just 16-bit, 32-bit and 64-bit architectures, but weird ones like the 18-bit addressing and 36-bit words of the PDP-10.

    And even text. Ha! You might take for granted that ‘A’ is 0x41, as you’ve noted above, but that’s because you’ve never lived in @Jay Maynard’s world where ‘A’ can be 0xC1 or even some other number entirely.

    Plaintext formats were built to deal with a world in which the underlying hardware was not a guarantee. Text formats survived because the same /etc/hosts file you had in 1986 (just throwing a year out there) will work just fine today, unmolested. This is shockingly forward-thinking because it means that plaintext formats will survive even this era of computing. OTOH, name one binary format that has endured in this way besides, say, TCP/IP (which is a special edge case itself).

  125. Random832: That would be the worst of both worlds, not the best: You’ve still got to iterate over it one byte at a time, just like plain text, and now to display it to a human you’ve also got to have special support in a text editor or viewing program (at the very least, something like cat -v; most text editors provide something, though without color highlighting you can’t tell whether you’ve got a byte 31 (unit separator) or a literal “^_” or “<1F>” or “\027” or whatever else your editor wants to show you for it, unless it’s _also_ going to escape every caret or less-than or backslash).

    I would personally like to create a special font for these characters that would make them distinct and indicate their meaning. It’s rather sad that we haven’t yet formalized something even as simple as this! And I find it deeply annoying when I accidentally cat a file and it messes up my command line: cat shouldn’t be producing output that amounts to executable code (although cat has certainly improved over the years in this regard).

    And it isn’t as though we completely ignore the first 32 values of the ASCII standard anyway: we make use of line-feed and carriage-return, and sometimes make use of tab. Emacs in simple-spreadsheet mode separates the text from its cell code with a ‘^L’ (a form feed/new page). Why not have every header start with an “SOH” (Start of Header), separate the header name with an “ACK” (Acknowledge), and then insist that the remainder of the header, up to the next “EM” (“End of Medium”), be the values contained in the header? If you need an “EM” inside that block, just use “ESC” to escape it, and use “ESC” “ESC” to escape that, if necessary. Any data block can be stuffed into an “STX”…”ETX” block (“Start of Text”…”End of Text”), again using “ESC” to escape any control value (so we could include even binary data in that block), and leave the entire packet of data at that.

    I have difficulty seeing how this is any more complex than what we currently have for HTTP or SIP headers. By insisting on using only printable ASCII characters, we run into all sorts of confusing corner cases, because we’re using the exact same characters for markup that we use for creating text.
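
    A minimal sketch of the escaping rule proposed above (purely illustrative: SOH, ACK, EM, and ESC are the real ASCII control values, but the framing scheme is this comment’s proposal, not any deployed protocol):

        #include <stdio.h>

        #define SOH 0x01  /* start of header             */
        #define ACK 0x06  /* header name/value separator */
        #define EM  0x19  /* end of medium (header)      */
        #define ESC 0x1B  /* escapes any control value   */

        /* Emit one byte, ESC-escaping it if it collides with framing. */
        static void put_escaped(FILE *out, int c)
        {
            if (c == SOH || c == ACK || c == EM || c == ESC)
                fputc(ESC, out);
            fputc(c, out);
        }

        /* Emit one header as: SOH name ACK value EM. */
        static void put_header(FILE *out, const char *name, const char *value)
        {
            fputc(SOH, out);
            for (; *name; name++)
                put_escaped(out, (unsigned char)*name);
            fputc(ACK, out);
            for (; *value; value++)
                put_escaped(out, (unsigned char)*value);
            fputc(EM, out);
        }

        int main(void)
        {
            put_header(stdout, "Content-Type", "text/plain");
            return 0;
        }

    The reader is symmetrical: a small state machine that treats the byte after any ESC as literal data.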

  126. @Morgan Greywolf
    > The reasons for ASCII text formats is more than just “convention.” This post on endianness …

    But as you yourself point out, this same issue occurs on hardware that doesn’t even have 8-bit bytes. In fact ASCII isn’t even an 8-bit protocol. So the point is that hardware that is not optimized for 7-bit ASCII still handles it just fine because of the shared convention. There is similarly no reason why hetero-endian systems can’t handle the same formats of predefined endianness.

    Don’t you just love that word hetero-endian?

    > And even text. Ha! You might take for granted that ‘A’ is 0x41,

    But you are making my point entirely here. Namely, that the tools have to interpret the byte representation anyway, and that is not qualitatively different from their interpreting a more complex binary format.

    We do in fact have an example of a binary format used by Unix. It is called the file system. Over time, different binary formats of file system have been introduced, and in fact you can run multiple different binary formats of file system on the same computer. Nonetheless, there is no cat-ext4 and no cat-fat16. The tools handle the different formats through various libraries and present the user the same data regardless of the underlying binary format. The /etc/hosts file was stored on the disk in different ways depending on the underlying filesystem.

    I’m not an expert in these things at all, but if I want to name my hosts with Chinese characters or some other non-ASCII script (come to that, even accented French), can I? Which is to say, does your hosts file support Unicode? If not, then you are trapped by your supposedly flexible text format. If so, then your point is incorrect, because the same tools you used to read that hosts file in 1987 don’t always work today for the hacker in Beijing or Paris.

    And there are lots of binary formats that have maintained backward compatibility, such as GIF and compressed files. Even the one you mention, TCP/IP, has changed in support of IPv6.

    Once again, text encoding is just a binary encoded format, and it is easier to use because we have the tools to look at it. Someone else said:

    > those might as well be bytes that a human can look at and figure out when not in possession of a program that has those special meanings

    Humans can’t look at bytes on a disk — we don’t have the ability to detect magnetic domains. Humans ALWAYS need a program to look at files, and those programs always apply an interpretation to the data and present it in a readable format. The tools support one particularly simple methodology. It has pros and cons, just as a different toolset more oriented to a more complex binary representation has pros and cons.

    I have seen people go through horrendous machinations to build AWK programs or funky little text filters to work around the fact that data with a slightly complicated structure doesn’t fit well in the “one record per line, columns separated by whitespace” paradigm that underlies a lot of Unix text tools. It doesn’t have to be that way. Look at Windows PowerShell.

  127. Consequently, for example, the types of code refactoring you can do are very limited, and the automatic mocking of classes for IoC or DI are really hard problems to solve. And of course Eclipse is written in Java. Enough said on that one.

    Not a lot of C or C++ code is written with the patterns you describe (IoC, DI, etc.). In fact I would go so far as to say very little C is. Which is, of course, part of the problem — the languages and tools which prevailed amongst hackerdom simply haven’t kept pace with modern development practices. Which is just fine for system-level stuff, mind you, but at the application level developers really have come to expect more.

  128. Jeff’s position of “Since we have more powerful machines, lets move stuff that used to be plaintext into binary formats” is nonsense, even if the general position of “binary formats have advantages A, B, and C, so lets move stuff out of plaintext formats into binary” is sensible (I’ll let other people argue that one out).

    That’s not really my position at all. A disadvantage of binary formats is that they are difficult to examine without specialized tools, and those tools often come from only one vendor. Point taken, especially in the case of things like Word documents. But in the open-source realm, where the specs are open and the tools used to read and interpret the binary stuff are open source, that doesn’t apply. And since we have machines with gobs of memory and hard disk space now, the advantages of using a plain-text editor for everything are a whole lot less clear than they were in, say, the 1970s.

    So my point is that we really need to reassess the benefits and drawbacks of both text and binary. In a world where RAM and storage are cheap, plain-text formats that are manageable with generic tools have less of an advantage over binary formats with specialized tools. As an example from the Linux realm, systemd-journald keeps system logs in a binary format that can be cryptographically sealed. This violates the neckbeard commandment “Thou shalt keep thy logs in plain-text format, so that they may be easily read, grepped, and analyzed by the acolytes for signs of ill portent”. But, in fact, the journald binary format is objectively superior to plain text because it provides tamper-attestation features that plain text simply cannot provide, ever. And using the journalctl tool you can dump the logs in readable format, and still read and grep them and examine the entrails as you used to. Nothing of value was lost, and there are significant security benefits.

    Entire clans of hackerdom have been built around systems which lack the assumption that the human-readability and human-authorship of plain-text data formats are inherent Good Things. Consider the Amiga. In the Amiga world, you couldn’t expect to call yourself a programmer if you were afraid to fire up a hex editor and examine some raw bits in order to track down a problem. And most programmers knew 68k assembly, and used it themselves quite often. Which means that — yes — they had to worry about endianness issues, but only sometimes. You see, sane people — and sane system architectures — used big-endian; little-endian data tended to come from the world of strange, weak computers with no custom chips and insanely limited graphics and sound capabilities. :) From that realm also came the fear-and-loathing mentality of “I know ASM but I also know enough not to use it”. If you had a decent CPU architecture with actual registers and usable addressing modes, writing ASM was very much akin to writing C.

    From the Amiga world also came the IFF file format, which forms pretty much the basis of most decent binary formats used today, including .WAV audio files and .PNG graphics.

  129. I’m not an expert in these things at all, but if I want to name my hosts with Chinese characters or some other non-ASCII script (come to that, even accented French), can I? Which is to say, does your hosts file support Unicode? If not, then you are trapped by your supposedly flexible text format. If so, then your point is incorrect, because the same tools you used to read that hosts file in 1987 don’t always work today for the hacker in Beijing or Paris.

    Hostnames only support the ASCII letters, decimal digits, and dash. There is a hacky encoding called Punycode that can be used to represent a Unicode hostname in just these characters (for example, “bücher” becomes “xn--bcher-kva”), and it is also used to represent DNS domain names (so for example you could have a Web site domain in Cyrillic or Arabic or Chinese characters), but it’s just that — a hack. Even more of a hack than UTF-8, which is exceedingly spendthrift with space except on the off chance that the UTF-8-encoded data is mostly ASCII.

    And this is why so-called “plain text” must ALWAYS, ALWAYS, ALWAYS carry encoding info in the 21st century. Because no one encoding fits all.

    In fact, if you generalize this principle you arrive at the conclusion Alan Kay reached in the late 60s that led to the genesis of modern object-oriented programming: that data can never stand alone but must always carry with it information on how to interpret it.

    In short, objects > “plain text”.

  130. If you’re going to assign a special meaning to some bytes in your ASCII format, those might as well be bytes that a human can look at and figure out when not in possession of a program that has those special meanings. And that is most likely why they fell out of use.

    Many younger Linux users these days are frustrated that their chosen OS contains an outdated notion of “terminals” — real or virtual. Yet another Unix relic from the 1970s, albeit this one still has some use: when all you have to talk to your machine is a serial link, the ubiquity of software that talks VT100 out there means that you can easily attach a “terminal” to the embedded Linux box or whatever and interact with it in a rather sophisticated fashion. But there’s still a case to be made that you shouldn’t have to pay for what you don’t use, and that ttys should be optional in the kernel. And, if we were starting from scratch, a different methodology for presenting text would be preferred; perhaps some sort of IFF-like chunk (or packet) binary encoding in which chunks of text to be displayed had one tag and chunks of control information had different tags. The in-band signalling of ASCII control characters is a messy relic of the days of actual TTYs and easily leads to things like command prompts that now display garbage and beep at you because you cat’ed the wrong file.

  131. Jeff Read on 2013-11-26 at 14:59:30 said:
    >In fact, if you generalize this principle you arrive at the conclusion … that data can never stand alone but must always carry with it information on how to interpret it.

    Well, that is different from object-oriented programming. It is more an indication that data must contain some indication of its semantics, either within the data itself — such as a magic number — or from the context — such as a convention-based file extension, or the Unix-y assumption that it is all ASCII text.

    I think that is generally good. Attaching functionality to data, which is the object-oriented way, is taking things too far. One need only compare REST to SOAP to understand why semantic tagging is superior to functional panoplies.

    It would have been interesting indeed had the founding fathers of Unix settled on something JSON-like as a universal text transmission format instead of the record-per-line, whitespace-separated-columns approach (though it would be nice had JSON included some shortcut mechanism or meta-information to save the constant repetition of the semantic tags).

  132. @Jeff Read

    Is the punycode hack a hack on hostnames, or is it a hack on DNS, to support hostnames that contain Unicode characters? What RFC are hostnames defined in?

    Are non-internet-related uses of hostnames (for example the gethostname() and sethostname() functions, or the hostname utility which calls them) bound by these limitations? Looking at the SUS spec for gethostname, and old manpages for sethostname, these seem to be defined as byte strings rather than sequences of (Unicode or otherwise) characters, but there’s nothing obvious requiring them not to set the high bit. I’m not even sure if null or dot are forbidden. The API itself can in principle represent a name with an embedded null byte, though the rationale given was to not require null termination.

    RFC 1034 indicates that internet domains are ASCII (and case-insensitive), but contains a somewhat cryptic rule “When you receive a domain name or label, you should preserve its case. The rationale for this choice is that we may someday need to add full binary domain names for new services; existing services would not be changed.” – obviously not enough people followed this rule for it to actually be possible to do so, or UTF-8 would have been used in preference to punycode.

    We are fortunate, though, that no-one dared to actually register any non-ASCII domain names in the pre-UTF-8 days; but it’s likely that this only happened for the same reason that UTF-8 itself wasn’t viable – that not everyone was willing to silently pass 8-bit domain names. The idea that you can go from (effectively) forbidding something to allowing it and not cause any problems (with systems that noisily rejected the forbidden thing) seems hopelessly naive.

  133. Well, that is different from object-oriented programming. It is more an indication that data must contain some indication of its semantics, either within the data itself — such as a magic number — or from the context — such as a convention-based file extension, or the Unix-y assumption that it is all ASCII text.

    You’re right. Magic numbers and file extensions tell you what it is, but not how to interpret it. And that’s as far as most even modern operating systems go.

    But the Amiga gave us the notion of datatypes — a powerful feature which to date no other OS except BeOS has thought to provide (Android intents come sort of close, but not all the way there). If a program uses its own file format, it can register the format with the OS as a datatype and itself as a reader and writer for that type. Any other programs on the same system which handle images, text, rich text, MIDI music, etc. can then call into that program to read or write files in its particular format, without having to know about the format itself. So an additional level of power was added by supplying information about how to interpret the data.

  134. Yet another Unix relic from the 1970s

    Try the 1920s. TTYs were already mature decades before Unix came along, and they were the bleeding edge of high-tech human-machine interfaces for half a century before that.

    And, if we were starting from scratch, a different methodology for presenting text would be preferred; perhaps some sort of IFF-like chunk (or packet) binary encoding in which chunks of text to be displayed had one tag and chunks of control information had different tags.

    In other words, an X terminal, or some sort of structured markup language.

    Before Unix, when host CPU time was far too expensive to waste on human interaction, terminals were expected to support the layered signalling you are describing. They’re mostly extinct now–the protocols they used were too rigid and inflexible, even by terminal standards of the mid-to-late 20th century, and customers fled them as soon as they could.

    There are some interesting early RFCs that describe elaborate terminal signalling structures for interactions between arbitrary hosts and terminals. Users and vendors overwhelmingly picked VT100-over-telnet instead.

    things like command prompts that now display garbage and beep at you because you cat’ed the wrong file.

    Don’t use cat(1) to present files to humans (text or otherwise). That’s not what cat is for. There are numerous tools that do a much better job of presenting arbitrary file contents to humans. It’s only a happy accident that cat works in that capacity for some people and some files some of the time.

    Legions of users are erroneously taught that cat is for reading text files, as opposed to reprogramming your terminal or sending garbage and parts of your terminal session to the nearest printer (i.e. things which cat can also do if you combine it with the wrong file and terminal). One might as well tell people to use dd to read their email–sure, it’ll work some of the time, and it can even convert EBCDIC to ASCII should that be required, but the tool is designed for a very different job.

  135. @Jeff:

    >And since we have machines with gobs of memory and hard disk space now, the advantages of using a plain-text editor for everything are a whole lot less clear than they were in, say, the 1970s.

    This is where I lose you. How does more memory and better performance make binary formats *more* advantageous? How does “Foo = TRUE” take up less space or use fewer cycles to process than setting a single bit in a binary format to 1? Human readable formats tend to have horrendously low information density.

    >So my point is that we really need to reassess the benefits and drawbacks of both text and binary. In a world where RAM and storage are cheap, plain text formats that are manageable with generic tools have less of an advantage over binary formats with specialized tools.

    And my point is that you have it backwards: whichever of text or binary has the advantage for a given job, more memory, more disk, and more cycles make things better for text, not the other way around. The advantage of generic tools is in their genericness itself, not in anything having to do with performance. (Sure, a text editor may not put a huge load on the system, but the program that processes that file will put a bigger load on the system than one processing a binary format.)

  136. @Jeff Read:

    You see, sane people — and sane system architectures — used big-endian…

    The seminal paper on this is, of course:

    http://www.ietf.org/rfc/ien/ien137.txt

    He does a reasonable job of appearing to be unbiased, but his bias actually shines through very brightly:

    To the best of my knowledge only the Big-Endians of Blefuscu have built systems with a consistent order which works across chunk-boundaries, registers, instructions and memories. I failed to find a Little-Endians’ system which is totally consistent.

    He says this after admitting that many, if not most, “big-endian” systems are actually little-endian in bit ordering, so in this statement he completely hand-waves that away (as if it weren’t a big enough giveaway that he doesn’t actually _name_ one of these “consistent” systems).

    At the end of the day, network byte order is reasonably internally consistent because the bit ordering is also MSB first, but I often work at a hardware level where the inconsistency between little-endian bit ordering and big-endian byte ordering shows that, no, big-endian byte ordering really isn’t all that.

    Yet another example of people working at higher levels not necessarily seeing or caring how the sausage is made.
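
    For what it’s worth, the byte-order half of this fight evaporates if you never reinterpret memory at all and instead assemble values a byte at a time. A minimal sketch, correct on any host regardless of its native conventions:

        #include <stdint.h>

        /* Read a 16-bit big-endian (network-order) value byte by byte. */
        static uint16_t read_be16(const unsigned char *p)
        {
            return (uint16_t)((p[0] << 8) | p[1]);
        }

        /* And its little-endian counterpart. */
        static uint16_t read_le16(const unsigned char *p)
        {
            return (uint16_t)(p[0] | (p[1] << 8));
        }

    Bit ordering within a byte, as noted above, is invisible at the C level; it only bites once you get down to the wire or the silicon.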

  137. @Jessica:
    >The only reason that “text” formats are more human-readable is that Unix hackers have spent the past thirty years developing tools to make them readable. After all, text is just a particular kind of binary encoding.

    It has nothing to do with Unix hackers, and it goes deeper than that: Any operating system designed to interact with humans *will* have an encoding (not necessarily ASCII) for some form of human writing, and *will* have tools for reading and writing files consisting of streams of characters in that encoding. And except for things like images and sound (and even for them if the user wants details on a particular pixel or audio sample, rather than just wanting to see or hear the file as a whole), pretty much any program that interprets the contents of a binary file for human users (especially if it lets them edit those contents) will use the system text encoding to display data to the user and take user input (even if it’s a GUI program and is letting the user choose a numerical value with a slider, it will most likely display the chosen numerical value next to the slider). So a bytestring in the system text encoding will always be the most basic format on the system and almost all data viewed by a user, whatever format it’s actually stored in, will at some point, whether for input or display, take the form of a bytestring in the system text encoding.

  138. >as opposed to reprogramming your terminal or sending garbage and parts of your terminal session to the nearest printer (i.e. things which cat can also do if you combine it with the wrong file and terminal).

    Or, you know, combining two files together, which is what it actually does and is named for. Any of the things you named could be done with cp, which is also just as good as cat (without options) for reading text files and, for that matter, for half of what people use dd for.

  139. @Jon Brase
    > It has nothing to do with Unix hackers, and it goes deeper than that

    Your argument — basically, that humans read text, therefore a text encoding is natural — is fine as far as it goes. However, it is far from obvious that the encoding would look like ASCII. For example, Samuel Morse used an entirely different encoding, and computer hackers could have used that, and saved a lot of disk space in the process. And of course the encoding proved inadequate as soon as UNIX left New Jersey.

    But the argument isn’t really about ASCII; it is about how structured data is encoded. UNIX has long had a philosophy of “one record per line, with whitespace-separated columns”, the argument being that this is more “readable”; but it is only readable insofar as there are tools to make it readable, and it is only processable insofar as there are tools that work with that format. There is no reason why alternative encodings could not have been used, including variable encodings.

    Once again, this is not a pipe dream (excuse the pun); an example of this can be found in the Windows Powershell or the command line interface of most SQL databases, or many other interactive shells.

    1. >UNIX has long had a philosophy of “one record per line, with ws separated columns”

      OK, I have to step in at this point, put on my guy-who-wrote-“The-Art-of-Unix-Programming” pope hat, and speak ex cathedra.

      What you are pointing at is not the Unix philosophy about file formats. It is, rather, a common application of that philosophy. The actual philosophy is more like this:

      1. File formats should be textual, so they can be read by the Mark One Eyeball and edited without specialized and fragile tools. (This choice also avoids endianness problems and other similar issues.)

      2. File formats should be unambiguous and self-describing. That is, it should be possible for a human being looking at a representative example of the format to see how to write a parser for at least common cases without having to read documentation.

      3. File formats should be lightweight and clean, not overwhelming the actual payload with markup.

      4. File formats should be conservative, re-using tropes that the reader will be familiar with from other formats with similar roles. Example: in Unix DSV files with field values that can contain whitespace, it is traditional to use ‘:’ as a field separator (a parsing sketch follows this list).
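
      To make points 1 and 4 concrete, here is a minimal sketch of walking one /etc/passwd-style colon-separated record; note that strtok(3) would be the wrong tool, since it merges adjacent separators and so loses empty fields:

          #include <stdio.h>

          int main(void)
          {
              const char *line = "daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin";
              const char *start = line;
              int field = 0;

              /* Split on ':' in one pass, preserving empty fields. */
              for (const char *p = line; ; p++) {
                  if (*p == ':' || *p == '\0') {
                      printf("field %d: %.*s\n", field++,
                             (int)(p - start), start);
                      start = p + 1;
                      if (*p == '\0')
                          break;
                  }
              }
              return 0;
          }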

      Yes, it’s common to use LF as a record separator and whitespace as a field separator – but you have to understand this practice as a consequence, not a cause. When, for example, the shape of the data demands more than two levels of structure, it’s not the Unix way to try to cram it into that simple a frame.

      JSON partakes of the way of Unix. Many (though not all) XML applications do not. When you comprehend the difference, Grasshopper, you will be ready to leave the monastery.

  140. File formats should be textual, so they can be read by the Mark One Eyeball and edited without specialized and fragile tools.

    This is really the heart of the argument, but at least to the extent of text encodings, it’s begging the question. The Mark One Eyeball most certainly can’t read /etc/fstab off of the disk: it requires a whole stack of underlying technologies that range from magnetic or capacitance sensing to ECC to LBA to […] to, at the minimum, a hex editor, and nearly all humans still require a text editor on top of that.

    The debate is over what level should be considered the system’s lowest common denominator for “standard general-purpose tool”, and a particular text encoding and format doesn’t get a privileged place per se. While the Windows Registry is a disaster in actual implementation, some sort of tabular or key-value store could make at least as much sense as the standardized storage format. Logs are a notorious example of this: a flat text file requires substantial wizardry to extract the results of complex queries, whereas services such as Loggly (which essentially stores log messages as Mongo/JSON objects) can support powerful, easy-to-understand queries without sacrificing flexibility and universality.

  141. This is really the heart of the argument, but to at least the extent of text encodings it’s begging the question. The Mark One Eyeball most certainly can’t read /etc/fstab off of the disk: It requires a whole stack of underlying technologies that range from magnetic or capacitance sensing to ECC to LBA to […] to, at the minimum, a hex editor, and nearly all humans still require a text editor on top of that.

    The filesystem layer is a totally separate thing: the same contents can be stored on different filesystems (ext3, btrfs, NFS, …), or even received from the network. Worrying about “reading /etc/fstab off disk” is a layering violation ;-)

    About requiring text editor: there are tons of text editors and text tools (like pagers, e.g. more, less), and it is very easy to write one. Not so with tools to display binary data (especially one-off binary data format). And that is the crux of the matter…

    1. >Not so with tools to display binary data (especially one-off binary data format). And that is the crux of the matter…

      Indeed it is. Any argument against textual formats that pretends there isn’t a large jump in cost and complexity when you go to binary is at best laugh-out-loud stupid. Sometimes that cost may be justified by a performance requirement (as in, say, graphics and video formats) but pretending it’s not there is dishonest.

  142. Worrying about “reading /etc/fstab off disk” is a layering violation ;-)

    Only if you start from the premise that the interpretation of the bytes in a file holds a specially privileged position in system design. Jessica argues, and I am willing to entertain, that the “bag of bytes” abstraction level is at least past the point of its general usefulness, if it ever really was the right solution.

    About requiring text editor: there are tons of text editors and text tools (like pagers, e.g. more, less), and it is very easy to write one. Not so with tools to display binary data (especially one-off binary data format). And that is the crux of the matter…

    Historical accident, and again begging the question. If the standard abstraction is something more structured than “bag of bytes” and is supported as a core service of the system in question, there’s no reason that the text editor should be easier to use than an equivalent tool. Two examples that pop immediately to mind are the Mongo document store (where documents are presented as JSON and can be manipulated as if they were JSON objects, but are persisted and indexed in an optimized binary format) and the Plan 9 resource structure (and Reiser4), where many of the traditional Unix bags of bytes were broken out into a pseudo-directory structure much like the /proc filesystem.

  143. @esr
    >Indeed it is. Any argument against textual formats that pretends there isn’t a large jump in cost and complexity when you go to binary is at best laugh-out-loud stupid.

    Your hilarity notwithstanding, you are only looking at one side of the coin. Attempts to force complex data into text formats are often tortuous. Backslash escaping, or backslash-escaping a backslash escape of a backslash escape, * in a single quote or a double quote, regular expressions, etc. etc. etc. These are all examples of shoehorning non-textual data into a text format. My memory of shell programming is hacking together tools to make ersatz parsers out of cut and awk and sed. This is non-trivial, and it all derives from encoding data into a text stream.

    Which isn’t to say text isn’t useful and maybe even superior. But it is a mistake to think it doesn’t have a lot of challenges.

  144. @(oops, almost typed a non-pseudonym):

    Your hilarity notwithstanding, you are only looking at one side of the coin. Attempts to force complex data into text formats is often tortuous.

    There is no question that when the same program is the producer and consumer, on homogeneous systems, the code to read and write binary formats is simpler.

    But even in the homogeneous case, text formats can be easier to debug, because it is extremely easy for humans to read the output and generate input testcases. I have at my disposal literally millions of dollars of test equipment, yet I would say my main two debugging tools are a text editor and an oscilloscope.

    And in the heterogeneous case, you can do text. Or you can do something like CORBA…

    Jeff Read’s discussion of D-Bus winning the IPC wars is not really all that relevant; mouse clicks, system update status, and keystrokes are orthogonal to the Unix concept of stringing programs together via stdin/stdout. Most of the IPC mechanisms D-Bus is used for were already being done with binary data, and it is a homogeneous system in that all communicating clients are running on the same host and using the same library.

  145. @Christopher Smith:

    I know others use it, but “bag of bytes” is a silly term. If it really were a bag, then we could determine nothing about its contents, save perhaps using some statistical analysis software that might conclude, for example, that there is a high probability that the contents are written in English and encoded in ASCII, because around 12.7% of the bytes in the bag are either 0x45 or 0x65.

  146. @Jakub Narebski
    > Not so with tools to display binary data (especially one-off binary data format). And that is the crux of the matter…

    But that is entirely begging the question. The reason is that there is no cultural infrastructure for doing so, in the way that there is for text. There are a lot of different control codes to move cursors about on a terminal screen. Each format is complex and turgid. Yet a culture for handling this exists, and so anyone can write a program that uses a curses-based interface pretty easily.

    Imagine if you will a world in which there was a directory called /etc/schemas that contained a schema, per magic number or file extension, to interpret and type data. Now imagine a simple library to apply that schema to miscellaneous binary data. You now have pretty much the same facility to manage binary data as you have flat text.

    As I mentioned already we need not imagine what this would be like. Filesystems are big binary blobs that have a structure imposed on them. No-one would seriously think of treating a disk drive as a text interface. It is binary, and a program interprets the meaning of that binary. Even though there are dozens of types of file system, they all present with an abstracted interface to allow the easy creation of programs that manipulate it, including all the aforementioned text tools.

    MySQL does not store its data as a text file. It wouldn’t make sense to do so. It stores it as a big binary blob, with an abstraction layer on top, from which we can easily build tools.

    Using the /etc/schemas paradigm would probably be a little harder than text for some jobs, but given that it would be easier to manipulate the data in a typed way without the complexity of awk type stuff, it may be a wash.

    However, that /etc/schemas culture doesn’t exist in the Unix world, and consequently we are embroiled in a text oriented paradigm with all its pros and cons.
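
    Purely as a thought experiment, one entry in that imagined /etc/schemas registry might reduce to a field descriptor like this (entirely hypothetical; no such facility exists in any Unix):

        /* Hypothetical: how an /etc/schemas library might describe one
           field of a binary record it has been asked to display. */
        struct schema_field {
            const char *name;    /* label to show the user        */
            unsigned    offset;  /* byte offset within the record */
            unsigned    width;   /* field width in bytes          */
            enum {
                FT_U8, FT_U16_BE, FT_U16_LE,
                FT_U32_BE, FT_U32_LE, FT_STRZ
            } type;              /* how to decode those bytes     */
        };

    A generic dumper or editor would then walk an array of these, much as the text tools walk lines and fields.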

  147. @Fluffy_Girl
    > @Jakub Narebski
    >> Not so with tools to display binary data (especially one-off binary data format). And that is the crux of the matter…
    >
    > But that is entirely begging the question. The reason why is because there is no cultural infrastructure for doing so, in the way that there is for text.

    No, it isn’t. The infrastructure for plain text is here because *documents* at their heart are plain text (see the success of plain text + lightweight markup: HTML, Markdown et al., JSON). Therefore there were tools to view and display text, so, to ease manipulation and debugging, programs used text for configuration, logging, and output.

    Binary formats are usually one-off affairs, with one program to manipulate and display. Text is universal (though details of formatting may differ), eyeball debuggable, and extensible.

    > As I mentioned already we need not imagine what this would be like. Filesystems are big binary blobs that have a structure imposed on them. No-one would seriously think of treating a disk drive as a text interface. It is binary, and a program interprets the meaning of that binary. Even though there are dozens of types of file system, they all present with an abstracted interface to allow the easy creation of programs that manipulate it, including all the aforementioned text tools.

    Filesystem is a separate layer (and a layer where performance matters, so binary formats are a must), just as there is a separate block-device layer underneath it.

    > Imagine if you will a world in which there was a directory called /etc/schemas that contained a per magic number of file extension schema to interpret and type data. Now imagine a simple library to apply that schema to miscellaneous binary data. You now have pretty much the same facility to manage binary data as you have flat text.

    But you have to have a schema to interpret it (see 6000+ pages of OOXML _incomplete_ documentation), while you can grok text without any “schema”. So your “imagine” is pie in the sky wishful thinking…

  148. @Jakub Narebski
    > No, it isn’t. The infrastructure for plain text is here because *documents* at their heart are plain text

    Have you ever written an nroff macro? How many backslashes do you need? Once again, plain text is just a particular simple binary encoding.

    > Binary formats are usually one-off affairs, with one program to manipulate and display. Text is universal

    That isn’t true at all. There are lots of general binary formats. For example, it is common for configuration files in UNIX to be essentially key-value pairs. That is a very common format that can be given a binary representation with indexing to make it much quicker and more efficient, and generalized libraries (generalized to /etc/schemas) can take advantage of that.

    With the right infrastructure, binary formats are eyeball debuggable and extensible too.

    > Filesystem is a separate layer

    You say that as if it is intrinsically a separate layer. It was chosen so because binary is necessary there, and the common user space of UNIX doesn’t have a good culture to support binary formats. However, there are several operating systems with user-space filesystems, so your point is moot.

    > But you have to have a schema to interpret it (see 6000+ pages of OOXML _incomplete_ documentation)

    Have you seen the specification for Unicode? Very long and by definition incomplete. Just because some edge cases are difficult doesn’t mean the general principle can’t apply. For example, early UNIX could have provided XPath- and XSLT-type functionality against a binary XML format and kept largely the same simplicity while eliminating some of the tortuous shell scripts I mentioned before.

    > while you can grok text without any “schema”.

    Again, no you can’t. The text schema is just inculcated into the culture rather than explicitly stated.
    This schema says, for example, that these bits 01001010011001010111001101110011011010010110001101100001 should be displayed with the glyphs “Jessica”. BTW, when applying this schema to these bits, you group the bits into octets and then have to choose which bit is the MSB and which the LSB, much as you have to choose the order of significant bytes in a four-byte encoded integer.

    > So your “imagine” is pie in the sky wishful thinking…

    Actually, not. There are several shells in other operating systems that do things pretty similar to what I described.

  149. @Fluffy:

    Filesystem is a separate layer

    You say that as if it is intrinsically a separate layer.

    Well, if it’s not, and if we do this:

    Imagine if you will a world in which there was a directory called /etc/schemas that contained a schema, per magic number or file extension, to interpret and type data. Now imagine a simple library to apply that schema to miscellaneous binary data. You now have pretty much the same facility to manage binary data as you have flat text.

    then at a minimum, we have an interesting bootstrap problem.

    There are lots of general binary formats. For example, it is common for configuration files in UNIX to be essentially key-value pairs. That is a very common format that can be given a binary representation with indexing to make it much quicker and more efficient, and generalized libraries (generalized to /etc/schemas) can take advantage of that.

    If you need high efficiency, by all means use a binary format. Most configuration files don’t, and many programmers prefer the ability to use tools (including the Mark I eyeball) that are easy and near-universal. You seem to be trying to sell us on the concept that we could use a generalized library to write a script to manipulate a file, and that then **somebody** will create a tool built on the library to manipulate the file so I can directly interact with the file without writing a script.

    This is an interesting recipe for getting even the most radicalized vi and emacs users to set aside their differences and drive out the blasphemous interlopers. But seriously, if people can’t agree how to edit text, how are you going to get them to agree on the right interface for your specialized binary key editor program? What if your schema allows a way to do lists, but the program I use that reads the schemas and allows me to edit files blows up when presented with this schema it hasn’t seen before?

    Have you ever written an nroff macro? How many backslashes do you need?

    The problem with nroff macros is more a case of the general problem of building things incrementally, as opposed to having a good programmer and a good knowledge-domain expert sitting down and doing a clean green-field design. It is entirely possible to have text files that require no escaping for most cases, and allow extremely easy escaping for the other cases. I have done this with my RSON project (and any well-formed JSON is also valid RSON). RSON also lets you do Windows-registry-style keys if that’s what you want.

    On a more positive note, even with nroff, you can, in fact, experiment with the number of backslashes you need using nothing more than a simple text editor.

    Actually, not. There are several shells in other operating systems that do things pretty similar to what I described.

    Tools that let you manipulate arbitrary data are awesome, because they can help to mitigate the suckage caused by programs that store small amounts of data, such as configuration records, in formats that require specialized tools to manipulate them.

    OTOH, the tools also suck, to the extent they encourage the storing of configuration data in binary formats.

  150. > Have you ever written an nroff macro? How many backslashes do you need?

    Who writes nroff nowadays, instead of using, e.g., the asciidoc lightweight (and human-readable) markup language and converting (“printing”) it to nroff?

  151. >OTOH, the tools also suck, to the extent they encourage the storing of configuration data in binary formats.

    Text-format configuration files are awesome because they can be edited with Your Favorite Text Editor.

    They also suck, because they encourage the programmers who create the files to just leave them to be edited by Your Favorite Text Editor, rather than writing specialized tools that allow them to be manipulated with The Luxury of Ignorance.

  152. @Deep Lurker:

    Believe it or not, the luxury of ignorance is often better supported via a text configuration file than via a confusing forest of menu choices and clickboxes.

    The Windows admin where I work is a lot busier than the Linux one, and most of her job seems to be going around and clicking at things for people.

  153. @Jessica/Fluffy Girl: The use of cut, awk and sed in shell scripts is a quick-and-dirty way of getting data. Is it the best way? No. But it means nothing about the utility of text formats; if you want to parse a config file, sure you could use awk and sed to get what you want, but if you are doing something non-trivial, it’s best to use a full-blown parser. Perl and Python are chock-full of parsers for common Unix file formats.

    Consider a script that totals up the resident memory size for a particular process. Sure, you could do something like ps -ely | grep foo | awk '{ print $8 }' | (x=0 && while read r; do ((x=x+r)); done && echo $x) on Linux using bash to print the total memory usage of ‘foo’ processes, for example. Of course, this won’t be portable. BSD’s ps command doesn’t support this syntax or report format. SysV-derived OSes like Solaris will support the syntax, but the report layout is subtly different. But if the script doesn’t need to be portable outside of GNU ps and bash or ksh93, then you’re done. It’s good enough. Of course, you could always just write some C or Python or Perl code that makes appropriate Unix API calls to peek at the process table and then add up the total that way.
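
    A rough, Linux-only sketch of that last route (error handling kept minimal; it matches any process whose Name: contains “foo”, exactly as loosely as the grep above does):

        #include <ctype.h>
        #include <dirent.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            DIR *proc = opendir("/proc");
            struct dirent *d;
            long total_kb = 0;

            if (!proc)
                return 1;
            while ((d = readdir(proc)) != NULL) {
                char path[64], line[256];
                FILE *f;
                int match = 0;
                long rss_kb;

                if (!isdigit((unsigned char)d->d_name[0]))
                    continue;              /* not a PID directory */
                snprintf(path, sizeof path, "/proc/%s/status", d->d_name);
                if (!(f = fopen(path, "r")))
                    continue;              /* process already gone */
                while (fgets(line, sizeof line, f)) {
                    if (!strncmp(line, "Name:", 5) && strstr(line, "foo"))
                        match = 1;         /* Name: precedes VmRSS: */
                    if (match && sscanf(line, "VmRSS: %ld kB", &rss_kb) == 1)
                        total_kb += rss_kb;
                }
                fclose(f);
            }
            closedir(proc);
            printf("%ld kB\n", total_kb);
            return 0;
        }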

    The reason TMTOWTDI exists is that there are different tools for different jobs. It’s up to you to pick the best way. On Windows, your only choices to accomplish this same task are PowerShell and Windows Scripting Host, which are nearly identical. They use objects. Or you could pay someone for a specialized tool….

  154. @Patrick:
    >Believe it or not, the luxury of ignorance is often better supported via a text configuration file than via a confusing forest of menu choices and clickboxes.

    Indeed. I can support this from personal experience playing around on Windows boxes as a kid. An exception to this is image data, where I’d rather have a bmp and Paint or GIMP than a PPM and a text editor.

  155. @Jon Brase:

    An exception to this is image data, where I’d rather have a bmp and Paint or GIMP than a PPM and a text editor.

    Absolutely. OTOH, some non-insignificant portion of all publicly available images seem to be screenshots showing how to do things with a GUI…

  156. @Morgan Greywolf
    > Is it the best way? No.

    Hold on there, Morgan. The whole argument here is that the primacy of text derives from the utility of the tools. If the tools are not the best way, then the argument is without foundation. My argument is that text is only primary because only text tools have been created, and a different ecosystem is possible. That ecosystem has a different set of pros and cons than text, but I see no evidence justifying Eric’s derisive laughter at the thought of that different world.

    > Consider a script that totals up the resident memory size .. this won’t be portable.

    Not portable because the data is untyped and untagged with a semantic. Typing the data allows for better “little utilities” that can be readily composed.

    > On Windows, your only choices to accomplish this same task are PowerShell and Windows Scripting Host,

    Why? Because there are no compilers or interpreted languages available on Windows? Python works just fine on Windows, and Visual C# is free as in beer. Anyone who is capable of writing code in those languages has the tools on their computer. So I’m not exactly sure what your point here is. There are also many ways to get the job done in Windows. However, there is also an extra way — a scripting language with properly typed objects that is deeply embedded into the operating system and most common applications. That seems to me to be another, superior way to get the job done.

  157. Believe it or not, the luxury of ignorance is often better supported via a text configuration file than via a confusing forest of menu choices and click boxes.

    Mostly only when you’re doing it wrong. Like Eclipse does. Visual Studio, at least back when I used to use it, made a hell of a lot more sense than either Eclipse or emacs.


  158. And in the heterogeneous case, you can do text. Or you can do something like CORBA…

    While I was never a huge fan of CORBA, it is simply amusing that it was abandoned, right around the time it finally stopped sucking so badly, in favor of SOAP. And now protocol buffers.

    I like the self-describing Fudge binary messaging format more than protobufs, but it appears to be somewhat dead now. I guess that’s why I use protocol buffers instead of Fudge: because many others do too.

  159. The use of cut, awk and sed in shell scripts is a quick-and-dirty way of getting data. Is it the best way? No. But it means nothing about the utility of text formats; if you want to parse a config file, sure you could use awk and sed to get what you want, but if you are doing something non-trivial, it’s best to use a full-blown parser. Perl and Python are chock-full of parsers for common Unix file formats.

    Windows PowerShell uses objects where Unix shells use text streams. With PowerShell there is no parsing step; you simply extract the fields you’re interested in. You can also filter by arbitrary criteria.

    Things which require contortions using awk, grep, or perl in a Unix shell become straightforward one-liners in PowerShell. (For instance, summing a column of process data is just a Get-Process pipeline into Measure-Object -Sum.)

    While I was never a huge fan of CORBA, it is simply amusing that it was abandoned, right around the time it finally stopped sucking so badly, in favor of SOAP. And now protocol buffers.

    Don’t forget D-Bus. Perhaps CORBA’s biggest use case was inside GNOME’s Bonobo object model. Why, I don’t know. Open source is always pulling boners like trying to come up with a COM replacement while not grokking what made COM so important to the Windows ecosystem.

    Snigger all you want about COM, it provided a unified framework for access to in-process, out-of-process, or remote objects, was fast and efficient, and was much more developer-accessible than CORBA. Much like Internet Explorer in the 90s, the fact that the Microsoft solution, janky as it was, was significantly better than any of the alternatives, goes overlooked by fosstards. Although, in the object-model case you can argue that NeXTSTEP was and is superior to even COM.

  161. I don’t snigger about COM but it was moderately annoying to use. Frankly, until C# and .NET the entire windows stack was moderately to very annoying to use. I preferred Tooltalk and/or Centerline IPC and Unix over COM and Win32/MFC.

    CORBA sucked big time…especially when you had to do anything cross-ORB, or even just support multiple ORBs. Today (well, quite a few years ago now), it finally more or less worked as advertised way back when. Kinda like Java. Finally it more or less works as advertised way back when…about a decade late (it more or less met its promised capabilities somewhere around Java 1.4-1.5, IMHO).

    We have a tendency to move to a new, and currently sucking, replacement technology pretty much right when the old technology is finally stable enough to not suck as hard anymore and meet some of the promise the early adherents were crowing about…who are invariably crowing about how great the new immature replacement is.

    Either that or I’m just getting old. Or perhaps both.

  162. Windows PowerShell uses objects where Unix shells use text streams. With PowerShell there is no parsing step; you simply extract the fields you’re interested in. You can also filter by arbitrary criteria.

    The little I’ve used powershell I like it…much more than traditional unix scripting. But it’s still quite a bit awkward for me since I’m used to unix. Then again traditional shell scripting is also awkward these days since I’ve forgotten most of it.

    I don’t even do much applescript.

    It’s something of a shame that pash was abandoned but it’s a huge task if you need to replicate COM, OLE, WMI, etc. Last checkin appears to be 2008 but someone did fork on github. We’ll see how that turns out with a new maintainer.

  163. @nht:

    We have a tendency to move to a new, and currently sucking, replacement technology pretty much right when the old technology is finally stable enough to not suck as hard anymore and meet some of the promise the early adherents were crowing about…who are invariably crowing about how great the new immature replacement is.

    Counterexample: C. It was losing market and mindshare quickly until the ANSI effort went a long way towards its desuckification.

  164. We have a tendency to move to a new, and currently sucking, replacement technology pretty much right when the old technology is finally stable enough to not suck as hard anymore and meet some of the promise the early adherents were crowing about…who are invariably crowing about how great the new immature replacement is.

    That is because often during the desuckification process, severe architectural flaws are revealed which were masked by the general suckiness of the product. Then comes the realization that in order to truly desuckify it, it needs to be rewritten from the ground up making vastly different architectural choices. Add to that the fact that much of open source is written and managed by whiny twenty-somethings, and you can expect to see much of your full stack get ripped out and replaced once every n years, for n smaller than 10.

    As a readily accessible example, we’re seeing this happen with X11 and its replacement, Wayland.

  165. That is because often during the desuckification process, severe architectural flaws are revealed which were masked by the general suckiness of the product.

    Maybe. Why the hell we need to reinvent RPC for every new technology escapes me.

    I think that folks prefer the next BSO in the hopes that it really is a silver bullet because, you know, it’s shiny.

  166. Re gcc -whacky

    Yeah I have longed for that too.
    Actually I have longed for a /term/ describing this behaviour – when implementing standards, deliberately not going the default route, maybe even randomizing behaviour, to weed out lazy reliers-on and standards creep (extension by the worked-for-me principle).
    Still have no good term for that.

    gcc -whacky should be easy to implement. Since there is no significant C code without undefined behaviour, you need only parse the -o target and replace it with

    #!/bin/sh
    echo "Undefined behaviour"

  167. It is sad.
    . The statement does indeed generate code; it does not resolve to a constant at compile time.
    . Hence, it is useless for conditional compilation, which is the only purpose of all this stuff.

    I wonder: isn’t it evident?
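
    To be fair, the macro can still drive an ordinary if, which an optimizing compiler will usually fold to a constant; it is only #if that is out of reach, because the preprocessor cannot evaluate a pointer dereference. A minimal sketch of the same probe written as a function (using memcpy to sidestep the alignment and aliasing caveats of casting a string literal):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Runtime endianness probe; compilers commonly reduce this
           to a constant after inlining. */
        static int is_big_endian(void)
        {
            const uint16_t probe = 1;
            unsigned char low_byte;

            memcpy(&low_byte, &probe, 1);  /* lowest-addressed byte     */
            return low_byte == 0;          /* big-endian puts MSB first */
        }

        int main(void)
        {
            puts(is_big_endian() ? "big-endian" : "little-endian");
            return 0;
        }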

  168. Many interesting topics.
    This started as a discussion of endianness and then branched into other topics.
    Please open new threads for new topics; this page has become unreadable.

    I originally wanted to say to Eric Raymond that I know low-level programmers use more cryptic and tricky code, because they often think about how to generate optimal assembly.
    But the beauty of the C language is that it is designed for low-level programs like hardware controllers, communication protocols, etc., while looking like a higher-level language. Maybe I am wrong, because I rarely go down to the low level, but this is what I have learned in almost 40 years: programmers should always write clear code. Avoid or hide the tricks using the abstraction mechanisms of the language. Merely sprinkling sparse comments on the code is not sufficient; many programmers do not even clearly state the inputs and outputs of each procedure. What is better? Write literate programs.
    Such programs explain more about the knowledge domain, which is important if we want to explain why we implemented things the way we did, so that we, or other programmers, can keep maintaining them later. Donald Knuth proposed that marvelous way of writing programs. Take a look at it, and please do not write cryptic code; if you cannot avoid it, please explain why the tricky code generates better assembly, if such is the case.
    The trick proposed by Eric Raymond did not work for me in 2017, four years after 2013, so it was not as portable as he supposed at the time.
    Like Samuel Tardeau, I also obtained the message:

    error: operator ‘*’ has no left operand

    I only wanted to upgrade a Free Software program, but I do not understand the code; it does not even have comments. That way I cannot help: I am not an expert in the domain, and I do not know the standards used. Although I tried to work out the program’s modular structure, I quit, because I cannot dedicate hundreds of hours to learning from undocumented, cryptic code. It is like disassembling a binary file to recover its specification. I know there are very skilled reverse-engineering experts, but I am not one of them.
