47 comments

    1. >Note that C11 & C++11 provide tools for determining alignment of types.

      I’m vaguely aware these exist. Reference?

  1. @esr: “It is more likely that c will be mapped to the first byte of a machine word. In that case M will be whatever padding is needed to ensure that p has pointer alignment – 3 on a 32-bit machine, 7 on a 54-bit machine.”

    I think you mean 64 bit machine above…

  2. A few comments:

    Section 2:
    Optimizing packing in complex data structures can frequently result in organization which is less human-comprehensible, too. Given the cost of people vs. the cost of computer equipment, it’s almost always better to optimize for human readability.

    Section 3:
    #pragma pack can also be used to map a fixed-format data structure such as over-the-wire packet formats or on-disk structures. This is less safe than writing your own marshal/demarshal routines, but for known cases the tradeoffs may be worthwhile.

    Section 5:
    The stride address is that of p[2]. <– Should read p[1]
    Also, it should be noted that bitfields cannot be used if you need to take the address of the member.

    Section 7:
    You can strip integers down a lot, too. If you know you are limited to a 100-character length, the value fits in 7 bits, so you can steal the top bit of a uint8_t length field to use as a flag.
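
    A minimal sketch of that trick, with illustrative names not taken from any particular codebase: a length known to be at most 100 fits in 7 bits, which frees the top bit of a uint8_t for a boolean flag.

    #include <stdint.h>

    #define NAME_FLAG 0x80u   /* top bit: spare boolean attribute */
    #define NAME_LEN  0x7Fu   /* low 7 bits: length, 0..127 */

    static inline uint8_t pack_len(uint8_t len, int flag) {
        return (uint8_t)((len & NAME_LEN) | (flag ? NAME_FLAG : 0));
    }

    static inline uint8_t unpack_len(uint8_t v)  { return v & NAME_LEN; }
    static inline int     unpack_flag(uint8_t v) { return (v & NAME_FLAG) != 0; }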

  3. Using parallel arrays as opposed to an array-of-structs is another technique which can be used to improve memory use; for some reason it doesn’t seem to be mentioned in the article. Be careful, though, as it can actually make cache performance a lot worse when you have sequential accesses that touch multiple fields of the struct.

    If you _are_ dealing with this kind of access and your struct padding is bad enough, one interesting alternative is to manually unroll adjacent array elements into “fat” structs with lower wasted padding. Of course, accesses become quite a bit more complicated, but better cache usage could still make it a win.
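
    For illustration, a minimal sketch of the trade, assuming a typical LP64 target with 8-byte pointers: the array-of-structs layout pays 7 bytes of padding per element, while parallel arrays carry no per-element padding at all.

    /* Array-of-structs: every element carries the padding. */
    struct aos_item {
        char  flag;       /* 1 byte, followed by 7 bytes of padding */
        void *payload;    /* 8 bytes */
    };                    /* sizeof is typically 16 */

    /* The same data as parallel arrays: one densely packed array per field. */
    struct soa_items {
        char  *flags;     /* N bytes total */
        void **payloads;  /* N * 8 bytes total */
    };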

  4. I’m not an advanced C programmer, or a master of C, and I could not have written this document myself. But I can criticize it intelligently. Or at least find an error.

    struct foo6 {
        char c;             /* 1 byte  */
        char pad1[7];       /* 7 bytes */
        struct foo6_inner {
            char *p;        /* 8 bytes */
            short x;        /* 2 bytes */
            char pad2[6];   /* 6 bytes */
        } inner;
    };

    This structure gives us a hint of the savings that might be possible from repacking structures. Of 24 bytes, 11 of them are padding. That’s nearly 50% waste space!

    Actually, 7 + 6 = 13 of the 24 bytes are padding – over 50% waste space. You might want to fix that.

    Incidentally, though I am not an advanced C programmer, I would consider the techniques described here fairly obvious, once one thinks about the issue. Also, perhaps there’s a market opportunity here: a tool which sucks in C source, and produces the same source with the padding made explicit. One wouldn’t want to automate structure declaration rewriting to minimize padding; the structure’s element order might have declarative meaning.

    However, it seems possible that an optimizing compiler could do it. How hard would it be for a compiler to reorder the components of a structure in descending size in the internal representation used when it is actually created? If there’s no union involved, no one cares what the physical order in memory is. Unless one is looking at a physical memory dump, I guess.

    Just to show how non-advanced a C programmer I am, I had to look up whether the value returned by sizeof(struct foo) includes the padding. Sometimes yes, sometimes no, it seems.

    The larger problem is of course a variant of “Hardware boys think it up, software boys piss it away.”

  5. I’m not sure this is a valid comment … but in the last two sentences in “5. Structure alignment and padding” you state that 11 bytes of padding amount to almost 50% of the 24 — however, my basic math skills sum 6 plus 7 bytes of padding in ‘foo6’ to a total of 13 bytes (which would equate to over 50%).

  6. @esr: regarding the references for alignment support in C11, see:
    http://www.drdobbs.com/cpp/cs-new-ease-of-use-and-how-the-language/240001401

    There’s also some compatibility between C11 and C++11 here:
    http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3093.html

    Hence, perhaps the following (C++11, but see above) examples may also be of use:
    http://en.cppreference.com/w/cpp/language/alignas
    http://en.cppreference.com/w/cpp/language/alignof
    http://en.cppreference.com/w/cpp/types/alignment_of
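
    As a small illustration of those C11 facilities (the C++11 forms are analogous):

    #include <stdalign.h>
    #include <stdio.h>

    struct sample { char c; double d; };

    int main(void) {
        /* Query the alignment requirement of a type. */
        printf("alignof(double)        = %zu\n", alignof(double));
        printf("alignof(struct sample) = %zu\n", alignof(struct sample));

        /* Request stronger alignment for a particular object. */
        alignas(16) char buffer[64];
        printf("buffer at %p\n", (void *)buffer);
        return 0;
    }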

  7. Good article. Reminds me of my IBM 370 assembler programming days. Except that it was a lot easier to figure this stuff out for yourself when the assembler listing threw it in your face.

  8. Thanks Eric on this one, it was fascinating even for a “not being beyond helloworld.c” person like me :-).
    As a curious person, I have a simple question: padding leads to wasted memory, doesn’t it? So if most programmers aren’t aware of this topic, does that mean we could be running wasteful software? Am I wrong?
    Now I’m wondering how this padding business works in languages like Python, Perl, …

    Cheers,
    Alessandro

    @Justin Andrusk: http://www.catb.org/esr/structure-packing/packtest.c ;-)

  9. Yes, you’re probably running software that could “benefit” from better packing like this, but like the doc said, it’s not going to matter noticeably except in certain extreme circumstances.

    As for higher-level languages like Python and Perl, well… if you’re using one of them but care about the code’s memory footprint enough to wonder how you’d do this there, you’re Doing It Wrong.

  10. Hello Eric,

    This needs some revisions. First, the use of the term “alignment requirements” is incorrect here. You’re talking about how you think the compiler would lay out the structs for efficiency. The alignment imposed by the padding scheme for structs (and malloc etc) is always stronger than the alignment requirements (provided the compiler works). Violating the alignment requirements causes your program’s behavior to become undefined.

    Second, you are wrong on doubles. On GCC Linux/x86 they are aligned to 4 bytes inside structs. Also what about long double?

    1. >First, the use of the term “alignment requirements” is incorrect here. You’re talking about how you think the compiler would lay out the structs for efficiency. The alignment imposed by the padding scheme for structs (and malloc etc) is always stronger than the alignment requirements (provided the compiler works). Violating the alignment requirements causes your program’s behavior to become undefined.

      There is a can of worms here. I’m not sure I want to revise the article to open it.

      You are correct that my use of “requirement” is debatable on x86/ARM. When the x86 e18 bit is off it isn’t a requirement – it’s an optimization, one you can prevent or circumvent with pragma pack. I did think about this.

      The thing is, the purpose of the article is not to be an x86 processor reference or be too tied to the features of any single processor. There are machines still in production use that will SIGBUS or SIGILL if you violate self-alignment. I made a strategy decision in writing the piece not to get bogged down in details about where the requirement is hard vs. where it is soft – rather, I chose to focus on the normal behavior of compilers, while dropping some clues (like the discussion of e18 and pragma pack) that there’s another layer of complexity and variation underneath it.

      >Second, you are wrong on doubles. On GCC Linux/x86 they are aligned to 4 bytes inside structs.

      But only on x86, and maybe not then depending on what compiler option you choose. I knew this and deliberately didn’t go there – this particular detail was in fact the single preponderant reason for the disclaimer at the end that you can find violations of these rules if you look hard enough.

      >Also what about long double?

      Given the different semantics and alignment requirements this type can have (x86 80-bit vs 128-bit double double on some other architectures) this is another detail I felt should be judiciously ignored in a synoptic introduction.

  11. Note that C11 & C++11 provide tools for determining alignment of types.

    I’m vaguely aware these exist. Reference?

    The Standard, or its April 2011 draft at http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf (modulo the syntax `_Alignof unary-expression`; see http://stackoverflow.com/a/15737472/95580).

    For the discussion leading to this feature’s inclusion, see N1335: Adding Alignment Support to C and N1507: C and C++ Alignment Compatibility.

  12. SSE vectors also should be self-aligned (to 16 bytes). This case requires some manual attention that ordinary integers and pointers don’t, because most malloc() implementations only produce 8-byte alignment.
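
    If C11 is available, one way to get that alignment by hand is aligned_alloc (a sketch, not the only option; posix_memalign or _mm_malloc would also do):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* aligned_alloc (C11) requires the size to be a multiple of the alignment. */
        float *vec = aligned_alloc(16, 16 * sizeof(float));
        if (vec == NULL)
            return 1;
        printf("vec at %p (16-byte aligned)\n", (void *)vec);
        free(vec);
        return 0;
    }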

  13. More generally, look for ways to shorten data field sizes. In cvs-fast-export, for example, one squeeze I applied was to use the knowledge that RCS and CVS repositories didn’t exist before 1982. I dropped a 64-bit Unix time_t (zero date at the beginning of 1970) for a 32-bit time offset from 1982-01-01T00:00:00; this will cover dates to 2118.

    Well, some time ago on the git mailing list there was a person asking about support for pre-epoch dates, because he wanted to put the U.S. Constitution in a [git] repository, using its date of writing as the author date… There are similar efforts in various “open government” projects, e.g. putting German law on GitHub, though I think most of them are about current laws.

    Though this is more abuse than use of a version control system, and I don’t think anyone is doing anything similar with CVS. Just FYI.
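
    A minimal sketch of the epoch-offset squeeze quoted at the top of this comment (the names are illustrative, not taken from cvs-fast-export itself):

    #include <stdint.h>
    #include <time.h>

    /* Seconds from the Unix epoch (1970-01-01) to 1982-01-01T00:00:00Z. */
    #define RCS_EPOCH_OFFSET 378691200UL

    typedef uint32_t rcs_time_t;   /* covers 1982 through roughly 2118 */

    static inline rcs_time_t to_rcs_time(time_t t) {
        return (rcs_time_t)(t - RCS_EPOCH_OFFSET);
    }

    static inline time_t from_rcs_time(rcs_time_t t) {
        return (time_t)t + RCS_EPOCH_OFFSET;
    }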

  14. Thank you for the posting on structure packing. It was a good read, and I found it a clear explanation of the technique. I still won’t claim “Advanced Programmer” credentials, but I do have flashes of intense adequacy more and more frequently.

    I had to learn this technique on a device driver project with a similar problem profile: multiple instances of a struct that was seemingly larger than it should have been.

    The project was an HP-RT device driver for a VME card that did “memory reflection” to other, similar cards on the fiber network. This created a shared memory pool from which applications on the various systems could work. The most pain and fun I’ve ever had while coding.

  15. See also Arnaldo Carvalho de Melo’s “7 dwarves” tools from years ago, which consume DWARF debuginfo from object files. One of the tools, pahole, shows exactly where structure padding occurs.

  16. @phlinn:

    Having a compiler magically re-order things can lead to Bad Things. When you start dealing with more complicated data structures, you will frequently find yourself doing casts and pointer arithmetic. Having data structure elements get shuffled around doesn’t help. There’s a reason that offsetof() exists:

    http://en.wikipedia.org/wiki/Offsetof

    Also, if you are dealing with an API where you are linking against a binary library, both sides *must* use the same in-memory data format for structures or else there will be no way to know what data field is in what location.
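
    A small illustration (the offsets shown are what a typical LP64 ABI would produce; other ABIs may differ):

    #include <stddef.h>
    #include <stdio.h>

    struct packet {
        char  type;     /* offset 0 */
        int   length;   /* typically offset 4, after 3 bytes of padding */
        char *body;     /* typically offset 8 */
    };

    int main(void) {
        printf("type   at %zu\n", offsetof(struct packet, type));
        printf("length at %zu\n", offsetof(struct packet, length));
        printf("body   at %zu\n", offsetof(struct packet, body));
        return 0;
    }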

  17. Both clang and gcc have “-Wpadded”, which isn’t included in either “-Wall” or “-Wextra”.
    There’s a “-Weverything” in clang, but it’s very noisy — you’ll need at least “-Wno-disabled-macro-expansion” or the glibc headers will make you insane :)
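
    For example, compiling something like the following with “gcc -Wpadded -c pad.c” (or the clang equivalent) should produce a warning about the padding inserted after c:

    struct padded_example {
        char c;   /* 1 byte; 3 bytes of padding follow on a typical ABI */
        int  x;   /* 4 bytes */
    };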

  18. I’d love to see some discussion of the current state of array-of-structs vs. struct-of-arrays data structures, not just for memory usage. My current (possibly incorrect) impression is that for a decade or so, cache performance was commonly better on array-of-structs data – letting us C/C++ geeks lord it over Fortran-style programming, and even look forward to the day when *we’d* finally have the fastest numerical language(s) in every situation, once compiler optimization was a little better and the restrict keyword was sorted out. But I’m told that this is no longer so true, and that any remaining advantage is being swamped by better automatic vectorization possibilities in struct-of-arrays data.

  19. > You are correct that my use of “requirement” is debatable on x86/ARM. When the x86 e18 bit is off it isn’t a requirement – it’s an optimization, one you can prevent or circumvent with pragma pack. I did think about this.

    My point is that this is not a property of the processor. It comes from the C implementation. If you look in the C standard the term “alignment requirement” appears and it has a specific meaning. This may or may not be derived from some property of the processor.

    > The thing is, the purpose of the article is not to be an x86 processor reference or be too tied to the features of any single processor. There are machines still in production use that will SIGBUS or SIGILL if you violate self-alignment. I made a strategy decision in writing the piece not to get bogged down in details about where the requirement is hard vs. where it is soft – rather, I chose to focus on the normal behavior of compilers, while dropping some clues (like the discussion of e18 and pragma pack) that there’s another layer of complexity and variation underneath it.

    The issue is you make it sound like you’re talking about the rules of the C language, when you’re talking about the rules of very particular implementations. And if you think C in general works like this you’re going to shoot yourself in the foot.

    > But only on x86, and maybe not then depending on what compiler option you choose. I knew this and deliberately didn’t go there – this particular detail was in fact the single preponderant reason for the disclaimer at the end that you can find violations of these rules if you look hard enough.

    This is fine as an optimization manual but the reader has to understand that the semantics of his program can’t depend upon assumptions from this article regarding how structs will be laid out. If he doesn’t know this he’s going to shoot himself in the foot at some point. That’s why all the pedantry regarding terms and the guarantees of the standard are important.

    > Given the different semantics and alignment requirements this type can have (x86 80-bit vs 128-bit double double on some other architectures) this is another detail I felt should be judiciously ignored in a synoptic introduction.

    My point is not about long double as such, it’s that you may be damaging peoples’ mental models of how C works in a way that causes them to shoot themselves in the foot.

    1. >My point is not about long double as such, it’s that you may be damaging peoples’ mental models of how C works in a way that causes them to shoot themselves in the foot.

      You can send me a patch if you like. I’m not opposed to changing the terminology in the article.

  20. Garrett said: Optimizing packing in complex data structures can frequently result in organization which is less human-comprehensible, too. Given the cost of people vs. the cost of computer equipment, it’s almost always better to optimize for human readability.

    True and granted for every normal case.

    But the document and the technique is aimed at extremely constrained things like embedded systems, where one often can’t sensibly just throw another gig of ram at the problem, or you’re so performance constrained that you need to make sure you cache-miss as little as possible…

    (Snark: “If you want the code to be human readable, why are you writing it in C?”)

  21. @Sigivald:

    > (Snark: “If you want the code to be human readable, why are you writing it in C?”)

    I have both read and written lots of readable C. C can be eminently comprehensible, especially for small programs that don’t require memory management.

  22. Today’s (Fri 1/3/2014) XKCD comic is relevant to this discussion:

    “Code written in Haskell is guaranteed to have no side effects.”

    “… because no one will ever run it?”

  23. @Sigivald:
    My day job involves writing software for data storage appliances. We *can* throw a gig of RAM at the problem (or several dozen), and a Terabyte of flash as cache on top of that, and struct packing *still* matters. Most of what I write is in-kernel. Among other things, the native page size we’re using is 4k. If we exceed that 4k for any data structure, the memory manager needs to remap two physical pages into a virtual address, causing about a 10% performance hit, not to mention problems when you approach out-of-memory conditions. If it’s more than 4028 bytes a separate tag needs to be allocated to track the memory for the allocator.

    One of our key data structures is union-ized to hell and back. The definition for it is over 4k text file lines long, not to mention the type definitions which are included from other files.

    @roystgnr: Array of structs vs. struct of arrays performance issues are likely to be based upon the access patterns of the memory involved. If most of your work involves processing inside of a single array element (or closely-related array elements) and little between elements you’ll almost certainly be better with an array of structs.

    Struct of arrays (even if you just break large data structures into two or more pieces) are very common in a lot of areas. Think of a hash table or similar, where you are performing a lookup, possibly by scanning a whole (small) array, which then gives you an index into a much larger set of data. B+ trees are especially cache-friendly in this way, too. I’ve spent days with a spreadsheet trying to figure out the optimum hash size based on density, collision cost and block read cost.

  24. “To read it, you will require basic knowledge of the C programming language.”
    No you don’t. It just takes a little longer as in my case. It was very well written and the point comes through.

    @kb: “Code written in Haskell is guaranteed to have no side effects.”
    “… because no one will ever run it?”

    See http://johnmacfarlane.net/pandoc/ for Haskell code lots of people *do* run.

    (It’s a document converter to change from one markup format to another.)

  26. Garrett: Indeed, but you’re still in ESR’s target domain of “embedded or kernel” – I didn’t want to make my reply too prolix by including all his cases.

    Patrick: Thus my “snark” header. I know C can be more-or-less as readable as anything else, if you put your mind to it when writing it (and it’s a damned good idea for maintenance reasons).

  27. Out of curiosity: Why does C structure padding have to be a trick of the master-C-programmer’s trade? Why can’t it be a compiler optimization? Or why can’t a Unix filter read C code, rearrange the structs in it, and pass on the result to the compiler proper? On the face of it, packing struct elements under alignment constraints looks like a problem similar to breaking paragraphs into lines (Knuth 1981), or perhaps defragmenting a filesystem that needs it (FAT and friends). Both these problems have venerable algorithmic solutions. So while I’m happy to have read Eric’s article, I don’t understand what it is about padding C structures that requires the attention of a master C programmer. Why not have the machine do this work? What am I missing?

  28. @Thomas Blankenhorn
    I think it’s not really a master-level trick. I consider myself intermediate, and it’s something I’ve known about and focused on since I was a beginner (but I’m OCD about memory allocation, and always keep a table of struct sizes in my large projects). The main point is that it requires a little bit of low-level knowledge that many people who are more comfortable and/or have spent all their time in the higher-level world of Python et al probably don’t even know they don’t know.

    You really wouldn’t want to automate this because there’re more factors than just minimizing struct memory footprint. As discussed above, tuning for cache performance is a big one. It’s a problem that requires a much higher-level analysis than simply looking at the struct definitions. There’s no single Obvious Right Thing, so a simple algorithm won’t cut it.

  29. @Thomas Blankenhorn:

    > Why can’t it be a compiler optimization?

    IIRC, esr touched on that in his essay. Basically, C lets you cast pointers to anything, anywhere. Such a compiler optimization could break a lot of working code.

  30. @Thomas Blankenhorn:

    A C struct provides more than a convenient way to name stuff. It’s actually specifying the memory locations under the covers and providing an API, as well as providing certain guarantees about memory based on the ABI for the platform.
    Eg. If you create:
    struct foo {
        int a;
        int b;
        char *c;
    };

    Take a look at the ASM generated to *use* that structure. You’ll see a lot of instructions which are referencing address_of_foo + 0x04 or similar. This is because the CPU does stuff with pointer math to get to what you want. The ordering and packing matters.

    As you suggested, automatic packing would be possible, but only inside of a single compilation unit. The moment that you have to be able to link together code, esp. with regards to shared libraries, it all falls apart. Even inside of a compilation unit it is difficult to know if you are safe – I can come up with some corner cases where you’re providing a struct which must match a definition for a library (interface paradigm) where things might fall apart.

    Ultimately, a bad idea except in cases so trivial that struct packing is almost certainly not going to matter.

  31. Yeah, the thing I missed was elementary. It can’t be a compiler optimization because the compiler would also have to optimize all the code that uses the structure, past, present and future — and it can’t do that. My mistake.

  32. > You might think that sizeof(struct foo3) should be 9, but it’s actually 16.

    This reminds me of a bug I created a while back, where I wrote past the end of a malloc’d array by one item. It only produced ill effects when the array size was a multiple of 4 or 8 or something similar. I inferred at the time that malloc was padding the allocated block to some multiple of the machine’s word size. I don’t know if that’s true.

    Anyway, speaking as an intermediate C programmer, I found that enlightening and interesting.
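
    On glibc, at least, you can watch this happening: malloc_usable_size() reports the real size of the allocated block, which is usually rounded up from what was requested. (A glibc-specific sketch; other allocators behave differently.)

    #include <malloc.h>   /* malloc_usable_size(), glibc-specific */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        for (size_t n = 1; n <= 40; n += 7) {
            void *p = malloc(n);
            printf("requested %2zu, usable %2zu\n", n, malloc_usable_size(p));
            free(p);
        }
        return 0;
    }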

  33. So… why doesn’t the compiler do this for us, to the extent possible?

    I mean, in struct { char; int } the char has to come first to match struct { char } due to the prefix rule, but for example struct { char; int; char } could put the second character in the gap that was left by padding the int, but it doesn’t. {char a; int b; char c} is almost always “a000bbbbc000” instead of “ac00bbbb”
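
    A quick sizeof check of that example (typical results on a machine with 4-byte, 4-byte-aligned int; other ABIs may differ):

    #include <stdio.h>

    struct as_declared { char a; int b; char c; };   /* “a000bbbbc000”, usually 12 bytes */
    struct reordered   { char a; char c; int b; };   /* “ac00bbbb”, usually 8 bytes */

    int main(void) {
        printf("as declared: %zu\n", sizeof(struct as_declared));
        printf("reordered:   %zu\n", sizeof(struct reordered));
        return 0;
    }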

  34. And I see that other people have already answered my question with a non-answer. There is no “working” code that uses hard-coded offsets, and code using offsetof will continue to work.

  35. What I call the “prefix rule”, by the way, is a somewhat obscure rule in the C standard that holds that all structs that start with a given sequence of member types must use the same memory layout for those members. This allows members to be added to the end of a structure without breaking certain kinds of binary compatibility, it allows a family of structures to share certain members (like e.g. a tag or a virtual method table pointer), etc. In practice, it means that a compiler optimization can only work by putting later members in gaps between earlier members, not by a generalized “sort by size” operation.
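
    For illustration, a sketch of the kind of family-of-structures sharing the prefix rule enables – a tagged-variant pattern used widely in practice, e.g. by the BSD sockets API’s struct sockaddr:

    #include <stdio.h>

    struct shape  { int tag; };                  /* common prefix only */
    struct circle { int tag; double radius; };   /* same leading-member layout */

    int main(void) {
        struct circle c = { 1, 2.0 };
        /* The leading members of circle are laid out exactly as in shape,
           so generic code can look at the tag through the prefix view. */
        struct shape *s = (struct shape *)&c;
        printf("tag = %d\n", s->tag);
        return 0;
    }

    (Strictly, the standard only blesses common-initial-sequence inspection through a union containing both types, but the cast form above is what real code typically does, and implementations lay the shared prefix out identically either way.)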

  36. Both C11 (6.7.2.1p11) and C++14 ([class.bit]p1) do not actually require `struct foo9` to be 64 bits instead of 32; a bit-field can span multiple allocation units instead of starting a new one. It’s up to the implementation to decide; GCC leaves it up to the ABI, which for x64 does prevent them from sharing an allocation unit.

    1. >Both C11 (6.7.2.1p11) and C++14 ([class.bit]p1) do not actually require `struct foo9` to be 64 bits instead of 32;

      Thanks, I have edited appropriately.

  37. In practice, it means that a compiler optimization can only work by putting later members in gaps between earlier members, not by a generalized “sort by size” operation.

    Turns out I was wrong about even that, since struct members must also be laid out in declaration order (C99/C11 6.5.8p5; C89 3.3.8p4). C++ relaxes this requirement for types that are not “standard-layout types”.

    So, whatever the rationale is, this is in the standard rather than a decision made by each compiler (or each ABI document).
