How to submit a drive-by patch and get it accepted

I think it’s weird that I have to write this post in 2015, but earlier today I had to explain to someone with the technical skills to submit a good patch that he was doing the process wrong in some basic and extremely annoying ways.

Googling revealed that most explanations of patch etiquette are rather project-specific in their advice. So I’m going to explain the basics of patch submission that apply to just about any open-source project, with a focus on how to do it right when you aren’t a regular committer (that is, it’s what’s often called a drive-by patch). Here we go…

Let’s start with the blunder that motivated this post. It was an email inquiring “Have you had a chance to review my previously submitted patch?” What was infuriating was that said inquiry did not identify or include a copy of that drive-by patch, didn’t identify what revision of the project it was made against, and for that matter didn’t even name the project.

This violated the first rule of getting your patch accepted, which is to minimize the cognitive load on the person processing it. That person may see dozens of patches a week on multiple projects (and frequently does, if it’s me). Expecting him to remember lots of context and match it to your name when you’re not already a regular committer that he recognizes is a good way to make yourself disliked.

So make it easy. When you mail a drive-by patch, always specify the target project it applies to, and what base revision of the project it applies to (by commit ID or date). The hapless n00b I am now excoriating, when asked what revision his patch was against, replied “HEAD” and it was all I could do not to leap through the screen and throttle him. Because of course HEAD now is not necessarily the same revision it was when he composed the patch.
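
If you composed the patch from a git clone, capturing that context costs you all of two commands; something like this (the patch file name is hypothetical):

    git rev-parse HEAD    # prints the commit ID; quote it in your mail
    git diff >../project-fix.patch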

Good etiquette is different if you have a well-established working relationship with the person you are submitting to. If I get a git-format patch submission to one of my projects from a regular contributor whom I recognize, he doesn’t have to repeat that context; I can generally assume it was made from a recently-synced repository clone and will apply against head. But I can only be relaxed about this if I am sure he will be explicit about the target project and base revision when it’s for a different project.

Another way of minimizing cognitive load is to make the patch self-explaining. Always include an explanation of the patch, its motivation, and its risks (in particular, if the patch is untested you should say so).

Hapless n00b failed to do this. I had to Google for the name of the principal function in his patch to learn that it’s a code-hardening move derived from OpenBSD security work. Even though it’s a good patch, and I merged it, that’s more work than he should have made me do. Through not being explicit, he actually slowed down the merge of a fix for a potential security issue by a couple of weeks.

Even if you are not sending a git-format patch, the git conventions for change comments are good to follow. That is, a summary line in imperative form (“Fix bug 2317.”; “Implement feature foo.”) should be optionally followed by a blank line and explanatory paragraphs.
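
For instance, a change comment following those conventions might look like this (the bug number and details are invented for illustration):

    Fix bug 2317: don't dereference a null session pointer.

    Opening a device before activate() was called would crash.
    Check for the null case and return an error instead.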

Really good bug-fix patches include a recipe for reproducing the problem that motivated them. Because I can test those instantly, I can merge them instantly. On projects with a regression-test suite, you scale the heights of best practice by including patch bands that enhance the test suite to demonstrate the correctness of the patched code. When you do that, I love you and want to see more patches from you.

Here’s a mistake the n00b thankfully did not make: his patch was small and simple, easily reviewed. When in doubt, send a patch series of simple changes rather than a huge rollup patch that does multiple things – that sort of mess is very likely to be rejected because reviewing it hurts the maintainer’s brain.

At least he shipped in diff -u format; it’d been years since I saw even a n00b use the horrible -e (ed script) format, which is very brittle in the face of any change to the target code at all (never do this!). diff -u -p format, which tries to put the name of the affected function in the header before each patch band, is better (this is what git format-patch does).
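
For example (file names hypothetical):

    diff -u -p old/parse.c new/parse.c >parse.patch

Each hunk header then reads something like “@@ -100,7 +100,9 @@ static int parse_line(char *buf)”, telling the reviewer at a glance which function he’s looking at.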

Maintainers vary in whether they like patches to be plain text inline in the message or a MIME attachment. I myself am relatively indifferent to this, but others have strong preferences. If you are not sure, include an offer to resend in the form the maintainer prefers in your cover note.

On the other hand, unless the patch is large enough to blow an email size limit (which suggests it’s much too large), do not ever send it as an http/ftp link that must be chased to get the patch text. Your maintainer might be on the road with spotty network access when he tries to process it. Even if he isn’t, every step of hand-work he has to do (even trivial steps like typing a wget command) increases his process friction, which is unkind and reduces the odds that your patch will be merged.

It is irritating to the maintainer to get a patch against an old release, it is very irritating to get a patch for a problem he/she has already fixed, and it is extremely irritating to get a patch for a problem that the maintainer has to do extra work to find out he has already fixed. Find the project repository, clone it, verify that your issue still exists, and make the patch against head. This will, among other things, greatly reduce the odds of your patch failing to apply.
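
In git terms, the minimum decent workflow is something like this (the URL is hypothetical):

    git clone https://example.org/project.git
    cd project
    # confirm the bug still exists, commit your fix, then:
    git format-patch -1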

Many maintainers (including me) dislike unnecessary merge bubbles. If you write a patch series using git that takes a while to complete, rebase it against current head before you ship. Again, this maximizes the chances that it will apply cleanly.
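
A sketch of that, assuming your work lives on a topic branch called my-fix:

    git fetch origin
    git rebase origin/master my-fix
    git format-patch origin/master..my-fix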

I’m going to finish with a preference of mine which, while not necessarily universal practice, at least won’t offend anyone. If you have to refer to a previous commit in your change comment, please do not use commit IDs like git hashes or Subversion revision numbers to do it. Instead, quote the summary line of the commit and if in doubt cite the author and date.

There are two good reasons for this practice. One is that it’s human-friendly; a patch summary line is a much better cue to memory than a numeric or hex cookie. The other is that at some time in the future the repository might undergo surgery or conversion that will turn that cookie into a turd. Be kind to future maintainers by using a reference format that won’t become invalid and require hand translation.

Comments

  1. > When you mail a drive-by patch, always specify the target project it
    > applies to, and what base revision of the project it applies to (by
    > commit ID or date).

    This is easiest when you can send the “patch” in a form which already contains information about what the parent commit is. Both git and mercurial have a bundle command which does this.
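
    With git, for example, something like

        git bundle create my-fix.bundle origin/master..HEAD

    records both your commits and the prerequisite parent they apply on top of.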

    1. >Both git and mercurial have a bundle command which does this.

      Yes, but I hate bundles and never, ever want to see one again.

      The reason: they’re unreviewable. I can’t look before I merge.

  2. > Even if you are not sending a git-format patch, the git conventions for change comments are good to follow. That is, a summary line in imperative form (“Fix bug 2317.”; “Implement feature foo.”) should be optionally followed by a blank line and explanatory paragraphs.

    Actually, if you are sending the patch inline (e.g. using git format-patch and git send-email), the summary line (synopsis) becomes the subject of the email, and the text of the email is the rest of the commit message.

    If you need a comment on the patch itself, separating the commentary from the description of the patch can be done e.g. with a “-- >8 --” scissors line.

    Another important issue: when sending patches, put [PATCH] or [PATCH/RFC] as the prefix of the email subject, so that it is easy to see that the email contains a patch and is not part of a discussion.
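
    git format-patch will generate that prefix for you; for example:

        git format-patch --subject-prefix="PATCH/RFC" -1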

  3. This is a sidetrack about minimizing cognitive load online in general.

    One piece of good practice is clear subject lines on email. While I don’t think they’re essential between people who are in an ongoing conversation, they make email easier to find again, and they mean your email is less likely to be ignored as spam.

    Another is to include relevant links rather than telling readers to do their own research. The poster already *has* the link, and the readers shouldn’t have to replicate the work of finding it. Your infuriating correspondent’s mistake falls roughly into this category.

    Would it make sense to have a “how to submit patches” description or even a form somewhere where patch submitters are likely to see it?

  4. > If you have to refer to a previous commit in your change comment, please do not use commit IDs like git hashes or Subversion revision numbers to do it. Instead, quote the summary line of the commit and if in doubt cite the author and date.

    I acknowledge the motivation for this, but it does feel unwieldy. A message might include a bit like

    > We introduced a regression in 3fac2de1 such that the user could now only select one farmhand at a time.

    which reads fairly easily. The naive rewriting might be

    > We introduced a regression in “Make farmhands strike if their morale drops too low” such that the user could now only select one farmhand at a time.

    which to me is harder to parse. The first version also seems like it would be more open to automatic linking, if someone was reading in a viewer which could support that. (Do such viewers exist currently?)

    How would you feel about including it as a footnote?

    > We introduced a regression in 3fac2de1 such that the user could now only select one farmhand at a time.

    > 3fac2de1 – “Make farmhands strike if their morale drops too low”, Alan Smithee, 2015-03-18 13:03:12

  5. >Both git and mercurial have a bundle command which does this.

    > Yes, but I hate bundles and never, ever want to see one again.

    > The reason: they’re unreviewable. I can’t look before I merge.

    Better than bundles: format-patch includes the parent data, extending the standard diff -u format (and staying compatible with it), and git am is able to merge the result in automatically. For avoiding merge bubbles, I believe it accepts the --ff-only option, or you can merge it in a temporary branch and massage it with rebase.

  6. > > Both git and mercurial have a bundle command which does this.
    > Yes, but I hate bundles and never, ever want to see one again.
    > The reason: they’re unreviewable. I can’t look before I merge.

    I agree that git is not very helpful here; you have to add the bundle as a remote and then update from it before you can see what’s in it. With Mercurial you can just do an ‘hg in’ on the bundle.

    I also don’t know why this information isn’t just in the patch…

  7. Amendment to my post: bundles are mainly useful for sneakernetting repositories and large changes without an electronic network between two computers. Git is pretty good about being neutral as to whether you even have a network. Using bundles to send drive-by patches is doing it wrong.

  8. > I agree that git is not very helpful here; you have to add the bundle as a remote and then update from it before you can see what’s in it. With Mercurial you can just do an ‘hg in’ on the bundle.

    You don’t need to add the bundle as a remote in Git; you can simply “git fetch” or “git pull”, giving the path to the bundle in place of a remote repository URL.
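
    For example (assuming the bundle carries a master ref):

        git bundle verify ../my-fix.bundle
        git fetch ../my-fix.bundle master:refs/heads/from-bundle

    and then you can review the result with git log or git diff as usual.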

  9. > (In particular, if the patch is untested you should say so).
    Is there *any* situation in which it’s appropriate to submit an untested patch? Other than “could only test on my computer,” “could only test with these build options,” “could only test with this workload,” etc.

    1. >Is there *any* situation in which it’s appropriate to submit an untested patch?

      Sometimes it’s the most effective way to communicate a diagnosis you cannot completely verify.

  10. A couple things that might be worth adding:

    Check if the project has a CONTRIBUTING file or similar, and if it exists, read and follow the directions therein. The maintainer wrote it for a reason. (and on the other end, projects that don’t include such a file should)

    If the project is hosted on github or another site with a similar pull-request mechanic, use it. The maintainer will be used to receiving patches that way; hence lower cognitive load.

    1. >If the project is hosted on github or another site with a similar pull-request mechanic, use it. The maintainer will be used to receiving patches that way; hence lower cognitive load.

      This may not always be true. Gitorious had a pull-request mechanic that was so hard to use I preferred emailed patches to it.

      Even on a forge with a better interface, like GitLab or GitHub, a maintainer may dislike merge bubbles enough to want to process through git am rather than use the web UI – I fall in that category.

      Best thing is to ask the maintainer what his preferred workflow is.

  11. Good point. I suppose I would modify that sentence with “unless the maintainer and/or contributing file request otherwise.” I still think using site features is a sane default.

    I’m not actually sure a norm of “ask the maintainer before sending a patch” is a great idea. Wouldn’t that get irritating very quickly for nontrivial projects? (I’ve never been on that side of the fence for a widely-used project)

    1. >I’m not actually sure a norm of “ask the maintainer before sending a patch” is a great idea. Wouldn’t that get irritating very quickly for nontrivial projects?

      I like the implied concern for my time. Probably others feel similarly.

  12. Why not fix diff to use smarter options by default? Should that be a simple patch?

    1. >Why not fix diff to use smarter options by default? Should that be a simple patch?

      It would be, but the cost might be breaking a lot of scripts that are embedded ghod knows where. No maintainer has been brave enough to risk that.

  13. > diff -up [emphasis added] format, which tries to put the name of the affected function in the header before each patch band, is better.

    Minor correction: I think you meant diff -u, like you’d written earlier – not “diff -up”. I just experimented with diff’s -u and -p switches; combining them resulted in the latter being ignored.

    No snark intended; just letting you know that this time I did my research before correcting you, lest I screw up like I did with “Alice’s Restaurant”. ;-) (And I’ll stop correcting altogether if you want, of course. I’m just trying to be helpful in my own, modest way; but being admitted in A&D is an honor I don’t want to jeopardize, whether I can “help” you or not.)

    1. >Minor correction: I think you meant diff -u, like you’d written earlier – not “diff -up”.

      Actually, you can blame whoever wrote the Linux kernel’s patch-submission guide for that glitch. Their invocation said “diff -uprN”.

      I’ll fix it.

  14. It’s worth adding that, if the project gives no directions or guidelines at all to patch submitters, a reasonably sane default behaviour is to follow the Linux kernel rules — at least, if you have a MUA you can jawbone into not corrupting whitespace. (Thunderbird can be so persuaded, albeit with difficulty…)

    Incidentally, their approach to referencing other commits is to use ‘<commithash> (“<Summary line>”)’; it communicates well to the reader of plain-text and ensures the tool-user can look up the commit. There is, though, the issue of translation that ESR mentions in the OP — but perhaps the repository translator or surgeon tool will be able to lift references of this kind. In any case, the best practice here is probably to look through the existing repo history and follow whatever convention (if any) is already in use.

  15. > Actually, you can blame…

    I’m not out to blame anyone. We all make mistakes.

    Interestingly, you replaced diff -up with diff -u -p. So you had intended to include the -p flag. However, in my experiment, those two spellings (-up and -u -p) both resulted in -p being ignored, yielding exactly the same output as diff -u. Only the -p flag on its own produced something different (no, not an error message; I did use C files). Why do you use both switches if – at least in my experience – one of them trumps the other? :S

    In other words, can you really have diff “output … unified context” and “show which C function each change is in” with just one command? (Text between quotes is from diff’s man page, natch.) Sorry for the topic drift, but this does interest me.

    1. >In other words, can you really have diff “output … unified context” and “show which C function each change is in” with just one command? (Text between quotes is from diff’s man page, natch.) Sorry for the topic drift, but this does interest me.

      I’ll experiment when I’m slightly less hassled.

  16. Hi Jorge! I don’t get the same behaviour on my machine; diff -up (and diff -u -p) gives output like this (hopefully WordPress won’t clobber the formatting):

    --- rep.c 2013-05-30 15:47:05.000000000 -0600
    +++ tmp-rep.c 2015-07-13 14:16:25.289769286 -0600
    @@ -32,6 +32,8 @@ cell_ref_t read_sexp(void);
    […]

    I’m using GNU diffutils 3.3. Which diff utility are you using? Might merit a bug report.

    1. >Hi Jorge! I don’t get the same behaviour on my machine; diff -up (and diff -u -p) gives output like this (hopefully WordPress won’t clobber the formatting):

      That’s what I see too.

  17. Eric, I notice that all your patching-etiquette advice is for programmers. Are there any best practices you want readers of your writings to know when they consider sending you drive-by corrections? For example, do you have a preference between getting patches against an html page and reading messages like “Typo in book x, section y.z: ‘that word’ should be ‘this word’ “?

    1. >For example, do you have a preference between getting patches against an html page and reading messages like “Typo in book x, section y.z: ‘that word’ should be ‘this word’ “?

      The second is better. Actually the most useful thing you can add is searchable context of the error so I can find it with a regexp or Emacs incremental search.

  18. > The hapless n00b I am now excoriating, when asked what revision his patch was against, replied “HEAD” and it was all I could do not to leap through the screen and throttle him. Because of course HEAD now is not necessarily the same revision it was when he composed the patch.

    It’s very unlikely this person will ever submit a patch again. You’re what’s wrong with the software engineering community. Somewhere along the line this elitist attitude became tolerated. Instead of helping this engineer understand/grow to your standards, you turn to openly demoralizing this person in a blog post.

    1. >Instead of helping this engineer understand/grow to your standards, you turn to openly demoralizing this person in a blog post.

      I forgive you, because you are ignorant of the actual facts. I instructed him more gently, in private, before writing a blog post in which I did not name him.

      Trust me that if demoralizing him had been my goal I would have had some very effective tools to hand. I didn’t use them.

  19. @ Dan C

    > I’m using GNU diffutils 3.3. Which diff utility are you using? Might merit a bug report.

    Mine is GNU diffutils 3.2. Since your version is newer and works well, I suppose there’s no need for a bug report. Now I know where the problem lies, so thanks. :-)

    @ esr

    > I’ll experiment when I’m slightly less hassled.

    Thanks, but it won’t be necessary now that Dan C has clarified the matter.

  20. > If you have to refer to a previous commit in your change comment, please do not use commit IDs like git hashes or Subversion revision numbers to do it. Instead, quote the summary line of the commit and if in doubt cite the author and date.

    Surely “please do not” is misplaced. It’s better to use both than only one. In the common case (a repository that *hasn’t* undergone massive surgery or rebasing), a short-form git hash means I can see the referenced commit with `git show` or its equivalent. Summary lines or author/date pairs are harder to cross-reference.

    In repositories I work on, I think I would actually *prefer* a git SHA to the other metainformation.

    1. >In repositories I work on, I think I would actually *prefer* a git SHA to the other metainformation.

      Please reread. What happens if the repository has to be split, merged, or surgically altered and SHA1s become invalid? This is more common than you might think.

  21. > Surely “please do not” is misplaced. It’s better to use both than only one. In the common case (a repository that *hasn’t* undergone massive surgery or rebasing), a short-form git hash means I can see the referenced commit with `git show` or its equivalent. Summary lines or author/date pairs are harder to cross-reference.

    Though summary lines are not that hard, assuming that you know the branch, thanks to the ^{/<text>} syntax.
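
    For example (the quotes protect the braces from the shell; the summary text is borrowed from the earlier example in this thread):

        git show 'master^{/Make farmhands strike}'

    finds the youngest commit reachable from master whose message matches that text.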

  22. > Please reread. What happens if the repository has to be split, merged, or surgically altered and SHA1s become invalid? This is more common than you might think.

    If you are using git filter-branch to split, splice or alter the repository, you can use --msg-filter or --commit-filter, regexps for catching SHA1s, git rev-parse (for unshortening) and the map shell function available in filters.
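
    An untested sketch of that technique, with a purely illustrative abbreviated SHA1:

        git filter-branch --msg-filter '
            sed -e "s/3fac2de1[0-9a-f]*/$(map $(git rev-parse 3fac2de1))/g"
        ' -- --all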

  23. > If you are using git filter-branch to split, splice or alter the
    > repository, you can use --msg-filter or --commit-filter, regexps for
    > catching SHA1s, git rev-parse (for unshortening) and the map shell
    > function available in filters.

    Sure, and reposurgeon will help with that as well. It becomes trickier when you’re migrating to a system that doesn’t use SHA1 hashes at all, or when you’re migrating from something that didn’t use hashes. It’s better to use something that isn’t completely dependent on the specific vcs you’re using.

  24. > The reason: they’re unreviewable. I can’t look before I merge.

    You can’t merge it into a branch? Or a clone? You need a working copy to finish reviewing anyway, if reviewing includes running tests. Whether that’s done by applying the patch after looking at it, or merging from a bundle and then running git-diff to see the change in diff format, is rather academic.

    > Amendment to my post: bundles are mainly useful for sneakernetting repositories and large changes without an electronic network between two computers. Git is pretty good about being neutral as to whether you even have a network. Using bundles to send drive-by patches is doing it wrong.

    Oh, so a bundle is (or can be) a whole repo? Doesn’t that make sending a bundle the non-github equivalent of a pull request?

  25. I can only reflect on the last paragraph, as this is the only thing I have some expertise about: you are essentially saying you would rather identify a pair of shoes by its product name / description than by the Article Number or the UPC / EAN / GTIN barcode? While the second is more human friendly, I don’t really understand the surgery argument: unique IDs should generally be immutable as a principle of database design.

    We call them unique IDs because they represent the identity of a thing; as long as that thing is still that thing, the ID should not change. The description or name may change – you may rebrand a product and give it a better name – but the ID should not be mutable. This is why there is another principle in database design that the ID should not contain any kind of meaningful information, i.e. it should be a simple integer or GUID, because if it does contain information then there could be a situation when you want to change that, and that is bad.

    Of course this is violated all the time because users like to have expressive, mnemonic IDs, but I was assuming that when the users are programmers this is not really a big issue. And of course, when you need to merge two databases and realize both have a record with the ID 12345 you are kinda fucked, which is why the ERP industry fell in love with GUIDs.

    1. >unique IDs should generally be immutable as a principle of database design.

      Git hashes aren’t, exactly. They’re a chain hash of the commit object and its ancestors, which is why they’re not stable if the repo DAG changes.

  26. > Please reread. What happens if the repository has to be split, merged, or surgically altered and SHA1s become invalid? This is more common than you might think.

    I would argue that unless the structure of the repository which produced the original SHA1 was completely malformed, or information in it needed to be excised due to e.g. DMCA reasons or something, the original commit, if it was ever public, should still exist (as a merged parent or ancestor of some revision in the new history) – this doesn’t even cost much in space if the blob and tree contents are the same.

    I take it reposurgeon doesn’t support this kind of workflow?

    1. > information in it needed to be excised due to e.g. DMCA reasons

      That’s a real case.

      >should still exist (as a merged parent or ancestor of some revision in the new history) – this doesn’t even cost much in space if the blob and tree contents are the same.

      It will still exist, but its hash may be different — will be, if anything in its DAG ancestry was modified.

  27. Also, what are the odds of any of this happening in the time between starting work on a patch and its being reviewed by the project maintainer, and of the maintainer then being unable to correlate the original SHA-1 of a relatively recent revision to its post-surgery counterpart?

  28. My point is that unless it _cannot_ exist with the same hash, the unmodified version (i.e. in mundane cases like fixing a typo) should be left in the repository.

    And there should be, if there’s not, support for e.g. excising a blob that has DMCA-removed content and simply having the tree for that version contain a dangling pointer to it (and therefore the original hashes for the tree and commit are still valid). Fossil seems to have this feature (I don’t see any reason in the docs that you can’t “shun” a content artifact without also shunning manifest artifacts that refer to it).

    For an example of what I’m saying, the original graph is

    HEAD: R5
    first-parent chain: R5 - R4 - R3 - R2 - R1

    And then some content in a file that was committed in R3 and stayed in place through R4 has to be removed, so you now have:

    HEAD: R5a
    first-parent chain: R5a - R4a - R3a - R2 - R1
    R5a, R4a, and R3a respectively have R5, R4, and R3 as their second parent.
    R4 and R3 point to trees that point to missing blobs.

    R5 still exists, and still points to a complete tree (the same tree as R5a does, in fact) it’s just not HEAD anymore. So someone’s patch note mentioning R5’s SHA-1 is intelligible.

    1. >So someone’s patch note mentioning R5’s SHA-1 is intelligible.

      I don’t think this will work. Changing the ancestry chain to take R3 and R4 out will alter the hash-chain computation for R5.

      Anyway, even if it worked, you’re being what our British friends call “too clever by half” and relying on fragile details that might not be paralleled in future version-control systems.

  29. Removing inappropriate material in a commit (e.g. commit message) or tree (e.g. filenames) is dicier, but could still be done in more or less the same way. Fixing a mere typo, of course, doesn’t mean there’s any reason the original commit should be deleted at all.

  30. > Oh, so a bundle is (or can be) a whole repo? Doesn’t that make sending a bundle the non-github equivalent of a pull request?

    Bundles can be either a whole repository or just a subsection of one (typically the difference between an origin and your own HEAD), yes. They can be used to send changes over email, but this is not preferred by most maintainers; not only are they unusual (the chances of a maintainer going “wtf?” at seeing one are rather high), but they are also opaque, useful almost only for plugging into a repository and doing a fetch against it. Plain text diffs can be eyeballed without any special tools; git has a perfectly usable tool for generating them while giving the same benefit as this type of bundle: format-patch.

    Pull requests are much older than GitHub, for that matter ;) The traditional way is to send a series of patches to a mailing list (Git and Linux themselves still do this). An alternative, far more familiar than bundles, is to set up a repository somewhere that a maintainer can fetch from; “git fetch” and “git pull” support repository URLs without having to set up a remote first.

    About rewriting history: by its very nature, reposurgeon won’t preserve the original history in the modified repository (though it *does* keep a backup of the repositories). git filter-branch will, in the form of unreferenced branches and commits, subject to being cleaned up by garbage collection and not transferred at all over network protocols (file:// is included in that, fwiw). Relying on the old identifiers to still be around is a bad idea; the main reason git’s tooling preserves them is so that you can undo almost all “oops, I fucked up” moments, but it won’t keep them around forever.

  31. I couldn’t help feeling I’d heard these recommendations before, and now I recall reading a similar work by our host: the Software Release Practice HOWTO’s section on good patching practice. Perhaps it would be useful to integrate the above comments, specific to drive-by patches, into that document.

    1. >Software Release Practice HOWTO’s section on good patching practice.

      Written well before git was as dominant as it is now. You’re right, I should use this to modernize that.

  32. Isn’t the Software Release Practice HOWTO about the other side: the maintainer, not the contributor?

    1. >Isn’t the Software Release Practice HOWTO about the other side: the maintainer, not the contributor?

      It’s about both with an emphasis on maintainer, but there is a section about good patch practice. I’m revising it now.

      I’m also writing about something I’ve become much more aware of recently. Coding against C99 and POSIX.1-2001 really works now as a way of bulletproofing against portability problems. Enough so that if your code has a lot of pre-POSIX shims for, e.g., old BSD Unixes, you’re better off throwing them out and losing the code complexity. Even old iron has newish toolchains these days – I’ve learned this from GPSD experience.

      (You might think that GPSD doesn’t need to run on old iron, because old iron is stationary. It turns out that’s wrong because time service.)

  33. > Coding against C99 and POSIX.1-2001 […]

    By the way, I wholeheartedly recommend “21st Century C” by Ben Klemens, if you are to write in C.

    1. >By the way, I wholeheartedly recommend “21st Century C” by Ben Klemens, if you are to write in C.

      Reading now. Looks pretty good, except that I don’t share his inexplicable fondness for autotools.

  34. Re: 21st Century C, by Ben Klemens

    Is The C Programming Language obsolete? (To make matters worse, I own the first edition. :$)

  35. @Jorge: “The C Programming Language”, 1st edition does not include C99 and C11 standard niceties (compound literals, variadic macros, struct within struct, etc.). “21st Century C” tries to cover what books like that missed; for example, it assumes that you would use libraries instead of writing code from scratch (e.g. for UTF-8, or XML, or statistics). There is also development material that is not strictly about C: make, autotools, version control, documentation, unit tests, debugging, …

    I’ll give you two titles of chapters in “21st Century C” that describe (somewhat) what this book is about:
    * “Inessential C Syntax that Textbooks Spend a Lot of Time Covering”
    * “Important C Syntax that Textbooks Often Do Not Cover”

  36. > I don’t think this will work. Changing the ancestry chain to take R3 and R4 out will alter the hash-chain computation for R5.

    R3 and R4 are still in the ancestry chain of R5. And, for that matter, in the graph (but not the first-parent chain) of R5a.

  37. > Anyway, even if it worked, you’re being what our British friends call “too clever by half” and relying on fragile details that might not be paralleled in future version-control systems.

    The lack of an ability to do this would be a deficiency of the future version-control system and a reason to refuse to migrate to it.

  38. Since a picture is worth 1000 words (and my ASCII graph evidently didn’t get the message across, since you seemed to think I was talking about modifying R5 when I was talking about calling the modified version R5a and retaining the unmodified original as R5), here is what the transform I am proposing looks like:

    http://i.imgur.com/iTbWAYw.png

  39. > Relying on the old identifiers to still be around is a bad idea; the main reason git’s tooling preserves them is so that you can undo almost all “oops, I fucked up” moments, but it won’t keep them around forever.

    I am proposing keeping references to the old identifiers in the actual repository, as the secondary (or more, if it was already a merge) parent of a synthetic “merge” commit, not having them hang around as unreferenced objects.

  40. > I am proposing keeping references to the old identifiers in the actual repository, as the secondary (or more, if it was already a merge) parent of a synthetic “merge” commit, not having them hang around as unreferenced objects.

    That would confuse the hell out of merge algorithms (e.g. three-way merge used by [almost] all version control systems), probably keeping changes that were to be deleted. IMVHO.

  41. @Jakub Narebski – the graph on the right of my diagram could occur entirely naturally though – imagine a feature branch (which includes the offending code), and the master branch is frequently merged from it with each merge having a conflict resolved by rejecting the offending material and keeping the rest of the changes.

  42. @Random832: Actually, once you merged the thing once and resolved by deleting The Offending Material, subsequent merges wouldn’t reintroduce Said Offending Material (though there might be occasional issues with context lines). See https://plus.google.com/u/0/100357083629018071519/posts/jG7CN9R1SsZ

    However, as one of this blog’s British friends, I concur with ESR’s assessment that you are being too clever by half.

    If you really feel the need to preserve the modification history of your repository (and I must add that I strongly doubt the existence of that need), you should have a meta-VCS that stores the change history of your change history. The least insane way of doing this would probably be to store a git-fast-import stream under git, committing updates before and after any piece of surgery. If you ever find yourself constructing a meta-meta-VCS to track surgical operations on your meta-VCS, step away from the keyboard, then burn it.

  43. @esr:
    >You might think that GPSD doesn’t need to run on old iron, because old iron is stationary.

    Actually, I’ve fantasized for a while about somehow finagling some manufacturer into releasing a phone using some mainframe or mini architecture.

    VAXus anybody? Galaxy S/390?

  44. Jon:

    FWAP!!

    The S/390 architecture would be uniquely bad for a smartphone, not so much because of the CPU but because of the I/O. The S/390 channel architecture is a good choice for mountainous batch processing, but all it would do is really, really get in the way of a GUI.

  45. Actually, can someone explain to me why a merge algorithm would care at all about (let alone be confused by) a synthetic merge commit (i.e. one created not by actually executing a merge algorithm, but rather by creating a commit whose tree contents were determined by other means, which has two parents, and asserting that it is conceptually a merge of those two branches)?

  46. > Actually, can someone explain to me why a merge algorithm would care at all about (let alone be confused by) a synthetic merge commit

    Because to a merge algorithm a synthetic merge commit is simply a merge commit; it doesn’t care that it was created artificially, only what parents it has.

  47. Why would a merge algorithm be looking at the structure of the commit graph rather than at the three copies that are given to it?

    I also don’t understand how the scenario I described (and showed in the diagram) would _actually_ be confusing at all. It’s not like there’s some random content there, the content of each commit is conceptually related to its two parents in exactly the way that a real merge would be – it contains the changes from both (i.e. the forward movement and the removal of offending material) relative to their common ancestor.

    1. >Why would a merge algorithm be looking at the structure of the commit graph rather than at the three copies that are given to it?

      Read the section on recursive-three-way merges at https://en.wikipedia.org/wiki/Merge_%28revision_control%29 for one answer. Basically, any merge technique more advanced than CVS’s diff3 – and, in particular, the default recursive three-way merge in git – needs to know the DAG.

  48. How advanced is automatic merging in this open source / git world? In the walled garden I play in I tend to do merges manually with visual compare and merge tools like Beyond Compare, although I have already considered that through adopting some kind of a regular comment format I could automate it. Such as a “this line replaces that line” format or a “don’t ever try to merge anything between this line and that line” comment format – some kind of merge instructions, basically. Is there something sort of a mathematical or CS theory of merging?

    1. >How advanced is automatic merging in this open source / git world? In the walled garden I play in I tend to do merges manually with visual compare and merge tools like Beyond Compare

      More advanced than that. Needing to do a manual merge is uncommon and normally only happens when there’s a real conflict, i.e. two devs have evolved the same code in diverging directions.

      >Is there something sort of a mathematical or CS theory of merging?

      There’s a patch-composition theory associated with the darcs VCS, built around some ideas from finite-group and monoid theory that I recognize – I used to do that kind of math. Unfortunately, though a rigorous development may exist in the head of David Roundy (author of darcs), very little of it has been written down; what’s on the web amounts to a tease with the interesting parts missing.

  49. Of course, the ideal solution would be a way to tag a commit with a reference (that actually works as a reference and doesn’t get garbage collected) to another commit in some capacity other than the usual “parent, or multiple parents in case of a merge”.

  50. @Shenpen: The Codeville version control system tried to create a smarter merge algorithm… but it turned out that while it could resolve automatically a few more situations than simple 3-way merge, when it failed it failed incomprehensibly, and there was a higher chance of mismerge. Ever heard of Codeville? No? Well, the author said himself that 3-way merge is better in practice.

    1. >@Shenpen: The Codeville version control system tried to create a smarter merge algorithm… but it turned out that while it could resolve automatically a few more situations than simple 3-way merge, when it failed it failed incomprehensibly, and there was a higher chance of mismerge.

      This is not quite right. The sweet spot, at our current state of knowledge, is not simple three-way merge a la CVS but rather recursive three-way merge a la git. There are common cases like cross-merge where the simple three-way merge loses its cookies but the recursive version works.

      On the other hand, as you say, attempts to produce an algorithm strictly better than recursive 3-way have so far failed. Codeville is dead, darcs appears terminally stalled.

      If I had time to pay attention to this problem it is possible that I could push the state of the art here – I have all the right background. But I don’t figure my odds to be better than David Roundy’s, so I’ve left it to him and Bram Cohen. Most likely, if there’s going to be real progress, it’ll come from some brilliant kid who’ll spot something simple the three of us missed.

  51. > FWAP!!

    > The S/390 architecture would be uniquely bad for a smartphone, not so much because of the CPU but because of the I/O. The S/390 channel architecture is a good choice for mountainous batch processing, but all it would do is really, really get in the way of a GUI.

    It’s sometimes difficult to remember now, but before the IBM PC left the field of microcomputer systems design a smouldering ruin, the very best micros — such as the Macintosh and, most notably, the Amiga — were designed with considerable smarts in the I/O hardware, enabling it to carry out tasks independent of the CPU, communicating with the main processor via DMA and interrupts, much like the channel controllers on an IBM mainframe. And the Amiga not only had a GUI, it was a very advanced one; and for years a 7-MHz Amiga was considered more responsive than even a 25-MHz PC. Some Amigans were loath to let go of their machines up until PCs reached about the 300-MHz range, for it was then that PCs had finally started to catch up to their old 25- and 50-MHz 680x0 boxes in terms of responsiveness.

    So a channel architecture in a smartphone could yield a very impressive GUI indeed. Is there a latency-vs.-throughput tradeoff or something in the S/3x0 architecture that would make it impractical for this task (even if, say, the CPU and channel controllers were integrated into a SoC)?

  52. >Is there something sort of a mathematical or CS theory of merging?

    > There’s a patch-composition theory associated with the darcs VCS, built around some ideas from finite-group and monoid theory that I recognize – I used to do that kind of math.

    And physics: the idea of commuting and non-commuting patches, like operators (e.g. “creation” and “annihilation” operators) in quantum mechanics, I think.

    > Unfortunately, though a rigorous development may exist in the head of David Roundy (author of darcs), very little of it has been written down; what’s on the web amounts to a tease with the interesting parts missing.

    Also, while theory might have some more or less rigorous mathematical basis, the practice with Darcs theory of patches is, from what I have heard, exponential time complexity.

    Nb. as far as I know the recursive merge strategy does not have (as opposed to straight 3-way diff and 3-way merge) any mathematical derivation – it is pure heuristics, but heuristics that work (on real-life code).

    1. >the practice with Darcs theory of patches is, from what I have heard, exponential time complexity.

      Yes. It appears to me that the main reason darcs is stalled is that Roundy and whoever is collaborating with him haven’t found any way to avoid this. It’s a shame, because the patch theory is rather elegant. I could see where it was going the moment Roundy introduced commutators.

  53. Jeff Read: The wheel of reincarnation is a remarkable force. Before concluding that smart I/O co-processors are appropriate these days, I’d like to see a software system that was actually designed from the ground up for the soft-realtime requirements of typical user interaction – like the Amiga OS itself and some other things after that (BeOS, anyone?). Standard Linux setups are terrible for soft-realtime/low-latency scenarios, and Android is even worse.

  54. @guest:
    >Standard Linux setups are terrible for soft-realtime/low-latency scenarios, and Android is even worse.

    I’ve certainly noticed issues with Android, but I’ve never noticed Linux setups with the traditional X11/GNU userland to have responsiveness problems (I’ve primarily used Ubuntu since 8.10).

  55. > I’ve certainly noticed issues with Android, but I’ve never noticed Linux setups with the traditional X11/GNU userland to have responsiveness problems (I’ve primarily used Ubuntu since 8.10).

    A Linux box under load will show noticeable UI lag and judder. Not so much an Amiga, BeOS or Haiku box. This is because those systems are designed from the kernel through the user libraries on up to respond immediately to user input events.

    Part of the problem is X11. The architecture of X11 is a bit silly — treating UI as yet another network service. Especially in light of the fact that it isn’t really network transparent since modern toolkits depend on the X server’s fast DRI transfers. So some improvement should come when the switch to Wayland occurs.

    However, there is a deeper issue: with Macintosh, Amiga, all these other great systems, the hardware and software are designed to function as a smooth cohesive unit. Even when running on PC hardware, in order to provide a decent UI your software needs to be *designed*, from the kernel all the way up the stack to the end-user libraries and apps, to make the user a priority. Linux is not *designed* with the *goal* of creating a smooth UX; a typical GNU-userland Linux system is composed of independently developed parts that are not designed with a single cohesive goal. (If they are designed for anything, it’s developer expediency, not creating joy in the heart of the end user.) This gives Linux certain advantages, but it does not make the system a joy to use in the way that an Amiga or even a modern Mac are.

    Android has its own set of problems: its runtime is actively user-hostile. The Android runtime is, again, engineered for developer expediency, allowing third-string developers to port their existing Java skills. The GC- and VM-based runtime creates lots of inefficiencies that add up to a slow and janky UI experience. This was bad design on Google’s part and it means they are still losing the developer mindshare war to Apple.

    Rumor has it that Google has internally built an alternative C++-based runtime for Android, for the day when the hammer falls and they are finally brought to account for flagrantly violating Oracle’s copyright. If you ask me that is the sort of thing that should be released sooner rather than later.

  56. > Rumor has it that Google has internally built an alternative C++-based runtime for Android, for the day when the hammer falls and they are finally brought to account for flagrantly violating Oracle’s copyright. If you ask me that is the sort of thing that should be released sooner rather than later.

    Since Android 5.0 “Lollipop”, instead of Dalvik Java virtual machine there is Android Runtime (ART) runtime environment, which compiles entire applications into native machine code upon their installation.

    And APIs are not copyrightable (the judges are being stupid).

  57. Jakub, no matter what you may think, it’s now settled law in the US that APIs are copyrightable. It is not settled yet whether reimplementing an API is fair use, however.

  58. > Jakub, no matter what you may think, it’s now settled law in the US that APIs are copyrightable.

    Not quite; the Supreme Court denied review, and the opinion was issued by CAFC, which only had jurisdiction on this case because of weird case history. I would be very surprised to see a similar ruling out of the Ninth Circus.

  59. > Why not fix diff to use smarter options by default? Should that be a simple patch?

    > It would be, but the cost might be breaking a lot of scripts that are embedded ghod knows where. No maintainer has been brave enough to risk that.

    For what it’s worth, Busybox and Toybox both implement unified-mode-only versions of diff.

    It actually took a while before they fixed their own “bloat-o-meter” scripts, which will show the effects of changes on code size by diff’ing the output of nm run on two versions of the binary…

  60. @Christopher Smith:
    >Not quite; the Supreme Court denied review, and the opinion was issued by CAFC, which only had jurisdiction on this case because of weird case history. I would be very surprised to see a similar ruling out of the Ninth Circus.

    Plus, even if the Supreme Court had ruled on the issue, to treat SCOTUS decisions, or anything other than the text of the law itself, as “settled law” is to rush headlong down our current path towards Constitutional disintegration. SCOTUS sets federal policy on how the law is to be interpreted; it does not define the law. It *cannot*, if democratic society is to survive.

  61. Jon Brase,

    That would be true in a civil-law jurisdiction, where law is determined primarily by statute.

    The USA is a common-law jurisdiction where statute, tradition, and judicial precedent all make up the law. So a Supreme Court ruling IS settled law in the USA (though they can be overturned by an act of Congress).

  62. >Linux is not *designed* with the *goal* of creating a smooth UX; a typical
    > GNU-userland Linux system is composed of independently developed
    > parts that are not designed with a single cohesive goal. (If they are designed
    > for anything, it’s developer expediency, not creating joy in the heart of the end
    > user.) This gives Linux certain advantages, but it does not make the system
    > a joy to use in the way that an Amiga or even a modern Mac are.

    The Macintosh, the Amiga, and Windows (at least initially) were designed from the ground up to sit on your desk and give you warm fuzzies.

    Linux was designed (at least initially) as a research and learning tool. Its evolution up to today has involved the tension between running on a billion architectures as opposed to being perfected for one, and between server and soft-realtime requirements on one hand and desktop use on the other.

    99% of the interaction I have with Linux systems is over SSH at the CLI. They are servers (600 in prod, ~125 in dev). Half the time it’s not even “interaction”, I’m using something like pdsh to do work.

    I’m not going to say I’ve given up on Linux on the Desktop, but as long as Mac hardware is within ~10% of the cost of comparable Dell/IBM/HP stuff, and as long as it “just works”, then I really don’t have any incentive to go back.

    And yeah, I realize that I’m probably not current on the Linux desktop, but every time I do wind up installing a UI on a linux host the look and feel is…not getting better.

  63. Imho the Linux desktop has been at its best when it was weird, ugly, and consisted of barely more than a window manager with a terminal window so I could launch my programs, and maybe a dock or taskbar. The attempts to imitate Mac or Windows have been bad, both from a Unix power user standpoint and as imitations of Mac or Windows.

    I know I’m weird. But back in the day Linux was weird too and that’s why I liked it. It was also rock solid once set up and that’s also why I liked it. The versions of Linux which attempt to sit at the cool kids’ table are inimical to my well being; they introduce complexities and bugs that I don’t want to bother with. Apple stuff is the same way; it “just works” until it doesn’t, or until HQ in Cupertino decides you should have upgraded by now and feels free to break something you were depending on. (Happened to me once. Apple gets no more of my money as a result.)

  64. Well, the Amiga interface was never all that ‘cool’ either, but this did not detract from its simple and smooth UX, even for new users. You get a very similar feeling today from things like GNUstep, or even the Plan 9 user interface.

    And in fact the comparatively recent influence of Plan 9 on Linux is now helping popularize features that were already known from the Amiga, like virtual union mounts (we used to call them ‘assigns’ back then) and auto-mounting removable media based on its label – the one feature that made it even remotely bearable to use a single floppy disk drive as the only sort of storage.

  65. @William:
    >And yeah, I realize that I’m probably not current on the Linux desktop, but every time I do wind up installing a UI on a linux host the look and feel is…not getting better.

    For the most part, I would say that the look and feel of a properly configured Linux UI has improved moderately since I started using Linux in 2009. The out-of-the-box defaults on most distros are getting frighteningly less sane over time, but, while it takes more and more custom configuration to do so, I am still able to get almost exactly the same environment I had back on 8.10, with the following exceptions:

    1) MATE menu. This is the component that makes me say that things have improved moderately. A menu with a search box is a good thing, ’nuff said. (And if you really don’t like it, the classic GNOME menu is still in MATE).

    2) The migration of a lot of application software to GTK+ 3. While this doesn’t affect the system UI directly, it interferes with the integration of those applications with my desktop theme.

    3) Many of the Linux installs I’ve done recently have been on ancient hardware that won’t support MATE. On the other hand, it won’t support the newfangled crap like GNOME 3 or Unity either.

    @Jeff:
    One of my pet peeves with Windows and modern Linux GUIs that I’ll give Apple credit for not falling into is the habit of yanking familiar interfaces out from under users. While the Mac OS GUI is crap, and has been behind the times since task bars were invented, it is at least *consistent* crap: from what I can see from looking at screenshots across its history, it changed very little in the pre-OS X period, underwent some significant changes in the transition to OS X (but relatively minor compared to other such transitions in other parts of the industry), and has since changed in little but eye-candy.

  66. > A menu with a search box is a good thing, ’nuff said.

    I remember how much crap Vista got for it, IMO purely because it was the new thing Microsoft was doing (there was a lot not to like about Vista, but the new start menu was not one of them). Now I just wait for the day everyone else catches up to the ribbon.

    Of course, Linux distributions have always been a little better about keeping the menu well-organized than Windows, because of the fact that every program you install in Windows thinks it’s important enough to deserve its own top-level group.

  67. > Now I just wait for the day everyone else catches up to the ribbon.

    The ribbon interface has its advantages, but also disadvantages.

  68. > Now I just wait for the day everyone else catches up to the ribbon.

    The ribbon is patented tech, so no one else can use it without a license.

  69. > The ribbon is patented tech, so no one else can use it without a license.

    Errr… WTF? How can a UI be patented (and not copyrighted)? And was it not struck down with the Apple-vs-Microsoft trash icon fight?

    > Errr… WTF? How can a UI be patented (and not copyrighted)? And was it not struck down with the Apple-vs-Microsoft trash icon fight?

    U.S. Pat. No #8,255,828. You can patent a ham sandwich in this country. Copyright protection is too narrow, and both Apple and Microsoft wish to see their UI innovations protected to retain their competitive advantage, so it’s off to the PTO they go.

    Although… Oracle v. Google has reopened the question of copyrightability of the structure, sequence, and organization of computer commands. The Tetris Company LLC asserts copyright over all falling-tetromino games, and has won cases in court over this issue, gotten imports stopped at customs, and gotten Tetris clones pulled from the Apple and Google Play stores. So it’s possible to give developers a hard time with just a copyright in hand, but a patent is pretty much a slam dunk.

    > Software developers interpret patents as damage and innovate around them.

    To do so is often to court death by a thousand lawsuits. Software patents can be very broad; virtually the entire concept of online video better than MPEG-1 is patented. Every time someone comes out with a new ostensibly open, unencumbered codec MPEG-LA is all like “oops, turns out that’s covered too! See you in court! *trollface*”

    Also, what of your beloved Apple without broad patents (including the ability to patent rectangles)? Or is this a sort of Heinleinian “radiation is good for you” type argument?

  71. @Jeff and others
    “You can patent a ham sandwich in this country. Copyright protection is too narrow, and both Apple and Microsoft wish to see their UI innovations protected to retain their competitive advantage, so it’s off to the PTO they go.”

    This has become a global problem. Search for “Freedom to innovate”:

    The Freedom to Innovate: A Privilege or a Right?
    http://www.plantcell.org/content/19/5/1433.full

    I see this as a US-led attempt to tax the rest of the world. Any product created anywhere is to be subjected to a USA-based patent tax.

  72. OTOH US patent law has led to a windfall for the German-based Fraunhofer IIS, whom many people have heard of first as the holder of the MP3 patents.

  73. > Not quite; the Supreme Court denied review, and the opinion was issued by CAFC, which only had jurisdiction on this case because of weird case history. I would be very surprised to see a similar ruling out of the Ninth Circus.

    The CAFC has binding precedent nationwide. The Ninth Circuit is not likely to rule differently.

    Android as you know it is dead in the water.

  74. As someone accustomed to GitHub and having just used git-format-patch/git-send-email for the first time to send you some NTPsec patches, I’ve gotta say, yuck. I can’t believe this is your preferred workflow. If the merge bubbles that result from pull requests offend you so much, why not just pull to a private branch, rebase against master, then push? This would still be much lower-friction than using git am.

    Anyway, I like merge bubbles, because they preserve information about what repo state the contributor was working with when he wrote (and hopefully tested) his patch. This is diagnostically useful if hidden merge conflicts (e.g. one patch adding a call to a function that another patch renames) later crop up.

    1. >If the merge bubbles that result from pull requests offend you so much, why not just pull to a private branch, rebase against master, then push?

      Because it’s a PITA requiring more steps by hand.

      My workflow via mailbox is beautifully simple; I read the patch in my mailer (which I’m comfortable with) and pipe it to “git am” if I like it. No attempt I have yet seen to wrap a web interface around pull requests satisfies me. Perhaps this will change in the future.

  75. > No attempt I have yet seen to wrap a web interface around pull requests satisfies me.

    What do you think are the deficiencies of the GitHub interface?

    1. >What do you think are the deficiencies of the GitHub interface?

      A deal-killer is that it persists in giving me merge bubbles even when the patches on a source branch properly commute with the patches on a target branch. I hate that.

      I could tolerate it, maybe, if it gave me an option to merge-rebase after I’d reviewed the patch to be sure it really commutes. But last I checked there was no such option. GitLab claims to have such a thing but when I’ve tried it it hasn’t worked – I’ve gotten bubbles anyway.

    1. >Useless history noise.

      Exactly. The presence of a merge bubble carries useful information when, and only when, one of the patch pairs fails to commute.

      Also, it complicates bisection searches.

  76. “When in doubt, send a patch series of simple changes rather than a huge rollup patch that does multiple things” – is there an easy way to make one of these from a series of commits in git? I have fixed a number of problems in showkey – including a total rewrite of the visualize function.

  77. @Random832: the tool you are looking for is git-format-patch(1). Beware that you can’t rely on --thread; your email client will probably change the message-id, so send the cover-letter, find out its message-id, and format the patches without --cover-letter but with --in-reply-to=<message-id you found out in the previous step>.
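
    Concretely, that looks something like this (the message-id shown is a placeholder for whatever your mailer assigned to the cover letter):

        git format-patch --cover-letter -o cover/ origin/master
        git send-email cover/0000-cover-letter.patch
        # look up the cover letter's Message-ID, then:
        git format-patch -o outgoing/ --in-reply-to='<id@example.com>' origin/master
        git send-email outgoing/*.patch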

  78. Because Linus doesn’t like attachments, and he wrote it.

    (You can take the mbox format output files and attach them all to a single message. But don’t do that on the LKML or you’ll get eviscerated ;)

  79. Is there a tool that can just submit these directly to my mail provider’s SMTP server? It occurs to me that someone should have by now written a sendmail drop-in that can do this instead of requiring a local MTA.

    I’m mildly surprised that OS X’s built-in sendmail is postfix rather than something that uses your Mail.app settings. Which postfix of course doesn’t work, because it isn’t configured and I have no idea how to configure it without dpkg-reconfigure.
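
    (It turns out git send-email itself will talk directly to an SMTP server, no local MTA required; a hypothetical ~/.gitconfig stanza:

        [sendemail]
            smtpServer = smtp.example.com
            smtpServerPort = 587
            smtpEncryption = tls
            smtpUser = you@example.com

    should be enough for most providers.)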

  80. > The hapless n00b I am now excoriating, when asked what revision his patch was against, replied “HEAD” and it was all I could do not to leap through the screen and throttle him.

    It occurs to me that this problem could be solved by writing a script to figure out what, if any, recent revisions the patch applies cleanly against. Ideally it should find exactly one where all the context perfectly matches at the exact line numbers given in the patch file. Unfortunately, I don’t know what [if any] knobs patch(1) has to facilitate this. (I can’t even get it to reliably figure out what directory the target files are in. I know that’s what the -p option is for, but I can’t get it right.)

    If you like bubbles, you could even make it branch the candidate revision, apply the patch, and then merge to the current HEAD. (Since the merge would be the final operation, you could then manually resolve any merge conflicts) If you don’t, you could make it do all that and then rebase.
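
    A rough, untested sketch of such a script for git, probing the last 50 revisions in a scratch clone:

        #!/bin/sh
        # find-base.sh: report a recent revision the patch applies to cleanly.
        # Run from the top level of the repo; $1 is the patch file.
        patch=$(cd "$(dirname "$1")" && pwd)/$(basename "$1")
        tmp=$(mktemp -d)
        git clone -q . "$tmp"
        cd "$tmp"
        for rev in $(git rev-list -50 HEAD); do
            git checkout -q "$rev"
            if git apply --check "$patch" 2>/dev/null; then
                echo "patch applies cleanly at $rev"
                break
            fi
        done
        cd - >/dev/null && rm -rf "$tmp"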
