Managing compatibility issues in ubiquitous code

There’s a recent bug filed against giflib titled “giflib has too many unnecessary API changes”. For a service library as widely deployed as it is (basically, on everything with a screen and network access – computers, smartphones, game consoles, ATMs) this is a serious complaint. Even minor breaks in API compatibility imply a whole lot of code rebuilds. These are not just expensive (requiring programmer attention); they are also places for bugs to creep in.

But “Never change an API” isn’t a good answer either. In this case, the small break that apparently triggered this report was motivated by a problem with writing wrappers for giflib in C# and other languages with automatic memory management. The last round of major changes before this was required to handle GIF animation blocks correctly and make the library thread-safe. Time marches on; service libraries have to change, and APIs with them, even when change is expensive.

How does one properly reconcile these pressures? I use a small set of practice rules that I think are simple and effective, and that are well illustrated by the way I apply them to giflib. I’m writing about them in public because I think they generalize.

First rule: if backward-compatibility is a must, fork your library into API-stable versus unstable/evolving versions. This is why I ship both a 4.2.x giflib and a 5.x.x giflib. The 4.2.x version is backward-compatible to the year zero; because of this, application developers get a choice and the effective cost of API breakage in the 5.x.x series decreases a great deal.

There are costs to this maneuver. The main cost to you, the library developer, is that you will need to cross-port fixes from one line of development to the other. This is acceptable for giflib, which is pretty small; it gets more difficult for larger, more complex libraries.

The cost to the application developers using it is more serious. The stable version plain won’t get some fixes from the unstable version, exactly the ones that would require API changes. 4.2.x is never going to be thread-safe, and its extension-block handling is a bit flaky in edge cases. Also, it’s easy to drop a stitch and fail to cross-port fixes that could and should be applied.

In the case of giflib, these are not major problems. The 4.2.x code is very old, very stable, and has passed the test of time and wide deployment. Apparently there was never a lot of need for thread-safety in the past, and the extension-block handling was good enough; we know these things because the rate of reported defects over the life of the project has been ridiculously low – averaging, in fact, fewer than four per year over a quarter century.

Other libraries may incur different (higher) implied costs under this strategy. If your service code is necessarily evolving really fast, forking a stable version may not be practical because the cost of back-porting fixes is insupportable. Engineering is tradeoffs; the point of this essay is more to raise awareness of the tradeoffs than to argue that any one rule of practice is always right. Be aware of why you’re doing what you’re doing, and document it.

Second rule: Provide #defines bearing each level of the release number in your library header so that people can use compile-time conditionals in the C preprocessor to write code paths that will compile and just work with any version of the library. (There are equivalent tactics in other languages.)
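
Here is what exploiting this looks like on the client side – a minimal sketch, assuming the GIFLIB_MAJOR and GIFLIB_MINOR macros that the giflib 5.x header provides, and treating 5.1 as the version where the two-argument close profile appeared (an assumption to check against the release notes):

    #include <gif_lib.h>   /* giflib 5.x defines GIFLIB_MAJOR and GIFLIB_MINOR */

    /* Close a decode handle on any giflib version. */
    static int close_gif(GifFileType *gif)
    {
    #if defined(GIFLIB_MAJOR) && \
        (GIFLIB_MAJOR > 5 || (GIFLIB_MAJOR == 5 && GIFLIB_MINOR >= 1))
        int err = 0;
        return DGifCloseFile(gif, &err);   /* newer profile: error code out-parameter */
    #else
        return DGifCloseFile(gif);         /* older one-argument profile */
    #endif
    }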

There’s no downside to this. If you do it properly, application developers can choose to never lose back-compatibility with older versions of your library. Just as importantly, they can know they’ll never lose it. This is a confidence-builder.

Third rule: document, document, document. Every API change requires an explanation. Especially, do not ever leave your client-application developers in doubt about when an API or behavior change took place. They need to be able to conditionalize their code properly to track your changes (see the second rule), and they can only do that if they know exactly when in your release timeline each change occurred. This, too, is a confidence builder.

Fourth rule: Prefer noisy breakage to quiet breakage. The worst kind of API change is the kind that introduces an incompatible behavior change without advertising the fact. That way lies bugs, madness, and other developers rightly cursing your name.

Even so, this happens a lot because library maintainers mis-estimate tradeoffs. There’s a tendency to think that requiring users to recompile their applications (or re-link to a new major version of a shared library) is so irritating that it’s better to preserve the API by slipstreaming in changes in run-time behavior that you tell yourself will only be problematic or incompatible in rare edge cases. This belief is almost always wrong!

The bug report that motivated this apologia came in because the person who filed it thinks I shouldn’t have altered the argument profile of DGifClose() and EGifClose(). What he fails to understand is that I chose this path over some trickier alternatives because I wanted the API breakage to be noisy and obvious at compile time. This way, the client-application builds will break once, the fix will be easy, and the result will be right.
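
In header terms, the break looks roughly like this (the declarations below follow the DGifCloseFile spelling used in the actual header and quoted in the comments; the parameter name is illustrative):

    /* Old profile: */
    int DGifCloseFile(GifFileType *GifFile);

    /* New profile: a one-argument call no longer compiles, so the
       breakage surfaces loudly at build time rather than as a silent
       behavior change at run time. */
    int DGifCloseFile(GifFileType *GifFile, int *ErrorCode);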

To apply rule four in this way, it helps to have been careful about rules one through three, in order to lower the cost of the disruption. Thus, application developers using giflib have 4.2.x to fall back on if they really can’t live with my break-it-noisily practice.

You also want to put in effort to make sure the fix really is easy. Not just to save other developers work, though they’ll thank you for that; the real reason is that tricky fixes get misapplied and spawn bugs.

The bug reporter wants to know why I didn’t leave DGifClose() and EGifClose() as they were and introduce new entry points with the different profile. This is a fair question, and representative of a common argument for adding complexity to library APIs rather than breaking backward compatibility. It deserves an answer.

Here it is: code and API complexity are costs, too. They’re a kind of technical debt that creeps up on you, gradually. Each such kluge looks justified when you do it, until you turn around and discover you have an over-complex, unmaintainable, buggy mess on your hands. I take the long view, and prefer not to let this degeneration even get started in my code! This choice may transiently annoy people, but it’s going to lower their exposure to defects over the whole lifetime of the software.

Being able to take this pro-cleanliness position is an un-obvious but important benefit of open source. The people in my distribution chain may gripe about having to do rebuilds from source, but they can do it. When you’re gluing together opaque binary blobs, the cost of API breakage is severe and you get forced into tolerating practices that will escalate code bloat and long-term defect rates.

37 comments

  1. This week I got to experience what happens when a library vendor (who will remain nameless — for now) shipped a point release and then pushed us (hard) to upgrade to it. The product is a persistence library, with about 95% library and 5% server and management commands. The language is Java. The version number went from X.Y to X.Y+1 (yes, I’m being vague). In this point release, the vendor committed the following release sins (IMHO):

    0. They don’t ship documentation or release notes with the installer (you’re directed to their wiki for details).
    1. They removed (or made private) the constructors for several classes, requiring that you use factory methods instead.
    2. They removed declared exceptions from methods (which means that code that catches those exceptions now has to be modified to remove the catch).
    3. They changed classes to interfaces, made concrete classes abstract, and deprecated some classes. (see #2 above)
    4. They removed standalone commands and merged them into a command runner, with different argument names.
    e.g.: xxcleanup -local becomes xxx CleanupXX -local -configFile

    So this simple “all you have to do is run xxupgrade to your xx database” took me 2 days of work to get a working application on a test system, and will take another 2+ days to push the changes through to the other developers and the demo and production environments.

    As an extra side bonus, I got to re-learn why I hate Subversion’s lack of sophistication on merges. (Hint: trying to replace a copy of a file with a reference to an external file is impossible to do in a single commit. You have to remove the file in one commit, and then execute a second commit to add the svn:externals property change.) This meant that once I had established all of the changes I needed to commit, I still got several interim (broken) builds on my way to a final good build.

  2. Herein lies a major difference between common free Open Source libraries and commercial Enterprise software (I’m not certain exactly which distinction matters the most, though I suspect it’s the free/commercial distinction). I work for a company which develops software for the Enterprise Storage market. Many of our decisions are based in part on what would be backwards-compatible with previous versions.

    I had a discussion the other day about some upcoming work that’s going to be done. Given that the work was going to be substantial, we’d be able to perform a drastic clean-up of our user interface for this component. One of the biggest arguments against this change is that “customers have lots of scripts they use to manage” our products. Making the UI much more intuitive and removing many of the ways to stumble into errors or misconfiguration has to fight with the fact that our customers want things to be *better* without actually *changing*.

    This matters because of the nature of competition in the market. Every single thing that a customer needs to change in order to perform an upgrade or as a part of buying new equipment is a reason for the customer to consider different vendors. After all, if you have to go through the whole hassle, why not see if you can save a few percent on the cost while you’re at it. Thus bad technical decisions may be made because they are good business decisions.

  3. The bug reporter wants to know why I didn’t leave DGifClose() and EGifClose() as they were and introduce new entry points with the different profile. This is a fair question, and representative of a common argument for adding complexity to library APIs rather than breaking backward compatibility.

    I can see a good compromise position here. Use new names for the entry points with new behavior, so that nobody who calls the old names will get a surprise; but publish definitions of the old functions that call the new ones. Then don’t keep these “adapters” as a permanent part of the library. Let it be the user’s problem to maintain them in the future if he keeps using them.
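
    A minimal sketch of that compromise in C, with an invented name for the new entry point (DGifCloseFile2 is hypothetical; the real library ships no such symbol):

    /* New entry point carries the extra error-return argument. */
    int DGifCloseFile2(GifFileType *GifFile, int *ErrorCode);

    /* Old name kept only as a transitional shim; anyone still calling
       it can copy this definition into their own tree once the library
       drops it. Old callers simply forgo the error detail. */
    int DGifCloseFile(GifFileType *GifFile)
    {
        return DGifCloseFile2(GifFile, NULL);
    }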

    And I second Craig’s complaint #0. I never forgave DEC for replacing their “Big Gray Wall” of manuals with a CD that could only be read after their system was up and running. Maybe insisting on paper is too much, but there should at least be a complete set of docs shipped as one file in a common format (and I mean one as widely used as .txt or .pdf, not a tarball that expands itself into hundreds of files before it can be used). I hate having to look things up on a wiki in the middle of a project; too often it’s a slow wild goose chase.

  4. Nice thought, but I think this would only delay the problem until you release the version with the adapters removed.

  5. … BTW this really was an interesting post to read, Eric, if only you’d focus more on such quality posts instead of promoting junk science.

  6. Being able to take this pro-cleanliness position is an un-obvious but important benefit of open source. When you’re gluing together opaque binary blobs, the cost of API breakage is severe and you get forced into tolerating practices that will escalate code bloat and long-term defect rates.

    Pure BS. Let’s suppose I’m a vendor producing a binary C API, and one day I want to change the interface. If I have any brains whatsoever then I just bump the version on the API and explode if the client requests the old version. The client then releases a fix that requests the new version of the API. Of course this is stupid, but this is *exactly* how it works in open source land, and it’s just as costly. Your source doesn’t magically recompile correctly against new API versions. That requires labor, as does redistributing the new client code. It’s just that the labor is provided by various unpaid neckbeards. And the end-user has zero need to see the “open source” for this arrangement to work.

    1. >If I have any brains whatsoever then I just bump the version on the API and explode if the client requests the old version.

      Except because of pressure from customers this can almost never happen. See Craig Trader’s “but they have so many shellscripts” comment earlier in the thread.

  7. > Pure BS

    exactly – your comment, not Eric’s quote. In the many years I have spent with ${UNIX} systems, I have seen many libraries installed in more than one version at the same time, with binaries using whichever one they needed; I have re-compiled /usr/local binaries which would not run after system updates; and I have come across DOS or Windows applications which installed their own versions of system DLLs, overwriting the original ones so they would run properly (while other applications failed afterwards).

  8. @Manfred Maybe you should read and understand the comment before responding with random factoids that have nothing to do with it. You unthinking animal.

  9. Except because of pressure from customers this can almost never happen. See Craig Trader’s “but they have so many shellscripts” comment earlier in the thread.

    Jesus, try *reading* the comment. I mean, you’re practically repeating a fragment of it while giving the appearance of disagreeing! This is exactly the point! It’s not any less “costly” in the open source world to make these sorts of changes – it’s that different people are paying! Which is why nobody does it in the “commercial” world – because the person who makes the mistake pays for it! The fact that you can so easily con your API clients into doing free work for you, and into passing on more work to all the distributors, may well be a psychological “benefit” of having them deal with open source bullshit all day. Somehow I don’t think that’s the message you intended to send with this post.

    1. >Jesus, try *reading* the comment. I mean, you’re practically repeating a fragment of it while giving the appearance of disagreeing!

      Actually, I misunderstood what you were driving at. Mostly my error, I was reading in a hurry on a mobile phone, but might have been avoided if you had done more paragraphing and less ranting.

      > It’s not any less “costly” in the open source world to make these sorts of changes – it’s that different people are paying!

      Yes. But the overall cost of disruption in the open-source way of doing things is, I think, much lower. In our way of doing things the adjustments get made sooner, more frequently, at smaller granularity (giflib’s history represents this rather nicely). In this way we avoid huge pileups of technical debt and big-bang changeovers that are traumatic for everyone.

  10. Yes. But the overall cost of disruption in the open-source way of doing things is, I think, much lower. In our way of doing things the adjustments get made sooner, more frequently, at smaller granularity (giflib’s history represents this rather nicely). In this way we avoid huge pileups of technical debt and big-bang changeovers that are traumatic for everyone.

    No doubt there’s some truth in this (and it indicates to me that you should read Antifragile and The Black Swan). The open source *ecosystem* is more robust to disastrous changes, because people tolerate constant but low levels of suffering. I prefer the route of having an emulation layer for old versions of the API and regular, major iterations.

    1. >(and it indicates to me that you should read Antifragile and The Black Swan)

      I’ve read both. I thought Antifragile was brilliant, enough so to make The Black Swan seem like a bit of a letdown afterwards. It’s pretty clear from various near-quotes in the former that Taleb has been reading my stuff, too, and we have common sources – I think Taleb owes more to Hayek’s analysis of social knowledge than he admits.

      >I prefer the route of having an emulation layer for old versions of the API and regular, major iterations.

      It’s a nice theory. Sometimes you can’t make it work. I could have got around the most recent break that way, but not the thread-safety changes. Those are why I forked the library.

  11. I note several uses in this post of the word “user”. In this context it means something very different from what it means almost everywhere else.

    This usage could be confusing to people who haven’t grasped the context, but there doesn’t seem to be a better term.

    1. >This usage could be confusing to people who haven’t grasped the context, but there doesn’t seem to be a better term.

      After considering the matter, I have reworded the OP to avoid the term “user”.

  12. This usage could be confusing to people who haven’t grasped the context, but there doesn’t seem to be a better term.

    I think the term client might be suitable.

  13. @Roger Phillips:
    The relation and relevance of facts not being visible to the braindead doesn’t mean there is none.
    Maybe you should read the comment and try to understand the implications of the facts mentioned therein before responding with another rant.

  14. The relation and relevance of facts not being visible to the braindead doesn’t mean there is none.
    Maybe you should read the comment and try to understand the implications of the facts mentioned therein before responding with another rant.

    hahaha

  15. > It’s a nice theory. Sometimes you can’t make it work. I could have got around the most recent break that way, but not the thread-safety changes. Those are why I forked the library.

    How so? It’s always possible to make a wrapper that is less thread-safe than the underlying implementation. localtime implemented in terms of localtime_r, and so forth.

    1. >How so? It’s always possible to make a wrapper that is less thread-safe than the underlying implementation. localtime implemented in terms of localtime_r, and so forth.

      They were tangled up with other stuff. IIRC, the thread-safety changes were not cleanly separable from what I had to do to fix the extension-block handling.
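
      For reference, the wrapper pattern proposed above looks like this (a minimal sketch; the function is named my_localtime only to avoid colliding with libc’s localtime):

      #include <time.h>

      /* A deliberately non-thread-safe interface built on the reentrant
         localtime_r(): the static buffer is exactly the shared state
         that makes the old-style API unsafe across threads. */
      struct tm *my_localtime(const time_t *timep)
      {
          static struct tm result;            /* shared across all callers */
          return localtime_r(timep, &result);
      }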

  16. The conclusion I draw from this is that the only responsible thing I, as a software developer doing work for hire, can do is to use the older version of the API. Because I won’t be paying those change costs incrementally. I will have added a feature to something that needed gif support, made it work, and then nobody is going to touch it again, unless there’s a darn good reason to. Say, three years from now someone discovers the gifbleed bug, which really, really needs to be patched. If I’m on the later branch, which I’m sure you’d prefer people would be on, then my costs to patch will be much greater.

  17. The conclusion I draw from this is that the only responsible thing I, as a software developer doing work for hire, can do is to use the older version of the API. Because I won’t be paying those change costs incrementally.

    Not necessarily. It depends on the client. Any non-trivial software project will have ongoing maintenance. Non-trivial development projects will often take on the order of months or years to complete. And the informed client will have a period of time that they wish to have the software supported with patches, upgrades and other ongoing support work. Maybe it’s 3 years or 5 years or whatever. You have to look at the project’s development and support time windows and make an informed choice for each individual library you use.

    In reality, you often don’t get to make the call what version of the library you use anyway; your client will say “I want it to run on AWS Linux” or “I want it to run on the CentOS version I’ve standardized on,” in which case you’ll probably use the version of giflib that comes standard with the distribution.

  18. As a relatively frequent consumer of other people’s APIs (even if not in the Open Source world) I heartily agree.

    Please break in an obvious and easily noticed (and ideally easily fixed) way, rather than Probably Still Working But Maybe Not.

    That way I can notice I have work to do, and do it.

  19. I would go so far as to say that backwards-incompatible API changes require whole new libraries with new sonames, header files, etc. just to prevent ANY confusion.

    If you have a language which supports namespaces, modules or packages with exportable interfaces, etc. you may wish to use a completely different interface with a different identifier to expose your API. For example, when Java and .NET replace their GUI libraries with some new hotness, they place the GUI control classes in a new namespace (e.g., javax.swing instead of java.awt).

    This is difficult to do in plain C (and another reason why imho C sucks as an applications language). But one of the reasons why DirectX won developer mindshare is that it’s COM-based, and each iteration of DirectX exposes current GPU functionality through a new COM interface. Older functionality may be supported through older interfaces and compatibility layers in the video driver.

  20. > This is difficult to do in plain C

    It is? I thought it was trivial to define a new function name when backward compatibility can’t be achieved with the new functionality, leaving the old name to go to that compatibility layer.

    Or if your API defines functions that can take a variable number of arguments, make the first argument be the protocol version (to keep it simple, this can be an integer, which is increased by 1 every time you break backward compatibility) and choose the correct code path accordingly.

    Thus foo(2, arg1, arg2, …) can have a completely different syntax from foo(1, arg1, arg2, …)
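
    A sketch of that version-first dispatch (foo and its argument lists are invented for illustration; a later comment points out the type-checking cost of this approach):

    #include <stdarg.h>
    #include <stdio.h>

    /* The first argument selects the protocol version; the remaining
       arguments are interpreted according to that version. */
    int foo(int version, ...)
    {
        va_list ap;
        int rc = -1;

        va_start(ap, version);
        if (version == 1) {                 /* v1: one string argument */
            const char *name = va_arg(ap, const char *);
            rc = printf("v1: %s\n", name);
        } else if (version == 2) {          /* v2: string plus flags */
            const char *name = va_arg(ap, const char *);
            int flags = va_arg(ap, int);
            rc = printf("v2: %s (flags=%d)\n", name, flags);
        }
        va_end(ap);
        return rc;
    }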

  21. @the monster

    I’m far from an expert, but your method seems aesthetically displeasing. Why not just have foo(x) and bar(x), or maybe foo1(x) and foo2(x)?

  22. The Monster,

    Those are workarounds, not solutions. Sure you can introduce new APIs with different names that replace old-API functionality, but that just leaves old-API cruft lying around in the global namespace where it can neither be removed nor segregated. That’s cruft that developers shouldn’t have to deal with, but do. They have to learn the new name and signature of the new API, and it’ll be all too easy to invoke the old API by mistake, possibly without even raising a compiler error. Namespaces help keep a clean boundary between the old and the new.

    Microsoft learned this lesson with the -Ex versions of its Win32 API functions, e.g., CreateWindow becoming CreateWindowEx with more bells and whistles.

    As to your other suggestion, it’s much cleaner to say:

    using Namespace1;

    foo(arg1, arg2, …);

    and

    using Namespace2;

    foo(arg1, arg2, …);

    than to have to pass the version of foo to use as a parameter. It’s also much cleaner for library authors to implement.

  23. @Lambert

    Yes, that’s another way to do it.

    @Jeff Read
    A compiler can’t tell you anything about a function that takes variable arguments, but a preprocessor might be able to.

    There’s nothing the slightest bit “cleaner” about namespaces. In your example, the line
    foo(arg1, arg2, …);
    appears twice, but invokes a different function each time. I can’t see how that makes any sense. I’d much rather have the -ex approach or the -2 Lambert suggests.

    fooEx() and foo2() are clearly not the same function as foo().

  24. I know that C++ isn’t the most favoured language here, but for the example given in the bug report isn’t function overloading or default arguments the natural solution? Completely source compatible apart from some rare cases, and no need to introduce any additional names. So the code there becomes:

    // .h file, original
    extern int DGifCloseFile(GifFileType *GifFile);
    // .h file, becomes
    extern int DGifCloseFile(GifFileType *GifFile, int *ErrorReturn = NULL);

    // implementation
    int DGifCloseFile(GifFileType *GifFile, int *ErrorReturn)
    {
        // … original code here
        if (ErrorReturn != NULL)
            *ErrorReturn = status;
        return (…);
    }

  25. Well, with some contortions you can have this in C, either via variadic functions or variadic macros.
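
    The variadic-macro contortion is usually an argument-count dispatch like the C99 sketch below (names invented). Note that, unlike a true varargs function, each call expands to a fixed-signature function, so argument type checking survives:

    int foo1(int a);
    int foo2(int a, int b);

    /* Select foo1 or foo2 by the number of arguments passed; the
       trailing 0 keeps the ... nonempty for strict C99 conformance. */
    #define FOO_SELECT(_1, _2, NAME, ...) NAME
    #define foo(...) FOO_SELECT(__VA_ARGS__, foo2, foo1, 0)(__VA_ARGS__)

    /* foo(x) expands to foo1(x); foo(x, y) expands to foo2(x, y). */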

  26. Yes, possible in C with varargs – at the cost of losing all compile-time and run-time argument type checking. There lies danger…

  27. I never got the “I broke the API because I want clean code” argument.

    Why are your needs as an author more important than my needs as a user/programmer that uses the API? Put shims to make it compatible.

    It’s this kind of reasoning that keeps Linux around 1%. “Break this kernel ABI to have clean code, break this important app (VMware) because we don’t care”: http://www.osnews.com/comments/26983

    Windows is backward compatible (apps and drivers) all the way back to Vista – that’s 7 years. And XP apps work too.

    There is a business reason to maintain back compat. All OSes developed by corporations that have a financial incentive to keep the user happy do it (Android, Windows for example). The Linux kernel APIs are also stable https://lkml.org/lkml/2012/12/23/75 because they are useful to Linux servers, and hence Red Hat and IBM will go mad if compat gets broken, and Linus knows this.

    Only X.org and the Linux audio stack are unstable messes, because they are useless to Linux servers (if you run X.org in a big server cluster, you don’t know what you are doing) and so the volunteers maintaining them (mostly) have free rein to break stuff to save effort. Keeping the user happy is an afterthought for them, because they don’t make money from users, like Google and MS do.

  28. Ahh… the free market. Keep me happy as a user, and I will give you my money, either directly (Windows) or indirectly via app purchases and ad clicks (Android).

    Money: The incentive for devs to try to cater to my needs (such as back compat) as a user. Also, the incentive missing from the bazaar model.

  29. It’d be nice if more projects understood this.

    e.g. Apache Tomcat – https://issues.apache.org/bugzilla/show_bug.cgi?id=45015 is where someone thought making the quote rules tighter and giving an error rather than a warning, in a minor-minor bugfix release, would be the best possible idea, and never mind existing code bases, e.g. ours (Eclipse 3.3 did not detect the quote error as an error, so this coding idiom was all through our stuff). AAAAAAAAA At no point did anyone responsible acknowledge that this might be an issue. “Oh, you can just tweak your config if it bites you.” Yeah, thanks.

  30. David Gerard: The problem is, there’s a slippery slope toward never fixing bugs at all, only documenting them. (“Your stock price API returns GOOG stock too low by $100, consistently.” “Oh, we know, but our existing customers have already written code to work around that bug. If we fixed it now, they’d all break.”) However, without lock-in, people will eventually walk away from such clumsy and incomprehensible APIs.

  31. Josh – yeah, it’s a tradeoff. It’s changed, and documented, between Tomcat 6 and 7, and that’s entirely fair enough – it is actually wrong. But a minor-minor “bugfix” release is not the place to change API; I now have to consider Apache minor-minor releases things I can’t trust in case they bite us, and that sucks.
