Build engines suck. Help GPSD select a new one.

One of the eternal mysteries of software is why build engines suck so badly.

Makefiles weren’t terrible as a first try, except for the bizarre decision to make tabs and spaces semantically different, so you can screw up a recipe invisibly.

GNU autotools is a massive, gnarly, hideous pile of cruft with far too many moving parts. Troubleshooting any autotools recipe of nontrivial complexity will make you want to ram your forehead repeatedly into a brick wall until the pain stops.

Scons, among the first of the new wave of build engines to arise when mature scripting languages made writing them easier, isn’t bad. Except for the part where the development team was unresponsive at the best of times and development has stagnated for years now.

Waf is also not bad, except for being somewhat cryptic to write and having documentation that would hardly be less comprehensible if it had been written in hieroglyphics.

cmake and meson, two currently popular engines, share one of the (many) fatal flaws of autotools. They don’t run recipes directly; they compile recipes to some simpler form to be run by a back-end builder (typically, but not always, such systems generate makefiles). Recipe errors thrown by the back end don’t necessarily have any direct relationship to an identifiable part of the recipe you give your front end, making troubleshooting unnecessarily painful and tedious.

Wikipedia has a huge list of build-automation systems. Most seem to be relatively new (last decade), and if any of them had distinguished itself enough to be a clear winning choice I’m pretty sure I’d already know it.

Scons is where I landed after getting fed up with GPSD’s autotools build. In retrospect it is still probably the best choice we could have made at the time (around 2006 IIRC), and I think I would be very happy with it if it had lived up to its early promise. It didn’t. Time to move on.

Waf is what NTPsec uses, and while it has served us well the abysmally bad state of the documentation and the relatively high complexity cost of starting a recipe from a blank sheet of paper make it a questionable choice going forward. It doesn’t help that the project owner, while brilliant, does not communicate with lesser mortals very well.

Is there anything in this space that doesn’t have awful flaws?

Here’s what we want:

– works on any reasonable basically-POSIX system for building C programs

– there is no requirement for native Windows builds, and perhaps no requirement for Windows at all.

– has a relatively compact expression of build rules, which more or less means declarative notation rather than writing code to do builds

– has a track record of being maintained, and enough usage by other projects that there is a reasonable expectation that the build system will continue to be a reasonable choice for at least 5 years, and ideally longer

– doesn’t impose difficult requirements beyond what gpsd does already (e.g., needing C++11 for a build tool would be bad)

– has a notion of feature tests, rather than ifdefing per operating system, and promotes a culture of doing it that way

– supports setting build prefix, and enabling/disabling options, etc.

– supports dealing with -L/-R flags as expected on a variety of systems

– supports running tests

– supports running the just-built code before installing so that it uses the built libs and does not pick up the already-in-system libs (necessary for tests)

– supports cross compiling

– supports placing the built objects in a specified location outside the source tree

– supports creating distfiles (sufficient if it can shell out to tar commands).

– supports testing the created distfile by unpacking it, doing an out-of-source build and then running tests

– is not a two-phase generator system like autotools, cmake, and meson.

What in Wikipedia’s huge list should we be looking at?

111 comments

  1. I may get rocks thrown at me, but Gradle (depends on having a JRE installed) is something you’ll want to look at if the JRE requirement can be stomached. Some of the requirements I can’t vouch for (-L/-R flags), but most of what you mention here looks doable from the surface.

    1. I don’t think requiring the JVM for a build tool should be a blocker. Lots of build tools are built on Python or Ruby and hence require those languages and associated runtimes to run the tools. Maybe 10 years ago this would be a deal breaker, but with a fully open source OpenJDK available on every Linux distro, I don’t think it’s more onerous than requiring a specific scripting language.

      I feel your pain in selecting C/C++ build tools. About 2 years ago I was curious about C++14 and went looking for a good build tool to use while developing some toy projects. For me the big requirement was good test and debug support. They all seemed to have major issues. I ended up going with Gradle because, as a Java/Groovy developer with 6+ years of Gradle experience, it seemed like the least bad option.

      Using Gradle for C/C++ was pretty good. They seem to have put a lot of thought into supporting lots of different compilers on lots of different platforms, which forces them to make the build tool abstract away and paper over the differences. Switching back and forth between clang and g++ was pretty trivial. Also, different build outputs (executable binary, static library, shared library) were possible in the same build. Running tests was also a breeze. Output locations are always configurable. It isn’t two phase, it runs the compilers itself. Building distributions is a piece of cake when doing jvm development (zip and tar.gz supported). Those are built into the Java build ecosystem, so making them work with C may require more work. But you can directly script that stuff in a platform-independent way with Groovy right in the build script itself.

      It takes advantage of the Gradle build infrastructure, so builds are pretty fast. If you were doing Java, Scala, Groovy, or Kotlin, it would be the clear winner. But since you are doing C, it may be worth a look, as the competition seems to be pretty bad.

      1. > It isn’t two phase, it runs the compilers itself. Building distributions is a piece of cake when doing jvm development (zip and tar.gz supported).
        I remember distinctly — back when I was dealing with gradle more often — being able to invoke shell scripts with something that looked like:

        commandLine 'command', 'arg1', 'arg2'

        So even if native Gradle stuff doesn’t support it, it satisfies the requirement of being able to shell out to commands…

        1. Sure, Groovy allows shelling out to commands. However, given you have the full power of the Groovy language and the JVM, there’s usually little to be gained by using the shell. But if you want it, it’s there.

  2. Bazel supports most of this and is sane, easy to use and scalable. The main places it falls down on your list are:
    1) not having an easy built in method of installation of built code (you’d need to make your own install command, which isn’t that hard but is not there out of the box)
    2) it needs a JVM to run the core server
    3) it can be quite opinionated about file naming schemes
    It brings the advantages though of being quick and consistent, with a very clear language based on Python for extending functionality and a flexible declarative way of specifying your build requirements. I’ve been impressed using it.

    1. Buck, Pants and Please also use the same Starlark syntax as Bazel. I have not used any of them. I think based on specs that Pants (Python) would be the forerunner in that subset. Please (Golang) might be a reasonable second, which leaves Buck and Bazel (Java, from Facebook and Google respectively) at third.

    2. Bazel, at least the version I used, didn’t support creating multiple versions of an output from a single source. For example, there wasn’t a way to build concurrently the x86, x86_64, and debug-x86_64 binaries.

    3. Bazel is the open source version of the tool I use as a Google employee who writes iPhone apps, some in Swift, some in Objective-C. Every app I work with builds with it, and it will run them, and run the tests. It is, of course, also used to build and run the tools, often python programs that link to native libraries. One Google team I was on formerly used Scons, but changed to the Bazel equivalent a few years ago.

    4. Seconding the recommendation of Bazel. Being a Google tool originally it will presumably be supported for a good few years into the future. Here’s an example package build file to give an idea:

      package(default_visibility = ["//visibility:public"])

      cc_library(
          name = "market",
          srcs = ["market.cc"],
          hdrs = ["market.h"],
          deps = [
              ":goods_utils",
              "//market/proto:goods_proto",
              "//market/proto:market_proto",
              "//util/arithmetic:microunits",
              "//util/headers:int_types",
          ],
      )

      cc_library(
          name = "goods_utils",
          srcs = ["goods_utils.cc"],
          hdrs = ["goods_utils.h"],
          deps = [
              "//market/proto:goods_proto",
              "//util/headers:int_types",
          ],
      )

      cc_test(
          name = "goods_utils_test",
          size = "small",
          srcs = ["goods_utils_test.cc"],
          deps = [
              ":goods_utils",
              "@gtest",
              "@gtest//:gtest_main",
          ],
      )

      cc_test(
          name = "market_test",
          size = "small",
          srcs = ["market_test.cc"],
          deps = [
              ":market",
              "@gtest",
              "@gtest//:gtest_main",
          ],
      )

      The dependencies start with ‘:’ for packages in the same subdirectory, ‘//’ for full paths from the workspace base, and ‘@’ for third-party libraries defined in the base WORKSPACE files.

  3. “perhaps no requirement for Windows at all”

    In that case cmake is overkill, so Makefile w/ autotools and friends. But if that requirement changes, then cmake walks on water

    1. >cmake walks on water

      No. Being generator-based is unacceptable. Been in that hell with autoconf, won’t go back.

      1. Honestly I’d rank cmake as significantly worse than autoconf/automake. It hypothetically supports version directives and tries to adjust its own behavior to remain backwards-compatible with old cmake definition files, but it too often fails at that task, or worse, the projects never bothered declaring what version of cmake was used and tested against and it fails without any good hints of how to fix it.

        At least autoconf/automake don’t depend on anything installed on the system once you’ve generated the configure file.

  4. I’m starting to see a parallel with init systems and systemd here.

    Everyone agrees that well established build systems may seem suitable for simple projects, but have grown over time into something which is held together with string, have accumulated platform-specific hacks which were first needed years ago and are probably not needed any longer, and are unmaintainable.

    A new build system comes along, which initially seems to be reasonably portable and easy to use. Projects start using it because it is suitable for them, and then others start doing so because lots of other projects are using it and it looks to be becoming the new standard.

    In the desire to accommodate even more platforms and use cases, it starts to grow hacks and features which were never part of its original plan. By now it has become so entrenched that the effort of switching would be greater than fixing or working around the problems, so projects just continue with it. Those for whom it thankfully works can’t see its disadvantages.

    My experience with this: CMake. Most of the projects that I work on use it, but it was obvious from the beginning that its major disadvantage was the disconnect and lack of transparency between the input rules and the generated Makefiles. Plus its complete lack of debugging or logging options where it is necessary to understand what happens in the generation process. And its professed platform-independence, yet often where its built-in feature detection fails the only fallback is platform-specific shell commands.

    1. Sure, that describes how this has tended to happen in the past, but that’s simply evidence of poor design and maintenance (or at least an unwillingness to refactor the system to simplify things).

      I see no intrinsic reason why you can’t have a build system in which easy, normal things are easy and the weird special cases you need to deal with bizarre systems or weird setups are only necessary to deal with those systems and stay out of the way of projects who don’t need them.

      1. >I see no intrinsic reason why you can’t have a build system in which easy, normal things are easy and the weird special cases you need to deal with bizarre systems or weird setups are only necessary to deal with those systems and stay out of the way of projects who don’t need them.

        And if scons hadn’t been effectively left to rot it would have been that system. Sigh…such a missed opportunity.

          1. >Fork & resurrect scons?

            I have briefly considered it. Thing is I’m now trying to leave C and its toolchain behind – all my new work is in Go, which doesn’t have this problem. That makes me less than ideal as a potential scons maintainer.

            1. Just curious, why do you say that scons has been left to rot? From a quick scan of the website and Github, there seem to have been pretty much daily commits happening since at least November last year; the current 3.1.2 release was in December 2019 and earlier releases came in August, July and March that year. I haven’t looked at the substance of the commits – that is, whether they are addressing real problems or whether they are just tweaking – but there is at least activity on the project.

              1. >Just curious, why do you say that scons has been left to rot?

                Some time back I shipped the scons devs a pretty major feature simplifying the handling of shared libraries. There was no action on it for months. What was actually merged was changed so far out of recognition that I couldn’t use it – I had to retain the rather nasty workaround I’d been trying to get rid of. My documentation patch was completely lost, leaving the new feature undocumented. Nobody answered my emails. The response was so slow, clumsy, and botched that I was forced to conclude there was effectively nobody home over there.

    1. >CMake is the conservative choice, your objection in principle seems somewhat arbitrary.

      No, autotools would be the really conservative choice. And wrong for the same reason.

      1. Probably only if you’re already using it with no issues…

        I’m not sure whether embedding Ninja (for example) into something like CMake to make it meet your requirement would bring any actual benefit.

        FWIW I’m not an expert but if I encounter an autotools build I expect to have to fix issues, CMake probably not. Something else that I have to get installed before proceeding becomes a barrier to diving in. Just my feeling.

        It will be interesting to see where you go.

  5. I wonder if this could be another one of those cases where you are eventually gonna end up writing another “category killer” replacement.

  6. Some languages (Java, Go) have a build-dependency system built into the compiler, so that running “go install” for example, will do one of the stereotypical tasks of Make.

    It strikes me that one could break a build system into two parts, a dependency engine and a language for stating (additional) rules. If I were tempted to do so, I’d pattern the language after Make, but take out the “we’re using an ASR-33” part and accept any indentation in place of tabs alone.

    1. >Some languages (Java, Go) have a build-dependency system built into the compiler, so that running “go install” for example, will do one of the stereotypical tasks of Make.

      In the case of Go it’s actually simpler than that. What’s eligible to be compiled and linked is defined by your directory layout. It compiles every piece of source it can reach (including fetching and caching sources for imports) and links them all together. There’s no attempt to cache objects and invalidate them by dependency analysis at all.

      While this is expensive, what you buy with the extra cycles is that it nukes the stale-object problem and means you never have to write a build recipe (with a minor and tractable exception near code generation). I think this is one of Go’s good decisions.
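
      A minimal illustration of what that interface looks like in practice (the ./... pattern just means “this package and everything below it”):

        go build ./...   # compile every package reachable from here, fetching and caching imports as needed
        go test ./...    # build and run the tests, no recipe required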

      >It strikes me that one could break a build system into two parts, a dependency engine and a language for stating (additional) rules.

      You could. But I think the Go devs are right and dependency engines solve a problem that you can design your language toolchain not to have in the first place. Rust takes a similar approach.

      1. When you’re writing in C (or C++), you can do a similar thing yourself: just arrange to only have a single compilation unit.

        You just need a single file that #includes all your other source files. You can then compile your program by building just that one file, rather than by building many files and linking them together. This reduces the complexity of the build system to a one-line shell script. If you have multiple platforms to support, then you just have one root file per platform along with platform-specific implementation files for things that are different on different platforms.
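
        A sketch of what that looks like in practice (all file names here are made up): the root file is nothing but #include lines, and the entire “build system” is one compiler invocation.

          # unity.c consists only of lines like:
          #   #include "gps_core.c"
          #   #include "serial.c"
          #   #include "main.c"
          cc -O2 -Wall -o myprog unity.c -lm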

        Transitioning to this system might be annoying, of course, but so would transitioning to any new build system.

        1. You also lose the namespace isolation of translation units (i.e. file-scope variables and macros) and fail on the feature tests requirement.

          1. We have real namespaces now, so that’s not a big concern (although it is a risk if you’re converting existing code).

            Feature tests don’t change at all here, because it still comes down to writing a program to do the test, using the result of the test to set defines, and then using those defines in your code. If you’re using autoconf, then you can run configure.sh so that it generates config.status, run config.status to generate config.h, and then include config.h in your one single compilation unit.

        2. I suppose you don’t plan to build any shared libraries or anything else dynamic.

          If you’re working in a higher-level language like Go running on a VM, you’re less likely to be building the kind of big library likely to be used by many programs simultaneously, and if it ever becomes a real issue, a VM for a safe language can go ahead and implement this kind of optimization: https://cs.uwaterloo.ca/~bernard/SLVM-ipdps03.pdf

          1. True, if you’re building a binary plus a library, or multiple binaries (ntp comes with several binaries, for instance), then you need a compilation unit per binary.

            1. I have a friend who always does this. He puts everything into the library except for things like argument parsing and output aesthetics. Even if he never calls that library from another binary, the discipline of separating the project out that way is worth it to him in the form of cleaner code.

      2. > There’s no attempt to cache objects and invalidate them by dependency analysis at all. … I think this is one of Go’s good decisions.

        This is plausibly true for many software projects today, and of course go does its best at keeping compile times small in order to support this. But it’s absolutely not generally true, and certainly isn’t true for many large not-purely-software projects. I have used make before to manage data pipelines and generate reports, and it has worked quite well, and saved huge amounts of recomputation.

        > But I think the Go devs are right and dependency engines solve a problem that you can design your language toolchain

        This approach fails massively for polyglot projects, of course. And since different languages have different strengths, these are … not uncommon.

        1. >This approach fails massively for polyglot projects, of course.

          At least in Go-land, this is not a real problem. And the reasons it is not could, I think, be generalized to Rust or any language with a Go-like build strategy.

          I’ve been writing a lot of Go with Python and shell auxiliary scripting lately. That kind of polyglot doesn’t trigger your massive failure because the Go build system doesn’t have to do builds in the other languages. For Go with C code, cgo provides a pretty general solution.

          So you’re really going to hit “massive fail” only if you try to polyglot Go with a compiled language that is not C. In 2020 how likely does that seem, really?

  7. I hear ya! Autotools was so awful that I’m moving away from build systems completely. The thing is, we don’t have the kind of variance between different OS platforms that we used to. Simplicity is the key now.

    So in the Makefile I have:

    # config.mk is generated by ./configure
    include config.mk

    and later…

    config.mk:
    	./configure

    Of course, ./configure is just a shell script that figures out the CFLAGS and PREFIX and other things like that. And it *works* … every time. People aren’t trying to build on 36-bit Bizarre/IX anymore.
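
    For concreteness, a minimal sketch of that kind of hand-rolled configure (the defaults and the single feature probe are purely illustrative):

      #!/bin/sh
      # Hand-rolled configure: probe the toolchain, emit config.mk for the Makefile above.
      PREFIX=${PREFIX:-/usr/local}
      CC=${CC:-cc}
      CFLAGS=${CFLAGS:-"-O2 -Wall"}
      # Feature test rather than OS test: does the compiler accept -Wextra?
      if echo 'int main(void){return 0;}' | $CC -Werror -Wextra -x c -o /dev/null - 2>/dev/null; then
          CFLAGS="$CFLAGS -Wextra"
      fi
      {
          echo "CC = $CC"
          echo "CFLAGS = $CFLAGS"
          echo "PREFIX = $PREFIX"
      } > config.mk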

    1. +1. Chibi Scheme uses Make, period (except on Plan 9 where it uses mk). There is a sub-Makefile called Makefile.detect that figures out your platform mostly from uname, sets a bunch of Make variables, and builds some dot-h files as targets. If you want it to run on a new platform, add stuff here. Currently Chibi supports MacOS, BSD, Msys, Cygwin, Android, Linux, and Solaris.
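
      The general shape of that trick, as a GNU-make-flavored sketch (this is not Chibi’s actual Makefile.detect, and the flags are made up):

        PLATFORM := $(shell uname -s)
        ifeq ($(PLATFORM),Darwin)
          SHLIB_FLAGS = -dynamiclib
        else
          SHLIB_FLAGS = -shared -fPIC
        endif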

      Chicken Scheme currently makes you set PLATFORM yourself (fixed in the next release but one), and has separate Makefile fragments that write out the dot-h files directly in the Makefile with echo, echo, echo, but is otherwise the same. It supports AIX, Android, BSD, Cygwin, Haiku, Hurd, Linux, MacOS, Msys, MinGW-without-Msys, and Solaris, plus Linux cross-builds to iOS and MinGW (obviously those can’t be autodetected).

      Guile Scheme, on the other hand, is on autotools and always will be, because GNU Project. It takes forever to run ./configure, and twice forever if you have to go back behind that script.

      Here’s what happens if you type “./configure” on Chibi:

      Autoconf is an evil piece of bloatware encouraging cargo-cult programming.
      Make, on the other hand, is a beautiful little prolog for the filesystem.
      Just run ‘make’.

  8. (ooh, and I should mention that if you run ./configure on its own, it will accept all of the usual flags, but if you fail to run it, the Makefile runs it for you with the default settings.)

  9. I think two-step generators are the right thing, even if autotools is an incredible mess and has hurt a lot of people.

    CMake syntax is boring, to say the least…

    For several years, I was expecting a build system to come with simple features:

    – easy syntax
    – declarative language (not extensible!)
    – sane default settings
    – support for several mainstream languages

    Meson met my expectations, and went beyond them. The syntax is nice (Python-like), you can’t extend it (which is basically *THE* feature avoiding hell for me) and it does all I need (thanks to generators: https://mesonbuild.com/Generating-sources.html). Cross compiling and subprojects are natively handled.

    The fact Ninja is its unique backend (by choice) is an incredibly good decision.

    The fact that the Meson developers understood a build system should not be a programming language is finally bringing peace to me, and gives me more time for *really* interesting things.

    I really suggest you spend some time with it. I will never look back. The fact that several “famous” projects (mesa, gnome, …) are choosing it clearly shows its capabilities.

      1. As long as you have a strong guarantee that your top generator will be called when it is necessary, as is the case with Meson, why would a two-step build be an issue?

        I appreciate (daily) the possibility to have several build folders and easily switch between them by just typing ninja -C build, or ninja -C build_with_address_sanitizer for instance. And knowing everything will be handled correctly. Autotools do not offer such a guarantee, which is, I agree with you, a pain in the a**.

        Considering this, why do you prefer having a monolithic build step?

        1. Recipe errors thrown by the back end don’t necessarily have any direct relationship to an identifiable part of the recipe you give your front end, making troubleshooting unnecessarily painful and tedious.

          This is where two-part build systems often fail. The back-end doesn’t keep sufficient information for diagnostics that point to the cause of the problem (if the back-end is even aware that a problem exists). The front end combines the project’s recipe with a library of built-in rules, so there’s a big space to search when things go wrong, especially when things go wrong by omission.

          But…this is hardly a behavior unique to two-part build systems. Any software can choose not to produce usable diagnostic information.

          Even Make is very bad at this: you can’t directly ask Make “how did you decide, with references to source files and line numbers and source measurements, to run that command with those arguments at that time?”. The closest you can get is a partial walk-through of its decision-making process and a raw dump of all of its variables after processing all makefiles completely, and then trying to figure out how one led to the other yourself.
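
          Concretely, the walk-through and the raw dump correspond to roughly this in GNU Make (target name is a placeholder):

            make --debug=b target   # the walk-through: why each prerequisite was (or wasn't) remade
            make -p -n target       # the dump: every rule and variable after all makefiles are read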

            1. Not unique, no, but if you are reducing your build commands to some *existing* language (i.e. not a virtual machine designed to make it easy to associate errors etc. with high-level code), it’s going to be pretty difficult to do this effectively.

            I mean I’m sure you could do it but I suspect only by putting in the kind of effort that would have let you build a system from scratch that didn’t compile into some lower level language/files.

        2. >Considering this, why do you prefer having a monolithic build step?

          You really aren’t very good at basic reading comprehension, are you? It’s been explained at least three times on this thread.

          1. I read other answers, of course, but they all start from the point “When it fails, it’s hard”. I ask you what is the issue if it doesn’t fail in the first place?

              If your input language is easy (no wildcards, no functions, no strange things, like Meson), and your backend language is very basic (no wildcards, no functions, just a list of tasks and rules, that’s all), my point is that you will not meet the problems you saw with CMake/autotools.

              That is exactly why some people only like C, especially around here. Because they completely understand what is done, they can go through the assembly, and nothing is hidden. To me, Meson is basically C, and Ninja is just assembly (it is even presented like this by its authors). Please don’t put it in the same category as CMake.

              I know CMake created a lot of frustration, and autotools bored people to death. But, IMHO, the Meson guys did something really new (declarative and KISS) and they found a *really* good compromise between expressiveness and simplicity. Like any system, it is not perfect, but it is close to what I expect from an optimal build system.

            But well, if in the end your pleasure is to write 3’000 lines of python to build a simple project (SConstruct…), that is a basic right for any computer guy as well.

              Sometimes people forget that a build system should just build things. If you need something around it, a convenient shell script will always do a better job. Mozilla developers had a pretty good idea by having a “mach” script doing your everyday tasks easily: https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/mach

            1. >I read other answers, of course, but they all start from the point “When it fails, it’s hard”. I ask you what is the issue if it doesn’t fail in the first place?

              LOL. No. I don’t consider this a plausible premise.

              1. What is a concrete example of failures you think about? I am interested in understanding this.

                For instance:
                – is that function available?
                – is that compiler option available?
                – does that target have this feature?
                – is that library available?

                In my experience (clearly shorter than yours), besides those things I never expected anything else from a build system than:
                – here is a list of sources
                – here are my flags
                – here are my flags for *this* file
                – here is a shared object
                – here is an executable
                – here are my tests

                With meson, I got all this, and for free (without adding a single line):
                – cross compiling
                – build variants (debug, release, opt_debug)
                – testing
                – unity builds
                – packaging/install

                What more do you need that may introduce a “hard to find” failure origin?

                1. >What more do you need that may introduce a “hard to find” failure origin?

                  I won’t know until it happens. That it will happen, eventually, is something which my experience does not permit me to doubt.

                    1. What you say is very interesting, because for me it shows why all build systems have failed until now: users expect too much from them.

                      In consequence, they tried to solve every possible issue on earth. “Let me check out those sources from the internet for you.” “Let me push your code for you.” “Let me be clever about this.”

                      The result is, as with any complex (and too clever) system, a mess.

                      If you step back, and just expect a build system to do what you want (take a list of files and assemble it into a binary object), then most of your problems are gone.

    1. > – declarative language (not extensible!)

      This is a fantastic feature. I despise e.g. Python’s setup.py that can do anything. I want the build language to be restricted and auditable.

      1. Forbidding wildcards (and associated functions) is another strong point.

        Easily, anyone can see which sources are grouped, how they are linked, and what objects are produced.

        CMake’s custom flag-generating functions are usually the moment where you start to hate it, and for good reason.

  10. Nix is the most bashy / unixy build system I’ve ever seen. It allows you to use bash scripts to write your compile steps and then glues them together in an incredibly repeatable fashion. It definitely has rough edges (some of which I’m working on fixing slowly), but it has most software in the Linux universe packaged, and has built-in support for running tests, cross compiling, and a bunch of other crazier things that are impossible in other build tools.

    1. I’d never heard of it…well, it’s not like I am obsessed with keeping track of such things.
      I now have a shiny new NixOS VM and am having fun playing with it. Installation and configuration in VirtualBox has been….an ‘experience’ ;)
      Nix is certainly an interesting concept.
      Thank you!

        1. oh you glorious bastard…now my life is fucked ;)

          I’ll be playing with guix all weekend now

  11. I don’t have a favorite, but I’ve learned to live with POSIX make.

    To the extent that Automake and others make portability easier, they aren’t needed anymore on POSIX systems ( https://varnish-cache.org/docs/2.1/phk/autocrap.html ). And without that, they just offer a new syntax to specify what you want built (which, ultimately will probably be translated to make anyway).

    1. Indeed. Usually when I’m looking for a new build system, I want to know things like “how do I make this build system add -fwibbly-wobbly to only the 3 source files that need it, and no others?”

      When the answer is “first read our 30-page white paper on the magic of build system innovation, then integrate your requirements into our constraint solving engine and user-contributed feature categorization library (make sure you’re using the latest version because we keep making breaking changes!), and possibly encode build-controlling information into the positions or names of files within your source tree” I close the browser window and move on. I rarely need to read more than 15 pages of the Make manual to solve a problem. If I want someone else’s crappy unmaintained build recipe fragment for some exotic dependency or artifact, I can use DuckDuckGo–there is no need to involve a dedicated curator for such things.
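
      (For the record, stock GNU Make answers the -fwibbly-wobbly question with two lines of target-specific variables; the object names here are hypothetical:)

        WIBBLY_OBJS = foo.o bar.o baz.o
        $(WIBBLY_OBJS): CFLAGS += -fwibbly-wobbly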

      If I’m going to do that much work, I could just write my own build system. “Write a program that can read and execute very simple Makefiles” is literally an assignment somewhere near the beginning of year 2 of a CS degree program, built out of earlier core topics like parsers, macro expanders, trees, filesystems, recursion, and threads. I don’t want to write a bespoke build system–that’s why I want to use someone else’s–but I also don’t want to integrate into my project and then debug someone else’s second-year CS assignment that escaped the lab and found a fanbase on Github.

      I once even tried the article’s approach–challenge those users who open bugs against crappy Makefiles to contribute a working patch to switch a reasonably simple project with a few build-time quirks to a better build system than Make. Ninja and CMake have been mentioned in comments, but there have been zero pull requests so far. Obviously Eric has more users than I do, so I look forward to seeing if he gets better results.

  12. I’m willing to bet that the first build tools were made by developers who were working on something else, got tired of running the same set of commands over and over to build it, and slapped just enough macro code together to automate it. The result is Makefile and the rest.

    If this is true, then the usual boring economic explanation is that this keeps happening. The intersection of the set of developers who need a build engine and the set of those who want to focus on it is small enough that idiosyncratic factors such as first mover, heroic coder, etc. dominate.

    And in some cases, the dominant factor is the language – someone wants a language to get traction badly enough to write a builder for it, that turns out to work extremely well for only that language. I know this is an easy pattern to fall into, given how languages often come with their own assumptions about how local source code is arranged and where dependencies are located.

    Are there any tools on the Wikipedia list that clearly don’t fit these explanations? I’m not familiar enough with all of them.

  13. Also, I find I overlooked a comment in your post on reposurgeon documentation that reminds me of the build tool problem here. paradox says:

    The documentation is complex because you’ve really built a “reposcalpel”. A repo surgeon is still required to operate it. You’re trying to stuff all of “repo medical school” into the documentation.

    The list of requirements in the OP for build tools shows how nontrivial build tools can be. The requirement that a developer understand and master this list cuts down the eligibility set even more.

    Plus, I can think of at least three requirements I didn’t see listed there:

    – Manage dependency versions as well as, say, Maven.
    – Generates an installer for most major platforms. I imagine this is enough of a hairball all by itself to make even veteran programmers wonder why they own cats.
    – Has a catchy name. “buildsurgeon”? “fhtagn”? “yourgodnow” (as in “where is”)?

    1. Redo is cool, WAAAAY simpler than most would believe possible, and surprisingly flexible.
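
      To give a flavor of that simplicity: in the apenwarr dialect, a default.o.do rule file for compiling C can be roughly this ($1 is the target, $2 the target minus its extension, $3 a temporary output file that redo renames into place on success):

        redo-ifchange "$2.c"
        cc -O2 -c -o "$3" "$2.c"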

      The problem with redo is that I’ve tried two implementations (apenwarr & jdebp) and they had silly incompatibilities with each other; at the very least, jdebp’s requires do files to be executable while apenwarr’s didn’t care and always ran them with “sh -e” (unless you had a different #! line), and IIRC there were a couple of convenient features that I found out the hard way were extensions (*).

      Sorry for being vague, but it was almost 10 years ago, and I couldn’t find any notes.

      https://redo.readthedocs.io/en/latest/ (from the repo you linked) points to other implementations that I didn’t even know existed; one of them says it’s trying to improve over the original design, so it doesn’t care to stay compatible (yet didn’t change name :), and I have no idea if the others are good.

      (*) I __believe__ that apenwarr’s redo also had stdout set to the output file, which just confused jdebp’s redo.

  14. I write my own in straight python.

    All I usually want to do is call a compiler sequentially on all source-files within a tree (excepting a main.cpp or something), wad it up into an archive file, call a linker on that and the main object for the test case(s). And that’s what my python code does.

    Also some things like moving the include files to a common include directory, moving updated .sos to the bin directory, moving library files to a common lib directory.

    1. Quick quiz: Describe how this approach might be far less than optimal in large projects.

      1. I am pretty inexperienced with large projects. The stuff I’ve worked on before has been the work of myself or one-or-two other people.

        What does a build system need to do? As far as I’ve followed them, it needs to run source files through a compiler, link object files, link to external dependencies, and spit out an executable.

        Why is a straight imperative script of some sort (in any language) not all you need?

        If the project is huge in terms of compile time, split it into chunks and libraries. Compile the ones that you work on.

        PS my rant below is mostly inspired by Cmake – I have to use it at work, because we have a weird and very constrained build environment. It’s slow, indirect, fails in cryptic ways, (and other grumbling).

  15. IMO (my not-very-experienced-at-large-projects opinion), any sane build system should at least echo what it’s going to try to do to all the source files (what compiler commands it’s actually going to run), so that you can take the burning dumpster fire of errors and write something of your own that calls the compiler and feeds it what it needs to see.

    I just need to know (1. what you’re trying to link to), (2. what compiler switches and #defines your stuff needs to see). That’s all I need. Impossible to figure out if your build system is some towering fractal dependency-seeking-but-not-finding octopus that barfs errors before it gets to where it tells you what flags it was going to use.

  16. I use BSD make (“bmake” in many Linux distros) for my build system in new C projects of mine. BSD make is not just a make, it comes bundled with makefile templates for common tasks like building executables and libraries, which you can use by simply setting variables for them and including them in your makefile. I think it even chases header dependencies. It’s quite powerful.
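
    A complete Makefile for a small program under those templates can be as short as this (names invented; bsd.prog.mk supplies the build, install, and clean targets):

      PROG=   hello
      SRCS=   hello.c util.c
      MAN=    # no man page to install
      .include <bsd.prog.mk>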

    But Cmake has become an established de facto standard for modern C and C++ development. If you want to play nice with the rest of the ecosystem, you can’t go wrong by sucking it up and using Cmake.

  17. Gah. I hate cmake. It was engineered by Germans, and it shows. (A German doesn’t engineer anything unless he overengineers it. Ask any Mercedes mechanic.) The 1.2 MLOC hairball I work on regularly uses cmake, and just as that project has taught me to hate C++, so, too, has it taught me to hate cmake.

    I wish I had a better answer…

  18. Have you looked into how mksh is built? It’s just a hairy bourne shell script, as far as I can tell.

  19. Most of my hobby programming is done in image based Smalltalks, so I don’t have classical build issues and little insight into the current crop of build tools. Make handles the rest of my needs well enough. But the general problem is interesting.

    > One of the eternal mysteries of software is why build engines suck so badly.

    Build systems suck so badly because they aren’t solving the right problem, or at least not in the right way. If you squint a little you’ll see that building a system is partially and substantially a cache invalidation, generation, and dependency problem with all the attendant problems that always come with caching. (You’ve talked around the edges of this, but you haven’t explicitly said it in this post.) Worse, most systems require that you maintain dependencies in two places: the code and the build scripts.
     
    https://gbracha.blogspot.com/2020/01/the-build-is-always-broken.html

    Gilad is talking in the context of designing the Newspeak language and making sure it has liveness (a superset problem to building), so he assumes some pretty deep language changes can be made. But I think his basic points hold, and you are not going to be able to completely desuck build engines without at minimum the Golang rebuild-the-universe level of language support. The questions then become: how can you isolate the suck into as small a compass as possible? Are there any metaprogramming tricks available in the languages (or OSs or IDEs) you’re using that can make it easier? Can you separate the other issues (configuration management, testing, packaging, etc.) into different tools so as not to overcomplexify the poor build engine and its scripts?

    This is starting to remind me of the complexity discussion you bring up in TAoUP with editors. What is the right size? What is essential? Are we looking at manularity or adhocity traps?

    This does give me the amusing idea of writing a build system in Emacs Lisp. You could use the existing codewalker tools to fake out metaprogramming language support, it is obviously fairly portable, and it has all the support one needs for managing the shared contexts of a build system and the related tools.

  20. Many of these requirements sound less like build *engine* requirements, and more like requirements for a standard library for a build engine.

    I wish I could recommend “cook” by Peter Miller (author of the classic “Recursive make considered harmful” paper (do absolutely read this if you haven’t), as well as the interesting but flawed aegis revision control system), but unfortunately since his death it’s been completely unmaintained, and appears to have disappeared off the net. Another typical still-not-usable python-build-system-attempt has now taken the name.

    Shake ( https://shakebuild.com/ ) is an interesting Haskell-based system, but because of that is likely not suitable for your purposes.

  21. I’m pretty sure that if the way code is built isn’t integrated into the language, the problem is either intractable, or it would be easier to write a new language with embedded build information than it is to solve the problem generically. An ad-hoc builder has (potentially) bad data about how and what to build, plus an inability to interpret failure, further obscured by interleaved output from parallel building.

    Having a project with enough complexity to need a complex build tool is the core problem. Every build tool only cares to be good enough to address that — to make having a complex project manageable — everything else is random niceties.

  22. – supports creating distfiles (sufficient if it can shell out to tar commands).

    Necessary in the two-pass build systems, sure, but I would argue it’s the wrong place to solve the problem, especially if you aren’t using git submodules. Two-pass build systems (eg, autotools) practically mandate having distfiles targets themselves in order to include the generated files (./configure et al), but if you do things right, your VCS HEAD should be the only files that are required.

    The git archive command is wonderful, it generates reproducible tar or zip files from any tree object. GitHub and GitLab both use it as the backend to their source download options.
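
    For example (project name and tag are placeholders):

      git archive --format=tar.gz --prefix=gpsd-3.20/ -o gpsd-3.20.tar.gz release-3.20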

    1. >if you do things right, your VCS HEAD should be the only files that are required.

      What about generated documentation products and things like Yacc outputs? I like your theory, but it’s impolite to increase a project’s build dependencies when you don’t have to.

      This issue isn’t confined to two-phase systems.

  23. More a case of what you don’t want than what you do…

    Tup (http://gittup.org/tup)

    It’s a nice-ish build engine with a very compact build rule lexicon. However, one of its idiosyncrasies is that it doesn’t do non-build rules (i.e. if you’re not specifying “build this output from this input using this command” it’s militantly not interested; this is considered a feature, not a bug, by the maintainer). This sort of means install and test are “not my tool’s problem”, with a response of “you should build a shell script”. Also I’m not confident that it’ll do feature tests nicely.

    1. Reading the documentation, it hardly seems different from shell-scripting. Hard-coding compilers, lack of configuration, and some kind of grotesque dependency on a “.tup” directory seem to make it a no-go right from the start.

      If I ever came across a project using this, I might be tempted to just rewrite the thing in Make.

  24. Another issue you are forgetting, which I hate current build systems for:
    There is no way for the application build system to tell a higher level build infrastructure what kind of stuff it builds.

    As someone who has worked on the build of complete systems, the inability to remotely connect application build systems together is a real pain. Consider the following maximally-complex-likely scenario for gpsd (it’s a hybrid of all of the annoying things I’ve had to solve in my career):

    I have some system building process. At the start I have a bunch of source repositories. At the end I have a bunch of iso/tarball/cpio/fat32/whatever images for installation. In the middle I have a nasty set of dependencies. I want this to be able to run as parallel as possible because wasted time is wasted developer productivity. Also, I concurrently want to build for x86, x86_64, a debug version of x86_64, and arm.

    I decide that I want to add a package (for whatever reason) to this build process. Right now, I need to figure out what stuff needs to be built for the host, platform-agnostically for the target, and platform-dependent for the target. For example, many protocol processors will take a protocol definition file of some sort in and spit out .h/.c files to be compiled. Depending upon the set-up, that protocol compiler itself may need to be built for host-only, host-and-all-targets, or only all targets. Then there’s the question of whether the generated .h or .c files are agnostic to the build target or not. (my current rage is OpenSSL generating asm code using Perl). So far, this isn’t that big of a deal. But you also have to consider that other code may be depending upon the agnostic/target header files generated by this process.

    For another app which relies on the generated headers, you have the following chain:
    Host protocol compiler -> header files -> (target package binaries + other target package binaries). Simply building without providing useful information to be exported fails to allow for parallelism between the building of the binaries inside the single package and the building of the binaries in other packages which depend on the built header files.

    Next, we have the need to support multiple build methods. Being able to build multiple architectures (I treat debug and non-debug as separate architectures because sometimes they might as well be) concurrently also allows for savings. This requires being able to support building not-in-place, so that I can have the output files get put into non-overlapping directories.

    Another issue which I encounter is simple compile-level parallelism. Facilities such as make support -j which works well enough, I suppose. But if I’m trying to wrap that at a level up, it’s really hard to get make to be smart. I currently have a system which involves make wrapping scons which wraps make.

    I suspect that, like most things, this type of issue is only faced by people who are working on large systems and so isn’t addressed by the OSS hobbyist tier.

  25. One of the eternal mysteries of software is why build engines suck so badly. …

    Is there anything in this space that doesn’t have awful flaws?

    <sarcasm tone="cynical">ASDF? I know it doesn’t meet your needs, being Common-Lisp specific, but I’m sure you didn’t mean to imply that being written in an unpopular language, for said unpopular language, and focusing entirely on one unpopular language is a flaw…</sarcasm>

    Honestly, looking over the wikipedia list either SCons or bazel seem to be current best-of-breed for generic build systems, with maven not that far behind. If it weren’t obvious from the above, IMO software suck here tends to come from build rules trying to be “too” generic and flexible and the additional complexity that adds. No real mystery here, just the impact of someone adding “…and makes me coffee in the morning” as a project requirement and now having to support 3,145 different coffee-making services….

  26. People with your complaints about build systems (including me) seem to converge on just writing Makefiles by hand. I only ask one thing, of you and everyone – if you write makefiles that require GNU make, can you name them GNUmakefile, not Makefile? Thanks.

    1. >I only ask one thing, of you and everyone – if you write makefiles that require GNU make, can you name them GNUmakefile, not Makefile? Thanks.

      On even numbered days I want to reply: “Uh. OK, If you can point me at a bulletproof way to check for BSD-make conformance so I’ll actually know when I’m using GNU-make extensions.”

      On odd-numbered days I want to reply: “Dammit, you BSD guys are a pain in my butt. GNU Make won, adopt it and get over yourselves.”

        1. >If it’s not here, it’s an extension

          I said an auditing tool and I meant it. I’m not interested enough in BSD makefile compatibility to be willing to work at checking it. See “You BSD guys are a pain in my butt”, above.

          1. Seems the best tool would be just running the Makefile through FreeBSD, either in a VM you have or as part of a CI environment.

            Or just throw your hands up, declare GNU Make the winner, tell anyone on non-GNU systems to just use gmake.

            1. >Seems the best tool would be just running the Makefile through FreeBSD, either in a VM you have or as part of a CI environment.

              Lots of work.

              >Or just throw your hands up, declare GNU Make the winner, tell anyone on non-GNU systems to just use gmake.

              I think that’s going to be my policy, yes.

  27. First of all, what is a build system, really?
    Why couldn’t a simple bash script handle it?
    Since bash is Turing complete, it should be able to handle any task that computers can do.
    The problem is how hard it would be: you would probably need to reimplement Autotools from scratch.

    But there is another way around it: what if we split autotools into small parts that are called directly by some custom script, something like:

    ```
    build_check include "" && z=YES_XYZ || z=NO_XYZ
    if [[ $z == "YES_XYZ" ]]; then build_check code $'#include\n constexpr z = xyz::foo();' || z=NO_XYZ; fi
    build_list_include_dependency "./src/main.c" | grep '/foo\.h' && w=YES_FOO
    ```
    This is in line with the Unix philosophy that each tool should have only one job to do, and do it right.

    With many tools like this you could even create wrappers for them for your favorite language or build tool.

  28. Well crap. I saw the word “Engine” and I thought Ford, Chevy, Toyota or Nissan. Sorry to be so old school.

  29. This whole thread makes me even more convinced that C and C++ should be considered deprecated.

    All the modern languages I have any familiarity with do not have this problem or at least it is in the category of a solved problem: e.g. Go, Rust, Python, JavaScript, Flutter/Dart, Java/Kotlin.

    Now if we could just get the systems people to stop writing system and library APIs in that lowest-common-denominator language from 5 decades ago, perhaps we could improve the state of our craft.

  30. ESR> Waf is also not bad, except for being somewhat cryptic to write and having documentation that would hardly be less comprehensible if it had been written in hieroglyphics.

    In your judgement, what stands in the way of writing good-enough documentation for Waf? Is it something deep, like the problems you described in your thread about documenting reposurgeon and gdb? Or is it just plain lack of developer interest or project resources?

    1. >In your judgement, what stands in the way of writing good-enough documentation for Waf?

      What stands in the way is waf’s designer. Brilliant guy but ludicrously bad at explanation.

    2. > In your judgement [sic], what stands in the way of writing good-enough documentation for Waf?

      Nearly two years ago (!) I had started to write an extended essay or small book of worked examples with thorough explanations for waf, as a start on introductory documentation. But $REALLIFE (like fixing up and moving into a century-old, new-to-me house) got in the way. Watch. This. Space….

      Now, exactly where did I put that Round Tuit??

  31. Would Kconfig/kbuild with a sane defconfig solve your problem? Last time I looked, you can define the build options inside your code. It’s not really a standalone build system, but it’s better than just using make by itself.

  32. Your geekness is used by the evil imperialist forces that destroy the life for all of us and you are openly raving about assisting developing submarine capabilities when you clearly know they are instruments of war? You are not a teenage boy anymore, are you Mr. Raymond ? Do you take pride in indirectly participating in foreign murders of naive & unaware people ? You should know by now that your government plays to the tunes of the Old Convenant without any remorse and that the Russia/China paranoia your government has a long time record of spreading is way out of proportions to be even minutely serious. It’s easy to not feel anything about those sort of stuff when they happen on the other side of the world, to people you have never met or seen, but do you ever even stop for a moment to wonder about the nature of your government’s biddings or do you simply blindly carry on supporting whatever it chooses to depict as right and moral to you and your fellow citizens’ astronaut helmets’ eye-goggles feeds on your remote sheltered piece of (is)land?
    Artificiality and seclusion between humans can only last so long.
    And I bet it would take the destruction of this planet once or twice for your people to really learn it.

    1. You seem preoccupied with submarines, Mr. Doe.

      Have you considered checking back with the mother ship more frequently?

    2. You guys laugh, but I actually think this is the new push for progressives in the open source movement: to fight for legitimacy for ethical licenses with stipulations that bar use by entities deemed to be complicit in human rights abuses, and then “name and shame” open source projects that have not yet adopted such licenses.

      Like Eric’s coronavirus speculations, there are consequences we should be able to observe if this prognostication is true: first, a sexual or other scandal involving one or more members of the OSI board of directors. (Bruce Perens was the target I had in mind, until his retirement; maybe he is aware of something we are not?) Second, a twitstorm among the Usual Suspects (including Corey, Aurynn, Valerie, and Sage) calling for disbandment of the OSI board and the election of a new board who will reverse the OSI’s field-of-endeavor policy and admit “ethical source” licenses as open source. Third, when said disbandment doesn’t happen (because why would it), campaigns to twist the arms of OSI’s corporate sponsors to pull funding.

      Fourth, irrespective of success with the OSI, campaigns to twist the arms of corporate open source users and contributors to adopt an “ethical source only” policy where possible.

      Should be a fun 2020.

      1. >I actually think this is the new push for progressives in the open source movement: to fight for legitimacy for ethical licenses with stipulations that bar use by entities deemed to be complicit in human rights abuses

        I don’t think they’ll get anywhere. Clauses like that would horrify corporate lawyers, because how do you evaluate compliance? CoCs got a foothold because they didn’t create fear about conformance costs; “ethical source” clauses surely would. Accordingly, OSI’s corporate sponsors would shoot this shit down fast and hard.

  33. Since you’ve committed to using GNU make (if you use make at all) — recent versions have Guile support. It may be possible to write the bulk of your “build system” in Scheme and have a relatively small make part driving the whole thing.
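
    A trivial sketch of what that integration looks like, assuming a make binary whose --version output lists Guile among its features:

      # GNU Make 4.0+ built with Guile exposes a $(guile ...) function:
      answer := $(guile (number->string (* 6 7)))
      $(info Guile computed $(answer))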
