Analysis of scaling problems in build systems

My post “SCons is full of win today” triggered some interesting feedback on scaling problems in SCons. In response to anecdotal assertions that SCons is unusably slow on large projects, I argued that build systems in general must scale poorly if they are to enforce correctness. Subsequently, I received a pointer to a very well-executed empirical study of SCons performance to which I replied in the same fashion.

In this post, I intend to conduct a more detailed analysis of algorithmic requirements and complexity in an idealized build system, and demonstrate the implied scaling laws more rigorously. I will also investigate tradeoffs between correctness and performance using the same explanatory framework.

An idealized build system begins work with four inputs: a set of objects, a rule forest, a set of rule generators, and a build state. I will describe each in turn.

An object is any input to or product of the build – typically a source or object file in a compiled language, but also quite possibly a document master in some markup format, or a rendered version of such a document, or even the content of a timestamped entry in some database. Each object has two interesting properties: its unique name and a version stamp. The version stamp may be a timestamp or an implied content hash.

A rule forest is a set of rules that connect objects to each other. Each rule enumerates a set of source objects, a set of target objects, and a procedure for generating the latter from the former. A build system begins with a set of explicit rules (which may be empty).

A rule generator is a procedure for inspecting objects to add rules to the forest. For example, we typically want to inspect C source files and add for each one a rule making the corresponding compiled object dependent on the C source and any header files it includes that are in the set of source objects.

A build state is a boolean relation on the set of objects. The value of the relation is true if the right-hand object is out-of-date with respect to the left-hand object. In some systems, the build state is implied by comparing object timestamps. In others, the build system records a mapping of source-object names to version stamps for each derived object, and considers a derived object to be out of date with respect to any source name for which the current hash fails to match the recorded one.
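
To make the four inputs concrete, here is a minimal Python sketch (hypothetical names; SCons’s real internals are richer than this):

    import hashlib

    class Obj:
        """A build object: a unique name plus a version stamp (here a content hash)."""
        def __init__(self, name, path):
            self.name = name
            with open(path, "rb") as f:
                self.stamp = hashlib.sha1(f.read()).hexdigest()

    class Rule:
        """A rule: source objects, target objects, and a procedure to derive the latter."""
        def __init__(self, sources, targets, action):
            self.sources, self.targets, self.action = sources, targets, action

    # A rule generator is any callable Obj -> list of implied Rules (e.g. a C
    # include scanner).  The build state maps each derived object's name to the
    # source stamps recorded when it was last built:
    def out_of_date(state, target, source):
        return state.get(target.name, {}).get(source.name) != source.stamp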

We say that a build system guarantees correctness if it guarantees that the build state will contain no ‘true’ entries on termination of the build. It guarantees efficiency if it never does excess work (work corresponding to a DAG edge for which the build-state value is false but the rule on the downstream side fires anyway).

The stages of an idealized build look like this:

1. Scan the object set to generate implied rules.

2. Stitch the rule forest into a DAG expressing the entire dependency structure of the system. Each DAG node is identified by and with an object name.

3. For each selected build target, recursively build it. The recursion looks like this: scan the set of immediate ancestor nodes A(x) of each target x; if the set is empty, you’re done. For each node y in A(x), recursively build y, then check the build state to see if x is out-of-date with respect to y; if it is, rebuild x. The dependency graph must be acyclic for this recursion to have a well-defined termination state.
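
A minimal sketch of step 3 in Python, with is_stale and rebuild standing in for the build-state check and the rule’s procedure:

    def build(dag, target, is_stale, rebuild, done=None):
        """dag[x] is the set of immediate ancestors A(x) of x; assumes acyclic."""
        if done is None:
            done = set()
        if target in done:
            return
        stale = False
        for source in dag.get(target, ()):                # A(x); empty means we're done
            build(dag, source, is_stale, rebuild, done)   # bring sources up to date first
            if is_stale(target, source):                  # is x out-of-date w.r.t. y?
                stale = True
        if stale:
            rebuild(target)                               # fire the rule that derives x
        done.add(target)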

Now that we understand the sequence of events, let’s consider the algorithmic complexity of the steps in the process.

Scanning to detect and record implied rules will be O(n) in the total size of the objects. Each object will also need a name lookup for each reference to another object (such as an #include) that it contains, making the scan O(n log n) in the number of objects, though a naive implementation could be O(n**2).
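
For example, a C include scanner might look roughly like this (a sketch; the known_names set stands in for the name lookup over the object set):

    import re

    INCLUDE_RE = re.compile(r'^\s*#\s*include\s*"([^"]+)"', re.M)

    def implied_rule(source_name, text, known_names):
        """One C source yields one implied rule: its .o depends on the source
        plus every included header that is itself in the object set."""
        deps = [source_name]
        for name in INCLUDE_RE.findall(text):
            if name in known_names:               # one name lookup per reference
                deps.append(name)
        return (deps, [source_name[:-2] + ".o"])  # (sources, targets); foo.c -> foo.o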

Stitching the rule forest into a DAG will also be minimally O(n log n) in the number of rules, because every object name occurring as a source will need to be checked against every object name occurring as a target to see if that target should be added to the source’s ancestor list. In typical builds where most source files are C sources and thus have one dependent which is a .o file, this implies O(n log n) in the number of source files. (Note: I originally estimated this as O(n**2), thinking of the naive algorithm.)
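
With the rule forest indexed by name, identifying the node an object name refers to costs one hash (or O(log n) tree) lookup per edge rather than a scan over all targets. A sketch, consuming the (sources, targets) pairs from the scanner above:

    def stitch(rules):
        """Return dag[target] = set of source names (the ancestor lists)."""
        dag = {}
        for sources, targets in rules:
            for target in targets:
                dag.setdefault(target, set()).update(sources)   # expected O(1) per edge
        return dag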

The recursive build may have a slightly tricky order but as a graph traversal should be expected to be O(n) in the number of DAG nodes.

We notice two things immediately. First, the dominating cost term is that of assembling the dependency DAG from the rule forest. Second – and perhaps a bit counterintuitively – the build system overhead will be relatively insensitive to whether we’re doing a clean build or one in which many up-to-date derived objects already exist. (Total build time will be shorter in the latter case, of course.)

Now we have a cost model for an SCons-like build system that guarantees build correctness. When I originally posted this, I thought that a complexity of O(n**2) in the number of objects was empirically confirmed by Eric Melski’s performance plot – but, as it turns out, this curve could fit O(n log n) as well.

But Melski also shows that other build systems – in particular those using bare makefiles and two-phase systems like autotools that use makefile generators – achieve O(n) performance. What does this mean?

Our complexity analysis shows us two things: if you want to pull build overhead below O(n log n), you must avoid the cost of stitching up the dependency DAG on each build, and you must also avoid paying for implicit-dependency scanning on each build.

Handcrafted makefiles don’t do implicit-dependency scanning, avoiding O(n log n) overhead. They do have to stitch up the entire DAG, but on projects large enough to be an issue much of the overhead for that is dodged by partitioning into recursive makefiles. The build process stitches up DAGs for the rules in each makefile, but this is a sum of quadratic-order costs for much smaller ns.

The problems with bare makefiles and recursive makefiles are well understood. You get performance, but you trade that for much higher odds that your rule forest is failing to describe the actual dependencies correctly. This is especially true when dependencies cross boundaries between makefiles. The symptoms include excess work during actual builds and (much more dangerously) failure to correctly rebuild stale dependents.

Two-phase build systems such as autotools and CMake attempt to recover correctness by bringing back implicit-dependency scanning, but also keep performance by segregating the O(n log n) cost of implicit-dependency scanning into a configuration pre-phase that generates makefiles. This sharply reduces the likelihood of error by reducing the number of dependencies that have to be hand-maintained. But it is still possible for a build to be incorrect if (for example) a source change introduces a new implicit dependency or deletes an old one.
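
As a toy illustration of the split (a sketch only, not how autotools or CMake actually work): the configuration phase scans the sources once and writes the discovered dependencies into a makefile fragment, so each subsequent make run only pays for reading it.

    # configure.py -- phase one: pay the implicit-dependency scan once
    import glob, re

    INCLUDE = re.compile(r'^\s*#\s*include\s*"([^"]+)"', re.M)

    with open("deps.mk", "w") as out:
        for src in glob.glob("*.c"):
            with open(src) as f:
                headers = INCLUDE.findall(f.read())
            out.write("%s: %s %s\n" % (src[:-2] + ".o", src, " ".join(headers)))

    # Phase two is plain make; the Makefile just says:  include deps.mk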

Another well-known problem with two-phase systems is that build recipes are difficult to debug. When make throws an error, you get a message with context in the generated makefile, not in whatever master description it was generated from.

My major conclusion is that it is not possible to design a build system with better than O(n log n) performance in the number of objects without sacrificing correctness. If the build system does not assemble a complete dependency DAG, some dependencies may exist that are never checked during traversal. Belt-and-suspenders techniques to avoid this (for example in recursive makefiles) tend to force redundant builds of interior objects such as libraries, sacrificing efficiency.

A minor conclusion, but interesting considering the case that drove me to think about this, is this: SCons is just as bad as it has to be, but not necessarily worse.

UPDATE: I originally misestimated the cost of DAG building as O(n**2). This weakens the minor conclusion slightly; it is possible that SCons is worse than it has to be, if there is a naive quadratic-time algorithm being used for lookup somewhere. Since it’s implemented in Python, however, it is almost certainly the case that the lookup is done through a Python hash with sub-quadratic cost.

Objections to the above analysis focused on exploiting parallelism are intelligent but don’t address quite the same case I was after. SCons and other build systems on its historical backtrail (such as waf and autotools) will all try to parallelize if you ask them to; unless there are very large differences in how well they do task partitioning, Amdahl’s-Law-like constraints pretty much guarantee that this can’t change relative performance much. And in cases where the dependency notation tends to underconstrain the build (I’m looking at you, makefiles!) attempting to parallelize is quite dangerous to build correctness.


  1. An idealized build system begins work with three inputs: a set of objects, a rule forest, a set of rule generators, and a build state.

    “Amongst our weaponry…I’ll come in again.”

    ESR says: Heh.

  2. More seriously, your conclusion that it’s not possible to get there in better than O(n**2) time without sacrificing correctness, combined with your observation that belt-and-suspenders techniques to defend against this wind up sacrificing efficiency, has a rather large hole in it:

    Is it possible to use m*O(n**2) results, where n is small and m larger than 1, to get where a straight O(n**2) result with a much larger n will take us, and have the sum be smaller – small enough to offset the loss of efficiency?

    The rest can be handled by an automatic generator. While autotools may well not be that generator, that doesn’t say that the idea is invalid.

    1. >Is it possible to use m*O(n**2) results, where n is small and m larger than 1, to get where a straight O(n**2) result with a much larger n will take us, and have the sum be smaller – small enough to offset the loss of efficiency?

      I don’t know. I doubt it. Can you construct a case that exhibits this behavior?

      (I ask because it’s quite easy to construct cases in which failing to build the full DAG gets you in trouble.)

  3. I don’t know either. I was inspired to this question by your comment:

    The build process stitches up DAGs for the rules in each makefile, but this is a sum of quadratic-order costs for much smaller ns.

    I’m not sure you examined the problems here; you seemed to rather cavalierly throw away solving the problem of not building the full DAG in one shot by taking a belt-and-suspenders approach, accepting some loss of efficiency to achieve correctness.

    My thought here is that driving make – especially GNU make, with all of its feeping creatures – with an automatic generator is missing the point. Move all of the creatures to the generator, and let the build executor do nothing but build DAGs and solve for a total build state of FALSE (computed by ORing together all of the right-edge build states). You avoid the problem of losing source file context by passing it to the executor and letting it produce messages from it. If you start with a clean-sheet executor, such as ninja, you’re still winning.

    What I don’t know is whether this can achieve correctness, even in the face of a loss of efficiency. In this discussion, efficiency is much, much less important than correctness: you only really care if the loss of efficiency takes it back into O(n**2) territory.

    1. >What I don’t know is whether this can achieve correctness, even in the face of a loss of efficiency.

      Er, conditionally. At minimum, we can say that this proposal has the common problem of all two-phase builders; if the implicit dependencies change and you don’t notice this soon enough to re-run your configurator phase, you lose.

      Your other problem is that you’ve left the O(n**2) overhead in the ‘executor’ – building the DAG is exactly the expensive part.

  4. I don’t have much of a dog in this fight, however, I will throw in my two cents.

    Your quadratic growth comes from the fact that you have to scan your c files for #includes each time. However, if your build system was properly integrated with your source control system, then the database of these dependencies could be formed in third normal form, meaning that you would not have to constantly recalculate them.

    Which is to say, a great build system would maintain a database of dependencies and reliably regenerate them as deltas with each source-file check-in. Calculating the whole “rule forest” is quadratic, but calculating the delta to the rule forest is not; I believe it is at most linear.

    This is even easier and more reliable of course if your programming language doesn’t use intractable yack like the preprocessor. It also assumes you always build out of the source control system — which in my world anyway, is considered best practice.

    It would be interesting to think about how that source code control thingie you made a while ago might achieve that. You could check in the DAG at some point in time, and then look at all the source code changes since that point in time, and apply them to the baseline DAG.

    1. >Your quadratic growth comes from the fact that you have to scan your c files for #includes each time.

      No, that’s the O(n log n) cost. The O(n**2) cost is the DAG building.

      Still, you have a point. It would be theoretically possible to build a combined version-control and build system that would maintain a dependency DAG in parallel with the tree state. That would subsume both implicit-dependency scanning and DAG-building.

      The hairs on the back of my neck rise when I try to think about such a design, though. It would have all the complexity of a version-control system, plus all the complexity of a build system, plus the interaction complexity of those two feature sets. That’s a lot of complexity, and a lot of ways to get pinned to a suboptimal design decision anywhere in the system by the sheer weight of the rest of it. My instincts say “Don’t go there!”

  5. It seems to me that some of this could be solved by intelligent caching.

    If building the DAG is the hard part, then we can partition source changes into those that we know will change the DAG (like adding an include file to a C file) and those that we know *won’t* change the DAG (like adding a class member).

    If your build system saved, not just the timestamp, but also the dependency information on a per-file basis, it would be easy to tell if that dependency information had changed, and a DAG rebuild was required.

    It’s been a long time since I did any coding that required a build system, but IIRC most of my changes were ones that wouldn’t change the DAG, and would therefore not require a large overhead from the build system.

    -YC

  6. An interesting analysis, and one I’ll have to mull over more thoroughly.

    I’m personally interested in a more obnoxious case: You limited your build description to an acyclic graph, but in real-world bootstrap scenarios, the relationships between objects may in fact be cyclic; Gentoo is a case where this shows up quite frequently in circular optional dependencies (at a simple level, CUPS depends on SAMBA for SMB support, which depends on CUPS for printing support). Does a case like this add an extra linear factor on the number of optional dependencies to find one without which the build can succeed and then the time to rerun it, or is it n^2 in the case where everything has an optional dependency on everything else?

  7. @Jessica: Getting rid of that yack does in fact make the problem much easier on a language-specific basis, and Eclipse’s partial builds do pretty much what you describe, but hair-raising aside, I’m not sure how version control isn’t orthogonal to simply caching the dependency graph based on a version stamp.

  8. > Stitching the rule forest into a DAG will be minimally O(n**2) in the number of rules, because every object name occurring as a source will need to be checked against every object name occurring as a target to see if that target should be added to the source’s ancestor list. In typical builds where most source files are C sources and thus have one dependent which is a .o file, this implies O(n**2) in the number of source files.

    I could be missing something here, but it seems to me that this could be done in O(n log(n)) using a hashtable of the object names.

  9. Building the dag, for reasonably sized builds, is much faster than the other steps, but being O(n**2), it eventually dominates, becoming the slowest step.

    But we do not need to build the dag every time. We have a hash of the rule forest. Rebuild the dag when the rule forest changes – rebuild when a new source file is added, or an existing source file gains or loses an include.

    That leaves us with the task of scanning the files to build the rule forest, which is O(n log n) and is slow even for reasonably sized builds.

    But we only need to scan a file when it changes – which leaves us with the task of hashing the files, which is O(n).

    Scan and hash all files: If any file has changed, get its dependencies. If any files have changed their dependencies, rebuild the dag.
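
    Roughly, in Python (a sketch only; scan_deps stands in for whatever extracts a file’s dependency list):

        import hashlib, json, os

        def hash_file(path):
            with open(path, "rb") as f:
                return hashlib.sha1(f.read()).hexdigest()

        def refresh(cache_path, sources, scan_deps):
            """cache maps path -> [content hash, dependency list].
            Rescan only files whose hash changed; return True only if some
            dependency list changed, i.e. the dag must be rebuilt."""
            if os.path.exists(cache_path):
                with open(cache_path) as f:
                    cache = json.load(f)
            else:
                cache = {}
            dag_stale = False
            for path in sources:
                h = hash_file(path)                     # O(n) total hashing
                old_hash, old_deps = cache.get(path, (None, None))
                if h != old_hash:
                    deps = scan_deps(path)              # only changed files get rescanned
                    if deps != old_deps:
                        dag_stale = True
                    cache[path] = [h, deps]
                # unchanged files keep their cached dependency lists
            with open(cache_path, "w") as f:
                json.dump(cache, f)
            return dag_stale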

  10. We have a hash of the rule forest. Rebuild the dag when the rule forest changes – rebuild when a new source file is added, or an existing source file gains or loses an include.

    Interesting approach–hierarchical hash, a la Skein?

  11. In theory, this is an interesting topic. In practice, I don’t care how long a build takes, as long as it’s guaranteed to be correct.

    For any project where correctness matters, I’ll set up a Continuous Integration (CI) server (currently Hudson is my pick) and always checkout a fresh image from the VCS for each build. If I have multiple target platforms, then each one gets a separate job. Once I commit a set of changes, then I can work on something else while the builds are occurring. I never deliver from a development build, only from a CI build.

  12. (which may be emoty)

    Is this a typo? ‘Empty’ might make sense.

    If not, please define emoty in this context. I googled for it and failed, also skimmed the comments w/o enlightenment.

    Thanks,
    Jim

    ESR says: Yes, that was a typo for “empty”.

  13. @esr:

    Still, you have a point. It would be theoretically possible to build a combined version-control and build system that would maintain a dependency DAG in parallel with the tree state.

    Theoretically? This is pretty much what commercial SCM systems such as IBM/Rational ClearCase do.

  14. I’m with Eugine_Nier on this. Maybe I’m just tired and not understanding your terminology, but I am not seeing why building a DAG is order O(n**2). If I have a hash that is indexed by target name and each entry contains a list of immediate source names (which is essentially what I think I understood comes out of your rule generator), then it’s easy to create a recursive, memoized function that returns all the dependencies for all the sub-sources. I’ve done this lots of times, and I think it’s order O(n * m), where n is the number of targets, and m is the number of direct sources.
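
    For concreteness, a sketch of such a memoized closure (direct_deps being the hash of target name to immediate source names, assumed acyclic):

        def all_deps(name, direct_deps, memo=None):
            """Transitive dependencies of `name`, memoized so each node is
            expanded only once even when it is shared by many targets."""
            if memo is None:
                memo = {}
            if name in memo:
                return memo[name]
            result = set()
            for dep in direct_deps.get(name, ()):
                result.add(dep)
                result |= all_deps(dep, direct_deps, memo)
            memo[name] = result
            return result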

    Perhaps the problem is premature optimization. If you have a forest of .h files, and in creating the rules you fully populate the target with all the .h files, then maybe it does get close to order O(n**2).

    But if you use lazy evaluation as much as possible, then if most .c(pp) files only directly include one or two sources, you can actually essentially include the .h files as targets in your tree, that don’t require any steps to build them, but rely on themselves and on any nested .h files. Then the rule tree you have will be considerably smaller (unless the coding style calls out every .h file in every .cpp file).

    Certainly, the total size of the DAG itself will approach order O(n**2), but you’ll build it by putting large chunks together, so the time to build should not be linear with the size of the DAG.

    On the other hand, if you use lazy evaluation on the DAG itself, you may find that you never need to explicitly create it, in which case you don’t even reach order O(n**2) in size.

    1. >Perhaps the problem is premature optimization. If you have a forest of .h files, and in creating the rules you fully populate the target with all the .h files, then maybe it does get close to order O(n**2).

      Yes, you have to “fully populate” as you say, so it gets close to order O(n**2) in that way. But my original argument was defective; as several commenters have pointed out, node-matching in the DAG isn’t O(n**2). In fact it’s O(n log n) – lookups over the list of n nodes for each node.

  15. BTW, most of my builds these days are for Verilog stuff, where the synthesizer (kind of like a compiler) does a much better job of optimizing if it sees the entire source package, and the map and place and route (think heavily post-optimizing linker) can take hours, on just a few dozen source files.

    Of course, Xilinx’s software tries to implement a fancy build system down inside the tools, below the level where a customer can get at it. Of course that breaks badly sometimes and silently reuses old results under some unknowable set of conditions. And of course, it spews warnings about stuff that is not only perfectly reasonable, but impossible to modify to kill the warnings.

    So, given this particular problem space, my cheezy build system that I wrap Xilinx’s stuff with just does this:

    (a) blow away every single kind of intermediate file and directory that Xilinx might possibly stuff anywhere;
    (b) rebuild everything;
    (c) filter warnings going to console and to logfile.

  16. @Craig Trader:

    In theory, this is an interesting topic. In practice, I don’t care how long a build takes, as long as it’s guaranteed to be correct.

    Just out of curiosity, have you ever actually worked on a project where a build can take > 8 hours?

  17. >>(which may be emoty)

    Thank you. In almost anyplace but this blog, if I see a word I don’t recognize, I can be almost certain that it is misspelled or misused.
    I’ve probably looked up more words since I started reading this blog than in the previous ten years.
    One of the pluses about reading here.

    I do, just now, note that your larger essays don’t have nearly as many, if any, of the really unusual words.

    Write on

  18. I’m still trying to understand the fact that it’s published on April 10th, not April 1st. How come? It’s an utterly perfect AFJ.

    Basically a long series of plausible-looking explanations and then… bam, the conclusion: SCons is just as bad, but no worse, than it has to be. What??? Are you joking? Well, it may be true in terms of big O, but in real life absolute values matter too. A lot. The difference between a few seconds and hours matters even if both build systems have O(n**2) complexity.

    Our experience is interesting because we’ve started with SCons… and eventually replaced it with a totally different build system. This was done because SCons does not scale. As in: it does not process a large sourcebase fast enough. The fact that both SCons and blaze have O(n**2) theoretical complexity does not matter if one needs hours and the other needs seconds.

    Let’s talk about a realistic (but not extreme) case: the source forest is 10-50GB and there are a few million files in the forest.

    1. There are O(N**2) and there are O(n**2). Even if you build something like GWS (which includes everything and a kitchen sink… or two) you still only touch some small part of the forest. Maybe 10% or so. Because DAG-creation is O(n**2) this already gives us a 100x difference in speed. This is a BIG DEAL(tm) for large projects. How to do this? Simple: you don’t keep millions of files in a single directory, right? Put build files one per directory and ignore the files which are not referenced by one particular tree. A separate tool does this – but this means build files are somewhat limited.

    2. Well, this moves you from hundreds of hours to a few hours. Still not good enough. What to do next? Well, the slowest part is DAG-creation, right? And in most cases you don’t need to change it (when you write code you don’t add/remove .h files every REPL iteration, right?). So now SCons only dumps rules to a series of makefiles and make does the rest. The preparation step still takes hours, but the actual build takes minutes. But there is a high price for this step: SConstruct is frozen (well, it’s not exactly FROZEN: there are some build wizards who can change it… but not mere mortals) and build files are limited – they cannot contain arbitrary Python code anymore. And you must remember to rerun the makefile generator every time you include a .h file from an outside directory.

    3. Well, it looks like without dropping SCons you cannot make the build faster, so this is the next logical step. Build files are restricted even further, SConstruct is removed altogether and this makes it possible to write the parser in C++ or Java or… whatever – something compilable and fast. Say… about 100x faster than Python. At this point make goes away too, because DAG-creation takes minutes and make needs a few minutes too.

    4. Well, a few minutes is not a bad result, but we want seconds… how to achieve this? Easy: don’t build the DAG every time. Instead change it when the build files are changed. Where to keep it? In a database? That’s too slow. Why not keep it in memory: when you do a fresh build on a system, blaze becomes a daemon and later requests just change the DAG in it. Thus the first, fresh build takes minutes but a rebuild takes just a few seconds.

    5. Now we have a correct and fast build, right? Well… not really: it’s fast, true, but is it correct? We can still include a file accidentally and forget to add the dependency information to the build file. Let’s build in a jail: all dependencies are moved to a temporary directory and if you try to include an unlisted file… you’ll see a "file not found" message. Kind of the opposite of the LD_PRELOAD approach I’ve seen in another build system. NOW you have a fast and correct build.

    6. Everything is done? Not really: the incremental build takes seconds but the "from scratch" build still takes hours – even with distcc. The solution is radical: don’t pull files you are not changing from the VCS! Instead the build farm keeps a shadow copy of the VCS (if thousands of builders hammer the VCS directly it’ll break, so we need the shadow copy) and builders pull files from it. We already know spurious files are not used (this was achieved in step 5) so it’s safe. And we may keep a cache of compiled files around for "future use". This means that a build "from scratch" does not actually build anything from scratch (unless you change something in the source OR request a "test build" without the help of the build farm). It just pulls the precompiled binaries and gives you those.

    It’s not the end, but you get the idea: theoretical scalability and practical scalability are different things. Note that our last system still builds the DAG in O(n**2) time – because you cannot avoid this. So from a purely computer science POV our achievements are trivial: we had an O(n**2) system initially, now we have an O(n**2) system, what’s the big deal? Well, the big deal is that the "from scratch" build took hours and the incremental build took hours, but now the "from scratch" build takes minutes and the incremental build takes seconds.

    We paid a heavy price, too: our build system is proprietary and is heavily tied to our build farm, our VCS, etc. But the reduction of the REPL cycle from hours to seconds is worth it.

    P.S. Actually today the build is dominated by the linking step: it’s done on a single system and it’s hard to speed it up. Gold helps a lot, but it’s still the slowest part of the build – it takes about 90% of an incremental build.

  19. I must be really dense on the issue of scanning the source files to generate implicit rules.

    It seems to me that it’s bog-simple to cache the output of this phase (the dependency info for $file is stored in $file.dep: $foo.c’s dependencies are in $foo.c.dep, $foo.h’s in $foo.h.dep) and then only rescan any particular source file if the dependency file’s time stamp is older than the source from which it was generated. And the time stamps for the dependency files need to be stored in the build system’s internal database, so that a manually-edited or touched dependency file won’t screw things up.
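
    For instance (a sketch; recorded_mtimes stands in for the build system’s internal database of dependency-file timestamps):

        import os

        def needs_rescan(src, recorded_mtimes):
            """Rescan src only if its .dep file is missing, is older than src,
            or is not the one the build database remembers writing (so a
            hand-edited or touched .dep file can't fool the build)."""
            dep = src + ".dep"
            if not os.path.exists(dep):
                return True
            if os.path.getmtime(dep) < os.path.getmtime(src):
                return True
            return recorded_mtimes.get(dep) != os.path.getmtime(dep)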

    Re: emoty. It’s kind of like verklempt. Tawk amongst yourselves. I’ll give you a tawpic: The Holy Roman Empire was neither holy nor Roman.

  20. @The Monster: Timestamps — apparently — aren’t considered reliable enough, especially in a distributed project that spans timezones. Most attempts at replacing make and autoconf, including SCons, cache checksums (e.g., MD5) and then only rescan if the checksum changes.

    I guess what I’m wondering about that is — why not? Isn’t this what ntp and TZDATA, etc., are for? Or can we not rely on people to use ntp and have correct timezone settings? Or is there some other reason timestamps aren’t reliable enough that we have to use MD5 checksums?

    Checksum computing is a far more expensive process than a call to stat().

  21. It almost sounds like khim and I work for the same company. :-)

    Build time does matter. Our system is rocking. I can do a partial build in about 10 minutes, and a full build in about 45 minutes while using a 40+ node build farm on the back-end. A quick grep through our source tree suggests we have about 40k source files. A full build occupies about 100GB of disk space. What matters from a development standpoint is how quickly I can do a build after making a change. This can be helped in some cases by throwing a lot of hardware at the problem. However, even if every single build step could be done in parallel and instantly, if the dependency analysis took an hour we’d be worse off.

    Optimize for the variable(s) you care about. SCons may work for smaller projects, or where investing capital isn’t worthwhile, but that doesn’t mean that it will scale for larger projects.

  22. Timestamps — apparently — aren’t considered reliable enough, especially in a distributed project that spans timezones.

    Timestamps are stored at the OS level as GMT, not as local time. I’m not sure what the problem is, really.

    But distributed projects have version control systems. Consider the time of the last check-in that modified a file as its timestamp. Problem solved.

  23. When we’re talking about a process being O(n log n), I assume we mean with respect to total work required. As Garrett’s comment points out, though, this is not necessarily the same as run time if you have a large enough infrastructure… there’s a big difference between an O(n log n) process that’s necessarily fully serial and one that’s embarrassingly parallel.

    I mean, yes, unless you’re Google you probably don’t have *that* many nodes, but it still makes a big difference in your constants.

  24. Your analysis is either redundant or wrong depending upon what you mean (it’d be helpful if you used the appropriate terminology). The worst case is indeed quadratic, if that’s what you mean. However, in that case most of your “analysis” is redundant; the DAG may have a quadratic number of edges so the quadratic running time is a given once you know you have to construct it or traverse it.

    If you mean to say that the best-case is quadratic then you are incorrect. In fact, you can probably solve this problem with a running time that is linear with respect to the number of object dependencies. If your dependencies look anything like a tree then this will mean a running time close to linear with respect to the number of objects.

  25. If you mean to say that the best-case is quadratic then you are incorrect. In fact, you can probably solve this problem with a running time that is linear with respect to the number of object dependencies.

    Can you create a linear solution that guarantees correctness and efficiency as defined in the original post? If so, how?

  26. @Roger Phillips
    “If you mean to say that the best-case is quadratic then you are incorrect. ”

    You can get best case linear results even for NP hard problems. Who cares. Big O notation is not about best cases, but about worst cases.

    And indeed, typical cases can scale much better than worst cases. However, you will be surprised how often a non-typical case will turn up.

  27. Actually, build systems are a case where non-typical cases are rare. When projects grow, the non-typical cases become extinct. When we discuss practical scalability we can just ignore them completely.

    Why? Because O(N**2) complexity can only ever be realized when the average number of direct prerequisites grows linearly as the project grows. As in: you have millions of files and each file directly depends on many thousands of files. On average. You can write a build system which handles this case correctly. It’s not easy but doable. But you don’t need to have such a system, because you cannot find a programmer who will be able to understand and support such a system. So, again, practical scalability is not tied to O(N**2) at all – even if theoretical scalability is.

    Last time I checked we were discussing practical build systems for practical projects, not some CS exercise… Have we abandoned all practical applications altogether?

  28. @khim
    “Actually, build systems are a case where non-typical cases are rare.”

    Yes, but that is because of the limitations of the tools.

    Back in the CVS days, projects were centralized and there was “no need” for complex branching, renaming, moving, and repo surgery. And Linus would not use a repository version system because Linux would not fit.

    Now we have git and everyone has his own branch, merging and exchanging to their hearts’ content. We even have a tool to mend commit histories ;-)

    I expect the same development for build systems. If you write a tool that can handle all possible DAG builds, it will be used to the full extent. Even if this means O(n**2) runtime.

    Never underestimate the need for complexity!

  29. “ClearCase is a system administrator’s worst nightmare”

    http://xkcd.com/883/

    I’m a Systems Administrator. I work in a place where you need to show your badge at the front gate, leave **ALL** of your electronics in the car, or in a locker *OUTSIDE* the 3rd fence, then badge through a turnstile, and then do some other stuff to get in. And at each step it’s not some rent-a-cop watching you, it’s at least 2 federal officers with firearms.

    Hell, at my last full time gig we would get missile attacks about once a month.

    ClearCase?

    And yeah, someone has it worse than me. One of my cow-orkers used to work out of McMurdo. A month or two at sea, below where you can’t hit most of the telecom sats (NO googling problems, NO real time internet access, email over radio links etc.) AND they weren’t allowed on deck when it hit (IIRC) 40 below. When you had a problem you had only your brain, the man pages and your books to figure it out. He was the Junior Admin. He has a masters in IT stuff. You try to rebuild a xorg.conf file from memory and manpages because a senior researcher ran the reconfigure script twice, wrong, and it only copies file to file.backup[1]

    I’ve had to handle ClearCase before. It’s ugly, I didn’t like it, and wouldn’t pick it but nightmare? Shit man if ClearCase is your nightmare consider yourself reasonably lucky.

    [1] Can I rant here about backing up configuration files? INSTALL FUCKING RCS. Yeah, it sucks for SOURCE CODE MANAGEMENT. You aren’t managing a fucking source tree, it’s ONE FILE AT A TIME. ci xorg.conf; co -l xorg.conf ; vi xorg.conf ; ci -u xorg.conf. FUCKING EASY. MUCH BETTER than cp xorg.conf.`date +%Y%m%d%h%M%s` and winding up with 37 copies. Oh, and if you’re writing something that edits system configs USE VERSION CONTROL. The life it saves may be your own.

  30. You can get best case linear results even for NP hard problems. Who cares. Big O notation is not about best cases, but about worst cases.

    Who cares? People who want to understand the issues correctly. I guess I was wrong to seek clarification from someone who hadn’t used the correct terminology. Geez.

    And indeed, typical cases can scale much better than worst cases. However, you will be surprised how often a non-typical case will turn up.

    This post AFAIK is not talking about pathological cases. It’s talking about the scaling problems of SCons, which do not relate to pathological cases.

    1. >I guess I was wrong to seek clarification from someone who hadn’t used the correct terminology.

      What do you think the correct terminology would be? Very serious question. I’ve never had a CS course; I’ve had to learn big-O notation and complexity estimation by example, it’s quite possible I’m getting it wrong, and you’d be well qualified to notice that.

  31. @esr
    “I’ve never had a CS course; I’ve had to learn big-O notation and complexity estimation by example, it’s quite possible I’m getting it wrong, and you’d be well qualified to notice that.”

    Neither have I, but let the blind lead the blind. I will write down my understanding and then someone who really knows what s/he is talking about can explain how it really is. My understanding is that you have it right.

    O(f(n)) means, that for every implementation the real runtime effort, E(n), required there are numbers C,D >= 0 such that E(n) < D*f(n) for all C < n < infinity. The C and D mean you can ignore constant parameters (maybe D = C would do the trick). This boils down to find the worst step (O(n**2) in your case) that will dominate the effort for high n.

    This is inherently about worst case behavior. Best case behavior is always O(1), as there will always be trivial cases.

    What people often try to do is to get to "typical" cases. That is, find a subdomain of inputs and a g(n) < f(n) such that the algorithm runs in O(g(n)) for inputs in the subdomain, generally expressed as a fraction p<1 of cases.

    With an ill-defined user domain of build systems, the "typical" cases of today will tell you nothing of those of next year.

  32. It’s been a long time since my CS courses, but I believe Winter has it right. The only thing I’d add, just to nit-pick, is that you can ignore not only constant parameters but anything other than the fastest-growing element of f(n)… but everybody here seems aware of that, anyway, and it’s implied by those definitions.

    khim’s point is good, though; if the DAG has some kind of reasonable locality, and isn’t the total worst-case spaghetti nightmare of everything referring to everything, that O(n**2) might be something more like O(n log n).

  33. I’m a Systems Administrator. I work in a place where you need to show your badge at the front gate, leave **ALL** of your electronics in the car, or in a locker *OUTSIDE* the 3rd fence, then badge through a turnstile, and then do some other stuff to get in. And at each step it’s not some rent-a-cop watching you, it’s at least 2 federal officers with firearms.

    Even my last contract gig wasn’t *that* bad, and yes, as a major Fortune 100 (not a typo) military and aerospace contractor they had classified/SCI data, most of it related to various navigation and guidance systems.

    But they did have rather a lot of locked rooms (and even a couple of hidden rooms) and the ever-present Wackenhut thugs in rent-a-cop uniforms, yes. And certain rooms in the facility had the classified data. I could bring no electronics and there was no Internet access once inside the locked room, but I could bring all the printouts and CDs I wanted.

    Thing was, they had to scan the CDs for viruses and verify the contents — it was such a pain, that I just resorted to printing out everything — wikis, FAQs, HOWTOs, bug list reports, etc. and bringing it in with me, and bringing in what books I had, etc. They were paying for the paper and the toner, so I didn’t mind. :)

    As for your xorg.conf scenario, well, I always, always, always put my configuration files into some source control system — cvs, rcs, git…okay, maybe git is overkill. :)

    But, okay, perhaps I was exaggerating just a bit when I said “worst nightmare.” How about royal PITA?

  34. Several pieces of input from eric’s basement…

    1) ClearCase is a nightmare. The last time I used it, I’d argue it added 9 months to the schedule, as it introduced a centralized dependency on builds of a very large system (Sybase) that, when it went down (which it did daily), took hours or days to resolve.

    2) The distribution of all things source-code-wise, via a distributed source code management system such as git, is a real breath of fresh air, which implies that decentralizing the build logic also pays.

    3) As for all this talk of complexity in the DAG generation process: it is something that can be extensively parallelized and not bottlenecked by IO if the files are already in memory. So while the complexity could be O(n**2), it may be reducible over the number of processors available, to a large extent. In an age where 48-core systems are becoming more common it would pay to look harder at the parallelization portion of the problem under the complexity.

    4) Scons doesn’t do cross compiles worth beans compared to the venerable autotools. (Which is what I’m working on this morning). There are probably other problems that autotools solves that scons doesn’t as well.

    What do you think the correct terminology would be? Very serious question. I’ve never had a CS course; I’ve had to learn big-O notation and complexity estimation by example, it’s quite possible I’m getting it wrong, and you’d be well qualified to notice that.

    I don’t know, because I don’t know what it is you’re trying to say. The worst-case is generally expressed in terms of Big-O notation. If I understand what you’re writing, this part is fine. However, you use the word “minimally” at some point, which makes it sound like you’re trying to establish an asymptotic lower bound, which is generally expressed in Big-Omega notation. I do not think there is such a lower bound that is quadratic. The formal notations are not the important bit, it’s distinguishing carefully between the different kinds of bounds.

  36. O(f(n)) means, that for every implementation the real runtime effort, E(n), required there are numbers C,D >= 0 such that E(n) < D*f(n) for all C < n < infinity.

    Mathematically, your definition is correct enough, but you are overinterpreting the notation. Big-O notation is a metamathematical construct, and doesn’t express anything about algorithms per se. The typical usage is to express the worst case running time of an algorithm as the inputs become larger, or sometimes similarly for the best possible algorithm for some problem. These are two distinct things, which is another reason to try to be precise about such things in writing.

    Nitpick: ‘n’ would generally be restricted to the natural numbers and so the restriction ‘< infinite' would be ill-typed.

    This is inherently about worst case behavior. Best case behavior is always O(1), as there will always be trivial cases.

    Sorry, you are incorrect. Best-case analysis can also be expressed asymptotically, either about algorithms or the algorithms that exist to solve a particular problem. Some problems have proven non-constant lower bounds.

  37. Sorry, you are incorrect. Best-case analysis can also be expressed asymptotically, either about algorithms or the algorithms that exist to solve a particular problem. Some problems have proven non-constant lower bounds.

    As a trivial example, even quantum bogosort is O(n)–you have to verify the correct order.

  38. It’s not the worst case behavior that is interesting, it is either average case behavior, or amortized behavior. Quick sort has worst case O(n**2) performance, while on average (and with well designed quicksort you hit almost always average behavior) it is O(n * log(n)), with very small coefficients.

    Amortized behavior is more difficult to quantify; I don’t think anyone did such an analysis for build systems, but here you would average time over the different things that can be done: small changes to files (commit), large changes to files (pull), etc.

    If SCons has O(n ** 2) average case behavior… that means it is something wrong with it.

  39. I’m afraid I still don’t understand why stitching the DAG together is O(n^2).

    Why is it necessary to do:

    for s in sources:
        for t in targets:
            if match(s, t): …

    instead of being able to do lookup with s and t as keys?
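
    That is, roughly (a sketch; assumes the match is on the object’s name):

        target_set = set(targets)               # built once, O(n)
        for s in sources:
            if s in target_set:                 # one expected-O(1) lookup replaces the inner loop
                ...                             # link s to the node for that target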

  40. @Roger Phillips
    “Big-O notation is a metamathematical construct, and doesn’t express anything about algorithms per se.”

    There is a precise definition, but it is actually seldom used as the “gut”-feeling is normally correct.
    https://secure.wikimedia.org/wikipedia/en/wiki/Big_O_notation

    @Roger Phillips
    “Nitpick: ‘n’ would generally be restricted to the natural numbers and so the restriction ‘infinity and then changing that to C < n. Mixing up the cases I ended up in C < n < infinity. Which is non-sense.

    @Roger Phillips
    "Sorry, you are incorrect. Best-case analysis can also be expressed asymptotically, either about algorithms or the algorithms that exist to solve a particular problem."
    @Christopher Smith
    "As a trivial example, even quantum bogosort is O(n)–you have to verify the correct order."

    I should read carefully before posting. Indeed, many trivial cases will be like O(n), ie, identify the trivial case. But trivial cases are useless, exactly because they are trivial.

    I find "average" behavior rather dubious (let alone "best case" behavior). For many problems the average is ill-defined, or completely useless.

    But it is indeed well known that certain classes of sub-problems can have much better cost behavior than the full problem. Many NP hard problems have sub-spaces that are P hard. Most of the NP=P "proofs" end up relying on this error. They prove the P-ness of a problem sub-space instead of the complete space (often using O(exp(n)) or worse memory to compensate for polynomial steps).

  41. Something ate part of my previous comment:

    @Roger Phillips
    “Nitpick: ‘n’ would generally be restricted to the natural numbers and so the restriction ‘ infinity and then changing that to C < n. Mixing up the cases I ended up in C < n < infinity. Which is non-sense.

  42. I find “average” behavior rather dubious (let alone “best case” behavior). For many problems the average is ill-defined, or completely useless.

    Best case is positively worthless, but average behavior can be useful for very specific values and ranges of “average”. Any algorithm that scales on a curve will have a “sweet spot” where the algorithm scales best. This will make it useful for the subset of all cases that fall within the range of the sweet spot.

  43. Scanning to detect and record implied rules will be O(n) in the total size of the objects. Each object will need a name lookup for each reference to another object (such as an #include) that it contains. Thus O(n log n) in the number of objects, though a naive implementation could be O(n**2).

    Do you assume here that the average number of includes (of direct dependencies between sources) is O(1) rather than O(n)?

    Stitching the rule forest into a DAG will be minimally O(n**2) in the number of rules, because every object name occurring as a source will need to be checked against every object name occurring as a target to see if that target should be added to the source’s ancestor list. In typical builds where most source files are C sources and thus have one dependent which is a .o file, this implies O(n**2) in the number of source files.

    I don’t understand this. Couldn’t you use some kind of lookup over rules or reversed rules to limit the number of object names that can be considered as targets for a given object name occurring as a source?

    BTW it should be easy to check whether recursive ‘make’ is really O(n_i ** 2) by increasing the number of files per directory while not changing the number of directories (and using a log-log plot to do a linear fit to find the exponent k in O(n ** k) behavior).

  44. I should read carefully before posting. Indeed, many trivial cases will be like O(n), ie, identify the trivial case. But trivial cases are useless, exactly because they are trivial.

    You have not read my comment correctly. There is a dual to asymptotic worst-case analysis, and it is just as useful. It is not the same thing as identifying “trivial cases”. Your comment was given in relation to a comment I made about best-case analysis.

    Best case is positively worthless

    I hate to be rude, but it’d be helpful if you stopped confusing people with outright erroneous statements such as this.

    I find “average” behavior rather dubious (let alone “best case” behavior). For many problems the average is ill-defined, or completely useless.

    If someone has proven an average case behaviour, then it is not “ill-defined”. It is in fact, perfectly well defined. Furthermore, the fact that someone who doesn’t understand these analyses finds them “dubious” is unsurprising.

    Many NP hard problems have sub-spaces that are P hard.

    First of all, “P-hard” doesn’t mean what you think it means. Secondly, what does this have to do with anything? Demonstrating the existence of certain trivial cases or problem subclasses of classes that are themselves contained within NP-Hard does not refute the value of best case analysis or average case analysis.

  45. @Roger Phillips
    You are mixing up commenters.

    First, best-case would be the trivial case, just because it is trivial. And trivial cases are worthless, because they are trivial.

    However, you can redefine best-case to a broader, more useful (sub-)class in problem space. And we know that there are many sub-classes in problem space where the run-time effort is better than the worst-case behavior. Because, frankly, worst-case implies there are better cases.

    For example, if you can modularize you project in such a way that you can ensure that all dependencies are local and the number of requirements has a fixed upper bound, you will certainly do better than O(n**2) in your build system.

    However, talking about “average” cases suggests a knowledge of a probability distribution over the usage of the problem space. I have yet to see arguments that show me people have a reasonable understanding about the probabilities with which inputs crop up. Especially as the inputs are selected to run in reasonable time. That is, the “average” case will be a case that can be handled by the tools at hand. More problematic cases are simply not used as they cannot be done. Compare the complexity of projects using revision control systems. These projects scale with the quality of the tools. So the “average” use case for RCS will be simpler than that for CVS, which again will be simpler than that for git.

    In this sense, I find the use of “average” cases dubious. Not “wrong”, but difficult to justify in the long run.

    @Roger Phillips
    “First of all, “P-hard” doesn’t mean what you think it means.”

    Except for the ironic use of “hard” in P-hard, I always understood that P means O(n**i) with i some integer. That is, a solution is at worst found in a number of steps that is some polynomial of the size of the input (or problem). This complements NP which means that a given solution can be checked for correctness in polynomial time (ie, O(n**i) steps), but the number of steps needed to find the solution itself will in the worst case *not* be bounded by any polynomial in the input size. There are also problems for which even the correctness of a given solution cannot be proven in polynomial time. So what is wrong with my understanding?

    And my mentioning of sub-classes in NP problems that are P is support for a more fine-grained analysis of problem spaces. It was never intended to refute such an analysis. I just consider the use of the words “best-case” and “average” very misleading. If the boundaries of the problem sub-spaces are clearly delineated, users can be warned about the way they should structure problems and the validity of the solution. If they think in terms of average, they will simply assume they have average problems even if this is wrong. Such wrong beliefs might lead them into incorrect solutions or unmaintainable projects.

  46. @Winter

    It was never intended to refute such an analysis. I just consider the use of the words “best-case” and “average” very misleading.

    Here’s what you originally said:

    “If you mean to say that the best-case is quadratic then you are incorrect. ”

    You can get best case linear results even for NP hard problems. Who cares. Big O notation is not about best cases, but about worst cases.

    You say you were not attempting to refute the usefulness of best-case analysis. And yet, here you are asking “who cares” what the best-case is. Eric did not seem to know the terminology in this area, so I could not take it for granted that he knew the distinction between Big-O and Big-Omega (I will be generous and assume that you do).

    I’ve lost track of the numerous flaws in your reasoning with relation to various kinds of complexity analysis. Suffice to say, they are numerous, but you refuse to listen to me. I will point out another misconception that you have, as revealed in the last post you made:

    Because, frankly, worst-case implies there are better cases.

    “Frankly”, this is so completely and utterly wrong that I wonder how much schooling you have in this area. Have you bothered looking at the mathematical definitions of these terms? If you had, you’d (hopefully) realise that this statement is plainly false. The rest of your argument sits on about as firm a footing. The question is: how long could I be bothered sifting through it while you pretend to know what you’re talking about? I hate to be so rude, but you refuse to listen.

    you will certainly do better than O(n**2) in your build system.

    What if the running time of my build system is Theta(n**2)?

    So what is wrong with my understanding?

    Well, you could start with this:

    NP which means that a given solution can be checked for correctness in polynomial time (ie, O(n**i) steps), but the number of steps needed to find the solution itself will in the worst case *not* be bounded by any polynomial in the input size

    But in any case, the point at issue was the term “P-hard”, which means something different to “P”, though you seem to treat them as indistinct. You seem intent on expanding the argument by saying more and more irrelevant (and incorrect) things instead of simply addressing the original point and learning from your mistake.

  47. @esr, okay another nitpick:

    node-matching in the DAG isn’t O(n**2). In fact it’s O(n log n) – lookups over the list of n nodes for each node.

    It is in fact still O(n**2), since that is a bounding function. (n log n) is a better bounding function, but not the only one. It’s also worth noting that it can be done in linear time, since you can pre-process the set of symbols into a contiguous block of integer identifiers (linearly in the number of symbols), solve your problem, then post-process them back into symbols in linear time.
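
    A sketch of that pre-processing step (hypothetical, in Python):

        def intern_names(names):
            """One linear pass: map each distinct symbol to a dense integer id."""
            ids = {}
            for name in names:
                if name not in ids:
                    ids[name] = len(ids)
            return ids
        # Graph edges then live on small integers; a reverse table maps the ids
        # back to symbols when reporting.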

  48. I’m sorry, I haven’t completely digested your analysis yet, but I feel it’s important already to point out what I think is a misunderstanding on your part regarding my empirical results: the make-based build was not a recursive make build, but rather a non-recursive build — a single make instance, in which the entire dependency graph was constructed and traversed. There was no recursive invocation of make, nor any partitioning of the dependency graph to reduce the effective size of n at any particular point.

    Yes, the build operated on files in many directories, but this was achieved by means of the GNU make “include” directive.

  49. >the make-based build was not a recursive make build, but rather a non-recursive build

    I think my use of your results is actually independent of that. But I’m glad you find the analysis interesting and look forward to your critique.

  50. @Roger Phillips
    ““Frankly”, this is so completely and utterly wrong that I wonder how much schooling you have in this area.”

    As I wrote in the first comment: None.

    And please read what I wrote.

    @Roger Phillips
    “You say you were not attempting to refute the usefulness of best-case analysis.”

    No, I wrote:
    “And my mentioning of sub-classes in NP problems that are P is support for a more fine-grained analysis of problem spaces.”
    I never implied a best-case analysis.

    And please explain why people use the term “worst case” when there are no better cases. It is obvious that there are many problems without “good” cases, e.g., breaking cryptographic keys for a good system. But you do not talk about “worst-case” analysis there.

    @Roger Phillips
    “But in any case, the point at issue was the term “P-hard”, which means something different to “P”, though you seem to treat them as indistinct.”

    Indeed, I did not see the difference between them. The only definition I could find was:
    “A polynomial algorithm was found very late (after 1950) in the literature for that problem”

    Which I find a pretty useless distinction in the current context. But maybe I simply do not know what useful difference there is? Please educate me?

    Furthermore, I do not see the practical use of Big-Omega analysis. What use is it if I know that for very large projects the minimal run time scales as Omega(n), when I know that I will hit O(n**2) run times all the time?

    Big-Theta is very useful. But I do not really see the difference from Big-O notation in the current context. If Eric manages to improve his estimates from O(n**2) to O(n log n) he has made a Big-Theta analysis. But he would do that anyway. Why bother people with a new name for what is simply a better estimate of the Big-O?

    In short, this discussion was very informative/educational, but I have difficulty seeing what my fundamental errors in understanding are.

  51. > In short, this discussion was very informative/educational, but I have difficulty seeing what my fundamental errors in understanding are.

    Welcome to Roger Phillips’s World Within Eric’s World. I am not being sarcastic when I say that Roger strikes me as very knowledgeable, nor when I say that in this exchange it’s clear that he knows the theory better than anyone else. Roger knows what he knows and he knows that we don’t know it. He also knows how to explain to us that he knows it and that we don’t know it. What Roger never appears to know is how to explain what he knows so that we also know it. That’s probably due to the format. Explaining several weeks of a computer science course in blog comments does not seem an easy task. Still, it does leave us all, including lurkers like myself and probably including Roger, frustrated.

    Roger, maybe you could take a page out of Eric’s book. Write a blog post on your blog explaining Big-O (which was covered in the CS classes I took, long ago), Big-Omega and Big-Theta and post a link here. Then we can have fun in your comments.

    Yours,
    Tom

  52. @Tom DeGisi
    “Still, it does leave us all, including lurkers like myself and probably including Roger, frustrated.”

    If he would only post a link to some publication, any publication, that illustrates his point, it would help. What *is* P-hard? How can you use Big-Omega analysis in practice?

  53. As I remember it:

    An algorithm is in P if it runs in polynomial time; that is, if it is O(n**c) for some constant c.

    An algorithm is P-hard (I believe this is the same as P-complete) if it is in P, and it is possible to map any other problem in P to it (well, there are limits on the mapping function – no sneaking an O(2**n) reduction in there!). It is therefore the “hardest” problem in P.
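
    Roughly, in LaTeX syntax (the mapping functions are usually required to be log-space reductions, which is what rules out sneaking exponential work into them):

        \mathrm{P} = \{\, L \mid L \text{ is decidable in } O(n^{c}) \text{ steps for some constant } c \,\}
        L \text{ is P-complete} \iff L \in \mathrm{P} \ \text{and}\ \forall A \in \mathrm{P} :\ A \le_{\log} L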

    Here’s a nice reference: http://qwiki.stanford.edu/index.php/Complexity_Zoo:P#p

    I’m normally up for some mathematical pedantry, but really, I think the common usage of O-notation is plenty clear for these purposes…

  54. @Mike E
    Thanks, that at least makes sense. I will have a look at the zoo when time allows. It does look a little bewildering.

  55. Indeed, I did not see the difference between them. The only definition I could find was

    I do not want to get derailed into arguing all these little points with you when you don’t know what you’re talking about. Suffice it to say, you need to study the definitions of the terms P, P-Complete, NP, NP-Complete and NP-Hard. P-Hard is not a term I have ever heard used in informed discussion, but if it were a term it would no doubt be analogous to NP-Hard.

    breaking cryptographic keys for a good system. But you do not talk about “worst-case” analysis there.

    Yes, one does talk about worst-case analysis there. Cryptographers may use different nomenclature, but it is the same thing.

    Furthermore, I do not see the practical use of Big-Omega analysis.

    Yet at some point after this, you say:

    Big-Theta is very useful

    Big-Theta is the obvious conjunction of Big-Omega and Big-O, so a Big-Theta bound entails the existence of a Big-Omega bound.
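
    In LaTeX syntax, the standard definitions make that conjunction explicit:

        f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \le c \, g(n)
        f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \ge c \, g(n)
        f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \ \text{and}\ f(n) \in \Omega(g(n))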

    If Eric manages to improve his estimates from O(n**2) to O(n log n) he has made a Big-Theta analysis.

    No, if he does this he’s found a smaller upper bound. Which is good, but not equivalent to finding a bound that is both an upper and a lower bound (Big-Theta).

    I have difficulty seeing what my fundamental errors in understanding are.

    Well, you’ve said a bunch of things that were wrong that I’ve pointed out directly. If that doesn’t count as having fundamental errors in your understanding then I don’t know what does.

    Write a blog post on your blog explaining Big-O

    I do not have a blog, nor do I intend to add to the mass of unedited dross on the Internet by starting one. The answer to this problem is to simply go and read a properly edited book on the subject. Around here a borrowing pass to a university library is cheap for an outsider, though that may not be true in your locale.

    How can you use Big-Omega analysis in practice?

    For establishing asymptotic lower bounds on problems or algorithms. For example, if Eric had been able to prove such a lower bound for build systems, that would be strong support for the difficulty of scaling them. Proving that the worst case is quadratic does not provide justification for this claim, but it does provide a basis for speculation that it might be true.

    An algorithm is P-hard (I believe this is the same as P-complete)

    Why would P-Hard be the same as P-Complete when NP-Hard is distinct from NP-Complete?

  56. Well, Ok, I suppose for P-hard you would use more-or-less the same definition, but remove the restriction that it need be a member of P itself; I was specifying P-complete.

    Note that to prove that any build system that guarantees correctness must take at least quadratic time, i.e. that the problem is Omega(n ** 2) (and hence Theta(n ** 2), because an O(n ** 2) implementation exists), one has to offer a rigorous mathematical proof.

    Like the well-known one that states that sorting is Omega(n log n) if you cannot assume anything about the alphabet or the values being sorted, and have to use comparisons.
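
    Sketch of that classical argument in LaTeX syntax: a comparison sort must distinguish all n! orderings of its input, and each comparison has only two outcomes, so the decision tree has height at least

        \log_2(n!) \,=\, \sum_{k=1}^{n} \log_2 k \,\ge\, \frac{n}{2}\,\log_2\frac{n}{2} \,=\, \Omega(n \log n)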

  58. I finally managed to write a thorough response to this article. The full text is here: http://blog.melski.net/2011/05/23/why-is-scons-so-slow/; a brief summary is as follows:

    * Build tools are essentially just a collection of graph algorithms. The complexity of these algorithms is well understood and bounded above by O(n**2), but that is the worst-case behavior. The dominating factor is the number of edges in the dependency graph. As such, you should expect O(n**2) behavior only when you have the maximal number of dependencies in the graph — one edge between every pair of nodes. In the real world, builds just don’t do that. In fact, real builds have O(n) edges, which means that the build tool ought to be able to process the graph in O(n) time (see the sketch after this list).
    * Ensuring a correct build is really not all that tricky, and is very much possible using make, provided you follow long-established best practices for that tool.
    * SCons performance problems stem from design and implementation decisions in SCons, rather than from some pathology of the builds being executed, or from some absolute requirement dictated by the problem of ensuring correct builds.
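
    As a rough illustration of what O(n) processing looks like (a generic sketch, not code from any particular build tool), a Kahn-style topological traversal touches every node and edge a constant number of times:

        from collections import deque

        def topological_order(nodes, edges):
            """Kahn's algorithm: O(V + E) time and space.

            nodes: iterable of hashable node names
            edges: iterable of (source, target) pairs, 'target depends on source'
            """
            nodes = list(nodes)
            succ = {n: [] for n in nodes}       # source -> list of targets
            indegree = {n: 0 for n in nodes}    # unresolved dependencies per node
            for src, dst in edges:
                succ[src].append(dst)
                indegree[dst] += 1

            ready = deque(n for n in nodes if indegree[n] == 0)
            order = []
            while ready:
                n = ready.popleft()
                order.append(n)                 # n could be (re)built at this point
                for m in succ[n]:
                    indegree[m] -= 1
                    if indegree[m] == 0:
                        ready.append(m)

            if len(order) != len(nodes):
                raise ValueError("dependency graph has a cycle")
            return order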

    Thanks for writing this article; it was quite interesting to read. I hope you find my response worthwhile.

    Sorry for leaving this reply so late, but I was not able to find a hint in http://www.scons.org/CHANGES.txt that the implementation-based performance penalties in SCons described by Eric Melski have been tackled in the releases starting with 2.1.0.

    I’d be happy, if you’d like to comment on this.

    Furthermore, I wonder if it might be a good idea to put the rule itself into the DAG, i.e. the source objects and the target objects would no longer be directly connected, but connected via a node representing the rule.
    This gives you a DAG with (sorry for the LaTeX syntax)
    N_{Edges} = \sum_{i=1}^{N_{Rules}} (N_{SourceObjects, i} + N_{TargetObjects, i})
    instead of
    N_{Edges} = \sum_{i=1}^{N_{Rules}} (N_{SourceObjects, i} \times N_{TargetObjects, i})
    The builder of a rule has to be executed again if one of its source objects has changed.
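
    A rough sketch of what I mean (made-up helper names, nothing SCons-specific): each rule becomes a node of its own, so a rule with s sources and t targets contributes s + t edges instead of s * t:

        def build_rule_dag(rules):
            """rules: iterable of (sources, targets, builder) triples."""
            edges = []
            for i, (sources, targets, builder) in enumerate(rules):
                rule_node = ("rule", i)               # one extra node per rule
                for src in sources:
                    edges.append((src, rule_node))    # s_i edges in
                for dst in targets:
                    edges.append((rule_node, dst))    # t_i edges out
            return edges                              # total: sum of (s_i + t_i)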

    Would this change your analysis?

    Having used a make(pp)-based build tool chain for the last few years, I’m currently evaluating SCons for the following reasons.
    1. Works on Windows and Linux.
    2. Seems to have a very powerful yet simple syntax for expressing my sometimes complex source/target relations.

    These performance problems concern me a little.

    Anyhow: thanks to both of you Erics for your articles.
