Declarative is greater than imperative

Sometimes I’m a helpless victim of my urges.

A while back – very late in 2016 – I started work on a program called loccount. This project originally had two purposes.

One is that I wanted a better, faster replacement for David Wheeler’s sloccount tool, which I was using to collect statistics on the amount of virtuous code shrinkage in NTPsec. David is good people and sloccount is a good idea, but internally it’s a slow and messy pile of kludges – so much so that it seems to have exceeded his capacity to maintain; at time of writing in 2019 it hadn’t been updated since 2004. I’d been thinking about writing a better replacement, in Python, for a while.

Then I realized this problem was about perfectly sized to be my learn-Go project. Small enough to be tractable, large enough to not be entirely trivial. And there was the interesting prospect of using channels/goroutines to parallelize the data collection. I got it working well enough for NTP statistics pretty quickly, though I didn’t issue a first public release until a little over a month later (mainly because I wanted to have a good test corpus in place to demonstrate correctness). And the parallelized code was both satisfyingly fast and really pretty. I was quite pleased.

The only problem was that the implementation, having been a deliberately straight translation of sloccount’s algorithms in order to preserve comparability of the reports, was a bit of a grubby pile internally. Less so than sloccount’s, because it was all in one language, but still. It’s difficult for me to leave that kind of thing alone; the urge to clean it up becomes like a maddening itch.

The rest of this post is about what happened when I succumbed. I got two lessons from this experience: one reinforcement of a thing I already knew, and one who-would-have-thought-it-could-go-this-far surprise. I also learned some interesting things about the landscape of programming languages.

For those of you who haven’t used sloccount, it counts source lines of code (SLOC) in a code tree – that is, lines that are non-blank and not comments. SLOC counts are useful for complexity estimation and predicting defect incidence. SLOC is especially good for tracking how the complexity (and hence, for example, the potential attack surface) of a codebase has changed over time. There are more elaborate ways of making numbers that aim at similar usefulness; the best known is called “cyclomatic complexity”, and there’s another metric called LLOC (logical lines of code) that loccount can also tally for many languages. But it turns out empirically that it’s quite difficult to do better than SLOC for forecasting defect incidence, at least at the present state of our knowledge. More complex measures don’t seem to gain much; there is some evidence that LLOC does, but even that is disputable.

I wanted to use loccount to post hard numbers about the continuing SLOC shrinkage of NTPsec. We’ve shrunk it by a little better than 4:1 since we forked it, from 231 KLOC to 55 KLOC, and that’s a big deal – a large reduction in attack surface and an equally large improvement in auditability. People have trouble believing a change that dramatic unless you can show them numbers and how to reproduce those numbers; I wanted to be able to say “Here’s loccount and its test suite. Here’s our repository. See for yourself.”

But the code was still grubby, with multiple different rather ad-hoc parsers in it inherited from sloccount for different categories of languages; sloccount supports 27 in total. I didn’t like that, and over the next two years I would occasionally return to the code, gradually refactoring and simplifying.

Why parsers? Think about what you have to do to count SLOC. You need to run through a source file recognizing the following events:

  • Start of comment
  • End of comment
  • Newline (so you can bump the SLOC counter when you’re outside a comment)
  • Start of string literal (so you start ignoring what would otherwise look like a comment start)
  • End of string literal (so you stop ignoring comment starts)

This is complicated by the fact that in any given language there may be multiple syntaxes for start or end comment and start or end string literal. C and most other languages have two styles of comment; /* the block comment, which can contain newlines and has an explicit end delimiter */, versus // the winged comment, which can’t contain newlines and ends at the next one.
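
To make that concrete, here is a minimal sketch of the event loop in Go, with C-style syntax hardwired in. It is an illustration of the idea only, not loccount’s actual code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // sloc counts non-blank, non-comment lines, tracking exactly the events
    // listed above: comment starts and ends, string starts and ends, newlines.
    func sloc(src string) int {
        count := 0
        inComment := false // inside a /* ... */ block comment
        inString := false  // inside a "..." string literal
        scanner := bufio.NewScanner(strings.NewReader(src))
        for scanner.Scan() {
            line := scanner.Text()
            hasCode := false
            for i := 0; i < len(line); i++ {
                switch {
                case inComment:
                    if strings.HasPrefix(line[i:], "*/") {
                        inComment = false
                        i++ // skip the second delimiter character
                    }
                case inString:
                    hasCode = true
                    if line[i] == '\\' {
                        i++ // skip the escaped character
                    } else if line[i] == '"' {
                        inString = false
                    }
                case strings.HasPrefix(line[i:], "//"):
                    i = len(line) // winged comment: ignore the rest of the line
                case strings.HasPrefix(line[i:], "/*"):
                    inComment = true
                    i++
                case line[i] == '"':
                    inString = true
                    hasCode = true
                case line[i] != ' ' && line[i] != '\t':
                    hasCode = true
                }
            }
            if hasCode {
                count++
            }
        }
        return count
    }

    func main() {
        data, _ := os.ReadFile(os.Args[1])
        fmt.Println(sloc(string(data)))
    }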

The most naive way to handle this simple grammar would be to write a custom parser for each language. Sometimes, when the language syntax is especially grotty, you have to do that. There’s still a bespoke parser for Perl inside loccount. But most languages fall into fairly large syntactic families in which all the members can be parsed alike. Sloccount, supporting 27 languages, had at least four of these: C-like, Pascal-like, scripting-language-like, and Fortran-like.

Now, if you’re me, learning that most languages fall into families like that is interesting in itself. But the number of outliers that don’t fit, like Lisp and assembler (which sloccount can tally) and really odd ones like APL (which it can’t) is annoying. I can’t look at code like that without wanting to clean it up, simplify it, fold special cases into general ones, and learn about the problem space as I do so. Victim of my urges, I am.

I’ve written more parsers for messy data than I can remember, so I know the strategies for this. Let’s call the bespoke-parser-for-every-language level 0, and recognizing language families level 1. Sloccount was already at level 1 and loccount inherited that. Level 2 would be reducing the number of distinct parsers, ideally to one. How do we do that?

Let’s consider two specific, important family parsers: the C family (C, C++, Objective-C, Java, C#, Javascript, Scala) and the Pascal family. The main differences are (1) the block comment syntax is /* */ in C but (* *) in Pascal, (2) Pascal has no winged comments like C’s //, and (3) C uses double quotes around string literals vs. Pascal’s single quotes.

So instead of two parsers with two functions “is there a /* next?” and “is there a (* next?”, you write one function “is there a block comment start next?” which dispatches to looking for /* or (*. Your “is there a winged comment start?” function looks for // in a source file with a .c extension and always returns false in a file with a .pas extension.
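
In miniature, with names I’ve made up for illustration rather than anything actually in loccount, the level-2 move looks like this:

    package sloc

    import "strings"

    // Level 2 in miniature: one matcher per terminal, dispatching on which
    // syntactic family the source file belongs to.
    func atBlockCommentStart(family, lookahead string) bool {
        switch family {
        case "c":
            return strings.HasPrefix(lookahead, "/*")
        case "pascal":
            return strings.HasPrefix(lookahead, "(*")
        default:
            return false
        }
    }

    // Pascal (in the simplified description above) has no winged comments,
    // so only the C family ever answers yes here.
    func atWingedCommentStart(family, lookahead string) bool {
        return family == "c" && strings.HasPrefix(lookahead, "//")
    }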

That’s level 2. You’ve replaced “look for this family-specific syntax” with “look for the start-block-comment terminal” with the token-getter figuring out what to match based on which family the file contents is in. But that doesn’t handle idiosyncratic languages like – say – D, which uses /+ +/. Well, not unless you’re willing to let your start-block-comment matcher be a huge ugly switch statement with an arm not just for each family but for each outlier file extension.

And you’d need parallel big ugly switches in every other terminal – the one for end of block comment, the one for winged comment, the one for string delimiter, et tiresome cetera. Don’t do that; it’s ugly and hard to maintain.

Level 3 is where you change the big switch/case into a traits table. At start of processing you use the file extension to figure out which traits entry to use. You read your start and end block comment from that entry. It may have flags in it which declare, for example “this language uses single-quoted strings”, or “block comments nest in this language”.
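
Concretely, a level-3 traits table might look something like the following sketch; the type and field names are illustrative, not loccount’s actual declarations:

    package sloc

    // A sketch of a "level 3" traits table: each language is pure data, and a
    // single generic parser consults whichever entry matches the file extension.
    type languageTraits struct {
        name          string   // language name for reporting
        extensions    []string // file extensions that select this entry
        blockStart    string   // block comment opener, "" if none
        blockEnd      string   // block comment closer
        wingedComment string   // end-of-line comment leader, "" if none
        stringQuotes  string   // characters that may delimit string literals
        commentsNest  bool     // do block comments nest?
        eolsInStrings bool     // may string literals contain raw newlines?
    }

    var traitsTable = []languageTraits{
        {"C", []string{".c", ".h"}, "/*", "*/", "//", `"'`, false, false},
        {"Pascal", []string{".pas"}, "(*", "*)", "", `'`, false, false},
        {"D", []string{".d"}, "/+", "+/", "//", `"`, true, false},
        {"Python", []string{".py"}, "", "", "#", `"'`, false, true},
    }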

This move from executable code to a traits table is really important, because it drastically reduces the complexity cost of adding support for new languages. Usually, now, each new one is just a table entry that can be easily composed from a read of the syntax part of the language specification. Declarative is greater than imperative!

The project NEWS file and repo history tell us how dramatic this effect is. In the first two years of the project’s lifetime I added about 14 languages to the original set of 27, less than one a month. Late last month I got the ontology of the traits table about right after staring long enough at the languages I’d been supporting. I immediately added five languages on one day.

Then I hit a bump because I needed more trait flags to handle the next couple of languages. But as I added the next half-dozen or so (Algol 60, F#, Dart, Cobra, VRML, Modula-2, and Kotlin) I noticed something interesting; the frequency with which I needed to add trait flags was dropping as I went. I was beginning to mine out the space of relevant syntactic variations.

Now I want to draw a contrast with the project I used to define the term Zeno tarpit. On doclifter I can always make more progress towards lifting the last few percent of increasingly weird malformed legacy documents out of the nroff muck if I’m willing to push hard enough, but the cost is that the pain never ends. Each increment of progress normally requires more effort than the last.

In loccount I’m clearly heading in the opposite direction. In the last week (and remember most of my attention is on NTPsec; this is a side project) I’ve added no fewer than 34 new languages and markup formats to loccount. My time to put in a new one is down to minutes – skim the spec, fill in a new table entry, drop a sample into the test directory, eyeball the results to make sure they’re sane, and update the NEWS file.

The contrast with the effort required to get another 0.1% down the long tail of the man-page corpus is really extreme. What makes the difference?

OK, yes, the nroff-lifting problem is more complicated. You’re parsing for many more different kinds of terminals and relations among them. That’s true but it’s not a “true” that generates much insight. No; the real difference is that in that domain, I still have to write a parser extension – executable code – in order to make progress. In loccount, on the other hand, I’m just composing new table entries. Declarative is greater than imperative!

Actually it has become a game; I’ve made a couple of passes through Wikipedia’s List of programming languages looking for plausible candidates. I’ve put in historical relics like SNOBOL4 and B simply because I could. When you’re on a stamp-collecting streak like this, it’s natural to start wondering just how far you can push it. A victim of my urges sometimes, am I.

I do have some limits, though. I haven’t gone after esoteric languages, because those are often deliberately constructed to be so syntactically bizarre that upgrading my parser would be more like work than fun. And I really don’t care about the 17th dialect of Business BASIC or yet another proprietary database language, thank you. Also, I have in general not added languages that I judged to be academic toys intended more to generate research papers than production code, though I gave in on a few I thought to be of particular historical interest.

The lesson here is one I know I’ve written about before, but it deserves reinforcing. Declarative is greater than imperative. Simple code plus smart data is better than enough smart code to do the same job. The move from four parsers to one parser and a big trait table was a win on every level – the result is easier to understand and much easier to extend. This is still an underutilized design strategy.

There is a level 4. I may yet do something like what we did with Open Adventure and move all that smart data from a table initializer to a YAML file compiled back into the trait table at build time. Then adding new languages would usually just be an edit to the YAML, not touching the Go code at all.
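
To give the flavor, a level-4 entry might look something like this; the schema is hypothetical, since nothing like it exists in loccount yet:

    # Hypothetical shape of a level-4 traits file, compiled into the Go table at
    # build time. The field names are invented for illustration, not a real schema.
    - name: C
      extensions: [c, h]
      block_comment: ["/*", "*/"]
      winged_comment: "//"
      string_quotes: "\"'"
    - name: Pascal
      extensions: [pas]
      block_comment: ["(*", "*)"]
      string_quotes: "'"
    - name: D
      extensions: [d]
      block_comment: ["/+", "+/"]
      winged_comment: "//"
      comments_nest: true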

I think at this point I’ve come pretty close to entirely excavating the space of production languages that might reasonably appear on a Unix system today. Now prove me wrong. Throw a language loccount -s doesn’t already list at me and let’s see if my generic parser and quirks can cope.

Some restrictions: No esolangs, please; nothing with only closed-source implementations; nothing outright extinct; no fad-of-the-week webdev crap; no academic toys only ever touched by the designer’s research group. Exceptions to these restrictions may be made in cases of exceptional historical interest – it amused me to add B and SNOBOL4 and I’d do it again.

Oh, and the current count of languages? 117. The real surprise is that 117 languages was this easy. There is less variation out there than one might suppose.

94 comments

  1. It seems to ignore Inform ( https://en.wikipedia.org/wiki/Inform ). E.g.:

    $ git clone git@gitlab.com:duncan-bayne/witchingisle1.git
    Cloning into 'witchingisle1'...
    remote: Enumerating objects: 11, done.
    remote: Total 11 (delta 0), reused 0 (delta 0)
    Receiving objects: 100% (11/11), 21.04 KiB | 78.00 KiB/s, done.
    Resolving deltas: 100% (2/2), done.
    $ loccount ./witchingisle1
    makefile SLOC=5 (100.00%) LLOC=0 in 1 files

  2. When you start a project such as sloccount in pretty much any general-purpose language, you start at level 0 and climb whenever you notice just enough similarity.

    However, if you were to start it in Emacs Lisp, you’d probably be at level 3 from the get-go, because Emacs already has a syntax table abstraction for recognizing comments and string literals.

    I wonder what insight can be gained from this observation.

  3. A small feature request/idea, if it’s not already supported: i have my own home-grown language which is syntactically similar enough to JavaScript that i use emacs’s JS mode when editing code for it. i’d like to tell loccount that files named *.s2 are JavaScript. Is there a way, short of editing the traits table, to add extensions as aliases for other entries?

    i’ve just run it on one tree and get tons of warnings about “newline in string”. i don’t think that needs to be a warning: newlines are perfectly legal in shell code strings. It seems fair to me for loccount to assume that its inputs are legal inputs for the appropriate compilers/interpreters, without loccount trying to judge the code’s syntax.

    loccount also gives that newline warning for most of a TCL file i have lying around here, apparently confused (as is emacs’s syntax highlighting) by a particularly weird escaped string construct which likely won’t paste properly here, but i’ll try:

    proc quote-if-needed {str} {
      if {[string match {*[\" ]*} $str]} {
        return \"[string map [list \" \\" \\ \\\\] $str]\"
      }
      return $str
    }

    loccount warns incessantly about “newline in string” after that “string match” line. (This tcl code is known to work, so it’s presumably syntactically legal in tcl.)

    Edit: it also occurs to me: file extensions aren’t quite enough. That weird tcl bit is from a file named “autosetup”, without an extension, and many shell/perl/whatever scripts don’t have extensions. Analysis of the shebang line, if any, would be an interesting addition.

    1. >Is there a way, short of editing the traits table, to add extensions as aliases for other entries?

      No, not yet. I might take a patch to implement an alias option or config file for such aliases.

      >i’ve just run it on one tree and get tons of warnings about “newline in string”. i don’t think that needs to be a warning: newlines are perfectly legal in shell code strings.

      Yes, but not in C or many other languages. There’s a quirk flag “eols” that tells loccount string literals can have embedded newlines. The reason there’s a warning about this is that otherwise an unbalanced string quote in a language without eols can silently mess up your line counts.

      I think the actual problem here is that Tcl doesn’t have the cbs quirk, which tells it that backslashes can escape string quotes. I’ve just added that to Tcl and Wish, and eols to shell.

      >Analysis of the shebang line, if any, would be an interesting addition.

      Already done, but the extensionless file has to have its exec bit set before loccount will look at it.
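
      The idea is nothing deep; a sketch of what shebang dispatch amounts to (illustrative, not the code actually in loccount) looks like this:

      package main

      import (
          "bufio"
          "fmt"
          "os"
          "path/filepath"
          "strings"
      )

      // languageFromShebang reads the first line of an extensionless script and,
      // if it is a shebang, returns the interpreter's basename (looking through
      // /usr/bin/env) as a language hint.
      func languageFromShebang(path string) string {
          f, err := os.Open(path)
          if err != nil {
              return ""
          }
          defer f.Close()
          line, _ := bufio.NewReader(f).ReadString('\n')
          if !strings.HasPrefix(line, "#!") {
              return ""
          }
          fields := strings.Fields(strings.TrimPrefix(line, "#!"))
          if len(fields) == 0 {
              return ""
          }
          interp := filepath.Base(fields[0])
          if interp == "env" && len(fields) > 1 {
              interp = filepath.Base(fields[1])
          }
          return interp // e.g. "sh", "python", "tclsh"
      }

      func main() {
          fmt.Println(languageFromShebang(os.Args[1]))
      }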

  4. > Sometimes I’m a helpless victim of my urges.

    I ate half a box of peanut butter cookies tonight.

    But I’m running second shift and need to be up for a meeting at 6:30.

    1. >I ate half a box of peanut butter cookies tonight.

      Mmmm. My second favorite cookie, as it happens, but it has to be the chewy home-style kind with fork marks. Most boxed versions don’t tempt me.

      My very favorite cookie is an old-fashioned kind of raisin cookie that’s like a raisin-filled mini-turnover. My mother makes them for holidays. They’re rare; I’ve found recipes for them on the net but never seen one anyone but my mother made.

      1. Those cookies sound really good…what are they called?

        My favorite would probably be black walnut cookies. Close seconds would be any of: shortbread, snickerdoodle, oatmeal raisin, and peanut butter.

        Also, if it’s not homemade, it’s not a cookie, it’s an imitation cookie. (This is true for nearly all baked goods…the commercial recipes do things to save costs which compromise flavor, texture, and nutrition.)

        1. >Those cookies sound really good…what are they called?

          We never called them anything but “raisin cookies”. My mother told me once they were popular in the 1940s and 1950s but went out of fashion.

          The picture attached to the recipe for “Old Fashioned Christmas Raisin Delights” looks about right. A couple other websites describe what is clearly the same concept as a “filled raisin cookie”.

            1. >Looks like a Chorley Cake, locally known as “Fly Pie” :-)

              You happen to be addressing one of the relative handful of Americans who has actually laid tooth on a Chorley cake, or something nigh-indistinguishable from one. They’re unknown in the U.S., but I’ve lived and traveled in Great Britain. Actually I didn’t know they had a name (other than perhaps J. Random British Pastry Thing Of Which We Yanks Wot Not) until I chased your link.

              A filled raisin cookie does look a bit like a Chorley cake, but the construction of the shell is quite different. To make a filled raisin cookie, you cut a round of sugar-cookie dough, drop a generous spoonful of raisins in the middle, then fold it over into a half-moon shape and crimp the edges down lightly. Whereas the shell on a Chorley cake is symmetrical around the filling and…well…cakelike (or scone-like in some variations)…the shell on a filled raisin cookie is thinner, denser, and slightly chewier.

              1. The shell on a Chorley Cake is shortcrust pastry. If it’s flaky pastry then it’s a (more common and inferior) Eccles Cake.

        1. >They weren’t actual peanut butter cookies, they were Girl Scout Do-Si-Dos.

          Sigh. One of my personal why-did-they-ruin-it moments was the year they changed the recipe of Girl Scout shortbread cookies, which were at one time surprisingly good for a cheap mass-produced shortbread – properly buttery and crumbly and not overly sweet. Then they added lemon and sugar and (I think) reduced the butter content. Ruined the flavor and texture, and one of the small constant good things of life died. I still haz a small sad when I think about it.

          1. Mint chocolate short circuits my “yum” response so hard that I have hardly ever managed to try any Girl Scout cookies aside from Thin Mints. I can hardly imagine how they could ruin them aside from adding sodium fluoride or some such (the one way I know of to ruin mint is to use it to flavor dental products. Yech).

            1. I wouldn’t say they ruined thin mints, but they are not quite as good now as before they changed the recipe a few years ago.

              Outside that, completely there on the cookies. It is so unlikely that any other Girl Scout cookie will be as good to me as a thin mint, why would I try something else?

            2. The recipe change that JohnOC mentioned from what I can recall means that I need to avoid them now; I don’t remember what the original recipe was but the new(er) formulation is such that I can’t eat them without risking a migraine.

              I may eventually try to come up with a recipe for them, but I’m not all that interested in the labor required to chocolate-cover things.

              (On the mint+chocolate topic, the same happened to York peppermint patties; they used to be good but the shade of blue in the logo changed sometime in the last few years, and the recipe apparently did as well….they don’t taste the same, and I find them near-revolting now.)

        2. See, those just aren’t quite as good as the sort of real-deal peanut butter cookies ESR described. Not bad, but a bit disappointing once you’ve had the real thing.

          Tagalongs, on the other hand, are one of my great weaknesses. They’re one of the few cookies where I not only want to wolf down an entire box, but have in fact done so, not really even noticing until I reached for another and there weren’t any. The only way these could be better is to replace the milk chocolate with dark, thus cutting the sweetness just a wee bit – which is the one weakness of the current recipe.

    2. Peanut butter cookies are my favorite after any dark chocolate chunk cookie and an ancestral brownie my grandmother dubbed ‘Congo Chip Nut Bars’.

      She also made a killer chocolate chip oatmeal cookie. And, unfortunately for my taste, an oatmeal raisin cookie that was nigh-on indistinguishable from it. (Edit: upon visual inspection.) To quote the comedian, “Raisin Cookies are the reason I have trust issues…”

      It’s not the taste, it’s the texture. You bite into it and suddenly you’re just holding it in your mouth because you just know an attempt to swallow is going to result in catastrophic explosive out-cookying…

      My youngest brother has the same reaction.

  5. Mrrrr…is a LOC metric even useful in APL? Especially considering one line of APL can replace hundreds of lines of code in just about any other language?

  6. Your table is now a part of the code, and yet its complexity isn’t captured in loccount.
    It’s (honestly) great that you’ve replaced hundreds of LOCs with a few entries in a table; every step up the abstraction ladder makes code easier to understand/debug/modify.
    However, unless you’re able to quantify the entropy you’re adding to that table, the entropy that was removed from the previous level becomes somewhat immaterial.
    Thermodynamics was my bane, but I’m fairly positive that there is a way to compare (valuate) the changes of a system across multiple abstraction levels.

    1. You could compare Kolmogorov complexity. There’s an algorithm for doing that that’s pretty easy to implement, IIRC.

      1. Could you give more information about that algorithm? I have an interest in that stuff. The only “algorithms” I know to compare KC are using compression.

  7. Has it tackled Go code now? In a recent post I thought I remembered Go was still not on the list.

    1. >Has it tackled Go code now? In a recent post I thought I remembered Go was still not on the list.

      It could do Go SLOC just fine from near the beginning. Until recently it couldn’t do Go LLOC because Go is not one of the languages with an end-of-statement marker. But I got Go’s own parser library to count statements for me in release 2.1.
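
      Roughly, that boils down to walking the AST that the standard go/parser hands you and counting statement nodes; a sketch (not the exact code in loccount) looks like this:

      package main

      import (
          "fmt"
          "go/ast"
          "go/parser"
          "go/token"
          "os"
      )

      func main() {
          fset := token.NewFileSet()
          file, err := parser.ParseFile(fset, os.Args[1], nil, parser.ParseComments)
          if err != nil {
              fmt.Fprintln(os.Stderr, err)
              os.Exit(1)
          }
          lloc := 0
          ast.Inspect(file, func(n ast.Node) bool {
              // Count every statement node except the braces-only blocks.
              if _, ok := n.(ast.Stmt); ok {
                  if _, isBlock := n.(*ast.BlockStmt); !isBlock {
                      lloc++
                  }
              }
              return true
          })
          fmt.Println("LLOC:", lloc)
      }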

  8. > >i’ve just run it on one tree and get tons of warnings about “newline in string”. i don’t think that needs to be a warning: newlines are perfectly legal in shell code strings.

    > Yes, but not in C or many other languages. There’s a quirk flag “eols” that tells loccount string literals can have embedded newlines. The reason there’s a warning about this is that otherwise an unbalanced string quote in a language without eols can silently mess up your line counts.

    If the input is not valid code, loccount shouldn’t be responsible for trying to behave as if it _might_ be valid code. IMO that clearly falls into the realm of Undefined Behaviour, and any resulting messed up counts become the responsibility of the code’s owner(s). Having loccount vet the syntax of one’s code sounds way out of scope – that’s what the compiler/interpreter are for.

    Edit: PS: thank you for this tool! i’ve long wanted an update to sloccount.

    1. >Having loccount vet the syntax of one’s code sounds way out of scope – that’s what the compiler/interpreter are for.

      There is something in what you say, but also something to be said for doing the best one can not to silently return bad data.

      I’ll consider removing the feature if I get lots of reports of spurious warnings.

      1. C++ supports ‘raw’ strings that can contain anything, including new lines. I don’t remember how to quote code but both these strings,

        const char raw[] {R"(A
        string
        spread
        over
        some
        lines.
        )"};

        const char cooked[]{"This "
        "string "
        "is "
        "just a single line\n"};

        presumably should be treated as single statements. Perhaps it doesn’t matter.

        1. The second one – implicitly concatenated string literals – isn’t a problem, because my parser will ignore /* or */ within the string literals.

          The first case is potential trouble. If the string “/*” were to replace, say, “over”, the parser wouldn’t know it was in an intended string literal and would see that as a block comment start, ignoring lines until it reached a */ or EOF.

          The parser does have a notion that some languages have delimiters for multiline strings. But its model for that is Python – it expects the start and end delimiters to be """ or '''. It would take a significant parser extension to fix this.

          For now I will document it as a known limitation.

          1. I think it’s unlikely in reality to have new lines in raw strings. As a usage it seems obfuscatory. They make sense for uses like defining regular expressions. But I expect to be surprised…

            1. Be surprised. Multiline raw strings are useful to include an SQL query, a JSON document, or even a piece of text output in a C++ program. No, not all of these it makes sense to move to a config file.

              1. I used a lot of these in unit tests where the input was some form of Protobuf – just express everything in human-readable text and run it through a protobuf parser to feed to the code being tested.

  9. @ESR: “Declarative is greater than imperative!”

    I was somewhat surprised to see you describe “table driven” as being “declarative”.

    I’ve written lots of table-driven code (even on 8-bit micros) but never really thought of table-driven as being a declarative “language”, per se. When I think of declarative, my mind wanders to things like SQL or QML.

    I suppose my working definition of declarative is not really correct.

    But wholeheartedly agree with your article. Wondering if our thought process ought always to be “look for a declarative solution, go imperative only if necessary”.

    1. >I suppose my working definition of declarative is not really correct.

      No doubt you bind this term to declarative markup languages. But the syntax of those is just a container; the semantic part is that they declare entities and relationships and properties. Sometimes a plain old table can do that just as well.

    1. >What kind of interest do you have in supporting terrible features? The one which comes to mind are digraphs/trigraphs.

      I’m really tempted to just say “Fsck those and the source they rode in on.”

      But reading the article I don’t see any case where they’d cause me a parsing problem. I think I can just ignore them.

      1. Most of the time, but it (i|wa)s possible for a ??/ trigraph to escape an end-of-line in a comment, turning the next physical line into a continuation of the comment.

        http://www.gotw.ca/gotw/086.htm

        Fortunately, trigraphs are removed in C++17.

      2. (Going back to the previous blog entry, I note that trigraphs were the C workaround for the limitations imposed by a deliberate feature of the ASCII/ISO-646 project. Thus demonstrating the sort of troubles binary protocol design can cause even when there’s no alternative to designing one.)

  10. One of my principles for development over the last several years has been “complexity in a system should be an emergent property”. What I mean is the system should be relatively simple in the absence of data. Over time, as content is added, complexity increases proportionally to the amount added.
    The Unix way of simple tools that can be composed into something much greater than the sum of the parts supports this principle. Your approach to handling input complexity is another good illustration.
    BTW – just used “Zeno tarpit” in a conversation the other day. Thanks for sharing your insights – it has reinforced a lot of my recent thinking on development in general.

  11. I’m wondering if the next step isn’t to detach the table from the code and make it a config file. Then people like Stephan Beal can add languages you don’t know about themselves, verify that it works and then, if they want, submit an addition to the file. It’s true the config file will need to know about the half dozen special parser bits but that doesn’t seem like a big hassle.

    1. >I’m wondering if the next step isn’t to detach the table from the code and make it a config file.

      I did describe Level 4 as the traits table becoming a YAML file. I had in mind using it to generate the initializer for the table at build time. To handle Stephan Beal’s case that wouldn’t be good enough – the program would have to pick up an update to the table at runtime.

      Might happen sometime but it’s way down my priority list.

    2. To me, this is a near given. Once you’ve done the conceptual work to change something from “code” to “data”, the next step is to remove the data part from the source code, and put it in a config file.

      Coincidentally, I was working up a little script to handle an admin issue today, and even though it’s probably a one-off, I wrote it to read a file rather than embed that same information in the script. That’s probably from decades of experience with such “one-off” things getting repurposed, and it’s easier to do that if the data live in a separate file.

  12. Alternately, the ability to add a client-specific yaml file to the build dir. If found, the yaml-to-table build process would include it after including the main yaml. e.g. custom.yaml includes my own local hacks, and i simply have to drop that into the loccount checkout before building. Hypothetically. My particular case is not worthy of inclusion into the main/checked-in table – AFAIK i’m the only person in the world using that particular language, so it clearly qualifies as esoteric.

    1. >If found, the yaml-to-table build process would include it after including the main yaml. e.g. custom.yaml includes my own local hacks

      That’s a pretty good plan. It ought to be doable.

  13. Thought experiment: what if you removed this array from the code and placed that special data at the beginning of the file you are processing, so that at program start you load this “header” and continue normal operations. Does the code then become imperative? Effectively only one line changes in the implementation, and the only thing left is imperative code without any “declarations”.

    I think this code never stops being imperative, but it is not just any imperative code – it is generic/abstract code. Similar to comparing hand-written `quick_sort_int` and `quick_sort_double` with `qsort`, or even better `std::sort`.

    Because the core problem is regular, you can easily have clean imperative code that handles 99% of cases, with all the differences put in one table. But other problems can’t be abstracted out that easily. This is why doclifter is a mess – not because it is imperative. If my conclusion is wrong, then you can prove that by rewriting it using declarative methods. Is it even possible?

    btw how would your program handle MySQL code like this:

    CREATE TABLE t1(
    a INT,
    KEY (a))
    /*!50110 KEY_BLOCK_SIZE=1024 */

    The last line isn’t necessarily a comment; depending on the MySQL version it is either a comment or part of the command.

  14. I’ve really been enjoying your writing recently. Thanks. If I had money, I’d shoot you some, but I’m pretty broke these days. :(

  15. How about adding support for SAS (Statistical Analysis Systems)? Although SAS itself is closed source, lots of code written in the SAS language is available open source.
    It does have several methods of commenting, and string literals in single or double quotes.

    1. >How about adding support for SAS (Statistical Analysis Systems)?

      It could happen. Read this, follow the directions, and file an issue.

  16. I read through loccount.go. A MATLAB file need not contain the string “end”, and block comments are actually fairly uncommon, because the native editor’s features encourage the use of winged comments.

    In fact, it is in general not possible to disambiguate Objective-C from MATLAB, as it is possible to construct a file that is valid under both languages. Consider the following three files:

    trivial.m
    i=1;

    objectivec.m
    #import <Foundation/Foundation.h>
    int main() {
    int i;
    #include "trivial.m"
    NSLog(@"The value of i is %d.\n", i);
    return 0; }

    matlab.m
    trivial;
    fprintf('The value of i is %d.\n', i);

    The Objective-C compiler and the MATLAB interpreter both consider this legal input and issue no errors or warnings. But trivial.m is part of both, so it can’t be neatly determined what language it represents just from its extension and contents.

    1. >The Objective-C compiler and the MATLAB interpreter both consider this legal input and issue no errors or warnings.

      All I can do is add more heuristics. Are “//” or “/*” ever legal in a Matlab file?

      1. Yes, many functions can be invoked either as foo('filepath') or as foo filepath so I could see that arising in a context like load data/*.mat for example. I’m not sure when you’d see a doubled forward slash, but it’s legal in filenames so I guess it might crop up if some tool autogenerates the code.

    2. >In fact, it is in general not possible to disambiguate Objective-C from MATLAB, as it is possible to construct a file that is valid under both languages.

      The approach I’m taking now is to consider it MATLAB if it contains either a MATLAB winged comment, a MATLAB block comment, or “end” at the start of some line.
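
      In sketch form (illustrative, not the exact code), that tie-breaker is just a scan for MATLAB-only evidence:

      package detect

      import "strings"

      // looksLikeMatlab applies that heuristic to the contents of a .m file:
      // call it MATLAB on any MATLAB-only evidence (a % winged comment, a %{
      // block comment opener, or "end" at the start of a line); otherwise
      // leave it to be treated as Objective-C.
      func looksLikeMatlab(src string) bool {
          for _, line := range strings.Split(src, "\n") {
              trimmed := strings.TrimLeft(line, " \t")
              if strings.HasPrefix(trimmed, "%") || strings.HasPrefix(line, "end") {
                  return true
              }
          }
          return false
      }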

      1. Ignorant question of ignorance: Why not an option for “Treat the file as source code in such-and-such language, ignoring what the extension might claim”?

  17. “Declarative is greater than imperative”

    Very catchy. I bet you have a lot of these.

    Have you compiled them anywhere?

  18. Your definition of Pascal comments — at least as described in this post (but maybe you were just simplifying?) — is incomplete:

    1) Curly braces, { and }, are the original Wirth begin- and end-comment markers. IIRC (* and *) were originally just replacements for those whose keyboards couldn’t easily generate curly braces. They may have fallen out of widespread use, but at least the Borland-derived Pascals still use them for compiler directives.

    2) Some Pascal dialects do have “winged” comments, // to end-of-line. Most notably Delphi, and AFAIK Free Pascal too, since it usually tracks Delphi rather closely.

    Since Delphi and FP probably constitute most of the Pascal code written nowadays, I’d say these are pretty important to get in there.

    I’ll see if I can post this to the link you mention above sometime later, when I’m at a real computer instead of the phone. Unless, that is, you already have this and were just eliding it above?

    1. >Your definition of Pascal comments — at least as described in this post (but maybe you were just simplifying?) — is incomplete:

      The code handles { } and //.

  19. How would loccount compare with tokei, the rust SLOC-counter that I’ve been using lately?

    It looks like tokei supports more languages (175), but I’m not sure if it places the same restrictions on adding new languages. It’s also optimized for runtime speed rather than the speed at which one can add a new language, but I actually have no idea where it falls on the declarative/imperative spectrum.

    EDIT: Actually, it looks like tokei is at what you called Level 4: you add a new language by adding a JSON file with the appropriate syntax. Which might explain why tokei supports so many languages.

    1. >How would loccount compare with tokei, the rust SLOC-counter that I’ve been using lately?

      Looks like tokei does 37 more languages, mostly web-framework stuff. Not doing GC means it’s probably a bit faster, though maybe not if I turn off GC. But loccount can do LLOC as well as SLOC.

      1. Following up on this (and using the list of languages you posted below) it looks like there’s significantly less overlap between the loccount languages and the tokei languages than I was expecting—I guess there are just a ton of languages out there! Tokei currently supports the following languages that loccount is missing:

        ABAP
        ActionScript
        Agda
        Alex
        ASP
        ASP.NET
        Assembly
        AutoHotKey
        Autoconf
        Automake
        BASH
        Batch
        BrightScript
        C Header
        C Shell
        Cabal
        Cassius
        Ceylon
        ClojureC
        ClojureScript
        Cogent
        ColdFusion
        ColdFusion CFScript
        Coq
        C++ Header
        Device Tree
        Dockerfile
        .NET Resource
        Dream Maker
        Edn
        Elvish
        Emacs Dev Env
        FEN
        Fish
        F*
        GDScript
        GLSL
        Hamlet
        Handlebars
        Happy
        HCL
        HEX
        HLSL
        Intel HEX
        Isabelle
        JAI
        JSX
        Julius
        Kakoune script
        Lean
        LESS
        LD Script
        Liquid
        Logtalk
        Lucius
        Madlang
        Meson
        Mint
        Module-Definition
        MSBuild
        Mustache
        Nix
        Not Quite Perl
        OCaml
        Objective C++
        Org
        Oz
        PSL Assertion
        PHP
        Polly
        Processing
        PureScript
        QCL
        QML
        Rakefile
        Razor
        ReStructuredText
        Ruby HTML
        SRecode Template
        Sass
        Standard ML (SML)
        Solidity
        Specman e
        Spice Netlist
        SVG
        SWIG
        SystemVerilog
        Plain Text
        Twig
        Unreal Markdown
        Unreal Plugin
        Unreal Project
        Unreal Script
        Unreal Shader
        Unreal Shader Header
        Ur/Web
        Ur/Web Project
        VB6
        VBScript
        Verilog Args File
        Visual Basic
        Visual Studio Project
        Visual Studio Solution
        Vue
        Wolfram
        XSL
        XAML
        XCode Config
        Xtend
        Zig
        Zsh

        Conversely, loccount supports these languages that tokei is missing:

        ABC
        Algol60
        Arc
        Asciidoc
        Asm
        Autotools
        Awk
        B
        BASIC
        BCPL
        BLISS
        Batchfile
        CLU
        CML
        Chapel
        ChucK
        Cobra
        Csh
        Dylan
        Eiffel
        Es6
        Expect
        Factor
        Fantom
        Frege
        Hy
        Icon
        Io
        J
        Lex
        Livescript
        Logo
        M4
        MATLAB
        ML
        MUMPS
        Mal
        Man
        Metafont
        Modula
        Modula2
        Modula3
        Nroff/troff
        Oberon
        Occam
        PHP
        PL/1
        POP-11
        PostScript
        PowerShell
        Rebol
        Rexx
        SETL
        SGML
        SNOBOL4
        Sather
        Sed
        Seed7
        Simula
        Skew
        Smalltalk
        Texinfo
        Turing
        VRML
        VisualBasic
        Waf
        WebAssembly
        Wish
        Yacc
        Yorick
        Zephir

        These lists could potentially be useful to people wanting to add languages to either project.

        1. >These lists could potentially be useful to people wanting to add languages to either project.

          Oh, hell yes. Gives me targets.

          Some of these aren’t real differences. You should run another check with the comparisons case-blinded. There are also some things like my “asm” being tokei’s “Assembly”.

  20. I am probably being dense here, but reinventing the config file, or even something one level below the config file, does not sound like a radical innovation to me, but more like business as usual?

    OK we come from different programming traditions, as mine is database oriented, and business, meaning sadly Windows. We like making things very configurable because we have to deal with power users who are afraid of programming, and sales demos are often oriented around how much you can do with just configuration, without coding. And closed source and all that…

    But this cannot be too alien to the Unix and open source tradition either. After all the Unix tradition championed scripting languages like Perl, where you have the huge advantage of being able to write the configuration file in the syntax of the language itself and just evaluate it.

    Surely I must be misunderstanding something, as you surely wouldn’t call something as trivial as a configurable program “declarative” and “smart data”. What am I missing? What I would call smart data is putting, depending on the language, function pointers or lambda-functions or suchlike into data structures. Effectively plug-ins.

    1. >Surely I must be misunderstanding something, as you surely wouldn’t call something as trivial as a configurable program “declarative” and “smart data”.

      Sure I would, when I’m contrasting a data-driven design with one that relies on a lot of ad-hoc imperative code.

      The clever part about this program isn’t that it’s configurable; it actually isn’t yet, not at runtime – I have to modify a table in code to add a language. It’s the combination of a traits table with a generic source-code parser.

  21. I didn’t see a list of currently supported languages in the repository. Maybe I overlooked it.

    Right now I am learning R, the Open Source statistics language that is something like the old Matlab on steroids. (Technically R is the system, and the language is called S, but everyone in the R community just talks about R.) Highly recommended.

    https://www.r-project.org/

    It looks like you’ve already implemented R support. Skimming the description file you linked makes me think that it was probably another trivial implementation.

    I’m not sure I would want to take the time to learn a language that couldn’t be trivially supported here; almost by definition, that means that it violates the rules that our brains have come to expect from code, making it harder to learn and intuit for little gain.

    1. >I didn’t see a list of currently supported languages in the repository. Maybe I overlooked it.

      Do “loccount -s”.

      Currently it’s: ABC Ada Algol60 Arc B BASIC BCPL BLISS C C# C++ CLU CML COBOL CSS Chapel ChucK Clojure Clojurescript Cobra CoffeeScript Crystal D Dart Dylan Eiffel Elixir Elm Erlang Expect F# Factor Fantom Forth Fortran Fortran03 Fortran90 Fortran95 Frege Go Groovy HTML Haskell Haxe Hy INI Icon Idris Io J JSON Java Javascript Julia Kotlin Lex Lisp Livescript Logo MATLAB ML MUMPS Markdown Metafont Modula Modula2 Modula3 Nim Oberon Objective-C PHP PL/1 POP-11 Pascal Perl Perl6 PostScript PowerShell Prolog ProtocolBuffers Python R Racket Rebol Rexx Ruby Rust SETL SGML SNOBOL4 SQL Sather Scala Scheme Scons Seed7 Simula Skew Smalltalk Swift TOML Tcl Tex Texinfo Turing Typescript VHDL VRML Vala Verilog Vimscript VisualBasic WebAssembly Wish XML YAML Yacc Yorick Zephir asciidoc asm autotools awk batchfile cmake csh elisp es6 lua m4 makefile mal man nroff/troff occam sed shell waf

      >Skimming the description file you linked makes me think that [R] was probably another trivial implementation.

      It was.

      1. I’m a bit confused that asm is just listed as one language. Wouldn’t you need to support, e.g, Intel and AT&T syntaxes for x86 separately (they seem to use different comment delimiters), let alone the potential quirks of asm for other architectures?

        1. >I’m a bit confused that asm is just listed as one language. Wouldn’t you need to support, e.g, Intel and AT&T syntaxes for x86 separately (they seem to use different comment delimiters), let alone the potential quirks of asm for other architectures?

          It’s a catch-all. It recognizes either ; or # or * as a winged-comment leader and /* */ as block-comment delimiters. Assemblers are assumed never to have string literals as such, so these things are all loccount needs to know to do its job.

          1. And I guess it’s assumed that while some assembly languages might use the winged comment leaders as something other than a comment leader (e.g, # for immediates on the PDP-11), they will never occur at the beginning of a line, so they won’t throw off the count? And, of course, I’ve never heard of /* having any syntactic meaning anywhere other than “begin block comment”.

            1. >And I guess it’s assumed that while some assembly languages might use the winged comment leaders as something other than a comment leader (e.g, # for immediates on the PDP-11), they will never occur at the beginning of a line, so they won’t throw off the count? And, of course, I’ve never heard of /* having any syntactic meaning anywhere other than “begin block comment”.

              Correct on both points.

          2. > Assemblers are assumed never to have string literals as such

            Old PDP-11 assembler had an .ASCIZ directive which specified a null-terminated string of character codes, initialized from a quoted string. E.g.:

            .ASCIZ "Hello, world!"

            GNU as supports this also. Does this count as a “string literal”?

            1. >GNU as supports this also. Does this count as a “string literal”?

              Yes. It happens that the way loccount is written, both single- and double-quoted string/char literals are processed by default (e.g. comment starts and ends inside them are ignored). The asm entry takes that default.

  22. “no fad-of-the-week webdev crap”

    Is there anything else, these days?

    (Having had to touch some “flavor of the week framework” nonsense lately I am currently surprised that anything anyone does on the web with “modern” tools can work for more than a week, and that it’s not all even MORE insecure than it is.)

  23. Moonscript would be a trivial one to add. It serves a similar purpose to CoffeeScript, but targets lua as the backend. It uses many of the same conventions as lua (comments, strings), so I suspect it would work properly with the same settings as lua. File extension is .moon

    1. >Given your status as a maintainer, might I suggest INTERCAL? ;)

      Nah. If INTERCAL syntax weren’t too weird to fit my generic parser I’d add it for the giggle value, but there’s not enough utility in counting INTERCAL LOC to justify a parser extension.

      1. Hey, it’s INTERCAL. The LOC counter doesn’t have to be helpful or accurate, nor do I see why the LOC count need be an integer, positive, or even real. A potential algorithm:

        Count the number of “DO” statements. Use this to select at random from the list of other languages supported by loccount, and count lines as if it were that language. Use this to generate a complex number of the form e^i*($NUMLINES). Then count the number of “PLEASE” and “PLEASE DO” statements, use each of these to randomly select a language to do a line count, and use some function of the resulting line counts to scale the unit-magnitude complex number generated in the first step.

        1. For INTERCAL, wouldn’t the appropriate thing be to treat all the lines of source code as comments? And to count (a) the total number of lines, (b) the number of lines that have any character as the first character on the line (and thus are comments), and (c) the number of blank lines, and then calculate the lines of code as a-(b+c)? So the LOC count returned would always be zero, but the program would go round Robin Hood’s barn to determine this.

      2. Well, I was really just hoping to make some people cringe with the mere suggestion. From what little I understand of INTERCAL, you’d need a full parser to determine which statements were invalid. Although a special handler for explicitly not handled languages with some random responses could be amusing.

  24. Hi ESR.

    Above, in the list of languages which loccount handles, you list J, but not APL, or other APL variants such as A, A+, K, etc.

    Is that because of the idiosyncratic graphic symbols common in APL, lack of demand, or for some other reason(s)?

    Thank you.

    1. >Above, in the list of languages which loccount handles, you list J, but not APL, or other APL variants such as A, A+, K, etc.

      It’s a combination of things.

      The APL character set certainly doesn’t make it easier. There is a utf-8 encoding of the APL character set, and working in Go helps because utf-8 is the native encoding of Go source code. But in my experience, even with those kinds of advantages you can always count on this sort of thing to be a pain in the ass.

      Also trying to count lines doesn’t have the utility in APL that it would in most other languages. You’re certainly not going to get figures that are readily comparable to, say, C.

      That second fact has reduced my motivation to work around the problems I would inevitably encounter.

      That said, if someone were to send me a patch and a test load that does a reasonable job of verifying it, I’d take the patch.

  25. It would be nice to describe how to install it for people outside the golang world.

    It ignores directories given with a parent path, e.g. “../../sources”.

    What about an option to output just the comments or just the program text, for grepping?
    Is it feasible? I have no particular use case for it, just brainstorming.

    Regards!

    1. >Well, I just learned of another line counter tool that also uses language description file and written in go

      That looks pretty well constructed.

      1. I’m going to put that on my resume.

        Happy to answer any questions about it, I too am a victim of my urges and spent far more time working on scc than any rational person would.
