Designing tasteful CLIs: a case study

Yesterday evening my apprentice, Ian Bruene, tossed a design question at me.

Ian is working on a utility he calls “igor” intended to script interactions with GitLab, a major public forge site. Like many such sites, it has a sort of remote-procedure-call interface that allows you, as an alternative to clicky-dancing on the visible Web interface, to pass it JSON datagrams and get back responses that do useful things like – for example – publishing a release tarball of a project where GitLab users can easily find it.

Igor is going to have (actually, already has) one mode that looks like a command interpreter for a little minilanguage, with each command being an action verb like “upload” or “release”. The idea is not so much for users to drive this manually as for them to be able to write scripts in the minilanguage which become part of a project’s canned release procedure. (This is why GUIs are irrelevant to this whole discussion; you can’t script a GUI.)

Ian, quite reasonably, also wants users to be able to run simple igor commands in a fire-and-forget mode by typing “igor” followed by command-line arguments. Now, classically, under Unix, you would expect a single-line “release” command to be designed to look something like this:

$ igor -r -n fooproject -t 1.2.3 foo-1.2.3.tgz

(To be clear, the dollar sign on the left is a shell prompt, put in to emphasize that this is something you type direct to a shell.)

In this invocation, the “-r” option says “I want to do a release”, the -n option says “This is the GitLab name of the project I’m shipping a release of”, the -t option specifies a release tag, and the following filename argument is the name of the tarball you want to publish.

It might not look exactly like this. Maybe there’d be yet another switch that lets you attach a release notes file. Maybe you’d have the utility deduce the project name from the directory it’s running in. But the basic style of this CLI (= Command Line Interface), with option flags like -r that act as command verbs and other flags that exist to attach their arguments to the request, is very familiar to any Unix user. This is what most Unix system commands look like.

One of the design rules of the old-school style is that the first token on the line that is not a switch argument terminates recognition of switches. It, and all tokens after it, are treated as arguments to be passed to the program and are normally expected to be filenames (or, in the 21st century, filename-like things like URLs).

Another characteristic of this style is that the order of the switch clauses is not fixed. You could write

$ igor -t 1.2.3 -n fooproject -r foo-1.2.3.tgz

and it would mean the same thing. (Order of the following arguments, on the other hand, will usually be significant if there is more than one.)

For purposes of this post I’m going to call this style old-school UNIX CLI, because Ian’s puzzlement comes from a collision he’s having with a newer style of doing things. And, actually, with a third interface style, also ancient but still vigorous.

When those of us in Unix-land only had the old-school CLI style as a model it was difficult to realize that all of those switches, though compact and easy to type, imposed a relatively high cognitive load. They were, and still are, difficult to remember. But we couldn’t really notice this until we had something to contrast it with that met similar functional requirements with lower cognitive effort.

Though there may have been earlier precedents, the first well-known program to use something recognizably like what I will call new-school CLI was the CVS version control system. The distinguishing trope was this: Each CVS command begins with a subcommand verb, like “cvs update” or “cvs checkout”. If there are switches, they normally follow the subcommand rather than preceding it. And there are fewer switches.

Later version-control systems like Subversion and Mercurial picked up on the subcommand idea and used it to further reduce the number of arbitrary-looking switches users had to remember. In Subversion, especially, your normal workflow could consist of a sequence of svn add, svn update, svn status, and svn commit commands during which you’d never type anything that looked like an old-school Unixy switch at all. This was easy to remember, easy to document, and users liked it.

Users liked it because humans are used to remembering associations between actions and natural-language verbs; “release” is less of a memory load than “-r” even if it takes longer to type. Which illuminates one of the drivers of the old-school style; it was shaped back in the 1970s by 110-baud Teletypes on which terseness and only having to type a few characters was a powerful virtue.

After Subversion and Mercurial, Git came along, with its CLI written in a style that, though it uses leading subcommand verbs, is rather more switch-heavy. From the point of view of being comfortable for users (especially new users), this was a pretty serious regression from Subversion. But then the CLI of git wasn’t really a design at all; it was an accretion of features with little attempt made to simplify or systematize them. It’s fair to say that git has succeeded despite its rather spiky UI rather than because of it.

Git is, however, a digression here; I’ve mainly described it to make clear that you can lose the comfort benefits of the new-school CLI if a lot of old-school-style switches crowd in around the action verbs.

Next we need to look at a third UI style, which I’m going to call “GDB style” because the best-known program that uses it today is the GNU symbolic debugger. It’s almost as ancient as old-school CLIs, going back to the early 1980s at least.

A program like GDB is almost never invoked as a one-liner at all; a command is something you type to its internal command prompt, not the shell. As with new-school CLIs like Subversion’s, all commands begin with an action verb, but there are no switches. Each space-separated token after the verb on the command line is passed to the command handler as a positional argument.

Part of Igor’s interface is intended to be a GDB-style interpreter. In that mode, the release command should logically look something like this, with igor’s command prompt at the left margin:

igor> release fooproject 1.2.3 foo-1.2.3.tgz

Note that these are the same arguments in the same order as our old-school “igor -r” command, but now -r has been replaced by a command verb and the order of what follows it is fixed. If we were designing Igor to be Subversion-like, with a fire-and-forget interface and no internal command interpreter at all, it would correspond to a shell command line like this:

$ igor release fooproject 1.2.3 foo-1.2.3.tgz

This is where we get to the collision of design styles I referred to earlier. What was really confusing Ian, I think, is that part of his experience was pulling for old-school fire-and-forget with switches, part of his experience was pulling for new-school as filtered through git’s rather botched version of it, and then there is this internal GDB-like interpreter to reconcile with how the command line works.

My apprentice’s confusion was completely reasonable. There’s a real question here which the tradition he’s immersed in has no canned, best-practices answer for. Git and GDB evade it in equal and opposite ways – Git by not having any internal interpreter like GDB, GDB by not being designed to do anything in a fire-and-forget mode without going through its internal interpreter.

The question is: how do you design a tool that (a) has a GDB-like internal interpreter for a command minilanguage, (b) also allows you to write useful fire-and-forget one-liners in the shell without diving into that interpreter, (c) has syntax for those one-liners that looks like an old-school CLI, and (d) has only one syntax for each command?

And the answer is: you can’t actually satisfy all four of those constraints at once. One of them has to give. It’s trivially true that if you abandon (a) or (b) you evade the problem, the way Git and GDB do. The real problem is that an old-school CLI wants to have terse switch clauses with flexible order, a GDB-style minilanguage wants to have more verbose commands with positional arguments, and never these twain shall meet.

The only one-syntax-for-each-command choice you can make is to have the same command interpreter parse your command line and what the user types to the internal prompt.

I bit this bullet when I designed reposurgeon, which is why a fire-and-forget command to read a stream dump of a Subversion repository and build a live repository from it looks like this:

$ reposurgeon "read <project.svn" "prefer git" "rebuild ../overthere"

Each of those string arguments is just fed to reposurgeon's internal interpreter; any attempt to look like an old-school CLI has been abandoned. This way, I can fire and forget multiple reposurgeon commands; for Igor, it might be more appropriate to pass all the tokens on the command line as a single command.
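To make that choice concrete, here is a minimal Go sketch of the one-parser approach, assuming a hypothetical doCommand handler; it is not igor’s or reposurgeon’s actual code, just the shape of it. The same interpreter function serves both the shell arguments and the interactive prompt.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // doCommand executes one line of the (hypothetical) minilanguage.
    func doCommand(line string) error {
        fields := strings.Fields(line)
        if len(fields) == 0 {
            return nil
        }
        switch fields[0] {
        case "release": // release <project> <tag> <tarball>
            fmt.Printf("releasing %v\n", fields[1:])
            return nil
        default:
            return fmt.Errorf("unknown command %q", fields[0])
        }
    }

    func main() {
        if len(os.Args) > 1 {
            // Fire-and-forget: each shell argument is one interpreter command.
            for _, arg := range os.Args[1:] {
                if err := doCommand(arg); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                    os.Exit(1)
                }
            }
            return
        }
        // No arguments: drop into the GDB-style prompt over the same interpreter.
        scanner := bufio.NewScanner(os.Stdin)
        fmt.Print("igor> ")
        for scanner.Scan() {
            if err := doCommand(scanner.Text()); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
            fmt.Print("igor> ")
        }
    }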

The other possible way Igor could go is to have a command language for the internal interpreter in which each line looks like a new-school shell command with a command verb followed by switch clusters:

igor> release -t 1.2.3 -n fooproject foo-1.2.3.tgz

Which is fine except that now we've violated some of the implicit rules of the GDB style. Those aren't simple positional arguments, and we're back to the higher cognitive load of having to remember cryptic switches.

But maybe that's what your prospective users would be comfortable with, because it fits their established habits! This seems to me unlikely but possible.

Design questions like these generally come down to having some sense of who your audience is. Who are they? What do they want? What will surprise them the least? What will fit into their existing workflows and tools best?

I could launch into a panegyric on the agile-programming practice of design-by-user-story at this point; I think this is one of the things agile most clearly gets right. Instead, I'll leave the reader with a recommendation to read up on that idea and learn how to do it right. Your users will be grateful.

96 comments

  1. First, you can use long names of parameters in slightly more modern “old CLI” style, that is write `--release` instead of `-r`.

    Second, the whole conflict between “old CLI” and GDB-like style (with “subcommand CLI” somewhat in the middle) looks a bit like the distinction between positional parameters and named parameters in programming languages. In my opinion named parameters win if you have many parameters and most are optional; positional parameters win if you have few of them and the ordering is obvious. Or you can try to mix and match like Python and Kotlin among others do, like the “subcommand CLI” style.

    1. >First, you can use long names of parameters in slightly more modern “old CLI” style, that is write `--release` instead of `-r`.

      Unfortunately, the language I’m doing most of my new work in fights this convention; the Go flags package actually likes long names led by a single dash. Unfortunate; I think the GNU convention of using a double dash to indicate a long option name following is useful.

      >Second, the whole conflict between “old CLI” and GDB-like style (with “subcommand CLI” somewhat in the middle) looks a bit like distinction between positional parameters and named parameters in programing languages.

      I too detect this resemblance.

      > In my opinion named parameters win if you have many parameters and most are optional, positional parameters win if you have few of them and the ordering is obvious.

      You’d like the way reposurgeon’s command language splits this up, then. I use double-dash options for optional switches and positional parameters for required arguments.

        1. >Check out the spf13/cobra package for go.

          I don’t like it even a little bit. It wants to dictate the structure of my code! It strikes me as over-engineered and over-elaborate. I’d sooner live with the limitations of the standard-library flags package.

      1. I also like GNU-style long options, and so Go’s flag package always rubs me the wrong way. Perhaps the biggest irritation is that, with their single-dash approach, options can’t be grouped, since that would be ambiguous. As a result, I wrote my own option parser with an interface close to the classic getopt() interface in C, and it’s filled my own need perfectly:

        nullprogram.com/x/optparse

      2. My own searching found pflag, which claims to be a drop-in replacement for the standard library flag package, but changes long-name options to use double-dash, and allows for synonyms (and minimal changes to retrofit shortflags if/when desired). There are a couple of others by the same name with no or fewer recent changes;

        This one is probably the best-maintained, as it’s used by cobra under the hood: https://github.com/spf13/pflag

        The other alternatives I noted are the original at https://pkg.go.dev/github.com/ogier/pflag, last updated 2016, and another fork with fewer commits since divergence and recent updates: https://pkg.go.dev/github.com/opencoff/pflag.

        Both were found via https://pkg.go.dev/search?q=pflag.

      3. Unfortunately, the language I’m doing most of my new work in fights this convention; the Go flags package actually likes long names led by a single dash. Unfortunate; I think the GNU convention of using a double dash to indicate a long option name following is useful.

        I’m trying to think of a reason for double dashes and coming up dry. Why have double dashes at all? Why not let a single dash introduce both short and long commands?

        1. Option clusters. “-abc” means “options A, B and C”, whereas “--abc” means “option Abc”.

          (Insert rant about BSD ps. Or just read man ps and scream.)

          1. Ohhh, right. And they’re useful enough to keep around, dammit.

            Although, if an option cluster is likely enough to overlap with a long form command, I could easily argue that there’s unacceptable cognitive overload in that command. Imagine if ps had an “aux” option.

            Also, how many programs really have the need for this much clustering? Perhaps they could be set aside as exceptions?

            1. Perhaps they could be set aside as exceptions?

              Exceptions are cognitive load too. I’d rather have a set of consistent option styles (visually distinct and unambiguous to parse), and ideally commands supporting both, than have to remember “does this command take clusters or longopts?” Sadly, X already crapped on that dream (not to mention the ugly mess that is -geometry’s value format), but at least the rest of the system is sane.

            2. >And they’re useful enough to keep around, dammit.

              Yes. The most annoying thing about the Go flags package is that it doesn’t give you option clusters.

              1. As I noted above, pflag is essentially an extension of the Go flags package API, adding support for option clusters; I fully intend to use it anytime I write anything in Go.

        2. Single dashes are nice when a number of single-dash arguments can be combined and lead with just one dash. Consider $ rsync -vulpix . This is an equivalent invocation to $ rsync -v -u -l -p -i -x , but requires less typing. Double dashes to distinguish long names, say $ rsync --ignore-errors --force --specials, tell rsync that this invocation is not equivalent to $ rsync -i -g -n -o -r -e -s -f -c -p -e -a -l, but something else entirely. I personally find this distinction useful; it lets me be explicit when I am not combining arguments, but still combine arguments when it is convenient to do so.
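          A minimal Go sketch of the disambiguation this buys a parser; splitCluster here is a hypothetical helper, not any real library’s API:

          package main

          import (
              "fmt"
              "strings"
          )

          // splitCluster expands a short-option cluster such as "-vulpix" into
          // "-v -u -l -p -i -x", while leaving "--long" names and non-options alone.
          func splitCluster(arg string) []string {
              if !strings.HasPrefix(arg, "-") || strings.HasPrefix(arg, "--") || len(arg) <= 2 {
                  return []string{arg}
              }
              var opts []string
              for _, c := range arg[1:] {
                  opts = append(opts, "-"+string(c))
              }
              return opts
          }

          func main() {
              fmt.Println(splitCluster("-vulpix"))         // [-v -u -l -p -i -x]
              fmt.Println(splitCluster("--ignore-errors")) // [--ignore-errors]
          }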

  2. The good part about the classic switches is that you can remember the whole set of them as one word. Such as “czvf” and “xzvf” for tar, and there is no need to remember whether that “x” stands for “extract” or “expand”. I always get the long words confused: which of the synonyms did the author decide to use? This is what drives me nuts about PowerShell with its long switch names. The thing that really could use improvement with the classic switches is the man pages. They should give the typical examples of use right up front.

    In the department of long argument-like commands, Subversion and others actually follow the style from CVS (which was born as a wrapper over the RCS commands, so the RCS commands became CVS subcommands). And it does have switches; moreover, commands like this typically have TWO kinds of switches: one set relates to the command as a whole and goes before the sub-command, and the second set of switches relates to the subcommand, and goes after the subcommand.

  3. > you can’t script a GUI

    Actually, you can – if you’re running on an OS that provides a suitable API for doing that, and if programmers remember (or care) to make use of that API.

    For example, there are apps for MacOS that use the accessibility API to provide keyboard accelerators in place of the mouse; you can also write scripts that do the same. On Windows (last I used it) there were tools like AutoIT that did similar.

    Now, whether you _want_ to or not is a different matter :) I always treated GUI scripting as a measure of last resort, myself.

    1. I was gonna say selenium and similar tools (since the example mentioned the alternative of “clicky-dancing on the visible Web interface”).

      But yeah, also seriously painful.

    2. Ah, AutoIT. I love it. Not because it’s particularly wonderful, but because of what it let me get away with. Like automating interactions with a Windows GUI application that was designed to *only* accept mouse-click inputs, and (deliberately) had *no* hooks for any sort of command-line input. There’s a few hundred (maybe more?) old industrial controllers based on Win95, 98, and 7, out there that are probably still running my little AutoIT kludge.

      Then there was the (possibly apocryphal) grognard who, according to legend, rigged a robot to move&click his actual physical mouse, and controlled said robot using a Python script. That probably stretches the definition of “scripting a GUI,” but….

      1. Is AutoIt a “send a mouse click to coordinates x, y?” kind of thing? Because if yes, there exist similar tools that are used for automated GUI testing and they are likely something more advanced, perhaps offering a better scripting language as well. I don’t know their names; I just know a dude who used to work as a tester, and they used such tools. But of course if the requirement is that it should run on Windows 98 or so then likely forget them.

        1. Oh, not testing — *production*. There were a *lot* of industrial controllers that used “skinned” W95 or W98 for their user interface (often using VB6). The UIs generally only accepted clicky-button input (actual hardware buttons around the screen, in the pre-touchscreen days), but on a lot of those controllers, those buttons were dual-mapped to both the hardware buttons, *and* the screen objects. And if you knew the right tricks, you could get access to the raw Windows architecture underneath, and pull some tricks. AutoIT gave me a way to automate certain user-interface tasks by emulating mouse clicks, and abusing the Windows Task Scheduler. I generally used it for automated journaling backups when the original manufacturer wanted $$$$ for the “official” option (or didn’t offer one at all).

          1. I understood that; I just thought those testing tools could be better at this job than AutoIt.

            BTW. There is this story from a friend of mine that around 1997 or so he was working as a purchaser at a computer parts distribution company in my neck of – rather backwards neck of – Europe. Beyond the actual purchasing stuff, he was also responsible for entering item master data into SAP R/2. Which meant salespeople sending him Excel files of hundreds of parts they wanted to be purchased. This is obviously a shitty boring task for someone who is intelligent and qualified enough to be a purchaser, so despite having no kind of programming education, he figured out that with the old SAP R/2, which worked in terminal emulation mode on a PC, the terminal emulator was scriptable. That is, it was possible to send data to it like “Item001, TAB, TAB, Partname, Tab, Whatever, Enter, Enter”. It being full textmode made it easier, but it was of course not a real API; he still had to do stuff like send Enter, wait 0.2 seconds, send Enter. So he fiddled with an autogenerated Excel macro until he could write the VBScript code to generate this, well, not even a script, just “batch inputs”, so to speak, for the terminal emulator. So he would receive an Excel file of 200 new parts, launch the macro, and then go home and not come to the office until the next afternoon, as everybody assumed he was busy entering data.

            This definitely sounds anachronistic in this age when we can just generate a JSON file and feed it into a REST web service and achieve the same thing a whole lot cleaner.

  4. Heh, you’re omitting one major misstep of yester-year. Just skip the namespace tie-in of using subcommands and make everything its own top-level command. It’s not like there might be *other* packages on the system wanting to do the same thing, it’ll be fine! It baffles me how ImageMagick *still* uses top-level subcommands for everything, `convert` instead of `imagemagic convert` and so on.

    1. Plan 9 has/had an interesting take on that: the shell accepted partial paths relative to the (single) bin directory. So /bin/venti/read would be run as venti/read.

      (Rather than a search path, it used what in current unix-land are called union mounts, where multiple directories are mounted on top of each other.)

    2. > It baffles me how ImageMagick *still* uses top-level subcommands for everything, `convert` instead of `imagemagic convert` and so on.

      This can cause some fun on Windows if ImageMagick is not installed, as the “convert” command in Windows is a native CMD utility that converts a FAT volume to NTFS.

      Fortunately more recent versions of ImageMagick do not install those “legacy” programs by default, requiring you to now invoke it with “magick convert …”

      1. >Fortunately more recent versions of ImageMagick do not install those “legacy” programs by default, requiring you to now invoke it with “magick convert …”

        Not true yet under Ubuntu 20.04 LTS. I rather wish it were.

    3. There’s a whole bunch of ancient scripts out there that expect those top level sub-commands. How much breakage do you get when you change all of that?

      I’m not disagreeing that it should change, but also figure how much of a s*tfight it would be to get a whole bunch of developers to agree that we’re *just* going to move all the commands one level down and prepend “imagemagic” rather than retrofit the *ENTIRE* thing.

      1. That is true enough, but it would also be rather trivial to create a compatibility package, especially if instead of putting it into a parsed subcommand you just renamed the programs. `image-convert` instead of `imagemagic convert` or similar. The compatibility package would then just be a pile of symlinks, or maybe a wrapper program which could emit warnings and then wrap.

        There are a fair number of us who use ImageMagick without having scripted any of it, so we’d be able to skip that compatibility package, while the WordPress sites could drop it in and forget it.

    1. >Off topic: Did you know that https://esr.ibiblio.org gives an invalid certificate

      Again? Sigh…this is a chronic screwup at UNC that affects other websites besides mine. They will fix it, but I never seem to be able to affect when they fix it.

  5. I’ll put one point that I see as important. If your tool uses options, *do not* make an option for the argument that would be a direct object in a sentence. Make it a positional argument.

    LXC 1.x got that wrong. I always cursed when I had to say “lxc start -n my_container”. Fortunately, they fixed it in 2.0.

  6. I’m designing a website generation tool [strangely in C++ after a long time; I have pretty much given up on Python since a bunch of my old Python 2 code no longer runs, as distributions have pulled out (or are in the process of pulling out) all Python 2 related packages] that takes a bunch of markdown files in a source folder tree (any level of subfolders supported) and converts them (using templates) into an HTML tree for a website, complete with an index page and navigation. I am doing it as a CLI since I think it won’t be of much use as a GUI.

    I want to make it as configuration-free as possible – most of the config should be part of the templates themselves (I feel – static, site-wide stuff can be hardcoded into the templates by the user before generating the site, instead of having to configure them separately either by CLI switches or config files); however, it will require the user to provide at least 3 options: (1) the input folder containing markdown sources, (2) the template folder containing the template files to transform the input, and (3) the output folder.

    Something like:
    mytool -i input-folder -t template-folder dest-folder
    or simply
    mytool config-file
    where config-file will contain:
    source=/src/folder
    destdir=/dest/folder
    templates=/templates/folder

    Which will parse the config file specified and produce the output

    I don’t want to make it too complicated, but what would be the best approach for such a CLI tool? I thought of going the config file route and avoiding CLI option parsing altogether, but this is such a simple tool that it may be more fussy to edit a config file just for these three options.

    I really wanted to use asciidoc for this, but I see no usable C/C++ bindings for asciidoc which I can plug into my code.

    1. [strangely in C++ after a long time; I have pretty much given up on Python since a bunch of my old Python 2 code no longer runs, as distributions have pulled out (or in the process of pulling out) all Python 2 related packages]

      You may be aware that Eric (and Peter A. Donis) wrote a guide to writing version-agnostic Python code. This might help you out in getting any useful legacy programs you have to run correctly under Python 3.

      1. Thanks. I’ll read that. Along with that, I also need to find out how to restore Qt functionality to my programs, since support for Python Qt 4 (both Python 2.x and 3.x) also seems to have mysteriously disappeared (at least on Debian).

      2. >This might help you out in getting any useful legacy programs you have to run correctly under Python 3.

        Relevant news: The Debian auditing bots now flag packages for deletion if it looks like they’re Python-2-dependent. I know this because I got notifications about one of mine, cvs-fast-export, and it was even briefly dropped from the public repositories until I demonstrated that flagging it was a mistake.

        The last Python code has been removed from reposurgeon as of the 4.8 release 5 days ago. It’s now all Go for the production tools with shell glue for testing.

        As of a couple of weeks ago I’m no longer bothering to make new Python scripts polyglot. I figure by the time anything I write now reaches wide deployment, the Python 3 transition will be complete.

        1. Probably not relevant, but I know a few places where I’d bet my next paycheck that RHEL 5 is still in “production” and might extend past November.

          RHEL 6 will probably exist in those places until at least 2025.

          I don’t know what you’re working on, but these are “mission critical” systems for *real* values of “critical”.

          You’re probably fine with not supporting those systems and truthfully it would be unlikely that anything new will be installed on them (modulo some management/configuration software, and the systems admins could always deploy Python 3.x to them, if they can get it to build against those libraries).

          1. >RHEL 6 will probably exist in those places until at least 2025.

            Word from Red Hat itself is that it does not consider these deployments good reason to maintain Python 2.x support. Given they are saying that, I’m not gonna argue with them.

            Here’s an example of a RH statement on the issue.

    2. For anything not embedding the interpreter, the migration path for python2 EOL is pypy2. There are no plans to drop python2 support for pypy, pypy itself is still written in python2 syntax. It’s pretty much API compatible with CPython2, so even extension modules can just get recompiled and work (CTypes/CFFI modules don’t even need the recompile). The only significant downside is the embedding API is totally incompatible, and there is no sub-interpreter support in the C API. So if you need either of those, you’re in for significantly more work.

      I wish rather strongly that distros with pypy available would migrate that way rather than ripping out packages which hard require python 2.

  7. It appears the blog software deleted instead of saved my edit — here goes again. Apologies if it comes back and we wind up with two copies.

    But the basic style of this CLI (= Command Line Interface), with option flags like -r that act as command verbs and other flags that exist to attach their arguments to the request, is very familiar to any Unix user. This is what most Unix system commands look like.

    I think this is somewhat inaccurate — the Unix “do one thing well” philosophy kind of sidesteps having command verbs as subsidiary to a program you’re invoking. The verb is typically the name of the program rather than a flag, switch, or argument. Unix offers cp, ls, mv, and rm separately rather than asking you to invoke some monolithic file manipulator program and setting a c, l, m, or r flag.

    Newer tools are expected to do many things, so we structure them to use subcommands; that’s fine — the Unix way is not the only way — but shoehorning the modern structure into the classical Unix style produces a mismatch. The semantics of option flags are those of adjectives (sometimes adjectival phrases, with the flag as the preposition and its argument as the object), not verbs. It’s a prevalent and well-recognised hammer of a pattern, which leads to people squinting at their problems until they convince themselves they’re dealing with a nail. A syntax that was effective for setting options isn’t necessarily convenient for invoking subcommands, which is probably why this UI feels “spiky”.

    You might consider a two-pass approach to setting arguments, similar to how R works:
    1. A first pass matches and sets any named parameter, irrespective of order.
    2. A second pass assigns all remaining values, in positional order, to unset arguments.
    3. Any unset arguments after both passes are set to defaults.
    For the purposes of matching on the first pass, any prefix that uniquely identifies a particular argument is adequate; the whole name need not be spelled out.

    So if you had igor release(name, tag, output="$name-$tag.tgz"), you could perform any of the following invocations, and the parser would treat them equivalently.

    # All arguments set on first pass
    igor release -o foo-1.2.3.tgz -n fooproject -t 1.2.3
    # All arguments set on second pass
    igor release fooproject 1.2.3 foo-1.2.3.tgz
    # Output and tag set on first pass, name on second
    igor release -o foo-1.2.3.tgz -t 1.2.3 fooproject
    # Tag set on first pass, name on second, output by default
    igor release -t 1.2.3 fooproject
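    A minimal Go sketch of that two-pass binding, with hypothetical names (parseTwoPass, matchPrefix) standing in for whatever igor would actually call them; it handles literal defaults only, so the computed "$name-$tag.tgz" default is left out:

    package main

    import (
        "fmt"
        "strings"
    )

    // matchPrefix resolves a unique parameter-name prefix, e.g. "t" -> "tag".
    func matchPrefix(spec []string, prefix string) (string, error) {
        hit := ""
        for _, name := range spec {
            if strings.HasPrefix(name, prefix) {
                if hit != "" {
                    return "", fmt.Errorf("ambiguous option %q", prefix)
                }
                hit = name
            }
        }
        if hit == "" {
            return "", fmt.Errorf("unknown option %q", prefix)
        }
        return hit, nil
    }

    // parseTwoPass binds named arguments first, then positional ones, then defaults.
    func parseTwoPass(spec []string, defaults map[string]string, args []string) (map[string]string, error) {
        out := map[string]string{}
        var positional []string
        for i := 0; i < len(args); i++ { // pass 1: named arguments, matched by unique prefix
            if strings.HasPrefix(args[i], "-") && i+1 < len(args) {
                name, err := matchPrefix(spec, strings.TrimLeft(args[i], "-"))
                if err != nil {
                    return nil, err
                }
                out[name] = args[i+1]
                i++
            } else {
                positional = append(positional, args[i])
            }
        }
        for _, v := range positional { // pass 2: remaining values fill unset parameters in order
            for _, name := range spec {
                if _, set := out[name]; !set {
                    out[name] = v
                    break
                }
            }
        }
        for _, name := range spec { // anything still unset falls back to a default
            if _, set := out[name]; !set {
                d, ok := defaults[name]
                if !ok {
                    return nil, fmt.Errorf("missing required argument %q", name)
                }
                out[name] = d
            }
        }
        return out, nil
    }

    func main() {
        spec := []string{"name", "tag", "output"}
        defaults := map[string]string{"output": ""} // a real tool would derive "$name-$tag.tgz" here
        // Tag bound on pass 1, name on pass 2, output left to its default.
        got, err := parseTwoPass(spec, defaults, []string{"-t", "1.2.3", "fooproject"})
        fmt.Println(got, err)
    }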

    The question is: how do you design a tool that (a) has a GDB-like internal interpreter for a command minilanguage, (b) also allows you to write useful fire-and-forget one-liners in the shell without diving into that interpreter, (c) has syntax for those one-liners that looks like an old-school CLI, and (d) has only one syntax for each command?

    The R-inspired example above accepts input formatted according to the conventions of (a) and (b) squarely, and (c) with the exception that the subcommand isn’t an option flag. One could extend the concept to have the parser do prefix-matching of commands too, as long as their names don’t collide with parameter names — then -r could be a synonym for release — but it does strike me as somewhat of an abuse of notation to pretend a subcommand is an option.

    The only one-syntax-for-each-command choice you can make is to have the same command interpreter parse your command line and what the user types to the internal prompt.

    Even though this style of argument parsing admits a multiplicity of syntaxes, it doesn’t require separate parsers because each syntax conforms to the rather looser (compared to single-pass parsers like getopt()) requirements afforded by taking two passes to set the arguments. So although your one-syntax-for-each-command criterion isn’t satisfied, it might not be such an obstacle if the real concern is having only one parser.

    1. >The verb is typically the name of the program rather than a flag, switch, or argument.

      This is the old-school ideal, yes, but it’s breached even in some core commands. Consider for example, that programs with a CLI like a C compiler may do either actual compilation or just a link pass, depending on the options and whether they are fed .c or .o files. As another example, consider tar and tar-like file-archiving tools. I don’t think we can dismiss these as outliers; they’re too old and too central.

      On the other hand, I think you have a good point about people trying to hammer CLI designs into an old-school mode that they don’t necessarily fit. It’s a chronic illness of Unix programmers.

      Your R example makes my head hurt. That tells me it’s not a good strategy for reducing cognitive load.

      1. > As another example, consider tar and tar-like file-archiving tools.

        Correct me if I am wrong here, but is not the exception case of tar and like tools an artifact of externally-connected tape storage drives? tar, cpio before it, etc, acting as an RPC layer interfacing with disparate drive types. The point being, the verb is “tell the tape deck something” and the args are “here is what to tell it.” That, today, these tools are used more with ordinary fs objects instead of device files seems incidental.

        I could be wrong on my history, and it may be a distinction without a difference anyway, but I think it’s worth remembering the reasons for the outliers when considering examples for or against a command style.

        1. >I could be wrong on my history, and it may be a distinction without a difference anyway

          You got the history right, but I think this is indeed a distinction without a difference. We could have ended up with – say – three commands, one equivalent to tar c, one to tar x, and one to tar t, all sharing a common tape service library. We didn’t. There’s nothing about the application domain that actually forces the choice one way or the other.

            1. >We could have ended up with – say – three commands, one equivalent to tar c, one to tar x, and one to tar t, all sharing a common tape service library.

            Well, one factor that could tip the scales between one program and three programs sharing a library (in the colloquial sense of “sharing a library”) is the presence of a shared library mechanism (in the strict technical sense of “shared library”), which PDP-11 Unix didn’t have, so there may actually be a reason that we got what we did.

            >This is the old-school ideal, yes, but it’s breached even in some core commands. Consider for example, that programs with a CLI like a C compiler may do either actual compilation or just a link pass, depending on the options and whether they are fed .c or .o files. As another example, consider tar and tar-like file-archiving tools. I don’t think we can dismiss these as outliers; they’re too old and too central.

            Arguably, the ancientry and centrality of these examples is exactly what lets us dismiss them as outliers. The Unix tradition came from Thompson and Ritchie, but Thompson and Ritchie didn’t come from the Unix tradition. When they started working on Unix, they had less experience developing Unix software than many college freshmen today, and nobody to instruct them in Unix-specific traditions or best practices. v6 Unix is at once quite familiar and eerily alien to the modern Linux-head (speaking of terseness, v6 doesn’t recognize “cd”! The command is there, but the name is “chdir”).

            Now, in the case of tar, that was introduced in v7 Unix, but its predecessor, tp, was similar in function and syntax. I’m not sure how far tp goes back, but I wouldn’t be surprised if it goes back to well before Unix escaped Bell Labs.

            It would be *very* interesting to hear Thompson’s take on this.

  8. Congratulations on completing the project, it’s a odd feeling, isn’t it? And glad to hear that you’re healing. How are the medical bills at this point, if I can ask?

  9. One thing I think is lacking is a standard API for programs to export their CLI. As a first approximation, the only way to get tab completion in bash is by writing completion scripts. These may call the underlying command to get information, such as branch names when doing ‘git checkout <tab><tab>’, but that isn’t a direct result of the shell and the program, but a mediation between specific versions thereof via a separate script. There is no way for the program to export the API in such a way that another program can make sense of it without knowing what that program is. There is no generic completion script which can work reasonably well for all programs, short of assuming GNU getopt and a lot of screen scraping and parsing.

    1. >Huh, I hardly ever used “svn commit” without “-m” actually.

      Your change comments are probably too short, then.

      Many changes deserve more than a one-line commit comment.

      1. I’ve never seen a good explanation of where that boundary should be between a one-line comment and multiple lines.

        For new features or changed behavior, I try to update the documentation in sync, and as that lives in the same repository, I feel like that covers most of it…the only reason then that I’d use a commit comment would be to explain specific technical decisions of e.g. why use a hashtable instead of a linked list for X. In this case, I usually have to write it 2-3 times in different places anyway, so it becomes very repetitive.

        When I’m doing a major refactoring pass, a lot of the changes are small, incremental ones, so unless I broke something (and had to rewrite to fix), it really is a one-line message.

        Explaining _why_ the refactoring was deemed necessary may be much longer, of course, but that doesn’t feel to me like something that should be in the commit message (more of a merge/pull request top-level comment).

        To be fair, a lot of that wouldn’t matter so much if GitHub/GitLab took everything after the first line of the log message and pulled it into either the PR description or, for later pushes, the comments. That would make it much lower friction, and I wouldn’t feel so much like I’m writing something no one will read…it would also help if whatever bugtracker would pull it in also, e.g. adding the commit message as a comment on the issue if the issue is mentioned, as that means that I only have to write it once (and no copy-pasting). I think the chat-connector webhooks are also limited to first-line-only, so e.g. at $DAYJOB if we set up the M$ Teams hook to show commits, it doesn’t help.

        To be fair, this is less of an issue in personal projects, since I use the log as the full history of things, but when interleaving with issue trackers and other sources, the challenge is to keep the log self-contained without writing the same exact content for 2-4 different places…has anyone found any solutions for that problem?

        1. In a couple of my past jobs we did exactly that – we had a pre-commit hook that looked for a ticket number, and inserted a comment with a link to the commit, and whatever other info was in the comment. That way when a CS rep would enter a bug ticket for a customer, he’d get a notification when it was updated and know that most likely the fix would go in that day’s push.

          I think this really is a difference between a small org and a large one. You need a single source of truth for the changes, and if it’s just you, or a couple of developers, the commit log is fine, but any more than that and you need it somewhere else, where non-developers can reference it.

  10. One of the design rules of the old-school style is that the first token on the line that is not a switch argument terminates recognition of switches. It, and all tokens after it, are treated as arguments to be passed to the program and are normally expected to be filenames (or, in the 21st century, filename-like things like URLs).

    One of the GNU standards for options parsing I really like is that file names don’t terminate option processing, you can intermix them as much as you like; it can be pretty handy when recalling a command from the history and simply appending new options to it. The -- option on its own explicitly terminates option processing so all further arguments are treated like names (or URLs, or whatever).

    I find it useful enough that I am always annoyed if a program does not behave this way.

  11. One way old-school Unix approached this, which I’m surprised you omitted to mention, is to have a -c option (command; occasionally -e expression as in sed(1)) for fire-and-forget of command interpreters. Often it could be repeated for multiple commands; and it allows mixing with conventional option switches (e.g. ‘sed -i’).

    It works best when the command language looks very different to shell; reading a ‘sh -c’ command line full of escaped and quoted dollar signs can get very confusing (shell’s ‘single’ vs. “double” quoting rules make this especially baroque; will a given $variable be expanded in my shell, the subshell, or not at all?) and if you’ve ever had to deal with nested ‘sh -c’ then you know the true meaning of pain.

    The most curious thing, though, is that fire-and-forget command lines are arguably completely unnecessary, because here-documents exist. What’s wrong with

    $ igor <<EOF
    release fooproject 1.2.3 foo-1.2.3.tgz
    EOF

    or
    $ reposurgeon <<EOF
    read <project.svn
    prefer git
    rebuild ../overthere
    EOF

    ?

    I can think of one answer: it’s annoying to have to type three lines for the single-command case. There’s always echo release fooproject 1.2.3 foo-1.2.3.tgz | igor, but then you have to worry about quoting, and having the ‘igor’ at the end of the line doesn’t read right (unless you’re German). Maybe shell needs here-line syntactic sugar igor <| release fooproject 1.2.3 foo-1.2.3.tgz?

    1. >The most curious thing, though, is that fire-and-forget command lines are arguably completely unnecessary, because here-documents exist.

      Good point. Please file an issue noting this in the tPW repository to remind me to write about this.

      1. IIRC from my early *nix history –

        even the earliest shells (Mashey and the early revisions of the Bourne) had I/O redirection for input, output, append, etc.

        “here documents” were a later invention. So, since the other basic (V6, V7, PWB) Unix commands were co-evolving with the command interpreter, the command line arguments for them were first created before there was the idea of in-line input text.

    2. Maybe shell needs here-line syntactic sugar igor <| release fooproject 1.2.3 foo-1.2.3.tgz?

      Try: igor <<<'release fooproject 1.2.3 foo-1.2.3.tgz'

      1. Aha! I thought “it seems unlikely that this doesn’t already exist”, but ABS didn’t mention it anywhere and I couldn’t quite be bothered to dig through the bash manual as well.
        Still slightly annoying that you have to quote it, but it does mean you can also do:
        cat <<<'foo
        bar'

        for a slightly lighter-weight here-doc if there’s no ' in your body (and the here-string doesn’t have to be the last thing on the line like my here-line would, so you can e.g. & the command).

      2. Try: igor <<<'release fooproject 1.2.3 foo-1.2.3.tgz'

        and “here strings” are strictly a Bash-ism. No Other Shells Need Apply.

        (Truth In Commenting – TIL about “here strings”. Clever idea, but not generalizable to other contexts)

          1. >ksh and zsh also use here strings.

            That <<< syntax is cute, but I don't see what I get from 'foo <<<"bar"' that 'echo "bar" | foo' doesn't already give me?

            1. The primary use case is when the input contains multiple lines. ‘echo “line1\nline2\nline3” | foo’ is unwieldy, harder to read, and instantly puts you in Backslash Uncertainty Land, a hellish scape where backslashes may or may not modify the characters that follow them, depending on whether the traveler can see quotes in the nearby fog.

              By contrast:

              $ foo <<<
              line1
              line2
              line3
              ^Z
              $

              Still not perfect (esp. if you like indented script code), but arguably better.

              (I hope the three less-thans appears correctly above. I started typing them as actual less-thans and apparently managed to forum’s preview pane. If not, well, you know what it’s supposed to say.)

              1. >I hope the three less-thans appears correctly above

                Fixed that for ya. The prefix char needed to be &, not %.

            2. Same thing you get from “<bar foo” as opposed to “foo <bar”. It makes the line read in a more logical order (or at least it can, depending on what you’re doing).

              Before that was available — I think it’s a bashism? — people often UUOCed for the same reason; there’s actually an example of that in TAOUP. “We use it here to emphasize the order of operations.”

              Like I said upthread, echo verb adverb adverb noun | verb reads like German. You don’t know what the ‘sentence’ is about until you get to the end. Acceptable when you’re just typing into a shell, not so acceptable when you’re writing a shellscript, Makefile etc. which will need to be maintained some day.

              1. I’d parse that as something like “say ‘verb adverb adverb noun’ to the verbifier.”

  12. I think it’s worth noting that the first model (switch and argument, in any order) is also helpful on the other side, where the program’s author has to pull in those switches and interpret them, because that syntax can be easily digested by one of several standard libraries (e.g. getopt) with a minimum of fuss and opportunity for mistakes. This is especially valuable if a switch parameter contains funky characters like spaces or dollar signs; without standard syntax, the user has to worry about whether the program will treat their switches in a nonstandard manner, and the program author will have to worry if they’ll get nonstandard arguments.

    One thing I like about subcommands is that they naturally inform thinking of the main command as an interactive shell. Inside such a shell, they look just like normal commands in a Unix shell. The corollary to this is that if one is going to pursue this model, go whole hog – think of everything after your program’s name in a Unix shell as something that could be its own command in a special shell handled by your program, whether they be subcommands, temporary variable assignments, or even flow control statements. Done right, that entire shell could be written as a simple front end to the main program.
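    For the parsing side of this in Go specifically, the standard library’s flag package supports the subcommand arrangement directly: one flag.FlagSet per verb, each parsed like a little command of its own. A minimal sketch (the release verb and its flags are hypothetical, loosely following the igor example above):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        // One FlagSet per verb keeps each subcommand's switches separate,
        // much as a shell keeps each command's arguments separate.
        releaseCmd := flag.NewFlagSet("release", flag.ExitOnError)
        name := releaseCmd.String("n", "", "project name")
        tag := releaseCmd.String("t", "", "release tag")

        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "expected a subcommand")
            os.Exit(2)
        }
        switch os.Args[1] {
        case "release":
            releaseCmd.Parse(os.Args[2:])
            // Leftover positional arguments (e.g. the tarball) come back from releaseCmd.Args().
            fmt.Printf("release %s %s %v\n", *name, *tag, releaseCmd.Args())
        default:
            fmt.Fprintf(os.Stderr, "unknown subcommand %q\n", os.Args[1])
            os.Exit(2)
        }
    }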

  13. >The question is: how do you design a tool that (a) has a GDB-like internal interpreter for a command minilanguage, (b) also allows you to write useful fire-and-forget one-liners in the shell without diving into that interpreter, (c) has syntax for those one-liners that looks like an old-school CLI, and (d) has only one syntax for each command?

    How’s about this? Make “dash dash” your new statement marker for the internal interpreter, e.g.,

    if foo
    then bar
    fi

    Could also be written on one line as:

    --if foo --then bar --fi

    Then just have the interpreter interpret the command line at startup.

    For terseness, you could have a DCL-style interpreter where any command can be shortened to as few letters as is unambiguous. So you get the following being equivalent:

    igor> foo fooarg
    igor> bar
    igor> baz
    igor> release fooproject 1.2.3 foo-1.2.3.tgz

    igor> foo fooarg --bar --baz --release fooproject 1.2.3 foo-1.2.3.tgz

    igor> f fooarg --bar --baz --rel fooproject 1.2.3 foo-1.2.3.tgz

    $ igor f fooarg --bar --baz --rel fooproject 1.2.3 foo-1.2.3.tgz

    $ igor --f fooarg --bar --baz --rel fooproject 1.2.3 foo-1.2.3.tgz

    But of course, “igor” is the wrong name for the CLI version of this program. That should be “im”. “igor” should be the GUI version with the obsequious talking paperclip.

    1. >How’s about this? Make “dash dash” your new statement marker for the internal interpreter [etc]

      Clever, but what does it buy me that reposurgeon’s rule for interpreting command-line arguments doesn’t? And if the answer is “Uh…”, why should I buy the extra notational complexity?

      1. Well, it gives you at least a fig leaf of resemblance to the traditional option syntax, and, at least to my mind, --foo is easier to type than “foo”: two quick unshifted taps to the same key in sequence, followed by a word, as opposed to a single shifted keypress, the word, and then returning to the first key for another shifted keypress.

  14. Transposability of the switches and terseness don’t seem that valuable; as such, the old-school style doesn’t seem to have a lot going for it, so why is it a desirable requirement?

    1. >so why is it a desirable requirement?

      Have you ever used a 110-baud Teletype?

      I have.

      Think of that style as a fossil of 1970s conditions that has not been so dysfunctional as to be generally replaced.

      It remains “desirable” only when you’re addressing an audience that has had their habits formed by it and would find anything else surprising.

      1. I’ve never used a terminal for useful work (as opposed to retrocomputing for fun) that didn’t take input as fast as I could type it, or that wasn’t capable of blasting out multiple screens of output faster than I could blink, but I’ve never found the terse style dysfunctional or otherwise undesirable. As long as I have good tab completion available, it’s fairly neutral. But my burst typing speed of 90 wpm works out to about 60 bit/s, and my sustained speed of 60 wpm to 40 bit/s, so as we descend from zsh (seamless) to bash (mostly seamless) to powershell (capable of being mostly seamless, but basically the same as CMD by default) to CMD (*technically* has tab completion, but *ugh*) to straight POSIX /bin/sh (no completion), terse commands and options become more critical.

        But then again, I maybe am more linguistically neuroplastic than most people, so I might have an easier time with terse semantics and syntax.

        1. Right, so to summarize these comments, it sounds like the desirability comes from some users being more familiar with it. However, a new user would not find any advantage in it.
          As such, it’s best to make it a special mode rather than a primary mode. One way to implement this is through a legacy subcommand for these users; let’s call it l for terseness.
          An example invocation would be:

          igor l -t 1.2.3 -n fooproject -r foo-1.2.3.tgz

          However, as presented, I’m still not convinced there’s enough value in supporting these options to justify the additional complexity.

        2. AIUI, many early teletype keyboards were quite hard to work with from a modern POV. Think more like a typewriter than a modern keyboard. Though this situation had definitely improved by the early-to-mid 1980s at least, I can easily see how extreme terseness might have been considered a value. It’s not just about the baud rate.

          We even get something comparable nowadays with touchscreen-only keyboards (and those also introduce a lot of inaccuracy, of course).

  15. Write scripts vs. fire-and-forget. Hm. I might have been seeing this wrong. I have always thought this Unix tradition of command lines with a lot of switches was intended to be wrapped into a script, because just who could type them all into the shell without a typo? And if you are not going to reuse those scripts you will have a lot of throwaway123.sh files lying around. Ugly.

    Then the Windows guys borrowed the whole thing, so now if you ask someone how to do this or that you get an answer of a PowerShell line with a lot of switches and again no hint that you should be wrapping it into a script; they, too, sort of expect you to be able to type it right into the shell.

    Well, maybe other people are able to type them right in without typos. But here is an idea. There should be a special switch that merely tests your input: that your switches are legal, that the files actually do exist, and so on. It might give simulated output like “I am going to upload this and delete that file.” It might also give you a warning if you are trying something potentially dangerous, the equivalent of rm -r -f.

    Then there should be something like a simple database, like an Emacs mode, that treats every two lines of text, the first a comment and the second such a shell command with a lot of switches, as one record. You select a record and send it in test mode, that is, it automatically adds the testing switch. And if you are happy with it, you could also send it in live mode, without that switch.

    1. But here is an idea. There should be a special switch that merely tests your input, that your switches are legal, that the files actually do exist and so on. It might give a simulated output like I am going to upload this and delete that file.

      Many commands have -n/--dry-run to do exactly this.

      1. Thank you. Then perhaps ESR could add this to the requirements for good CLI.

        I am working in a different field, mostly databases. And for us having a live and a test system is sacrosanct. And that is good practice. Everything should have a live version and a test version.

  16. I suggest taking a good look at some recent version of GDB, because GDB’s internal CLI does support switches (and many commands already have them), and you can fire up one-off GDB commands specified on the command line by using the `-ex` (a.k.a. `--eval-command`) command-line option, of which there could be several consecutive ones.

    1. >Because GDB’s internal CLI does support switches

      I know about things like the format options on p and x, of course. OK, switches sort of. They’re more like optional positional arguments; you can’t put them anywhere, only at the head of the argument list.

      I didn’t know about -ex. I guess I’m glad it’s there for completeness’s sake, but…what’s the actual use case? Have you ever seen anybody use it? Has there ever been any such thing as a fire-and-forget gdb invocation?

      Clarification: This is a bit of a swamp. After writing the first ‘graph above, I realized that gdb /x and friends are indeed like old-school Unix switches because processing of them stops at the first positional argument. What they’re not like is switches in GNU-land, which (as I failed to mention in the OP) can be mixed with positional arguments.

      I think I have a headache.

      1. > I know about things like the format options on p and x, of course.

        Those are the veteran ones. Nowadays there are many more. For example, `print -elements`, `watch -location`, `skip -file`, `backtrace -no-filters`, and many others.

        > I didn’t know about -ex. I guess I’m glad it’s there for completeness’s sake, but…what’s the actual use case? Have you ever seen anybody use it?

        You see it being used on the GDB mailing lists all the time, yes. It is more compact when you need to run a small number of commands known in advance.

        1. >Those are the veteran ones. Nowadays there are many more. For example, `print -elements`, `watch -location`, `skip -file`, `backtrace -no-filters`, and many others.

          Interestingly, reposurgeon – which has a gdb-like internal interpreter in part because it was originally implemented using the Python Cmd library, which makes GDB style easy to do – has also evolved towards using switches for optional command modifiers in some circumstances. Now that I’ve looked back at the GDB documentation, this is a clear case of independent parallel evolution under similar selective pressure.

          Ironically, my internal options look a bit more like old-school GNU than GDB’s newer ones do – I deliberately chose double dash as option-leader in order to preserve the possibility of using single-dash as a leader for single-character options. I haven’t exercised this possibility yet, and won’t unless I’m driven to it.
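          For anyone who hasn’t played with that pattern, a toy sketch (nothing like reposurgeon’s real code) of a Cmd-based interpreter that accepts a double-dash modifier on a command looks roughly like this:

          ```python
          import cmd
          import shlex

          class Toy(cmd.Cmd):
              prompt = "toy> "

              def do_list(self, line):
                  """list [--long] [pattern...] : a command with one double-dash modifier."""
                  words = shlex.split(line)
                  longform = "--long" in words
                  args = [w for w in words if not w.startswith("--")]
                  print("listing", args or ["everything"], "(long)" if longform else "(short)")

              def do_quit(self, line):
                  """quit : leave the interpreter."""
                  return True

          if __name__ == "__main__":
              Toy().cmdloop()
          ```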

          >You see it being used on the GDB mailing lists all the time, yes.

          Sorry, I don’t consider that a real use case. That’s communicating about the tool, not applying it.

  17. > (d) has only one syntax for each command?

    AIUI, this might be the key condition in the overall question you raise, and I think it’s quite solvable. One just needs to endow the internal interpreter itself with position-independent switches, applying only to the current command or optionally settable as session-wide variables within the interpreter. Hence

    igor> --name=fooproject --tag=1.2.3 release foo-1.2.3.tgz

    (equivalent to

    igor> NAME=fooproject TAG=1.2.3 release foo-1.2.3.tgz

    .) ISTM that this is a simple and intuitive option.
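    A rough sketch of the parsing that implies (names hypothetical, nothing to do with Igor’s real internals): peel leading NAME=value or --name=value tokens into a settings dict seeded from the session-wide defaults, then dispatch on the verb.

    ```python
    import shlex

    session = {}  # settings that persist across commands in the interpreter

    def parse(line):
        """Return (settings, verb, args) for one interpreter line."""
        words = shlex.split(line)
        settings = dict(session)  # start from session-wide defaults
        while words and "=" in words[0]:
            key, _, value = words[0].partition("=")
            settings[key.lstrip("-").lower()] = value
            words.pop(0)
        verb = words[0] if words else None
        return settings, verb, words[1:]

    # Both spellings from the examples above come out the same:
    print(parse("--name=fooproject --tag=1.2.3 release foo-1.2.3.tgz"))
    print(parse("NAME=fooproject TAG=1.2.3 release foo-1.2.3.tgz"))
    ```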

    1. That imports all of the ugliness of traditional options and their parsing into the interpreter. The initial prototype for Igor uses this; the wrongness of it is part of what prompted the question that prompted this blog post.

  18. Ah, AutoIT. I love it. Not because it’s particularly wonderful, but because of what it let me get away with: automating interactions with a Windows GUI application that was designed to *only* accept mouse-click inputs, and (deliberately) had *no* hooks for any sort of command-line input. There are a few hundred (maybe more?) old industrial controllers based on Win95, 98, and 7 out there that are probably still running my little AutoIT kludge.

    Then there was the (possibly apocryphal) grognard who, according to legend, rigged a robot to move and click his actual physical mouse, and controlled said robot using a Python script. That probably stretches the definition of “scripting a GUI,” but….

    1. I will admit to having jammed a key down on a keyboard a time or two when something needed unscriptable repeated input (back in the Windows 2000 days, so no xdotool). I also rigged up an Erector-set oil derrick model to push a key repeatedly when I needed key-up events.

  19. For me, I’ve always found the GNU-style options more difficult to memorize, because while I can understand what --output-dir=/directory/tree means, I sometimes mix it up with (say) --output-path=/directory/tree. But I agree that when a program has many, many switches, it makes sense to have longer equivalents of the single-character switches.

    In my current project, since I have only three main switches (and one switch for quiet mode, -q), I have not bothered with the GNU-style long switches. It doesn’t seem to make sense for this kind of use case.

    1. The question is whether a future user of the program will know what the short switches do at a glance. The major advantage of long-form options is that they are usually listed in a compact way when invoking `--help`, and the option names indicate what they do (or at least remind the occasional user). Short-form options generally do this much less well, especially when they don’t match the semi-standard uses (is -v verbose, or version? and so on).

      1. I agree in general with your point. But in my case, with just three switches, I felt implementing long options would be overkill. Besides, my application will display usage help when invoked without options, so I think it should be OK.

        The thing I wanted to point out is that having long options does not necessarily make an application more intuitive, so if I am forced to refer to the manual anyway, having forgotten the actual option (long or short), I might as well save some typing and use the short option.

  20. > how do you design a tool that (a) has a GDB like internal interpreter for a command minilanguage, (b) also allows you to write useful fire-and-forget one-liners in the shell without diving into that interpreter, (c) has syntax for those one liners that looks like an old-school CLI, and (d) has only one syntax for each command?

    I would fix this problem by removing requirement (a).

    I’d write the program in Python, so if someone wants to write code for it instead of learning the quirks of my minilanguage, they’d write it in Python. Command-line arguments would be parsed with Python’s argparse, which itself ensures Unixy conventions are used. If someone wants to give the tool multiple instructions, they can write either a Python program or a bash script with multiple commands.
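    A minimal sketch of that approach (with hypothetical verbs standing in for Igor’s real command set); argparse subparsers give you the new-school “verb first, switches after” shape for free:

    ```python
    import argparse

    def main():
        parser = argparse.ArgumentParser(prog="igor")
        verbs = parser.add_subparsers(dest="verb", required=True)

        release = verbs.add_parser("release", help="publish a release tarball")
        release.add_argument("-n", "--name", required=True, help="project name")
        release.add_argument("-t", "--tag", required=True, help="release tag")
        release.add_argument("tarball")

        upload = verbs.add_parser("upload", help="upload a single file")
        upload.add_argument("file")

        args = parser.parse_args()
        print(args)  # real dispatch on args.verb would go here

    if __name__ == "__main__":
        main()
    ```

    Invoked as, say, igor release -n fooproject -t 1.2.3 foo-1.2.3.tgz; anyone who needs several operations in a row scripts the tool from the shell or imports it from Python.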

  21. Hi Eric, hope this is not considered spam or off topic, but I would love some feedback from anybody here about the ReadMe I’ve written for a small project of mine, https://gitlab.com/harishankarv/biaweb2 – it’s a static website generator tool and has a very basic command line, but the usage of the tool itself is more elaborate.

    If there is any part of the document that requires more explanation, or something is not quite clear, I would love for it to be pointed out so that I can fix it. I hope the ReadMe conveys what this tool is about, but being so familiar with the tool, I may be suffering from some blind spots. I have a tendency to be very hasty when writing documentation, so any stylistic critique is also welcome.

  22. I’d be interested in reading your design-by-user-story panegyric. In my experience in real-world agile development at corporate shops, “story” is nothing more than a synonym for “trackable unit of work”.

    A different perspective from open-source praxis, free of the strictures and doublethink of corporate development, would be appreciated.

    1. >I’d be interested in reading your design-by-user-story panegyric. In my experience in real-world agile development at corporate shops, “story” is nothing more than a synonym for “trackable unit of work”.

      “User story” is supposed to mean something much more specific – a kind of roleplaying exercise where you imagine a person and the person’s use case as a way of getting an outside perspective on the design, the documentation, and especially the UI.

      For example:

      Meet Joe. He works for Randomcorp, who has a nasty huge old Subversion repository they want him to convert to Git. Joe is a recent grad who got thrown at the problem because he’s new on the job and his manager figures this is a good performance test in a place where the damage will be easily contained if he screws up. Joe himself doesn’t know this, but his teammates have figured it out.

      Joe is smart and ambitious but has little experience with large projects yet. He knows there’s an open-source culture out there, but isn’t part of it – he’s thought about running Linux at home because the more senior geeks around him all seem to do that, but hasn’t found a good specific reason to jump yet. In truth most of what he does with his home machine is play games. He likes “Elite: Dangerous” and the Bioshock series.

      Joe knows Git pretty well, mainly through the Tortoise GUI under Windows; he learned it in school. He has used Subversion just enough to know the basic commands. He found reposurgeon by doing web searches. Joe is fairly sure reposurgeon can do the job he needs and has told his boss this, but he has no idea where to start.

      What does Joe’s discovery process look like? Read the first two chapters of “Repository Editing with Reposurgeon” using Joe’s eyes. Is he going to hit this wall of text and bounce? If so, what could be done to make it more accessible? Is there some way to write a FAQ that would help him? If so, can we start listing the questions in the FAQ?

      Joe has used gdb a little as part of a class assignment but has not otherwise seen programs with a CLI resembling reposurgeon’s. When he runs it, what is he likely to try to do first to get oriented? Is that going to help him feel like he knows what’s going on, or confuse him?

      “Repository Editing” says he ought to use repotool to set up a Makefile and stub scripts for the standard conversion workflow. What will Joe’s eyes tell him when he looks at the generated Makefile? What parts are likeliest to confuse him? What could be done to fix that?

      Design by user story is a trick you play on your social-monkey brain that uses its fondness for narrative and characters to get you to step out of your own shoes. Joe is about as little like me as is plausible at a programming shop in 2020, and that’s the point. If I ask abstractly “What can I do to improve reposurgeon’s UI?”, it is likely I will just end up spinning my wheels; if, instead, I ask “What does Joe see when he looks at this?” I am more likely to get a useful answer.

      In fact, the technique is so powerful that I got a good idea while writing this example. Maybe in reposurgeon’s interactive mode it should issue a first line that says “Interactive help is available; type ‘help’ for a topic menu.”

      UPDATE: I think I may need to write a blog post about this…

      1. > “User story” is supposed to mean something much more specific – a kind of roleplaying exercise where you imagine a person and the person’s use case as a way of getting an outside perspective on the design, the documentation, and especially the UI.

        This is really quite impressive and detailed — the sort of persona-based thinking lots of companies pay lip service to, but few actually put into practice (Microsoft during the 90s being one of the major exceptions). Usually, in corporate agile development, a story is very short and will often take a standard form, like “As an X, I want to Y so that I can Z.” The idea, I suppose, is to describe the system’s putative features or capabilities in terms of their dual: the customer needs and wants those features/capabilities are intended to fulfill. But the reality is often far different, as the JIRA backlog gets filled with “stories” of the form “As a developer I want <X> so that I can <Y>” or worse — formless to-do lists laden with technical detail like “sales rep page’s error box needs to be abstracted into React component” or something. Some “agile” methodologies, such as SAFe, explicitly condone such “enabler stories”.

        One thing I’ve noticed is that much of the terminology surrounding agile development has an exoteric meaning for programmers, and an esoteric meaning for management. Exoteric-agile — agile as it is sold to programmers and technical leads — is presented as a collection of common-sense practices and principles to deliver software rapidly, iteratively, through a close working relationship and ongoing conversations with your customer/user base. Esoteric-agile — agile as it is sold to the actual buyers, corporate upper management — is a data-driven management strategy that promotes microaccountability by dividing the work into small units of functionality, assigning estimates to them, gathering data, and measuring time and progress to completion against metrics on an ongoing, fine-grained basis. Iterative development, a state of “flow” among the developers, and a close working relationship with the user base are nice but the important bit — the essential feature — is the accountability and the tracking. Thus in the exoteric sense a story is just what I have described above and you have elaborated in the parent post, whereas in the esoteric sense a story fits my original definition — a trackable unit of work which is often part of a near-to-mid-term organizational goal called an “epic” and may be further divided into “subtasks” which may carry their own estimates, data, and metrics. Other work units, such as bugs and spikes, may not be as extensively tracked, but in general a “story” is any description of functionality which has yet to be implemented and must be estimated, tracked, measured, and approved upon completion.

        I think you may have taken an agile principle, imagined what it must mean in a world where working software is what actually matters, and elaborated it into something quite different and more useful than even agile gurus intended. I await that blog post eagerly.

        1. >I think you may have taken an agile principle, imagined what it must mean in a world where working software is what actually matters, and elaborated it into something quite different and more useful than even agile gurus intended. I await that blog post eagerly.

          It’s up.

          I believe the XP guys did in fact intend something like this, but weren’t as self-aware or analytical about it as they should have been.

  23. Then there’s also the Cisco / HP CLI where you can type your command all by yourself, or you can autocomplete things – in Cisco’s case by using Tab to autocomplete a parameter name or value, and then Tab-Tab to see a list of possibilities. You can see a rudimentary version of that in bash. In HP’s case (https://en.wikipedia.org/wiki/HP_64000), they used softkeys to autocomplete or, for that matter, completely enter a command.
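    If the tool has a Cmd-style internal interpreter like the ones discussed above, Python already gives you a rudimentary version of this for free: command names complete out of the box (where readline is available), and per-command argument completion hangs off complete_* hooks. A toy sketch with made-up names:

    ```python
    import cmd

    PROJECTS = ["fooproject", "barproject"]  # illustrative completion candidates

    class Igorish(cmd.Cmd):
        prompt = "igor> "

        def do_release(self, line):
            print("would release:", line)

        def complete_release(self, text, line, begidx, endidx):
            # Called by readline when you hit Tab after "release ".
            return [p for p in PROJECTS if p.startswith(text)]

        def do_quit(self, line):
            return True

    if __name__ == "__main__":
        Igorish().cmdloop()
    ```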
