Jun 26

A user story about user stories

The way I learned to use the term “user story”, back in the late 1990s at the beginnings of what is now called “agile programming”, was to describe a kind of roleplaying exercise in which you imagine a person and the person’s use case as a way of getting an outside perspective on the design, the documentation, and especially the UI of something you’re writing.

For example:

Meet Joe. He works for Randomcorp, who has a nasty huge old Subversion repository they want him to convert to Git. Joe is a recent grad who got thrown at the problem because he’s new on the job and his manager figures this is a good performance test in a place where the damage will be easily contained if he screws up. Joe himself doesn’t know this, but his teammates have figured it out.

Joe is smart and ambitious but has little experience with large projects yet. He knows there’s an open-source culture out there, but isn’t part of it – he’s thought about running Linux at home because the more senior geeks around him all seem to do that, but hasn’t found a good specific reason to jump yet. In truth most of what he does with his home machine is play games. He likes “Elite: Dangerous” and the Bioshock series.

Joe knows Git pretty well, mainly through the Tortoise GUI under Windows; he learned it in school. He has only used Subversion just enough to know basic commands. He found reposurgeon by doing web searches. Joe is fairly sure reposurgeon can do the job he needs and has told his boss this, but he has no idea where to start.

What does Joe’s discovery process look like? Read the first two chapters of “Repository Editing with Reposurgeon” using Joe’s eyes. Is he going to hit this wall of text and bounce? If so, what could be done to make it more accessible? Is there some way to write a FAQ that would help him? If so, can we start listing the questions in the FAQ?

Joe has used gdb a little as part of a class assignment but has not otherwise seen programs with a CLI resembling reposurgeon’s. When he runs it, what is he likely to try to do first to get oriented? Is that going to help him feel like he knows what’s going on, or confuse him?

“Repository Editing…” says he ought to use repotool to set up a Makefile and stub scripts for the standard conversion workflow. What will Joe’s eyes tell him when he looks at the generated Makefile? What parts are likeliest to confuse him? What could be done to fix that?

Joe, my fictional character, is about as little like me as is plausible at a programming shop in 2020, and that’s the point. If I ask abstractly “What can I do to improve reposurgeon’s UI?”, it is likely I will just end up spinning my wheels; if, instead, I ask “What does Joe see when he looks at this?” I am more likely to get a useful answer.

It works even better if, even having learned what you can from your imaginary Joe, you make up other characters that are different from you and as different from each other as possible. For example, meet Jane the system administrator, who got stuck with the conversion job because her boss thinks of version-control systems as an administrative detail and doesn’t want to spend programmer time on it. What do her eyes see?

In fact, the technique is so powerful that I got an idea while writing this example. Maybe in reposurgeon’s interactive mode it should issue a first line that says “Interactive help is available; type ‘help’ for a topic menu.”

However. If you search the web for “design by user story”, what you are likely to find doesn’t resemble my previous description at all. Mostly, now twenty years after the beginnings of “agile programming”, you’ll see formulaic stuff equating “user story” with a one-sentence soundbite of the form “As an X, I want to do Y”. This will be surrounded by a lot of talk about processes and scrum masters and scribbling things on index cards.

There is so much gone wrong with this it is hard to even know where to begin. Let’s start with the fact that one of the original agile slogans was “Individuals and Interactions Over Processes and Tools”. That slogan could be read in a number of different ways, but under none of them at all does it make sense to abandon a method for extended insight into the reactions of your likely users for a one-sentence parody of the method that is surrounded and hemmed in by bureaucratic process-gabble.

This is embedded in a larger story about how “agile” went wrong. The composers of the Agile Manifesto intended it to be a liberating force, a more humane and effective way to organize software development work that would connect developers to their users to the benefit of both. A few of the ideas that came out of it were positive and important – besides design by user story, test-centric development and refactoring leap to mind.

Sad to say, though, the way “user stories” became trivialized in most versions of agile is all too representative of what it has often become under the influence of two corrupting forces. One is fad-chasers looking to make a buck on it, selling it like snake oil to managers forever perplexed by low productivity, high defect rates, and inability to make deadlines. Another is the managers’ own willingness to sacrifice productivity gains for the illusion of process control.

It may be too late to save “agile” in general from becoming a deadening parody of what it was originally intended to be, but it’s not too late to save design by user story. To do this, we need to bear down on some points that its inventors and popularizers were never publicly clear about, possibly because they themselves didn’t entirely understand what they had found.

Point one is how and why it works. Design by user story is a trick you play on your social-monkey brain that uses its fondness for narrative and characters to get you to step out of your own shoes.

Yes, sure, there’s a philosophical argument that stepping out of your shoes in this sense is impossible; Joe, being your fiction, is limited by what you can imagine. Nevertheless, this brain hack actually works. Eppur si muove – and yet it moves; you can generate insights with it that you wouldn’t have had otherwise.

Point two is that design by user story works regardless of the rest of your methodology. You don’t have to buy any of the assumptions or jargon or processes that usually fly in formation with it to get use out of it.

Point three is that design by user story is not a technique for generating code, it’s a technique for changing your mind. If you approach it in an overly narrow and instrumental way, you won’t imagine apparently irrelevant details like what kinds of video games Joe likes. But you should do that sort of thing; the brain hack works in exact proportion to how much imaginative life you give your characters.

(Which, in particular, is why stopping at a one-sentence “As an X, I want to do Y” is such a sadly reductive parody. This formula is designed to stereotype the process, but stereotyping is the enemy of novelty, and novelty is exactly what you want to generate.)

A few of my readers might have the right kind of experience for this to sound familiar. The mental process is similar to what in theater and cinema is called “method acting.” The goal is also similar – to generate situational responses that are outside your normal habits.

Once again: you have to get past tools and practices to discover that the important part of software design – the most difficult and worthwhile part – is mindset. In this case, and temporarily, someone else’s.

Jun 04

Rules for rioters

I had business outside today. I needed to go in towards Philly, closer to the riots, to get a new PSU put into the Great Beast. I went armed; I’ve been carrying at all times while awake since Philadelphia started to burn and there were occasional reports of looters heading into the suburbs in other cities.

I knew I might be heading into civil unrest today. It didn’t happen. But it still could.

Therefore I’m announcing my rules of engagement should any of the riots connected with the atrocious murder of George Floyd reach the vicinity of my person.

  1. I will shoot any person engaging in arson or other life-threatening behavior, issuing a warning to cease first if safety permits.
  2. Blacks and other minorities are otherwise safe from my gun; they have a legitimate grievance in the matter of this murder, and what they’re doing to their own neighborhoods and lives will be punishment enough for the utter folly of their means of expression once the dust settles.
  3. White rioters, on the other hand, will be presumed to be Antifa Communists attempting to manipulate this tragedy for Communist political ends; them I consider “enemies-general of all mankind, to be dealt with as wolves are” and will shoot immediately, without mercy or warning.

UPDATE: I didn’t mention white nationalists because I judge my chances of encountering any member of that tiny, ineffectual movement to be effectively zero, and I refuse to cooperate with the mass-media fiction that they are a significant factor in this crisis.

We don’t have a problem with white nationalists attempting to burn down our country using black people as tools and proxies. We have a problem with Communists doing that. I insist on naming – and if necessary, shooting – the real enemy.

May 17

Designing tasteful CLIs: a case study

Yesterday evening my apprentice, Ian Bruene, tossed a design question at me.

Ian is working on a utility he calls “igor” intended to script interactions with GitLab, a major public forge site. Like many such sites, it has a sort of remote-procedure-call interface that allows you, as an alternative to clicky-dancing on the visible Web interface, to pass it JSON datagrams and get back responses that do useful things like – for example – publishing a release tarball of a project where GitLab users can easily find it.

Igor is going to have (actually, already has) one mode that looks like a command interpreter for a little minilanguage, with each command being an action verb like “upload” or “release”. The idea is not so much for users to drive this manually as for them to be able to write scripts in the minilanguage which become part of a project’s canned release procedure. (This is why GUIs are irrelevant to this whole discussion; you can’t script a GUI.)

Ian, quite reasonably, also wants users to be able to run simple igor commands in a fire-and-forget mode by typing “igor” followed by command-line arguments. Now, classically, under Unix, you would expect a single-line “release” command to be designed to look something like this:

$ igor -r -n fooproject -t 1.2.3 foo-1.2.3.tgz

(To be clear, the dollar sign on the left is a shell prompt, put in to emphasize that this is something you type direct to a shell.)

In this invocation, the “-r” option says “I want to do a release”, the -n option says “This is the GitLab name of the project I’m shipping a release of”, the -t option specifies a release tag, and the following filename argument is the name of the tarball you want to publish.

It might not look exactly like this. Maybe there’d be yet another switch that lets you attach a release notes file. Maybe you’d have the utility deduce the project name from the directory it’s running in. But the basic style of this CLI (= Command Line Interface), with option flags like -r that act as command verbs and other flags that exist to attach their arguments to the request, is very familiar to any Unix user. This is what most Unix system commands look like.

One of the design rules of the old-school style is that the first token on the line that is not a switch argument terminates recognition of switches. It, and all tokens after it, are treated as arguments to be passed to the program and are normally expected to be filenames (or, in the 21st century, filename-like things like URLs).

Another characteristic of this style is that the order of the switch clauses is not fixed. You could write

$ igor -t 1.2.3 -n fooproject -r foo-1.2.3.tgz

and it would mean the same thing. (Order of the following arguments, on the other hand, will usually be significant if there is more than one.)
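To make that concrete, here is a minimal C sketch of how a hypothetical old-school front end might parse the “igor -r -n fooproject -t 1.2.3 foo-1.2.3.tgz” invocation using the classic getopt(3) interface. Only the option letters come from the example above; the rest is invented for illustration and is not Ian’s actual code.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>     /* getopt(3), optarg, optind */

int main(int argc, char *argv[])
{
    int do_release = 0;
    char *project = NULL, *tag = NULL;
    int opt;

    /* "r" takes no argument; "n:" and "t:" each take one.
     * getopt handles the flexible ordering of switch clauses for us. */
    while ((opt = getopt(argc, argv, "rn:t:")) != -1) {
        switch (opt) {
        case 'r': do_release = 1; break;
        case 'n': project = optarg; break;
        case 't': tag = optarg; break;
        default:
            fprintf(stderr, "usage: igor [-r] [-n name] [-t tag] file\n");
            exit(EXIT_FAILURE);
        }
    }

    /* Everything from argv[optind] onward is a non-switch argument,
     * normally a filename. */
    if (do_release && project != NULL && tag != NULL && optind < argc)
        printf("would release %s of %s from %s\n", tag, project, argv[optind]);
    return 0;
}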

For purposes of this post I’m going to call this style old-school UNIX CLI, because Ian’s puzzlement comes from a collision he’s having with a newer style of doing things. And, actually, with a third interface style, also ancient but still vigorous.

When those of us in Unix-land only had the old-school CLI style as a model it was difficult to realize that all of those switches, though compact and easy to type, imposed a relatively high cognitive load. They were, and still are, difficult to remember. But we couldn’t really notice this until we had something to contrast it with that met similar functional requirements with lower cognitive effort.

Though there may have been earlier precedents, the first well-known program to use something recognizably like what I will call new-school CLI was the CVS version control system. The distinguishing trope was this: Each CVS command begins with a subcommand verb, like “cvs update” or “cvs checkout”. If there are switches, they normally follow the subcommand rather than preceding it. And there are fewer switches.

Later version-control systems like Subversion and Mercurial picked up on the subcommand idea and used it to further reduce the number of arbitrary-looking switches users had to remember. In Subversion, especially, your normal workflow could consist of a sequence of svn add, svn update, svn status, and svn commit commands during which you’d never type anything that looked like an old-school Unixy switch at all. This was easy to remember, easy to document, and users liked it.

Users liked it because humans are used to remembering associations between actions and natural-language verbs; “release” is less of a memory load than “-r” even if it takes longer to type. Which illuminates one of the drivers of the old-school style; it was shaped back in the 1970s by 110-baud Teletypes on which terseness and only having to type a few characters was a powerful virtue.
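For contrast, here is an equally hypothetical sketch of new-school dispatch: the first argument is looked up in a table of verbs and handed to a handler, with no switch letters to memorize. Only the verbs come from the igor example; everything else is invented for illustration.

#include <stdio.h>
#include <string.h>

/* Each handler receives the verb plus whatever tokens follow it. */
static int cmd_release(int argc, char *argv[])
{
    printf("release called with %d argument(s)\n", argc - 1);
    return 0;
}

static int cmd_upload(int argc, char *argv[])
{
    printf("upload called with %d argument(s)\n", argc - 1);
    return 0;
}

/* The verb, not a switch letter, selects the action. */
static const struct {
    const char *verb;
    int (*handler)(int, char *[]);
} commands[] = {
    {"release", cmd_release},
    {"upload",  cmd_upload},
};

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: igor <verb> [arguments...]\n");
        return 1;
    }
    for (size_t i = 0; i < sizeof(commands) / sizeof(commands[0]); i++)
        if (strcmp(argv[1], commands[i].verb) == 0)
            return commands[i].handler(argc - 1, argv + 1);
    fprintf(stderr, "igor: unknown command \"%s\"\n", argv[1]);
    return 1;
}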

After Subversion and Mercurial, Git came along, with its CLI written in a style that, though it uses leading subcommand verbs, is rather more switch-heavy. From the point of view of being comfortable for users (especially new users), this was a pretty serious regression from Subversion. But then the CLI of git wasn’t really a design at all; it was an accretion of features that nobody made much sustained attempt to simplify or systematize. It’s fair to say that git has succeeded despite its spiky UI rather than because of it.

Git is, however, a digression here; I’ve mainly described it to make clear that you can lose the comfort benefits of the new-school CLI if a lot of old-school-style switches crowd in around the action verbs.

Next we need to look at a third UI style, which I’m going to call “GDB style” because the best-known program that uses it today is the GNU symbolic debugger. It’s almost as ancient as old-school CLIs, going back to the early 1980s at least.

A program like GDB is almost never invoked as a one-liner at all; a command is something you type to its internal command prompt, not the shell. As with new-school CLIs like Subversion’s, all commands begin with an action verb, but there are no switches. Each space-separated token after the verb on the command line is passed to the command handler as a positional argument.
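A toy version of such an interpreter, purely illustrative and invented for this post, might look like the following sketch: read a line at the internal prompt, split it on whitespace, treat the first token as the verb and the rest as positional arguments.

#include <stdio.h>
#include <string.h>

#define MAXTOKENS 16

/* Dispatch on the verb; every other token is a positional argument. */
static void dispatch(int ntokens, char *tokens[])
{
    if (ntokens == 0)
        return;
    if (strcmp(tokens[0], "release") == 0 && ntokens == 4)
        printf("release: project=%s tag=%s tarball=%s\n",
               tokens[1], tokens[2], tokens[3]);
    else
        printf("don't know how to \"%s\"\n", tokens[0]);
}

int main(void)
{
    char line[1024];
    char *tokens[MAXTOKENS];

    for (;;) {
        fputs("igor> ", stdout);
        fflush(stdout);
        if (fgets(line, sizeof(line), stdin) == NULL)
            break;                      /* EOF ends the session */
        int n = 0;
        /* No switches at all: just whitespace-separated tokens. */
        for (char *tok = strtok(line, " \t\n");
             tok != NULL && n < MAXTOKENS;
             tok = strtok(NULL, " \t\n"))
            tokens[n++] = tok;
        dispatch(n, tokens);
    }
    return 0;
}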

Part of Igor’s interface is intended to be a GDB-style interpreter. In that mode, the release command should logically look something like this, with igor’s command prompt at the left margin.

igor> release fooproject 1.2.3 foo-1.2.3.tgz

Note that these are the same arguments, in the same order, as in our old-school “igor -r” command, but now -r has been replaced by a command verb and the order of what follows it is fixed. If we were designing Igor to be Subversion-like, with a fire-and-forget interface and no internal command interpreter at all, it would correspond to a shell command line like this:

$ igor release fooproject 1.2.3 foo-1.2.3.tgz

This is where we get to the collision of design styles I referred to earlier. What was really confusing Ian, I think, is that part of his experience was pulling for old-school fire-and-forget with switches, part of his experience was pulling for new-school as filtered through git’s rather botched version of it, and then there is this internal GDB-like interpreter to reconcile with how the command line works.

My apprentice’s confusion was completely reasonable. There’s a real question here which the tradition he’s immersed in has no canned, best-practices answer for. Git and GDB evade it in equal and opposite ways – Git by not having any internal interpreter like GDB, GDB by not being designed to do anything in a fire-and-forget mode without going through its internal interpreter.

The question is: how do you design a tool that (a) has a GDB-like internal interpreter for a command minilanguage, (b) also allows you to write useful fire-and-forget one-liners in the shell without diving into that interpreter, (c) has syntax for those one-liners that looks like an old-school CLI, and (d) has only one syntax for each command?

And the answer is: you can’t actually satisfy all four of those constraints at once. One of them has to give. It’s trivially true that if you abandon (a) or (b) you evade the problem, the way Git and GDB do. The real problem is that an old-school CLI wants to have terse switch clauses with flexible order, a GDB-style minilanguage wants to have more verbose commands with positional arguments, and never these twain shall meet.

The only one-syntax-for-each-command choice you can make is to have the same command interpreter parse your command line and what the user types to the internal prompt.

I bit this bullet when I designed reposurgeon, which is why a fire-and-forget command to read a stream dump of a Subversion repository and build a live repository from it looks like this:

$ reposurgeon "read <project .svn" "prefer git" "rebuild ../overthere"

Each of those string arguments is just fed to reposurgeon’s internal interpreter; any attempt to look like an old-school CLI has been abandoned. This way, I can fire and forget multiple reposurgeon commands; for Igor, it might be more appropriate to pass all the tokens on the command line as a single command.
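In outline, the one-syntax approach is simple: the shell arguments and the interactive prompt both feed the same evaluator. The sketch below is illustrative only; evaluate() here is a stand-in, not reposurgeon’s or igor’s actual code.

#include <stdio.h>
#include <string.h>

/* Stand-in for the real command evaluator, the same code the
 * interactive prompt calls. */
static void evaluate(const char *command)
{
    printf("executing: %s\n", command);
}

int main(int argc, char *argv[])
{
    char line[1024];

    if (argc > 1) {
        /* Fire-and-forget mode: each quoted shell argument is one
         * command in the interpreter's own syntax. */
        for (int i = 1; i < argc; i++)
            evaluate(argv[i]);
        return 0;
    }

    /* Otherwise drop into the interactive interpreter. */
    fputs("igor> ", stdout);
    fflush(stdout);
    while (fgets(line, sizeof(line), stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';
        evaluate(line);
        fputs("igor> ", stdout);
        fflush(stdout);
    }
    return 0;
}

Invoked with no arguments it drops you at the prompt; invoked with quoted commands as arguments it executes them and exits.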

The other possible way Igor could go is to have a command language for the internal interpreter in which each line looks like a new-school shell command with a command verb followed by switch clusters:

igor> release -t 1.2.3 -n fooproject foo-1.2.3.tgz

Which is fine except that now we’ve violated some of the implicit rules of the GDB style. Those aren’t simple positional arguments, and we’re back to the higher cognitive load of having to remember cryptic switches.

But maybe that’s what your prospective users would be comfortable with, because it fits their established habits! This seems to me unlikely but possible.

Design questions like these generally come down to having some sense of who your audience is. Who are they? What do they want? What will surprise them the least? What will fit into their existing workflows and tools best?

I could launch into a panegyric on the agile-programming practice of design-by-user-story at this point; I think this is one of the things agile most clearly gets right. Instead, I’ll leave the reader with a recommendation to read up on that idea and learn how to do it right. Your users will be grateful.

May 13

Two graceful finishes

I’m having a rather odd feeling.

Reposurgeon. It’s…done; it’s a finished tool, fully fit for its intended purposes. After nine years of work and thinking, there’s nothing serious left on the to-do list. Nothing to do until someone files a bug or something in its environment changes, like someone writing an exporter/importer pair it doesn’t know about and should.

When you wrestle with a problem that is difficult and worthy for long enough, the problem becomes part of you. Having that go away is actually a bit disconcerting, like putting your foot on a step that’s not there. But it’s OK; there are lots of other interesting problems out there and I’m sure one will find me to take reposurgeon’s place in my life.

I might try to write a synoptic look back on the project at some point.

Looking over some back blog posts on reposurgeon, I became aware that I never told my blog audience the last bit of the saga following my ankle surgery. That’s because there was no drama. The ankle is now fully healed and as solidly functional as though I never injured it at all – I’ve even stopped having residual aches in damp weather.

Evidently the internal cartilage healed up completely, which is far from a given with this sort of injury. My thanks to everyone who was supportive when I literally couldn’t walk.

May 08

Term of the day: builder gloves

Another in my continuing series of attempts to coin, or popularize, terms that software engineers don’t know they need yet. This one comes from my apprentice, Ian Bruene.

“Builder gloves” is the special knowledge possessed by the builder of a tool which allows the builder to use it without getting fingers burned.

Software that requires builder gloves to use is almost always faulty. There are rare exceptions to this rule, when the application area of the software is so arcane that the builder’s specialist knowledge is essential to driving it. But usually the way to bet is that if your code requires builder gloves it is half-baked, buggy, has a poorly designed UI or is poorly documented.

When you ship software that you know requires builder gloves, or someone else tells you that it seems to require builder gloves, it could ruin someone else’s day and reflect badly on you. But if you believe in releasing early and often, sometimes half-baked is going to happen. Here’s how to mitigate the problem.

1. Warn the users what’s buggy and unstable in your release notes and the rest of your documentation.

2. Document your assumptions where the user can see them.

3. Work harder at not being a terrible UI designer.

4. Watch the issues list/user’s forum/mailing list, and actually respond.

5. When someone tells you it requires builder gloves, believe them. And fix it so it doesn’t.

Becoming really good at software engineering requires that you care about the experience the user sees, not just the code you can see.

Apr 26

Lassie errors

I didn’t invent this term, but boosting the signal gives me a good excuse for a rant against its referent.

Lassie was a fictional dog. In all her literary, film, and TV adaptations the most recurring plot device was some character getting in trouble (in the print original, two brothers lost in a snowstorm; in popular memory “Little Timmy fell in a well”, though this never actually happened in the movies or TV series) and Lassie running home to bark at other humans to get them to follow her to the rescue.

In software, “Lassie error” is a diagnostic message that barks “error” while being comprehensively unhelpful about what is actually going on. The term seems to have first surfaced on Twitter in early 2020; there is evidence in the thread of at least two independent inventions, and I would be unsurprised to learn of others.

In the Unix world, a particularly notorious Lassie error is what the ancient line-oriented Unix editor “ed” does on a command error. It says “?” and waits for another command – which is especially confusing since ed doesn’t have a command prompt. Ken Thompson had an almost unique excuse for extreme terseness, as ed was written in 1973 to run on a computer orders of magnitude less capable than the embedded processor in your keyboard.

Herewith the burden of my rant: You are not Ken Thompson, 1973 is a long time gone, and all the cost gradients around error reporting have changed. If you ever hear this term used about one of your error messages, you have screwed up. You should immediately apologize to the person who used it and correct your mistake.

Part of your responsibility as a software engineer, if you take your craft seriously, is to minimize the costs that your own mistakes or failures to anticipate exceptional conditions inflict on others. Users have enough friction costs when software works perfectly; when it fails, you are piling insult on that injury if your Lassie error leaves them without a clue about how to recover.

Really this term is unfair to Lassie, who as a dog didn’t have much of a vocabulary with which to convey nuances. You, as a human, have no such excuse. Every error message you write should contain a description of what went wrong in plain language, and – when error recovery is possible – contain actionable advice about how to recover.

This remains true when you are dealing with user errors. How you deal with (say) a user mistake in configuration-file syntax is part of the user interface of your program just as surely as the normally visible controls are. It is no less important to get that communication right; in fact, it may be more important – because a user encountering an error is a user in trouble that he needs help to get out of. When Little Timmy falls down a well you constructed and put in his path, your responsibility to say something helpful doesn’t lessen just because Timmy made the immediate mistake.

A design pattern I’ve seen used successfully is for immediate error messages to include both a one-line summary of the error and a cookie (like “E2317”) which can be used to look up a longer description including known causes of the problem and remedies. In a hypothetical example, the pair might look like this:

Out of memory during stream parsing (E1723)

E1723: Program ran out of memory while building the deserialized internal representation of a stream dump. Try lowering the value of GOGC to cause more frequent garbage collections, increasing the size of your swap partition, or moving to hardware with more RAM.

The key point here is that the user is not left in the lurch. The messages are not a meaningless bark-bark, but the beginning of a diagnosis and repair sequence.
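One illustrative way to implement the pattern is a single table keyed by cookie, consulted both when the error fires and when the user asks for the long explanation. This is a sketch, not anyone’s production code; the only text in it taken from above is the hypothetical E1723 example.

#include <stdio.h>
#include <string.h>

static const struct {
    const char *cookie;
    const char *summary;
    const char *detail;
} errtab[] = {
    {"E1723",
     "Out of memory during stream parsing",
     "Program ran out of memory while building the deserialized internal\n"
     "representation of a stream dump. Try lowering the value of GOGC to\n"
     "cause more frequent garbage collections, increasing the size of your\n"
     "swap partition, or moving to hardware with more RAM."},
};

#define NERRORS (sizeof(errtab) / sizeof(errtab[0]))

/* Emit the one-line form at the moment the error occurs. */
static void report(const char *cookie)
{
    for (size_t i = 0; i < NERRORS; i++)
        if (strcmp(errtab[i].cookie, cookie) == 0)
            fprintf(stderr, "%s (%s)\n", errtab[i].summary, cookie);
}

/* Emit the long form, e.g. in response to a "help E1723" command. */
static void explain(const char *cookie)
{
    for (size_t i = 0; i < NERRORS; i++)
        if (strcmp(errtab[i].cookie, cookie) == 0)
            printf("%s: %s\n", cookie, errtab[i].detail);
}

int main(void)
{
    report("E1723");    /* what the user sees when the failure happens */
    explain("E1723");   /* what the user sees when they look it up */
    return 0;
}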

If the thought of improving user experience in general leaves you unmoved, consider that the pain you prevent with an informative error message is rather likely to be your own, as you use your software months or years down the road or are required to answer pesky questions about it.

As with good comments in your code, it is perhaps most motivating to think of informative error messages as a form of anticipatory mercy towards your future self.

Apr 19

Payload, singleton, and stride lengths

Once again I’m inventing terms for useful distinctions that programmers need to make and sometimes get confused about because they lack precise language.

The motivation today is some issues that came up while I was trying to refactor some data representations to reduce reposurgeon’s working set. I realized that there are no fewer than three different things we can mean by the “length” of a structure in a language like C, Go, or Rust – and no terms to distinguish these senses.

Before reading these definitions, you might want to do a quick read through The Lost Art of Structure Packing.

The first definition is payload length. That is the sum of the lengths of all the data fields in the structure. No padding is included in this length.

The second is stride length. This is the length of the structure with any interior padding and with the trailing padding or dead space required when you have an array of them. This padding is forced by the fact that on most hardware, an instance of a structure normally needs to have the alignment of its widest member for fastest access. If you’re working in C, sizeof gives you back a stride length in bytes.

I derived the term “stride length” for individual structures from a well-established traditional use of “stride” for array programming in PL/1 and FORTRAN that is decades old.

Stride length and payload length coincide if the structure has no interior or trailing padding. This can sometimes happen when you get an arrangement of fields exactly right, or your compiler might have a pragma to force tight packing even though fields may have to be accessed by slower multi-instruction sequences.

“Singleton length” is the term you’re least likely to need. It’s the length of a structure with interior padding but without trailing padding. The reason I’m dubbing it “singleton” length is that it might be relevant in situations where you’re declaring or passing a single instance of a struct not in an array.

Consider the following declarations in C on a 64-bit machine:

struct {int64_t a; int32_t b;} x;
char y;

That structure has a payload length of 12 bytes. Instances of it in an array would normally have a stride length of 16 bytes, with the last four bytes being padding. But your compiler might generate a 12-byte copy when you ask it to assign the value of x.

This struct has a singleton length of 12, same as its payload length. But these are not necessarily identical. Consider this:

struct {int64_t a; char b[6]; int32_t c;} x;

The way this is normally laid out in memory it will have two bytes of interior padding after b, then 4 bytes of trailing padding after c. Its payload length is 8 + 6 + 4 = 18; its stride length is 8 + 8 + 8 = 24; and its singleton length is 8 + 6 + 2 + 4 = 20.
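If you want to check numbers like these yourself, sizeof and offsetof will do it for you. The following small program is a sketch that assumes a typical 64-bit ABI such as x86-64 System V; on such a platform it should confirm the figures above.

#include <stdio.h>
#include <stddef.h>     /* offsetof */
#include <stdint.h>

struct example {
    int64_t a;
    char    b[6];
    int32_t c;
};

int main(void)
{
    printf("offset of a = %zu\n", offsetof(struct example, a));   /* 0 */
    printf("offset of b = %zu\n", offsetof(struct example, b));   /* 8 */
    printf("offset of c = %zu\n", offsetof(struct example, c));   /* 16: 2 bytes of interior padding after b */
    printf("stride length = %zu\n", sizeof(struct example));      /* 24: 4 bytes of trailing padding after c */
    printf("singleton length = %zu\n",
           offsetof(struct example, c) + sizeof(int32_t));        /* 20 */
    /* The payload length, 8 + 6 + 4 = 18, is not something sizeof will report. */
    return 0;
}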

To avoid confusion, you should develop a habit: any time someone speaks or writes about the “length” of a structure, stop and ask: is this payload length, stride length, or singleton length?

Most usually the answer will be stride length. But someday, most likely when you’re working close to the metal on some low-power embedded system, it might be payload or singleton length – and the difference might actually matter.

Even when it doesn’t matter, having a more exact mental model is good for reducing the frequency of times you have to stop and check yourself because a detail is vague. The map is not the territory, but with a better map you’ll get lost less often.

Apr 14

Insights need you to keep your nerve

This is a story I’ve occasionally told various friends when one of the subjects it touches comes up. I told it again last night, and it occurred to me that I ought to put it in the blog. It’s about how, if you want to have productive insights, you need a certain kind of nerve or self-belief.

Many years ago – possibly as far back as the late 80s – I happened across a film of a roomful of Sufi dervishes performing a mystical/devotional exercise called “dhikr”. The film was very old, grainy B&W footage from the early 20th century. It showed a roomful of bearded, turbaned, be-robed men swaying, spinning, and chanting. Some were gazing at bright objects that might have been lamps, or polished metal or jewelry reflecting other lamps – it wasn’t easy to tell from the footage.

I can’t find the footage I saw, but the flavor was a bit like this. No unison movement in what I saw, though – individuals doing different things and ignoring each other, more inward-focused.

The text accompanying the film explained that the intention of “dhikr” is to shut out the imperfect sensory world so the dervish can focus on the pure and holy name of Allah. “Right,” I thought, already having had quite a bit of experience as an experimental mystic myself, “I get this. In Zen language, they’re shutting down the drunken monkeys. Autohypnosis inducing a serene mind, nothing surprising here.”

But there was something else. Something about the induction methods they were using. It all seemed oddly familiar, more than it ought to. I had seen behaviors like this before somewhere, from people who weren’t wearing pre-Kemalist Turkish garb. I watched the film…and it hit me. This was exactly like watching a roomful of people with serious autism!

The rocking. The droning. The fixated behavior, or in the Sufi case the behavior designed to induce fixation. Which immediately led to the next question: why? I think the least hypothesis in cases where you observe parallel behaviors is that they have parallel causation. We know what the Sufis tell us about what they’re doing; might it tell us why the autists are doing what they’re doing?

The Sufis are trying to shut out sense data. What if the autists are too? That would imply that the autists live in a state of what is, for them, perpetual sensory overload. Their dhikr-like behaviors are a coping mechanism, an attempt to turn down the gain on their sensors so they can have some peace inside their own skulls.

The first applications of nerve I want to talk about here are (a) the nerve to believe that autistic behaviors have an explanation more interesting than “uhhh…those people are randomly broken”, and (b) the nerve to believe that you can apply a heuristic like “parallel behavior, parallel causes” to humans when you picked it up from animal ethology.

Insights need creativity and mental flexibility, but they also need you to keep your nerve. I think there are some very common forms of failing to keep your nerve that people who would like to have good and novel ideas self-sabotage with. One is “If that were true, somebody would have noticed it years ago”. Another is “Only certified specialists in X are likely to have good novel ideas about X, and I’m not a specialist in X, so it’s a bad risk to try following through.”

You, dear reader, are almost certainly browsing this blog because I’m pretty good at not falling victim to those, and duly became famous by having a few good ideas that I didn’t drop on the floor. However, in this case, I failed to keep my nerve in another bog-standard way: I believed an expert who said my idea was silly.

That was decades ago. Nowadays, the idea that autists have a sensory-overload problem is not even controversial – in fact it’s well integrated into therapeutic recommendations. I don’t know when that changed, because I haven’t followed autism research closely enough. Might even be the case that somewhere in the research literature, someone other than me has noticed the similarity between semi-compulsive autistic behaviors and Sufi dhikr, or other similar autohypnotic practices associated with mystical schools.

But I got there before the experts did. And dropped the idea because my nerve failed.

Now, it can be argued that there were good reasons for me not to have pursued it. Getting a real hearing for a heterodox idea is difficult in fields where the experts all have their own theories they’re heavily invested in, and success is unlikely enough that perhaps it wasn’t an efficient use of my time to try. That’s a sad reason, but in principle a sound one.

But losing my nerve because an expert laughed at me, that was not sound. I think I wouldn’t make that mistake today; I’m tougher and more confident than I used to be, in part because I’ve had “crazy” ideas that I’ve lived to see become everyone’s conventional wisdom.

You can read this as a variation on a theme I developed in Eric and the Quantum Experts: A Cautionary Tale. But it bears repeating. If you want to be successfully creative, your insights need you to keep your nerve.

Mar 23

PSA: COVID-19 is a bad reason to get a firearm

I’m a long-time advocate of more ordinary citizens getting themselves firearms and learning to use them safely and competently. But this is a public-service announcement: if you’re thinking of running out to buy a gun because of COVID-19, please don’t.

There are disaster scenarios in which getting armed up in a hurry makes sense; the precondition for all of them is a collapse of civil order. That’s not going to happen with COVID-19 – the mortality rate is too low.

Be aware that the gun culture doesn’t like and doesn’t trust panic buyers; they tend to be annoying flake cases who are more of a liability than an asset. We prefer a higher-quality intake than we can get in the middle of a plague panic. Slow down. Think. And if you’ve somehow formed the idea that you’re in a zombie movie or a Road Warrior sequel, chill. That’s not a useful reaction; it can lead to panic shootings and those are never good.

I don’t mean to discourage anyone from buying guns in the general case – more armed citizens are a good thing on multiple levels. After we’re through the worst of this would be a good time for it. But do it calmly, learn the Four Rules of Firearms Safety first, and train, train, train. Get good with your weapons, and confident enough not to shoot unless you have to, before the next episode of shit-hits-the-fan.

Feb 27

The right to be rude

The historian Robert Conquest once wrote: “The behavior of any bureaucratic organization can best be understood by assuming that it is controlled by a secret cabal of its enemies.”

Today I learned that the Open Source Initiative has reached that point of bureaucratization. I – OSI’s co-founder and its president for its first six years – was kicked off their lists for being too rhetorically forceful in opposing certain recent attempts to subvert OSD clauses 5 and 6. This despite the fact that I had vocal support from multiple list members who thanked me for being willing to speak out.

It shouldn’t be news to anyone that there is an effort afoot to change – I would say corrupt – the fundamental premises of the open-source culture. Instead of meritocracy and “show me the code”, we are now urged to behave so that no-one will ever feel uncomfortable.

The effect – the intended effect – is to diminish the prestige and autonomy of people who do the work – write the code – in favor of self-appointed tone-policers. In the process, the freedom to speak necessary truths even when the manner in which they are expressed is unpleasant is being gradually strangled.

And that is bad for us. Very bad. Both directly – it damages our self-correction process – and in its second-order effects. The habit of institutional tone policing, even when well-intentioned, too easily slides into the active censorship of disfavored views.

The cost of a culture in which avoiding offense trumps the liberty to speak is that crybullies control the discourse. To our great shame, people who should know better – such as the OSI list moderators and BOD – have internalized anticipatory surrender to crybullying. They no longer even wait for the soi-disant victims to complain before wielding the ban-hammer.

We are being social-hacked from being a culture in which freedom is the highest value to one in which it is trumped by the suppression of wrongthink and wrongspeak. Our enemies – people like Coraline Ada-Ehmke – do not even really bother to hide this objective.

Our culture is not fatally damaged yet, but the trend is not good. OSI has been suborned and is betraying its founding commitment to freedom. “Codes of Conduct” that purport to regulate even off-project speech have become all too common.

Wake up and speak out. Embrace the right to be rude – not because “rude” in itself is a good thing, but because the degenerative slide into suppression of disfavored opinions has to be stopped right where it starts, at the tone policing.

The OSI membership page is here.

Feb 07

Chinese bioweapon II: Electric Boogaloo

Yikes. Despite the withdrawal of the Indian paper arguing that the Wuhan virus showed signs of engineering, the hypothesis that it’s an escaped bioweapon looks stronger than ever.

Why do I say this? Because it looks like my previous inclination to believe the rough correctness of the official statistics – as conveyed by the Johns Hopkins tracker – was wrong. I now think the Chinese are in way deeper shit than they’re admitting.


Jan 31

Head-voice vs. quiet-mind

I’m utterly boggled. Yesterday, out of nowhere, I learned of a fundamental divide in how peoples’ mental lives work about which I had had no previous idea at all.

From this: Today I Learned That Not Everyone Has An Internal Monologue And It Has Ruined My Day.

My reaction to that title can be rendered in language as – “Wait. People actually have internal monologues? Those aren’t just a cheesy artistic convention used to concretize the welter of pre-verbal feelings and images and maps bubbling in peoples’ brains?”

Apparently not. I’m what I have now learned to call a quiet-mind. I don’t have an internal narrator constantly expressing my thinking in language; in shorthand, I’m not a head-voice person. So much not so that when I follow the usual convention of rendering quotes from my thinking as though they were spoken to myself, I always feel somewhat as though I’m lying, fabulating to my readers. It’s not like that at all! I justify writing as though there had been a voice in my head only because the full multiordinality of my actual thought-forms won’t fit through my typing fingers.

But, apparently, for others it often is like that. Yesterday I learned that the world is full of head-voice people who report that they don’t know what they’re thinking until the narratizer says it. Judging by the reaction to the article it seems us quiet-minds are a minority, one in five or fewer. And that completely messes with my head.

What’s the point? Why do you head-voice people need a narrator to tell you what your own mind is doing? I fully realize this question could be reflected back with “Why don’t you need one, Eric?” but it is quite disturbing in either direction.

So now I’m going to report some interesting detail. There are exactly two circumstances under which I have head-voice. One is when I’m writing or consciously framing spoken communication. Then, my compositional output does indeed manifest as narratizing head-voice. The other circumstance is the kind of hypnogogic experience I reported in Sometimes I hear voices.

Outside of those two circumstances, no head-voice. Instead, my thought forms are a jumble of words, images, and things like diagrams (a commenter on Instapundit spoke of “concept maps” and yeah, a lot of it is like that). To speak or write I have to down-sample this flood of pre-verbal stuff into language, a process I am not normally aware of except as an occasional vague and uneasy sense of how much I have thrown away.

(A friend reports Richard Feynman observing that “You don’t describe the shape of a camshaft to yourself.” No; you visualize a camshaft, then work with that visualization in your head. Well, if you can – some people can’t. I therefore dub the pre-verbal level “camshaft thinking.”)

To be fully aware of that pre-verbal, camshaft-thinking level I have to go into a meditative or hypnogogic state. Then I can observe that underneath my normal mental life is a vast roar of constant free associations, apparently random memory retrievals, and weird spurts of logic connecting things, only some of which passes filters to present to my conscious attention.

I don’t think much or any of this roar is language. What it probably is, is the shock-front described in the predictive-processing model of how the brain works – where the constant inrush of sense-data meets the brain’s attempt to fit it to prior predictive models.

So for me there are actually three levels: (1) the roaring flood of free association, which I normally don’t observe; (2) the filtered pre-verbal stream of consciousness, mostly camshaft thinking, that is my normal experience of self, and (3) narratized head-voice when I’m writing or thinking about what to say to other people.

I certainly do not head-voice when I program. No, that’s all camshaft thinking – concept maps of data structures, chains of logic, processing that is like mathematical reasoning though not identical to it. After the fact I can sometimes describe parts of this process in language, but it doesn’t happen in language.

Learning that other people mostly hang out at (3), with a constant internal monologue…this is to me unutterably bizarre. A day later I’m still having trouble actually believing it. But I’ve been talking with my wife and friends, and the evidence is overwhelming that it’s true.

Language…it’s so small. And linear. Of course camshaft thinking is intrinsically limited by the capabilities of the brain and senses, but less so. So why do most people further limit themselves by being in head-voice thinking most of the time? What’s the advantage to this? Why are quiet-minds a minority?

I think the answers to these questions might be really important.

UPDATE: My friend, Jason Azze, found the Feynman quote. It’s from “It’s As Simple As One, Two, Three…” from the second book of anecdotes, What Do You Care What Other People Think?:

When I was a kid growing up in Far Rockaway, I had a friend named Bernie Walker. We both had “labs” at home, and we would do various “experiments.” One time, we were discussing something — we must have been eleven or twelve at the time — and I said, “But thinking is nothing but talking to yourself inside.”

“Oh yeah?” Bernie said. “Do you know the crazy shape of the crankshaft in a car?”

“Yeah, what of it?”

“Good. Now, tell me: how did you describe it when you were talking to yourself?”

So I learned from Bernie that thoughts can be visual as well as verbal.

Jan 26

Missing documentation and the reproduction problem

I recently took some criticism over the fact that reposurgeon has no documentation that is an easy introduction for beginners.

After contemplating the undeniable truth of this criticism for a while, I realized that I might have something useful to say about the process and problems of documentation in general – something I didn’t already bring out in How to write narrative documentation. If you haven’t read that yet, doing so before you read the rest of this mini-essay would be a good idea.

“Why doesn’t reposurgeon have easy introductory documentation” would normally have a simple answer: because the author, like all too many programmers, hates writing documentation, has never gotten very good at it, and will evade frantically when under pressure to try. But in my case none of that description is even slightly true. Like Donald Knuth, I consider writing good documentation an integral and enjoyable part of the art of software engineering. If you don’t learn to do it well you are short-changing not just your users but yourself.

So, with all that said, “Why doesn’t reposurgeon have easy introductory documentation” actually becomes a much more interesting question. I knew there was some good reason I’d never tried to write any, but until I read Elijah Newren’s critique I never bothered to analyze the reason. He incidentally said something very useful by mentioning gdb (the GNU symbolic debugger), and that started me thinking, and now I think I understand something general.


Sep 04

Be the America Hong Kong thinks you are

I think this is my favorite Internet meme ever.

Yeah, Hong Kong, we actually have a problem with Communist oppression here, too. Notably in our universities, but metastasizing through pop culture and social media censorship too. They haven’t totally captured the machinery of state yet, but they’re working on that Long March all too effectively.

And you are absolutely right when you say you need a Second-Amendment-equivalent civil rights guarantee. Our Communists hate that liberty as much as yours do – actually, noticing who is gung-ho for gun confiscation is one of the more reliable ways to unmask Communist tools.

We need to be the America you think we are, too. Some of us are still trying.

Jun 27

Loadsharers has a logo

Nobody stepped up to design a Loadsharers logo, so I did it myself. Here it is:

Loadsharers logo

Yeah, I’m not much of a graphic artist, but I can do a semi-competent job of whacking together a simple logo when I need to. If you’re an actual pro and think you can fix this or do better, have at it. The XCF SVG I made this from is in the Loadsharers repository at https://gitlab.com/esr/loadsharers


Jun 18

While I was making other plans, teil vier

I can walk again.

Wearing a joint-immobilizing boot brace, so I lurch around with a gait even more graceless than my usual palsied semi-stumble, but I can walk. And shower. And make my own breakfast. Hallelujah!

Better news: my prognosis is good. The joint had osteoarthritic damage that may be trouble down the road, but I’ve been osteoarthritic in both feet for years now without symptoms. The big good news is that the joint cartilage wasn’t damaged, so I should get full use of the ankle back.

Boot brace for three weeks, physical therapy to strengthen the ankle after that. I won’t be back in kung-fu class for a while. Still, the medical level of this saga is going as well as could be expected.

The financial level, not so much. We got socked with a surgery bill of $2,238 today. Followup and PT…I don’t know what that will cost, but it won’t be cheap.

What’s worse, healthcare.gov chose this perfect time to yank our ACA subsidy because we can’t document the regular income streams. Of course we can’t document them because we don’t have them. Which means we have to pay another $2000 to keep our existing coverage for just the next month, and the bureaucrats have told us to apply for Medicaid. Which we may not be able to get before open enrollment in January.

This means the amount of money I need to pull in without burning savings just went up by $2000 a month. Which is doing a good job of keeping me focused on getting Loadsharers off the ground. If it does well, I’ll do well, and have successfully attacked the larger problem of LBIP funding.

There’s going to be a Linux Journal article, and at least one technology-press interview. I’ve even (gasp!) tweeted about this, something that happens approximately once every other blue moon.

I have a list of 11 people who have taken the pledge. I think we need around 11,000 (mostly supporting LBIPs other than me) to make a real dent in the problem. So please, go out and proselytize to your tech-industry friends, and ask them to spread the word. We need this to go viral.

Jun 15

In a blatant attempt to attract more Institutional supporters…

Anybody who has visited my Patreon page should know that I have two special support tiers.

At Bronze ($20 per month) level, you get included in the credits of the project pages for all my solo stuff. Here’s a recent example.

Today I’m announcing two new perks for Institutional ($100 per month) supporters. This tier is intended for people with corporate budgets behind them.

When you sign up, you get to choose a name (possibly your corporation’s) and at your option a URL to back the name; this will be included in the credits pages. You will also get an individual shout-out in the “Acknowledgements” section of my forthcoming book “The Programmer’s Way: A Guide to Right Mindset”.

By joining my feed at Institutional level, your company can demonstrate good Internet citizenship through supporting the often thankless and obscure work needed to keep the infrastructure humming.

Thanks in advance for your support.

Jun 11

While I was making other plans, pars tres

A day or so after the not-so-thrilling last installment of my medical troubles (previous post), I get my hands on a knee scooter. Rented from a local pharmacy.

This is a big improvement over the wheelchair. I’m more mobile on it, and can pee standing up. If you think that last bit doesn’t matter, pray you never find out what an epic getting on and off a toilet seat is when you don’t have the use of your dominant leg.

Unfortunately, life is tradeoffs. You can fall off a knee scooter; I have, twice. No serious harm done, but that kind of thing is another increment of pain and exhaustion in a process with quite a sufficiency of those, thank you.

The ankle started hurting yesterday, seriously enough that I briefly considered actually taking a Percocet, but only when it was horizontal in bed. I think my sensory feed from that extremity must still be a bit disturbed by the aftermath of the nerve block, because it took me almost a day to figure out that the pain was due to external pressure from some of the support scaffolding in the ankle dressing.

I think those rigid parts have shifted around in an unhelpful way due to my crawling around and/or mounting the knee scooter. We’re going to try to get an appointment at the orthopedist’s office today to have someone there inspect and rewrap the thing.

Which means I’m going to have to deal with getting down (and later up) the steps in front of my house. Yeek! Neither scooter nor wheelchair is well adapted to this; I’ll probably have to bust out the kneepads and crawl again.

It hasn’t been all bad. Saturday a couple of my friends from our Friday night game group came over to boardgame with me; Xia: Legends of a Drift System. A good time was had by all.

My far friends on the net have come through as well. My Patreon feed is $356 per month thicker than it was a week ago. And there have been a bunch of one-off donations. John Carmack (yes, the John Carmack) sent $1000, for which I am humbly grateful.

SELF is definitely a scrub, but we’re making plans for me to video in my keynote. Topic change: I’m going to talk about infrastructure sustainability considered as a problem of load-bearing people, and the fact that the hacker culture doesn’t have any customs about how to support our old maintainer/warhorses or arrange an orderly succession when they die in harness.

I’ve actually been worrying about this problem for years, but mostly because of load-bearing people near me who weren’t me. Now my attention is seriously focused. :-)

I am able to work, and a good thing too or I’d be going bonkers. NTPsec is in a bit of a quiet period right now; we’ve delivered NTS, and the next big push I have in mind will have to wait until Go has a TLS 1.3 binding. So I’ve been making progress on the Go port of reposurgeon.

The Patreon is up to $1709 now. At $2000 it would cover monthly mortgage and bills; that’s starting to look attainable, which is a damn good thing given that I still have no clearer idea what the medical stuff will end up costing than “a lot”.

I increasingly think that software users and engineers who care about the infrastructure commons they rely on not collapsing out from under them are going to have to adopt something like the old custom of tithing, in self-defense.

In simpler times, before state-welfare schemes or insurance companies, community citizens in good standing were often religiously required or strongly encouraged to “tithe” – give a small fraction of their income to a church expressly for relief of the poor.

Nowadays we can cut out the middlemen and attendant risk of corruption. We have Patreon and SubscribeStar. Can we grow a social norm that hackers with regular jobs in the profit-making sector should use services like these to split (say) $30 a month among three infrastructure developers of their choice?

Yes, I am asking this for me, now. But I noticed the problem before it was personal. The problem is bigger than me, and the solution should be too.

The point of suggesting a fan-out of three is to avoid a situation where all that goodwill gets captured by a handful of hackers with high visibility (like, er, myself), and people who want to help have a reason to seek out developers who are doing important work in more obscurity.

UPDATE: A few hours after I initially posted this, I went in to have my dressing remade – something had gone awry inside it and was griping my foot. A mere resident was enough for that, but since I was there anyway Dr. Miller quite properly came for a look-in.

It was pretty amusing to see the “WTF?” expression on his face when he got a look at the week-old incision site on my foot. Completely healed over, no drainage, the only way to tell there’d been a 5-inch-long entry wound was by the purple stitches. As a friend of mine who’s a GP put it on Saturday, contemplating the place on my scalp where I’d gotten a laceration from that fall two weeks before that required three surgical staples, “Who are you? Wolverine?”

Good genes. Goes with the factory-installed brawler package. Makes me guardedly optimistic about the cartilage in the joint repairing itself. Dr. Miller hadn’t been planning to even see me until the 25th, but now he wants the stitches out a week ahead of the original schedule.

And still later in the day J. Storrs Hall (yeah, the nanotech pioneer guy) showed up on my doorstep with a knee scooter his wife had had to use for a while after knee surgery. Means we can stop renting one.