Sep 25

Last phase of the desktop wars?

The two most intriguing developments in the recent evolution of the Microsoft Windows operating system are the Windows Subsystem for Linux (WSL) and the porting of the Microsoft Edge browser to Ubuntu.

For those of you not keeping up, WSL allows unmodified Linux binaries to run under Windows 10. No emulation, no shim layer, they just load and go.

Microsoft developers are now landing features in the Linux kernel to improve WSL. And that points in a fascinating technical direction. To understand why, we need to notice how Microsoft’s revenue stream has changed since the launch of its cloud service in 2010.

Ten years later, Azure makes Microsoft most of its money. The Windows monopoly has become a sideshow, with sales of conventional desktop PCs (the only market it dominates) declining. Accordingly, the return on investment of spending on Windows development is falling. As PC volume sales continue to fall off, it’s inevitably going to stop being a profit center and turn into a drag on the business.

Looked at from the point of view of cold-blooded profit maximization, this means continuing Windows development is a thing Microsoft would prefer not to be doing. Instead, they’d do better putting more capital investment into Azure – which is widely rumored to be running more Linux instances than Windows these days.

Our third ingredient is Proton. Proton is the emulation layer that allows Windows games distributed on Steam to run over Linux. It’s not perfect yet, but it’s getting close. I myself use it to play World of Warships on the Great Beast.

The thing about games is that they are the most demanding possible stress test for a Windows emulation layer, much more so than business software. We may already be at the point where Proton-like technology is entirely good enough to run Windows business software over Linux. If not, we will be soon.

So, you’re a Microsoft corporate strategist. What’s the profit-maximizing path forward given all these factors?

It’s this: Microsoft Windows becomes a Proton-like emulation layer over a Linux kernel, with the layer getting thinner over time as more of the support lands in the mainline kernel sources. The economic motive is that Microsoft sheds an ever-larger fraction of its development costs as less and less has to be done in-house.

If you think this is fantasy, think again. The best evidence that it’s already the plan is that Microsoft has already ported Edge to run under Linux. There is only one way that makes any sense, and that is as a trial run for freeing the rest of the Windows utility suite from depending on any emulation layer.

So, the end state this all points at is: New Windows is mostly a Linux kernel, there’s an old-Windows emulation over it, but Edge and the rest of the Windows user-land utilities don’t use the emulation. The emulation layer is there for games and other legacy third-party software.

Economic pressure will be on Microsoft to deprecate the emulation layer. Partly because it’s entirely a cost center. Partly because they want to reduce the complexity cost of running Azure. Every increment of Windows/Linux convergence helps with that – reduces administration and the expected volume of support traffic.

Eventually, Microsoft announces upcoming end-of-life on the Windows emulation. The OS itself, and its userland tools, have for some time already been Linux underneath a carefully preserved old-Windows UI. Third-party software providers stop shipping Windows binaries in favor of ELF binaries with a pure Linux API…

…and Linux finally wins the desktop wars, not by displacing Windows but by co-opting it. Perhaps this is always how it had to be.

Jul 14

Documentation as knowledge capture

Maybe you’re one of the tiny minority of programmers that, like me, already enjoys writing documentation and works hard at doing it right. If so, the rest of this essay is not for you and you can skip it.

Otherwise, you might want to re-read (or at least re-skim) Ground-Truth Documents before continuing. Because ground-truth documents are a special case of a more general reason why you might want to try to change your mindset about documentation.

In that earlier essay I used the term “knowledge capture” in passing. This is a term of art from AI; it refers to the process of extracting domain knowledge from the heads of human experts into a form that can be expressed as an algorithm executable by the literalistic logic of a computer.

What I invite you to think about now is how writing documentation for software you are working on can save you pain and effort by (a) capturing knowledge you have but don’t know you have, and (b) eliciting knowledge that you have not yet developed.

Humans, including me and you, are sloppy and analogical thinkers who tend to solve problems by pattern-matching against noisy data first and checking our intuitions with logic after the fact (if we actually get that far). There’s no point in protesting that it shouldn’t be that way, that we should use rigorous logic all the way down, because our brains simply aren’t wired for that. Evolved cognition is a kludge – more properly, multiple stacks of kludges – developed under selection to be just barely adequate at coping.

This kludginess is revealed by, for example, optical illusions. And by the famous 7±2 result about the very limited size of the human working set. And the various well-documented ways that human beings are extremely bad at statistical reasoning. And in many other ways…

When you do work that is as demanding of rigor as software engineering, one of your central challenges is hacking around the limitations of your own brain. Sometimes this develops in very obvious ways; the increasing systematization of testing during development over the last couple of decades, for example.

Other brain hacks are more subtle. Which is why I am here to suggest that you try to stop thinking of documentation as a chore you do for others, and instead think of it as a way to explore your problem space and the space in your head around your intuitions about the problem, so you can shine light into the murkier corners of both. Writing documentation can function as valuable knowledge capture about your problem domain even when you are the only expert about what you are trying to do.

This is why my projects often have a file called “designer’s notes” or “hacking guide”. Early in a project these may just be random jottings that are an aid to my own memory about why things are the way they are. They tend to develop into a technical briefing about the code internals for future contributors. This is a good habit to form if you want to have future contributors!

But even though the developed version of a “designer’s notes” looks other-directed, it’s really a thing I do to reduce my own friction costs. And not just in the communication-to-my-future-self way either. Yes, it’s tremendously valuable to have a document that, months or years after I wrote it, reminds me of my assumptions when I have half-forgotten them. And yes, a “designer’s notes” file is good practice for that reason alone. But its utility does not even start there, let alone end there.

Earlier, I wrote of (a) capturing knowledge you have but don’t know you have, and (b) eliciting knowledge that you have not yet developed. The process of writing your designer’s notes can be powerful and catalytic that way even if they’re never communicated. The thing you have to do in your brain to narratize your thoughts so they can be written down is itself an exploratory tool.

As with “designer’s notes” so with every other form of documentation from the one-line code comment to a user-oriented HOWTO. When you achieve right mindset about these they are no longer burdens; instead they become an integral part of your creative process, enabling you to design better and write better code with less total effort.

I understand that to a lot of programmers who now experience writing prose as difficult work this might seem like impossible advice. But I think there is a way from where you are to right mindset. That way is to let go of the desire for perfection in your prose, at least early on. Sentence fragments are OK. Misspellings are OK. Anything you write that explores the space is OK, no matter how barbarous it would look to your third-grade grammar teacher or the language pedants out there (including me).

It is more important to do the discovery process implied by writing down your ideas than it is for the result to look polished. If you hold on to that thought, get in the habit of this kind of knowledge capture, and start benefiting from it, then you might find that over time your standards rise and it gets easier to put more effort into polishing.

If that happens, sure; let it happen – but it’s not strictly necessary. The only thing that is necessary is that you occasionally police what you’ve recorded so it doesn’t drift into reporting something the software no longer does. That sort of thing is a land-mine for anyone else who might read your notes and very bad form.

Other than that, though, the way to get to where you do the documentation-as-knowledge-capture thing well is by starting small; allow a low bar for polish and completeness and grow the capability organically. You will know you have won when it starts being fun.

Jun 26

A user story about user stories

The way I learned to use the term “user story”, back in the late 1990s at the beginnings of what is now called “agile programming”, was to describe a kind of roleplaying exercise in which you imagine a person and the person’s use case as a way of getting an outside perspective on the design, the documentation, and especially the UI of something you’re writing.

For example:

Meet Joe. He works for Randomcorp, who has a nasty huge old Subversion repository they want him to convert to Git. Joe is a recent grad who got thrown at the problem because he’s new on the job and his manager figures this is a good performance test in a place where the damage will be easily contained if he screws up. Joe himself doesn’t know this, but his teammates have figured it out.

Joe is smart and ambitious but has little experience with large projects yet. He knows there’s an open-source culture out there, but isn’t part of it – he’s thought about running Linux at home because the more senior geeks around him all seem to do that, but hasn’t found a good specific reason to jump yet. In truth most of what he does with his home machine is play games. He likes “Elite: Dangerous” and the Bioshock series.

Joe knows Git pretty well, mainly through the Tortoise GUI under Windows; he learned it in school. He has only used Subversion just enough to know basic commands. He found reposurgeon by doing web searches. Joe is fairly sure reposurgeon can do the job he needs and has told his boss this, but he has no idea where to start.

What does Joe’s discovery process look like? Read the first two chapters of “Repository Editing with Reposurgeon” using Joe’s eyes. Is he going to hit this wall of text and bounce? If so, what could be done to make it more accessible? Is there some way to write a FAQ that would help him? If so, can we start listing the questions in the FAQ?

Joe has used gdb a little as part of a class assignment but has not otherwise seen programs with a CLI resembling reposurgeon’s. When he runs it, what is he likely to try to do first to get oriented? Is that going to help him feel like he knows what’s going on, or confuse him?

“Repository Editing…” says he ought to use repotool to set up a Makefile and stub scripts for the standard conversion workflow. What will Joe’s eyes tell him when he looks at the generated Makefile? What parts are likeliest to confuse him? What could be done to fix that?

Joe, my fictional character, is about as little like me as is plausible at a programming shop in 2020, and that’s the point. If I ask abstractly “What can I do to improve reposurgeon’s UI?”, it is likely I will just end up spinning my wheels; if, instead, I ask “What does Joe see when he looks at this?” I am more likely to get a useful answer.

It works even better if, even having learned what you can from your imaginary Joe, you make up other characters that are different from you and as different from each other as possible. For example, meet Jane the system administrator, who got stuck with the conversion job because her boss thinks of version-control systems as an administrative detail and doesn’t want to spend programmer time on it. What do her eyes see?

In fact, the technique is so powerful that I got an idea while writing this example. Maybe in reposurgeon’s interactive mode it should issue a first line that says “Interactive help is available; type ‘help’ for a topic menu.”

However. If you search the web for “design by user story”, what you are likely to find doesn’t resemble my previous description at all. Mostly, now twenty years after the beginnings of “agile programming”, you’ll see formulaic stuff equating “user story” with a one-sentence soundbite of the form “As an X, I want to do Y”. This will be surrounded by a lot of talk about processes and scrum masters and scribbling things on index cards.

There is so much gone wrong with this it is hard to even know where to begin. Let’s start with the fact that one of the original agile slogans was “Individuals and Interactions Over Processes and Tools”. That slogan could be read in a number of different ways, but under none of them at all does it make sense to abandon a method for extended insight into the reactions of your likely users for a one-sentence parody of the method that is surrounded and hemmed in by bureaucratic process-gabble.

This is embedded in a larger story about how “agile” went wrong. The composers of the Agile Manifesto intended it to be a liberating force, a more humane and effective way to organize software development work that would connect developers to their users to the benefit of both. A few of the ideas that came out of it were positive and important – besides design by user story, test-centric development and refactoring leap to mind.

Sad to say, though, the way “user stories” became trivialized in most versions of agile is all too representative of what it has often become under the influence of two corrupting forces. One is fad-chasers looking to make a buck on it, selling it like snake oil to managers forever perplexed by low productivity, high defect rates, and inability to make deadlines. Another is the managers’ own willingness to sacrifice productivity gains for the illusion of process control.

It may be too late to save “agile” in general from becoming a deadening parody of what it was originally intended to be, but it’s not too late to save design by user story. To do this, we need to bear down on some points that its inventors and popularizers were never publicly clear about, possibly because they themselves didn’t entirely understand what they had found.

Point one is how and why it works. Design by user story is a trick you play on your social-monkey brain that uses its fondness for narrative and characters to get you to step out of your own shoes.

Yes, sure, there’s a philosophical argument that stepping out of your shoes in this sense is impossible; Joe, being your fiction, is limited by what you can imagine. Nevertheless, this brain hack actually works. Eppur si muove; you can generate insights with it that you wouldn’t have had otherwise.

Point two is that design by user story works regardless of the rest of your methodology. You don’t have to buy any of the assumptions or jargon or processes that usually fly in formation with it to get use out of it.

Point three is that design by user story is not a technique for generating code, it’s a technique for changing your mind. If you approach it in an overly narrow and instrumental way, you won’t imagine apparently irrelevant details like what kinds of video games Joe likes. But you should do that sort of thing; the brain hack works in exact proportion to how much imaginative life you give your characters.

(Which, in particular, is why stopping at a one-sentence “As an X, I want to do Y” is such a sadly reductive parody. This formula is designed to stereotype the process, but stereotyping is the enemy of novelty, and novelty is exactly what you want to generate.)

A few of my readers might have the right kind of experience for this to sound familiar. The mental process is similar to what in theater and cinema is called “method acting.” The goal is also similar – to generate situational responses that are outside your normal habits.

Once again: you have to get past tools and practices to discover that the important part of software design – the most difficult and worthwhile part – is mindset. In this case, and temporarily, someone else’s.

May 28

Looking for C-to-anything transpilers

I’m looking for languages that have three properties:

(1) Must have weak memory safety. The language is permitted to crash on an out-of-bounds array reference or null pointer, but may not corrupt or overwrite memory as a result.

(2) Must have a transpiler from C that produces human-readable, maintainable code that preserves (non-perverse) comments. The transpiler is allowed to not do a 100% job, but it must be the case that (a) the parts it does translate are correct, and (b) the amount of hand-fixup required to get to complete translation is small.

(3) Must not be Go, Rust, Ada, or Nim. I already know about these languages and their transpilers.

Apr 26

Lassie errors

I didn’t invent this term, but boosting the signal gives me a good excuse for a rant against its referent.

Lassie was a fictional dog. In all her literary, film, and TV adaptations the most recurring plot device was some character getting in trouble (in the print original, two brothers lost in a snowstorm; in popular memory “Little Timmy fell in a well”, though this never actually happened in the movies or TV series) and Lassie running home to bark at other humans to get them to follow her to the rescue.

In software, a “Lassie error” is a diagnostic message that barks “error” while being comprehensively unhelpful about what is actually going on. The term seems to have first surfaced on Twitter in early 2020; there is evidence in the thread of at least two independent inventions, and I would be unsurprised to learn of others.

In the Unix world, a particularly notorious Lassie error is what the ancient line-oriented Unix editor “ed” does on a command error. It says “?” and waits for another command – which is especially confusing since ed doesn’t have a command prompt. Ken Thompson had an almost unique excuse for extreme terseness, as ed was written in 1973 to run on a computer orders of magnitude less capable than the embedded processor in your keyboard.

Herewith the burden of my rant: You are not Ken Thompson, 1973 is a long time gone, and all the cost gradients around error reporting have changed. If you ever hear this term used about one of your error messages, you have screwed up. You should immediately apologize to the person who used it and correct your mistake.

Part of your responsibility as a software engineer, if you take your craft seriously, is to minimize the costs that your own mistakes or failures to anticipate exceptional conditions inflict on others. Users have enough friction costs when software works perfectly; when it fails, you are piling insult on that injury if your Lassie error leaves them without a clue about how to recover.

Really this term is unfair to Lassie, who as a dog didn’t have much of a vocabulary with which to convey nuances. You, as a human, have no such excuse. Every error message you write should contain a description of what went wrong in plain language, and – when error recovery is possible – contain actionable advice about how to recover.

This remains true when you are dealing with user errors. How you deal with (say) a user mistake in configuration-file syntax is part of the user interface of your program just as surely as the normally visible controls are. It is no less important to get that communication right; in fact, it may be more important – because a user encountering an error is a user in trouble that he needs help to get out of. When Little Timmy falls down a well you constructed and put in his path, your responsibility to say something helpful doesn’t lessen just because Timmy made the immediate mistake.

A design pattern I’ve seen used successfully is for immediate error messages to include both a one-line summary of the error and a cookie (like “E2317”) which can be used to look up a longer description including known causes of the problem and remedies. In a hypothetical example, the pair might look like this:

Out of memory during stream parsing (E1723)

E1723: Program ran out of memory while building the deserialized internal representation of a stream dump. Try lowering the value of GOGC to cause more frequent garbage collections, increasing the size of your swap partition, or moving to hardware with more RAM.

The key point here is that the user is not left in the lurch. The messages are not a meaningless bark-bark, but the beginning of a diagnosis and repair sequence.
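
Here’s a minimal C sketch of that pattern, just to make the shape concrete. The error table, the cookie value, and the explain() helper are my own inventions for illustration, not code lifted from any real project:

#include <stdio.h>
#include <stddef.h>
#include <string.h>

struct errdoc {
    const char *cookie;   /* short lookup key, e.g. "E1723" */
    const char *summary;  /* one-line message emitted at the failure point */
    const char *detail;   /* longer description: known causes and remedies */
};

static const struct errdoc errdocs[] = {
    {"E1723",
     "Out of memory during stream parsing",
     "Program ran out of memory while building the deserialized internal\n"
     "representation of a stream dump.  Try lowering GOGC, increasing the\n"
     "size of your swap partition, or moving to hardware with more RAM."},
};

/* Emit the short form: one-line summary plus cookie. */
static void report(const struct errdoc *e)
{
    fprintf(stderr, "%s (%s)\n", e->summary, e->cookie);
}

/* Look up the long form, e.g. to implement an "explain E1723" command. */
static const char *explain(const char *cookie)
{
    for (size_t i = 0; i < sizeof(errdocs) / sizeof(errdocs[0]); i++)
        if (strcmp(errdocs[i].cookie, cookie) == 0)
            return errdocs[i].detail;
    return "unknown error code";
}

int main(void)
{
    report(&errdocs[0]);      /* what the user sees at the moment of failure */
    puts(explain("E1723"));   /* what they get when they ask for the long form */
    return 0;
}

The point of keeping the summary and the long-form text in one table is that the cookie can never dangle: if the code exists, so does its explanation.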

If the thought of improving user experience in general leaves you unmoved, consider that the pain you prevent with an informative error message is rather likely to be your own, as you use your software months or years down the road or are required to answer pesky questions about it.

As with good comments in your code, it is perhaps most motivating to think of informative error messages as a form of anticipatory mercy towards your future self.

Apr 19

Payload, singleton, and stride lengths

Once again I’m inventing terms for useful distinctions that programmers need to make and sometimes get confused about because they lack precise language.

The motivation today is some issues that came up while I was trying to refactor some data representations to reduce reposurgeon’s working set. I realized that there are no fewer than three different things we can mean by the “length” of a structure in a language like C, Go, or Rust – and no terms to distinguish these senses.

Before reading these definitions, you might want to do a quick read through The Lost Art of Structure Packing.

The first definition is payload length. That is the sum of the lengths of all the data fields in the structure. No padding is included in this length.

The second is stride length. This is the length of the structure with any interior padding and with the trailing padding or dead space required when you have an array of them. This padding is forced by the fact that on most hardware, an instance of a structure normally needs to have the alignment of its widest member for fastest access. If you’re working in C, sizeof gives you back a stride length in bytes.

I derived the term “stride length” for individual structures from the well-established, decades-old use of “stride” in array programming in PL/I and FORTRAN.

Stride length and payload length coincide if the structure has no interior or trailing padding. This can sometimes happen when you get an arrangement of fields exactly right, or your compiler might have a pragma to force tight packing even though fields may have to be accessed by slower multi-instruction sequences.

“Singleton length” is the term you’re least likely to need. It’s the length of a structure with interior padding but without trailing padding. The reason I’m dubbing it “singleton” length is that it might be relevant in situations where you’re declaring or passing a single instance of a struct not in an array.

Consider the following declarations in C on a 64-bit machine:

struct {int64_t a; int32_t b;} x;
char y;

That structure has a payload length of 12 bytes. Instances of it in an array would normally have a stride length of 16 bytes, with the last four bytes being padding. But your compiler might generate a 12-byte copy when you ask it to assign the value of x.

This struct has a singleton length of 12, same as its payload length. But these are not necessarily identical. Consider this:

struct {int64_t a; char b[6]; int32_t c;} x;

The way this is normally laid out in memory it will have two bytes of interior padding after b, then 4 bytes of trailing padding after c. Its payload length is 8 + 6 + 4 = 18; its stride length is 8 + 8 + 8 = 24; and its singleton length is 8 + 6 + 2 + 4 = 20.
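
If you want to check those numbers on your own machine, a short C sketch like the one below prints all three lengths. The struct tag “example” is mine, and the expected values assume a typical 64-bit ABI such as x86-64; a packing pragma or a different target would change them:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct example {
    int64_t a;
    char    b[6];
    int32_t c;
} x;

int main(void)
{
    size_t payload   = sizeof(x.a) + sizeof(x.b) + sizeof(x.c);    /* no padding counted at all */
    size_t stride    = sizeof(x);                                  /* what sizeof reports */
    size_t singleton = offsetof(struct example, c) + sizeof(x.c);  /* interior padding, no trailing */

    printf("payload   = %zu\n", payload);    /* expect 18 */
    printf("stride    = %zu\n", stride);     /* expect 24 */
    printf("singleton = %zu\n", singleton);  /* expect 20 */
    return 0;
}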

To avoid confusion, you should develop a habit: any time someone speaks or writes about the “length” of a structure, stop and ask: is this payload length, stride length, or singleton length?

Most usually the answer will be stride length. But someday, most likely when you’re working close to the metal on some low-power embedded system, it might be payload or singleton length – and the difference might actually matter.

Even when it doesn’t matter, having a more exact mental model is good for reducing the frequency of times you have to stop and check yourself because a detail is vague. The map is not the territory, but with a better map you’ll get lost less often.

Mar 09

shellcheck: boosting the signal

I like code-validation tools, because I hate defects in my software and I know that there are lots of kinds of defects that are difficult for an unaided human brain to notice.

On my projects, I throw every code validator I can find at my code. Standbys are cppcheck for C code, pylint for Python, and golint for Go code. I run these frequently – usually they’re either part of the “make check” I use to run regression tests, or part of the hook script run when I push changes to the public repository.

A few days ago I found another validator that I now really like: shellcheck. Yes, it’s a lint/validator for shell scripts – and in retrospect shell, as spiky and irregular and suffused with multilevel quoting as it is, has needed something like this for a long time.

I haven’t done a lot of shell scripting in the last couple of decades. It’s not a good language for programming at larger orders of magnitude than 10 lines or so – too many tool dependencies, too difficult to track what’s going on. These problems are why Perl and later scripting languages became important; if shell had scaled up better, the space they occupy would have remained shell code as far as the eye can see.

But sometimes you write a small script, and then it starts to grow, and you can end up in an awkward size range where it isn’t quite unmanageable enough to drive you to port it to (say) Python yet. I have some cases like this in the reposurgeon suite.

For this sort of thing a shell validator/linter can be a real boon, enabling you to have much more confidence that you’ll catch silly errors when you modify the script, and actually increasing the upper limit of source-line count at which shell remains a viable programming language.

So it is an excellent thing that shellcheck is a solid and carefully-thought-out piece of work. It catches a lot of nits and potential errors, hardening your script against cases you probably haven’t tested yet. For example, it’s especially good at flagging constructs that will break if a shell variable like $1 gets set to a value with embedded whitespace.

It has other features you want in a code validator, too. You can do line-by-line suppression of specific shellcheck warnings with magic comments, telling the tool “Yes, I really meant to do that” so it will shut up. This means when you get new warnings they are really obvious.

Also, it’s fast. Fast enough that you can run it on all your shell scripts ahead of your regular regression tests and probably barely ever notice the time cost.

It’s standard practice for me to have a “make check” that runs code validators and then the regression tests. I’m going back and adding shellcheck validation to those check productions on all my projects that ship shell scripts. I recommend this as a good habit to everybody.

Feb 21

Reposurgeon defeats all monsters!

On January 12th 2020, reposurgeon performed a successful conversion of its biggest repository ever – the entire history of the GNU Compiler Collection, 280K commits with a history stretching back through 1987. Not only were some parts CVS, the earliest portions predated CVS and had been stored in RCS.

I waited this long to talk about it to give the dust time to settle on the conversion. But it’s been 5 weeks now and I’ve heard nary a peep from the GCC developers about any problems, so I think we can score this as reposurgeon’s biggest victory yet.

Continue reading

Jan 24

30 Days in the Hole

Yes, it’s been a month since I posted here. To be more precise, 30 Days in the Hole – I’ve been heads-down on a project with a deadline which I just barely met, and then preoccupied with cleanup from that effort.

The project was reposurgeon’s biggest conversion yet, the 280K-commit history of the Gnu Compiler Collection. As of Jan 11 it is officially lifted from Subversion to Git. The effort required to get that done was immense, and involved one hair-raising close call.

Continue reading

Jun 22

Segfaults and Twitter monkeys: a tale of pointlessness

For a few years in the 1990s, when PNG was just getting established as a Web image format, I was a developer on the libpng team.

One reason I got involved is that the compression patent on GIFs was a big deal at the time. I had been the maintainer of GIFLIB since 1989; it was on my watch that Marc Andreessen chose that code for use in the first graphics-capable browser in ’94. But I handed that library off to a hacker in Japan who I thought would be less exposed to the vagaries of U.S. IP law. (Years later, after the century had turned and the LZW patents expired, it came back to me.)

Then, sometime within a few years of 1996, I happened to read the PNG standard, and thought the design of the format was very elegant. So I started submitting patches to libpng and ended up writing the support for six of the minor chunk types, as well as implementing the high-level interface to the library that’s now in general use.

As part of my work on PNG, I volunteered to clean up some code that Greg Roelofs had been maintaining and package it for release. This was “gif2png” and it was more or less the project’s official GIF converter.

(Not to be confused, though, with the GIFLIB tools that convert to and from various other graphics formats, which I also maintain. Those had a different origin, and were, like libgif itself, rather better code.)

Continue reading

Jun 13

Fear of COMITment

I shipped the first release of another retro-language revival today: COMIT. Dating from 1957 (coincidentally the year I was born), this was the first string-processing language, ancestral to SNOBOL and sed and ed and Unix shell. One of the notational conventions invented in COMIT, the use of $0, $1…etc. as substitution variables, survives in all these languages.

I actually wrote most of the interpreter three years ago, when a copy of the COMIT book fell into my hands (I think A&D regular Daniel Franke was responsible for that). It wasn’t difficult – 400-odd lines of Python, barely enough to make me break a sweat. That is, until I hit the parts where the book’s description of the language is vague and inadequate.

It was 1957 and hardly anybody yet knew how to describe computer languages systematically, so I can’t fault Dr. Victor Yngve too harshly. Where I came a particular cropper was trying to understand the intended relationship between indices into the workspace buffer and “right-half relative constituent numbers”. That defeated me, so I went off and did other things.

Over the last couple days, as part of my effort to promote my Patreon feed to a level where my medical expenses are less seriously threatening, I’ve been rebuilding all my project pages to include a Patreon button and an up-to-date list of Bronze and Institutional patrons. While doing this I tripped over the unshipped COMIT code and pondered what to do with it.

What I decided to do was ship it with an 0.1 version number as is. The alternative would have been to choose from several different possible interpretations of the language book and quite possibly get it wrong.

I think a good rule in this kind of situation is “First, do no harm”. I’d rather ship an incomplete implementation that can be verified by eyeball, and that’s what I’ve done – I was able to extract a pretty good set of regression tests for most of the features from the language book.

If someone else cares enough, some really obsessive forensics on the documentation and its code examples might yield enough certainty about the author’s intentions to support a full reconstruction. Alas, we can’t ask him for help, as he died in 2012.

A lot of the value in this revival is putting the language documentation and a code chrestomathy in a form that’s easy to find and read, anyway. Artifacts like COMIT are interesting to study, but actually using it for anything would be perverse.

Jun 01

The dangerous folly of “Software as a Service”

Comes the word that Salesforce.com has announced a ban on its customers selling “military-style rifles”.

The reason this ban has teeth is that the company provides “software as a service”; that is, the software you run is a client for servers that the provider owns and operates. If the provider decides it doesn’t want your business, you probably have no real recourse. OK, you could sue for tortious interference in business relationships, but that’s chancy and anyway you didn’t want to be in a lawsuit, you wanted to conduct your business.

This is why “software as a service” is dangerous folly, even worse than old-fashioned proprietary software at saddling you with a strategic business risk. You don’t own the software, the software owns you.

It’s 2019 and I feel like I shouldn’t have to restate the obvious, but if you want to keep control of your business the software you rely on needs to be open-source. All of it. All of it. And you can’t afford it to be tethered to a service provider even if the software itself is nominally open source.

Otherwise, how do you know some political fanatic isn’t going to decide your product is unclean and chop you off at the knees? It’s rifles today, it’ll be anything that can be tagged “hateful” tomorrow – and you won’t be at the table when the victim-studies majors are defining “hate”. Even if you think you’re their ally, you can’t count on escaping the next turn of the purity spiral.

And that’s disregarding all the more mundane risks that come from the fact that your vendor’s business objectives aren’t the same as yours. This is ground I covered twenty years ago, do I really have to put on the Mr. Famous Guy cape and do the rubber-chicken circuit again? Sigh…

Business leaders should learn to fear every piece of proprietary software and “service” as the dangerous addictions they are. If Salesforce.com’s arrogant diktat teaches that lesson, it will have been a service indeed.

Apr 17

Contributor agreements considered harmful

Yesterday I got email from a project asking me to wear my tribal-elder hat, looking for advice on how to re-invent its governance structure. I’m not going to name the project because they haven’t given me permission to air their problems in public, but I need to write about something that came up during the discussion, when my querent said they were thinking about requiring a contributor release form from people submitting code, “the way Apache does”.

“Don’t do it!” I said. Please don’t go the release-form route. It’s bad for the open-source community’s future every time someone does that. In the rest of this post I’ll explain why.

Continue reading

Mar 19

Am I really shipper’s only deployment case?

I released shipper 1.14 just now. It takes advantage of the conventional asciidoc extension – .adoc – that GitHub and GitLab have established, to do a useful little step if it can detect that your project README and NEWS files are asciidoc.

And I wondered, as I usually do when I cut a shipper release: am I really the only user this code has? My other small projects (things like SRC and irkerd) tend to attract user communities that stick with them, but I’ve never seen any sign of that with shipper – no bug reports or RFEs coming in over the transom.

This time, it occurred to me that if I am shipper’s only user, then maybe the typical work practices of the open-source community are rather different than I thought they were. That’s a question worth raising in public, so I’m posting it here to attract some comment.

Continue reading

Mar 08

Declarative is greater than imperative

Sometimes I’m a helpless victim of my urges.

A while back – very late in 2016 – I started work on a program called loccount. This project originally had two purposes.

One is that I wanted a better, faster replacement for David Wheeler’s sloccount tool, which I was using to collect statistics on the amount of virtuous code shrinkage in NTPsec. David is good people and sloccount is a good idea, but internally it’s a slow and messy pile of kludges – so much so that it seems to have exceeded his capacity to maintain; at time of writing in 2019 it hadn’t been updated since 2004. I’d been thinking about writing a better replacement, in Python, for a while.

Then I realized this problem was about perfectly sized to be my learn-Go project. Small enough to be tractable, large enough to not be entirely trivial. And there was the interesting prospect of using channels/goroutines to parallelize the data collection. I got it working well enough for NTP statistics pretty quickly, though I didn’t issue a first public release until a little over a month later (mainly because I wanted to have a good test corpus in place to demonstrate correctness). And the parallelized code was both satisfyingly fast and really pretty. I was quite pleased.

The only problem was that the implementation, having been a deliberately straight translation of sloccount’s algorithms in order to preserve comparability of the reports, was a bit of a grubby pile internally. Less so than sloccount’s, because it was all in one language – but still. It’s difficult for me to leave that kind of thing alone; the urge to clean it up becomes like a maddening itch.

The rest of this post is about what happened when I succumbed. I got two lessons from this experience: one reinforcement of a thing I already knew, and one who-would-have-thought-it-could-go-this-far surprise. I also learned some interesting things about the landscape of programming languages.

Continue reading

Mar 05

How not to design a wire protocol

A wire protocol is a way to pass data structures or aggregates over a serial channel between different computing environments. At the very lowest level of networking there are bit-level wire protocols to pass around data structures called “bytes”; further up the stack streams of bytes are used to serialize more complex things, starting with numbers and working up to aggregates more conventionally thought of as data structures. The one thing you generally cannot successfully pass over a wire is a memory address, so no pointers.
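
To make the definition concrete, here is a toy C sketch of passing an aggregate over a wire by serializing it field by field into bytes. The message layout – a sequence number and a timestamp, each 32 bits, big-endian – is invented for illustration and is not any real protocol:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Write a 32-bit value in network (big-endian) byte order. */
static size_t put_u32(uint8_t *buf, uint32_t v)
{
    buf[0] = (uint8_t)(v >> 24);
    buf[1] = (uint8_t)(v >> 16);
    buf[2] = (uint8_t)(v >> 8);
    buf[3] = (uint8_t)v;
    return 4;
}

struct sample {
    uint32_t sequence;
    uint32_t timestamp;
};

/* Encode the aggregate as exactly 8 bytes on the wire; the receiver
   rebuilds its own in-memory copy from those bytes.  No in-memory
   representation - and certainly no pointer - ever crosses the channel. */
static size_t encode_sample(uint8_t *buf, const struct sample *s)
{
    size_t n = 0;
    n += put_u32(buf + n, s->sequence);
    n += put_u32(buf + n, s->timestamp);
    return n;
}

int main(void)
{
    struct sample s = {42, 1583712000};
    uint8_t wire[8];
    size_t n = encode_sample(wire, &s);

    for (size_t i = 0; i < n; i++)   /* dump the wire image as hex */
        printf("%02x", wire[i]);
    putchar('\n');
    return 0;
}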

Designing wire protocols is, like other kinds of engineering, an art that responds to cost gradients. It’s often gotten badly wrong, partly because of clumsy technique but mostly because people have poor intuitions about those cost gradients and optimize for the wrong things. In this post I’m going to write about those cost gradients and how they push towards different regions of the protocol design space.

My authority for writing about this is that I’ve implemented endpoints for nearly two dozen widely varying wire protocols, and designed at least one wire protocol that has to be considered widely deployed and successful by about anybody’s standards. That is the JSON profile used by many location-aware applications to communicate with GPSD and thus deployed on a dizzying number of smartphones and other embedded devices.

I’m writing about this now because I’m contemplating two wire-protocol redesigns. One is of NTPv4, the packet format used to exchange timestamps among cooperating time-service programs. The other is an unnamed new protocol in IETF draft, deployed in prototype in NTPsec and intended to be used for key exchange among NTP daemons authenticating to each other.

Here’s how not to do it…

Continue reading

Feb 23

Announcing loccount 2.0 – now up to 74 languages

I just released the 2.0 version of loccount.

This is a major release with many new features and upgrades. It’s gone well beyond just being a faster, cleaner, bug-fixed port of David A. Wheeler’s sloccount. The count of supported languages is now up to 74 from sloccount’s 30. But the bigger change is that for 33 of those languages the tool can now deliver a statement count (LLOC = Logical Lines Of Code) as well as a line count (SLOC = Source Lines of Code, ignoring whitespace and comments).

To go with this, the tool can now perform COCOMO II cost and schedule estimation based on LLOC as well as COCOMO I based on SLOC.

The manual page includes the following cautions:

Continue reading

Dec 24

Pessimism about parallelism

Massive concurrency and hardware parallelism are sexy topics in the 21st century. There are a couple of good reasons for this and one rather unfortunate one.

The two good reasons are the eye-catching uses of Graphics Processing Units (GPUs) in games and their unexpected secondary uses in deep-learning AI – both exploit massive hardware parallelism internally. The unfortunate reason is that single-processor execution speeds hit a physics wall in about 2006. Current leakage and thermal runaway issues now sharply limit increases in clock frequency, and the classic way out of that bind – lowering voltage – is now bumping up against serious quantum-noise issues.

Hardware manufacturers competing for attention have elected to do it by putting ever more processing cores in each chip they ship and touting the theoretical total throughput of the device. But there have also been rapidly increasing amounts of effort put into pipelining and speculative execution techniques that use concurrency under the hood in attempts to make the serial single processors that programmers can see crank instructions more rapidly.

The awkward truth is that many of our less glamorous computing job loads just can’t use visible concurrency very well. There are different reasons for this that have differing consequences for the working programmer, and a lot of confusion abroad among those reasons. In this episode I’m going to draw some distinctions that I hope will help all of us think more clearly.

First, we need to be clear about where harnessing hardware parallelism is easy and why that seems to be the case. We look at computing for graphics, neural nets, signal processing, and Bitcoin mining, and we see a pattern: parallelizing algorithms work best on hardware that is (a) specifically designed to execute them, and (b) can’t do anything else!

We also see that the inputs to the most successful parallel algorithms (sorting, string matching, fast-Fourier transform, matrix operations, image reverse quantization, and the like) all look rather alike. They tend to have a metric structure and an implied distinction between “near” and “far” in the data that allows it to be carved into patches such that coupling between elements far from each other is negligible.

In the terms of an earlier post on semantic locality, parallel methods seem to be applicable mainly when the data has good locality. And they run best on hardware which – like the systolic-array processors at the heart of GPUs – is designed to support only “near” communication, between close-by elements.

By contrast, writing software that does effective divide-and-conquer for input with bad locality on a collection of general-purpose (Von Neumann architecture) computers is notoriously difficult.

We can sum this up with a heuristic: Your odds of being able to apply parallel-computing techniques to a problem are inversely proportional to the degree of irreducible semantic nonlocality in your input data.

Another limit on parallel computing is that some important algorithms can’t be parallelized at all – provably so. In the blog post where I first explored this territory I coined the term “SICK algorithm”, with the SICK expanded to “Serial, Intrinsically – Cope, Kiddo!” Important examples include but are not limited to: Dijkstra’s n-least-paths algorithm; cycle detection in directed graphs (with implications for 3-SAT solvers); depth-first search; computing the nth term in a cryptographic hash chain; network-flow optimization.

Bad locality in the input data is implicated here, too, especially in graph- and tree-structure contexts. Cryptographic hash chains can’t be parallelized because their entries have to be computed in strict time order – a strictness which is actually important for validating the chain against tampering.
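
A toy C sketch makes the hash-chain case vivid. FNV-1a stands in for a real cryptographic hash purely to keep the code short, but the structural point is the same: link N cannot even be started until link N-1 is finished, so extra cores buy you nothing.

#include <stdio.h>
#include <stdint.h>

/* FNV-1a over the 8 bytes of the previous link - an illustrative
   stand-in, not a cryptographically serious hash. */
static uint64_t fnv1a64(uint64_t x)
{
    uint64_t h = 14695981039346656037ULL;    /* FNV offset basis */
    for (int i = 0; i < 8; i++) {
        h ^= (x >> (8 * i)) & 0xff;
        h *= 1099511628211ULL;               /* FNV prime */
    }
    return h;
}

int main(void)
{
    uint64_t link = 0;                 /* seed of the chain */
    for (long n = 0; n < 1000000; n++)
        link = fnv1a64(link);          /* each step needs the previous result */
    printf("final link: %016llx\n", (unsigned long long)link);
    return 0;
}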

There’s a blocking rule here: You can’t parallelize if a SICK algorithm is in the way.

We’re not done. There are at least two other classes of blocker that you will frequently hit.

One is not having the right tools. Most languages don’t support anything but mutex-and-mailbox, which has the advantage that the primitives are easy to implement but the disadvantage that it induces horrible complexity explosions and is nigh-impossible to model accurately in your head at scales over about four interacting locks.
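
For anyone who has not been bitten yet, here is the classic failure mode in miniature, written against POSIX threads (compile with -pthread). Neither worker looks wrong in isolation; run together they can stall forever – and this is only two locks:

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker_one(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock_a);
    usleep(1000);                   /* widen the race window */
    pthread_mutex_lock(&lock_b);    /* may block forever */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *worker_two(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock_b);    /* opposite acquisition order */
    usleep(1000);
    pthread_mutex_lock(&lock_a);    /* may block forever */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker_one, NULL);
    pthread_create(&t2, NULL, worker_two, NULL);
    pthread_join(t1, NULL);         /* with the timing above, likely never returns */
    pthread_join(t2, NULL);
    return 0;
}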

If you are lucky you may get some use out of a more tractable primitive set like Go channels (aka Communicating Sequential Processes) or the ownership/send/sync system in Rust. But the truth is, we don’t really know what the “right” language primitives are for parallelism on von-Neumann-architecture computers. And there may not even be one right set of primitives; there might be two, three, or more different sets of primitives appropriate for different problem domains but as incommensurable as one and the square root of two. At the present state of the art in 2018 nobody actually knows.

Last but not least, the limitations of human wetware. Even given a tractable algorithm, a data representation with good locality, and sharp tools, parallel programming seems to be just plain difficult for human beings even when the algorithm being applied is quite simple. Our brains are not all that good at modelling the simpler state spaces of purely serial programs, and much less so at parallel ones.

We know this because there is plenty of real-world evidence that debugging implementations of parallelizing code is worse than merely _difficult_ for humans. Race conditions, deadlocks, livelocks, and insidious data corruption due to subtly unsafe orders of operation plague all such attempts.

Having a grasp on these limits has, I think, been growing steadily more important since the collapse of Dennard scaling. Due to all of these bottlenecks in the supply of code that can use multiple cores effectively, some percentage of the multicore hardware out there must be running software that will never saturate its cores; or, to look at it from the other end, the hardware is overbuilt for its job load. How much money and effort are we wasting this way?

Processor vendors would love you to overestimate the functional gain from snazzy new silicon with ever larger multi-core counts; however else will they extract enough of your money to cover the eye-watering cost of their chip fabs and still make a profit? So there’s a lot of marketing push out there that aims to distract capacity planners from ever wondering when those gains are real.

And, to be fair, some places they are. The kind of servers that live in rack mounts and handle hundreds of thousands of concurrent transactions per second probably have their core count matched to their job load fairly well. Smartphones or embedded systems, too – in both these extreme cases a lot of effort goes into minimizing build costs and power budgets, and that’s going to exert selective pressure against overprovisioning.

But for typical desktop and laptop users? I have dark suspicions. It’s hard to know, because we’ve been collecting real performance gains due to other technology changes like the shift from spinning-rust to solid-state mass storage. Gains like that are easy to mistake for an effect of more CPU throughput unless you’re profiling carefully.

But here’s the shape of my suspicion:

1. For most desktop/laptop users the only seriously parallel computing that ever takes place on their computers is in their graphics chips.

2. More than two processor cores is usually just wasteful hotrodding. Operating systems may be able to parcel out applications between them, but the general run of application software is unable to exploit parallelism and it is rare for most users to run enough different processor-hungry applications simultaneously to saturate their hardware that way.

3. Consequently, most of the processing units now deployed in 4-core-and-up machines are doing nothing most of the time but generating waste heat.

My regulars include a lot of people who are likely to be able to comment intelligently on this suspicion. It will be interesting to see what they have to say.

UPDATE: A commenter on G+ points out that one interesting use case for multicores is compiling code really quickly. Source for a language like C has good locality – it can be compiled in well-separated units (source files) into object files that are later joined by a linker.