Jun 22

Segfaults and Twitter monkeys: a tale of pointlessness

For a few years in the 1990s, when PNG was just getting established as a Web image format, I was a developer on the libpng team.

One reason I got involved is that the compression patent on GIFs was a big deal at the time. I had been the maintainer of GIFLIB since 1989; it was on my watch that Marc Andreessen chose that code for use in the first graphics-capable browser in ’94. But I handed that library off to a hacker in Japan who I thought would be less exposed to the vagaries of U.S. IP law. (Years later, after the century had turned and the LZW patents expired, it came back to me.)

Then, sometime around 1996, I happened to read the PNG standard and thought the design of the format was very elegant. So I started submitting patches to libpng and ended up writing the support for six of the minor chunk types, as well as implementing the high-level interface to the library that’s now in general use.

As part of my work on PNG, I volunteered to clean up some code that Greg Roelofs had been maintaining and package it for release. This was “gif2png” and it was more or less the project’s official GIF converter.

(Not to be confused, though, with the GIFLIB tools that convert to and from various other graphics formats, which I also maintain. Those had a different origin and, like libgif itself, were rather better code.)

Continue reading

Jun 13

Fear of COMITment

I shipped the first release of another retro-language revival today: COMIT. Dating from 1957 (coincidentally the year I was born), this was the first string-processing language, ancestral to SNOBOL and sed and ed and the Unix shell. One of the notational conventions invented in COMIT, the use of $0, $1, etc. as substitution variables, survives in all these languages.

I actually wrote most of the interpreter three years ago, when a copy of the COMIT book fell into my hands (I think A&D regular Daniel Franke was responsible for that). It wasn’t difficult – 400-odd lines of Python, barely enough to make me break a sweat. That is, until I hit the parts where the book’s description of the language is vague and inadequate.

It was 1957 and hardly anyone knew anything about how to describe computer languages systematically, so I can’t fault Dr. Victor Yngve too harshly. Where I came a particular cropper was trying to understand the intended relationship between indices into the workspace buffer and “right-half relative constituent numbers”. That defeated me, so I went off and did other things.

Over the last couple days, as part of my effort to promote my Patreon feed to a level where my medical expenses are less seriously threatening, I’ve been rebuilding all my project pages to include a Patreon button and an up-to-date list of Bronze and Institutional patrons. While doing this I tripped over the unshipped COMIT code and pondered what to do with it.

What I decided to do was ship it as-is with a 0.1 version number. The alternative would have been to choose from several different possible interpretations of the language book and quite possibly get it wrong.

I think a good rule in this kind of situation is “First, do no harm”. I’d rather ship an incomplete implementation that can be verified by eyeball, and that’s what I’ve done – I was able to extract a pretty good set of regression tests for most of the features from the language book.

If someone else cares enough, some really obsessive forensics on the documentation and its code examples might yield enough certainty about the author’s intentions to support a full reconstruction. Alas, we can’t ask him for help, as he died in 2012.

A lot of the value in this revival is putting the language documentation and a code chrestomathy in a form that’s easy to find and read, anyway. Artifacts like COMIT are interesting to study, but actually using one for anything would be perverse.

Jun 01

The dangerous folly of “Software as a Service”

Comes the word that Salesforce.com has announced a ban on its customers selling “military-style rifles”.

The reason this ban has teeth is that the company provides “software as a service”; that is, the software you run is a client for servers that the provider owns and operates. If the provider decides it doesn’t want your business, you probably have no real recourse. OK, you could sue for tortious interference in business relationships, but that’s chancy and anyway you didn’t want to be in a lawsuit, you wanted to conduct your business.

This is why “software as a service” is dangerous folly, even worse than old-fashioned proprietary software at saddling you with a strategic business risk. You don’t own the software, the software owns you.

It’s 2019 and I feel like I shouldn’t have to restate the obvious, but if you want to keep control of your business the software you rely on needs to be open-source. All of it. All of it. And you can’t afford to have it tethered to a service provider even if the software itself is nominally open source.

Otherwise, how do you know some political fanatic isn’t going to decide your product is unclean and chop you off at the knees? It’s rifles today, it’ll be anything that can be tagged “hateful” tomorrow – and you won’t be at the table when the victim-studies majors are defining “hate”. Even if you think you’re their ally, you can’t count on escaping the next turn of the purity spiral.

And that’s disregarding all the more mundane risks that come from the fact that your vendor’s business objectives aren’t the same as yours. This is ground I covered twenty years ago; do I really have to put on the Mr. Famous Guy cape and do the rubber-chicken circuit again? Sigh…

Business leaders should learn to fear every piece of proprietary software and “service” as the dangerous addictions they are. If Salesforce.com’s arrogant diktat teaches that lesson, it will have been a service indeed.

Apr 17

Contributor agreements considered harmful

Yesterday I got email from a project asking me to wear my tribal-elder hat, looking for advice on how to re-invent its governance structure. I’m not going to name the project because they haven’t given me permission to air their problems in public, but I need to write about something that came up during the discussion, when my querent said they were thinking about requiring a contributor release form from people submitting code, “the way Apache does”.

“Don’t do it!” I said. Please don’t go the release-form route. It’s bad for the open-source community’s future every time someone does that. In the rest of this post I’ll explain why.

Continue reading

Mar 19

Am I really shipper’s only deployment case?

I released shipper 1.14 just now. It takes advantage of the conventional asciidoc extension – .adoc – that GitHub and GitLab have established to do a useful little extra step when it detects that your project README and NEWS files are asciidoc.

And I wondered, as I usually do when I cut a shipper release: am I really the only user this code has? My other small projects (things like SRC and irkerd) tend to attract user communities that stick with them, but I’ve never seen any sign of that with shipper – no bug reports or RFEs coming in over the transom.

This time, it occurred to me that if I am shipper’s only user, then maybe the typical work practices of the open-source community are rather different than I thought they were. That’s a question worth raising in public, so I’m posting it here to attract some comment.

Continue reading

Mar 08

Declarative is greater than imperative

Sometimes I’m a helpless victim of my urges.

A while back – very late in 2016 – I started work on a program called loccount. This project originally had two purposes.

One is that I wanted a better, faster replacement for David Wheeler’s sloccount tool, which I was using to collect statistics on the amount of virtuous code shrinkage in NTPsec. David is good people and sloccount is a good idea, but internally it’s a slow and messy pile of kludges – so much so that it seems to have exceeded his capacity to maintain; at time of writing in 2019 it hadn’t been updated since 2004. I’d been thinking about writing a better replacement, in Python, for a while.

Then I realized this problem was about perfectly sized to be my learn-Go project. Small enough to be tractable, large enough to not be entirely trivial. And there was the interesting prospect of using channels/goroutines to parallelize the data collection. I got it working well enough for NTP statistics pretty quickly, though I didn’t issue a first public release until a little over a month later (mainly because I wanted to have a good test corpus in place to demonstrate correctness). And the parallelized code was both satisfyingly fast and really pretty. I was quite pleased.
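
For readers who haven’t used the pattern, here is a minimal sketch (illustrative only, not loccount’s actual code) of the kind of channel/goroutine fan-out I mean: a fixed pool of workers pulls file paths off one channel, counts lines, and reports per-file totals back on another.

    // Illustrative sketch of channel/goroutine fan-out for per-file
    // line counting; not loccount's actual implementation.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "sync"
    )

    func countLines(path string) int {
        f, err := os.Open(path)
        if err != nil {
            return 0
        }
        defer f.Close()
        n := 0
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            n++
        }
        return n
    }

    func main() {
        paths := make(chan string)
        results := make(chan int)
        var wg sync.WaitGroup

        // A fixed pool of workers pulls paths off a channel.
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for p := range paths {
                    results <- countLines(p)
                }
            }()
        }

        // Feed file names from the command line, then shut down cleanly.
        go func() {
            for _, p := range os.Args[1:] {
                paths <- p
            }
            close(paths)
            wg.Wait()
            close(results)
        }()

        total := 0
        for n := range results {
            total += n
        }
        fmt.Println(total)
    }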

The only problem was that the implementation, having been a deliberately straight translation of sloccount’s algorithms in order to preserve comparability of the reports, was a bit of a grubby pile internally. Less so than sloccount’s, because it was all in one language, but still. It’s difficult for me to leave that kind of thing alone; the urge to clean it up becomes like a maddening itch.

The rest of this post is about what happened when I succumbed. I got two lessons from this experience: one reinforcement of a thing I already knew, and one who-would-have-thought-it-could-go-this-far surprise. I also learned some interesting things about the landscape of programming languages.

Continue reading

Mar 05

How not to design a wire protocol

A wire protocol is a way to pass data structures or aggregates over a serial channel between different computing environments. At the very lowest level of networking there are bit-level wire protocols to pass around data structures called “bytes”; further up the stack streams of bytes are used to serialize more complex things, starting with numbers and working up to aggregates more conventionally thought of as data structures. The one thing you generally cannot successfully pass over a wire is a memory address, so no pointers.
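
To make that concrete, here is a minimal Go sketch of the textual end of the design space: a hypothetical Report aggregate (the type and field names are invented for illustration, not any real protocol’s message) flattened into a line of JSON and rebuilt on the other end. Only values cross the wire; any pointer structure has to be dissolved into them first.

    // Illustrative sketch of textual wire serialization. The Report type
    // and its fields are hypothetical, not any real protocol's message.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Report struct {
        Class string  `json:"class"` // message-type discriminator
        Lat   float64 `json:"lat"`
        Lon   float64 `json:"lon"`
    }

    func main() {
        // Sender side: flatten the aggregate into a byte sequence.
        out, err := json.Marshal(Report{Class: "POSITION", Lat: 40.02, Lon: -75.26})
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out)) // this is what actually travels over the wire

        // Receiver side: rebuild an equivalent aggregate from the bytes.
        var in Report
        if err := json.Unmarshal(out, &in); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", in)
    }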

Designing wire protocols is, like other kinds of engineering, an art that responds to cost gradients. It’s often gotten badly wrong, partly because of clumsy technique but mostly because people have poor intuitions about those cost gradients and optimize for the wrong things. In this post I’m going to write about those cost gradients and how they push towards different regions of the protocol design space.

My authority for writing about this is that I’ve implemented endpoints for nearly two dozen widely varying wire protocols, and designed at least one wire protocol that has to be considered widely deployed and successful by about anybody’s standards. That is the JSON profile used by many location-aware applications to communicate with GPSD and thus deployed on a dizzying number of smartphones and other embedded devices.

I’m writing about this now because I’m contemplating two wire-protocol redesigns. One is of NTPv4, the packet format used to exchange timestamps among cooperating time-service programs. The other is an unnamed new protocol in IETF draft, deployed in prototype in NTPsec and intended to be used for key exchange among NTP daemons authenticating to each other.

Here’s how not to do it…

Continue reading

Feb 23

Announcing loccount 2.0 – now up to 74 languages

I just released the 2.0 version of loccount.

This is a major release with many new features and upgrades. It’s gone well beyond just being a faster, cleaner, bug-fixed port of David A. Wheeler’s sloccount. The count of supported languages is now up to 74 from sloccount’s 30. But the bigger change is that for 33 of those languages the tool can now deliver a statement count (LLOC = Logical Lines Of Code) as well as a line count (SLOC = Source Lines of Code, ignoring whitespace and comments).

To go with this, the tool can now perform COCOMO II cost and schedule estimation based on LLOC as well as COCOMO I based on SLOC.
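
For anyone who hasn’t met COCOMO, the basic COCOMO 81 (COCOMO I) organic-mode formulas give the flavor of what such an estimator computes. The constants below are the textbook ones, not necessarily the exact calibration loccount applies:

    // Basic COCOMO 81, organic mode: textbook constants, shown only to
    // illustrate the shape of the estimate, not loccount's exact calibration.
    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        const kloc = 10.0                        // thousands of source lines
        effort := 2.4 * math.Pow(kloc, 1.05)     // person-months
        schedule := 2.5 * math.Pow(effort, 0.38) // elapsed months
        fmt.Printf("effort %.1f person-months, schedule %.1f months\n", effort, schedule)
    }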

The manual page includes the following cautions:

Continue reading

Dec 24

Pessimism about parallelism

Massive concurrency and hardware parallelism are sexy topics in the 21st century. There are a couple of good reasons for this and one rather unfortunate one.

Two good reasons are the combination of eye-catching uses of Graphics Processing Units (GPUs) in games and their unexpected secondary uses in deep-learning AI – these exploit massive hardware parallelism internally. The unfortunate reason is that single-processor execution speeds hit a physics wall in about 2006. Current leakage and thermal runaway issues now sharply limit increases in clock frequency, and the classic way out of that bind – lowering voltage – is now bumping up against serious quantum-noise issues.

Hardware manufacturers competing for attention have elected to do it by putting ever more processing cores in each chip they ship and touting the theoretical total throughput of the device. But there have also been rapidly increasing amounts of effort put into pipelining and speculative execution techniques that use concurrency under the hood in attempts to make the serial single processors that programmers can see crank instructions more rapidly.

The awkward truth is that many of our less glamorous computing job loads just can’t use visible concurrency very well. There are different reasons for this that have differing consequences for the working programmer, and a lot of confusion abroad about those reasons. In this episode I’m going to draw some distinctions that I hope will help all of us think more clearly.

First, we need to be clear about where harnessing hardware parallelism is easy and why that seems to be the case. We look at computing for graphics, neural nets, signal processing, and Bitcoin mining, and we see a pattern: parallelizing algorithms work best on hardware that is (a) specifically designed to execute them, and (b) can’t do anything else!

We also see that the inputs to the most successful parallel algorithms (sorting, string matching, fast-Fourier transform, matrix operations, image reverse quantization, and the like) all look rather alike. They tend to have a metric structure and an implied distinction between “near” and “far” in the data that allows it to be carved into patches such that coupling between elements far from each other is negligible.
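
As a toy illustration of that carving-into-patches pattern, here is a Go sketch (invented for this post) that splits an array into contiguous chunks, reduces each chunk in its own goroutine, and only combines the small per-chunk results at the end. No element ever needs to talk to a “far” element.

    // Toy divide-and-conquer on data with good locality: each goroutine
    // touches only its own contiguous chunk of the input.
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        data := make([]int, 1000000)
        for i := range data {
            data[i] = i % 7
        }

        const chunks = 8
        partial := make([]int, chunks)
        size := (len(data) + chunks - 1) / chunks
        var wg sync.WaitGroup

        for c := 0; c < chunks; c++ {
            lo, hi := c*size, (c+1)*size
            if hi > len(data) {
                hi = len(data)
            }
            wg.Add(1)
            go func(c, lo, hi int) {
                defer wg.Done()
                sum := 0
                for _, v := range data[lo:hi] {
                    sum += v
                }
                partial[c] = sum // each goroutine writes only its own slot
            }(c, lo, hi)
        }
        wg.Wait()

        total := 0
        for _, s := range partial {
            total += s
        }
        fmt.Println(total)
    }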

In the terms of an earlier post on semantic locality, parallel methods seem to be applicable mainly when the data has good locality. And they run best on hardware which – like the systolic-array processors at the heart of GPUs – is designed to support only “near” communication, between close-by elements.

By contrast, writing software that does effective divide-and-conquer for input with bad locality on a collection of general-purpose (Von Neumann architecture) computers is notoriously difficult.

We can sum this up with a heuristic: Your odds of being able to apply parallel-computing techniques to a problem are inversely proportional to the degree of irreducible semantic nonlocality in your input data.

Another limit on parallel computing is that some important algorithms can’t be parallelized at all – provably so. In the blog post where I first explored this territory I coined the term “SICK algorithm”, with the SICK expanded to “Serial, Intrinsically – Cope, Kiddo!” Important examples include but are not limited to: Dijkstra’s n-least-paths algorithm; cycle detection in directed graphs (with implications for 3-SAT solvers); depth-first search; computing the nth term in a cryptographic hash chain; network-flow optimization.

Bad locality in the input data is implicated here, too, especially in graph- and tree-structure contexts. Cryptographic hash chains can’t be parallelized because their entries have to be computed in strict time order – a strictness which is actually important for validating the chain against tampering.
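
The hash-chain case is easy to see in code. Each link is the hash of the previous one, so there is simply no way to start computing link n before link n-1 exists. A minimal sketch:

    // The nth link of a hash chain: an irreducibly serial (SICK) computation,
    // because each step's input is the previous step's output.
    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func chain(seed []byte, n int) []byte {
        h := sha256.Sum256(seed)
        for i := 1; i < n; i++ {
            h = sha256.Sum256(h[:]) // step i cannot start until step i-1 is done
        }
        return h[:]
    }

    func main() {
        fmt.Printf("%x\n", chain([]byte("seed"), 1000))
    }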

There’s a blocking rule here: You can’t parallelize if a SICK algorithm is in the way.

We’re not done. There are at least two other classes of blocker that you will frequently hit.

One is not having the right tools. Most languages don’t support anything but mutex-and-mailbox, which has the advantage that the primitives are easy to implement but the disadvantage that it induces horrible complexity explosions and is nigh-impossible to model accurately in your head at scales over about four interacting locks.

If you are lucky you may get some use out of a more tractable primitive set like Go channels (aka Communicating Sequential Processes) or the ownership/send/sync system in Rust. But the truth is, we don’t really know what the “right” language primitives are for parallelism on von-Neumann-architecture computers. And there may not even be one right set of primitives; there might be two, three, or more different sets of primitives appropriate for different problem domains but as incommensurable as one and the square root of two. At the present state of the art in 2018 nobody actually knows.
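
A tiny Go sketch of what I mean by “more tractable”: instead of several threads sharing a counter behind a mutex, one goroutine owns the state outright and everyone else talks to it over a channel, so there is no lock ordering to get wrong. (Invented for this post, not code from any of my projects.)

    // "Share memory by communicating": one goroutine owns the counter;
    // the others send it increments over a channel instead of taking a lock.
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        incr := make(chan int)
        done := make(chan int)

        // The owner: the only code that ever touches total.
        go func() {
            total := 0
            for delta := range incr {
                total += delta
            }
            done <- total
        }()

        var wg sync.WaitGroup
        for w := 0; w < 4; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for i := 0; i < 1000; i++ {
                    incr <- 1
                }
            }()
        }
        wg.Wait()
        close(incr)
        fmt.Println(<-done) // prints 4000
    }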

Last but not least, the limitations of human wetware. Even given a tractable algorithm, a data representation with good locality, and sharp tools, parallel programming seems to be just plain difficult for human beings even when the algorithm being applied is quite simple. Our brains are not all that good at modelling the simpler state spaces of purely serial programs, and much less so at parallel ones.

We know this because there is plenty of real-world evidence that debugging implementations of parallelizing code is worse than merely _difficult_ for humans. Race conditions, deadlocks, livelocks, and insidious data corruption due to subtly unsafe orders of operation plague all such attempts.

Having a grasp on these limits has, I think, been growing steadily more important since the collapse of Dennard scaling. Due to all of these bottlenecks in the supply of code that can use multiple cores effectively, some percentage of the multicore hardware out there must be running software that will never saturate its cores; or, to look at it from the other end, the hardware is overbuilt for its job load. How much money and effort are we wasting this way?

Processor vendors would love you to overestimate the functional gain from snazzy new silicon with ever larger multi-core counts; however else will they extract enough of your money to cover the eye-watering cost of their chip fabs and still make a profit? So there’s a lot of marketing push out there that aims to distract capacity planners from ever wondering when those gains are real.

And, to be fair, some places they are. The kind of servers that live in rack mounts and handle hundreds of thousands of concurrent transactions per second probably have their core count matched to their job load fairly well. Smartphones or embedded systems, too – in both these extreme cases a lot of effort goes into minimizing build costs and power budgets, and that’s going to exert selective pressure against overprovisioning.

But for typical desktop and laptop users? I have dark suspicions. It’s hard to know, because we’ve been collecting real performance gains due to other technology changes like the shift from spinning-rust to solid-state mass storage. Gains like that are easy to mistake for an effect of more CPU throughput unless you’re profiling carefully.

But here’s the shape of my suspicion:

1. For most desktop/laptop users the only seriously parallel computing that ever takes place on their computers is in their graphics chips.

2. More than two processor cores is usually just wasteful hotrodding. Operating systems may be able to parcel out applications between them, but the general run of application software is unable to exploit parallelism and it is rare for most users to run enough different processor-hungry applications simultaneously to saturate their hardware that way.

3. Consequently, most of the processing units now deployed in 4-core-and-up machines are doing nothing most of the time but generating waste heat.

My regulars include a lot of people who are likely to be able to comment intelligently on this suspicion. It will be interesting to see what they have to say.

UPDATE: A commenter on G+ points out that one interesting use case for multicores is compiling code really quickly. Source for a language like C has good locality – it can be compiled in well-separated units (source files) into object files that are later joined by a linker.

Nov 27

SRC, four years later

Four years ago, I wrote an entire version-control system in a 14-hour burst of inspiration. It’s a small, lightweight tool designed for solo single-file projects that allows several histories to coexist in a single directory – good for /etc files, HOWTOs, or that script collection in your ~/bin directory.

I wasn’t certain, at the time, that the concept would prove out as a production tool for anyone but me. But it did. Here are some statistics: Over 4 years, 21 point releases, 644 commits, 11 committers. Six issues filed by five different users, 20 merge requests. I know of about half a dozen users who’ve raised their hands on IRC or in blog comments. Code has about quintupled in size from the first alpha release (0.1, 513 lines) to 2757 lines today.

That is the statistical profile of a modest success – in fact the developer roster is larger than I realized before I went back through the logs. The main thing looking at the history reveals is that there’s a user community out there that has been sending a steady trickle of minor bug reports and enhancement requests over the whole life of the project. This is a lot more encouraging than dead air would be.

Of course I don’t know how many total users SRC has. But we can base a guess on fanout patterns observed when other projects (usually much larger ones) have done polls to try to measure userbase size. A sound extrapolation would be somewhere between one and two orders of magnitude more than have made themselves visible – so, somewhere between about 200 and 2000.

(There seems to be something like an exponential scaling law at work here. For random open source project X old enough to have passed the sudden-infant-death filter, if there’s an identifiable core dev group in the single-digit range you can generally expect the casual contributors to be about 10x more and the userbase to be at least 100x more.)

SRC has held up pretty well as a design exercise, too. I’ve had complaints about minor bugs in the UI, but nobody bitching about the UI itself. Credit to the Subversion developers I swiped most of the UI design from; their data model may be obsolete, but nobody in VCS-land has done better at UI and I was at least smart enough not to try.

2.7KLOC is nicely compact for an entire version-control system supporting both RCS and SCCS back ends. I don’t expect it to get much larger; there are only two minor items left on the to-do list, neither of which should add significant lines of code.

Today I’m shipping 1.21. With gratitude to everyone who helped improve it.

Oct 22

How to write narrative documentation

The following is a very lightly edited version of email I wrote to my apprentice Ian Bruene after he wrote documentation for his new Kommandant project that was, alas, as awful as I generally expect from programmers. I’m not training Ian for mere coding competence; he’s too talented for that and anyway I have higher standards. This is my way of insisting that he do documentation well – and it was he who suggested it would make a good blog post.

Continue reading

Oct 08

Reposurgeon’s Excellent Journey and the Waning of Python

Time to make it public and official. The entire reposurgeon suite (not just repocutter and repomapper, which have already been ported) is changing implementation languages from Python to Go. Reposurgeon itself is about 50% translated, with pretty good unit-test coverage. Three of my collaborators on the project (Daniel Brooks, Eric Sunshine, and Edward Cree) have stepped up to help with code and reviews.

I’m posting about this because the pressures driving this move are by no means unique to the reposurgeon suite. Python, my favorite working language for twenty years, can no longer cut it at the scale I now need to operate – it can’t handle large enough working sets, and it’s crippled in a world of multi-CPU computers. I’m certain I’m not alone in seeing these problems; if I were, Google, which used to invest heavily in Python (they had Guido on staff there for a while) wouldn’t have funded Go.

Some of Python’s issues can be fixed. Some may be unfixable. I love Guido and the gang and I am vastly grateful for all the use and pleasure I have gotten out of Python, but, guys, this is a wake-up call. I don’t think you have a lot of time to get it together before Python gets left behind.

I’ll first describe the specific context of this port, then I’ll delve into the larger issues about Python, how it seems to be falling behind, and what can be done to remedy the situation.

Continue reading

Oct 02

Rule-swarm attacks can outdo deep reasoning

It is not news to readers of this blog that I like to find common tactics and traps in programming that don’t have names and name them. I don’t only do this because it’s fun. When you have named a thing you give your brain permission to reason about it as a conceptual unit. Bad jargon obfuscates, map hiding territory; good jargon reveals, aiding reflection on and improvement of your practice.

In my last post I coined “shtoopid problem”. It went viral; every programmer has hit this, and it’s useful to have the term because you can attach to it recognition rules and tactics for escaping such traps. (And not only in programming; consider kafkatrapping).

Today’s invention is the term “rule-swarm attack”. It’s derived from the military term “swarm attack” and opposed to “deep reasoning”, “structural analysis” and “generative rules”. I’ll explain it and provide some case studies.

Continue reading

Sep 27

Solving shtoopid problems

There is a kind of programming trap I occasionally fall into that is so damn irritating that it needs a name.

The task is easy to specify and apparently easy to write tests for. The code can be instrumented so that you can see exactly what is going on during every run. You think you have a complete grasp on the theory. It’s the kind of thing you think you’re normally good at, and ought to be able to polish off in 20 LOC and 45 minutes.

And yet, success eludes you for an insanely long time. Edge cases spring up out of nowhere to mug you. Every fix you try drags you further off into the weeds. You stare at dumps from the instrumentation until you’re dizzy and numb, and no enlightenment occurs. Even as you are bashing your head against a wall of incomprehension, consciousness grows that when you find the solution, it will be damningly simple and you will feel utterly moronic, like you should have gotten there days ago.

Welcome to programmer hell. This is your shtoopid problem.

Continue reading

Aug 22

Unix != open source

Yesterday a well-meaning hacker sent me a newly-recovered koan of Master Foo in which an angry antagonist berated Master Foo for promoting an ethic of open-source software at the expense of programmers’ livelihoods.

Alas, I knew at once that he had been misled by a forgery, or perhaps some dreadful chain of copying errors, at whatever venerable monastic library had been the site of his research. Not because the economics was wrong – Master Foo persuades the antagonist that his assumption is in error – but because the koan conflates two things that were not the same. Actually, at least three things that are not the same.

Eighteen years into the third millennium, long after the formative events of Master Foo’s time, many people fail to understand how complex and contingent the relationship between the Unix tradition and the open-source ethos actually was in the old days. Too readily we project today’s conditions backwards in a way that impedes understanding of history.

Here’s how it was…

Continue reading

May 30

Defect attractors

There’s a phrase I’ve used on this blog more than once that I had reason to Google just now and found that (to my surprise) the top hits are mostly my writings. It is “defect attractor”.

In this post I’m going to explain why I think this is an important concept that needs to be in the toolkit of every software engineer, and talk about the practice it implies.

Continue reading

May 09

Embrace the SICK

There’s a very interesting article just out, C Is Not a Low-level Language, in which David Chisnall punctures the comforting illusion that C is really a “close-to-the-metal” language and relates this illusion to the high costs of Spectre and other processor-level bugs.

Those of us who think seriously about language design have long been aware that C’s flat-address-space model is increasingly at odds with the real world of memory-caching hierarchies. Chisnall’s main contribution is to notice that speculative execution, the feature at the bottom of the Spectre and Meltdown bugs, is essentially a hack implemented to allow C programmers to maintain the illusion that they’re running on a really fast serial machine.  But he has other interesting points as well.

I recommend reading Chisnall’s article before you go further with this post.

It’s no news to my regulars that I’ve been putting increasing investment into the Go language and now believe it a plausible candidate to replace C and C++ over most of C/C++’s range – that is, outside  of kernels and hard realtime.  So the question that immediately occurred to me upon reading the article was: Is Go necessarily productive of the same kind of kludge that Chisnall is calling out?

Because if it is – but something else isn’t – that could be a reason not to overcommit to Go.  The twin pressures of demand for fewer security defects and the increasing complexity costs of speculative execution are bound to tell heavily against Go if it does demand massive speculative execution and there’s any realistic alternative that does not. Do we need something much more divergent from C (Erlang? Ocaml? Even perhaps Haskell?) for systems programming to follow where the hardware is going?

So let’s walk through Chisnall’s discussion points, bounce Go off each one, and see what we can see.  What we’ll find implies, I think, some more general conclusions about what will and won’t work in matching language design to real-world workloads and processor architectures.

Continue reading

Apr 23

The UPSide state diagram

I think this diagram is now stable enough to put on the record.

UPSide state diagram

Both this diagram and the Go code for the policy logic are generated from this pseudocode:


    # render.state(name, label) declares a state node; render.action(from, to, EVENT)
    # declares a transition between two states, triggered by the named event.
    render.state("DaemonUp", "Daemon running")
    render.action("DaemonUp", "ChargeWait", CHARGING)
    render.state("ChargeWait", "Charge wait")
    render.action("ChargeWait", "MainsUp", CHARGED)
    render.action("ChargeWait", "OnBattery", MAINSDROP)
    render.state("MainsUp", "On mains power")
    render.action("DaemonUp", "OnBattery", MAINSOFF)
    render.state("OnBattery", "On battery power")
    render.action("MainsUp", "OnBattery", MAINSDROP)
    render.action("OnBattery", "Overtime", DWELLWARNING)
    render.state("Overtime", "User warned of shutdown")
    render.action("Overtime", "PreShutdown", DWELLTIMEOUT)
    render.state("PreShutdown", "Awaiting power drop")
    render.action("PreShutdown", "ChargeWait", RESTORED)
    render.state("UPSCrash", "UPS goes dark")
    render.state("HostDown", "Host has shut down")
    render.action("PreShutdown", "HostDown", HOSTDOWN)
    render.action("PreShutdown", "UPSCrash", BATTERYDRAIN, unreachable=True)
    render.action("OnBattery", "ChargeWait", RESTORED)
    render.action("Overtime", "ChargeWait", RESTORED)
    render.action("HostDown", "MainsUp", RESTORED_LATE)
    render.action("HostDown", "UPSCrash", BATTERYDRAIN, unreachable=True)

To see the full context of this, clone git@gitlab.com:esr/upside.git and explore the docs/ directory.