System engineering for dummies

I’ve been getting a lot of suggestions about the brand new UPSide project recently. One of them nudged me into bringing a piece of implicit knowledge to the surface of my mind. Having made it conscious, I can now share it.

I’ve said before that, on the unusual occasions I get to do it, I greatly enjoy whole-systems engineering – problems where hardware and software design inform each other and the whole is situated in an economic and human-factors context that really matters.

I don’t kid myself that I’m among the best at this, not in the way that I know I’m (say) an A-list systems programmer or exceptionally good at a couple other specific things like DSLs. But one of the advantages of having been around the track a lot of times is that you see a lot of failures, and a lot of successes, and after a while your brain starts to extract patterns. You begin to know, without actually knowing that you know until a challenge elicits that knowledge.

Here is a thing I know: A lot of whole-systems design has a serious drunk-under-the-streetlamp problem in its cost and complexity estimations. Smart system engineers counter-bias against this, and I’m going to tell you at least one important way to do that.

You know the joke. A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them near his car three blocks away. The policeman asks why he is searching here, and the drunk replies, “This is where the light is.”

When we’re trying to estimate costs and time-to-completion for a whole system, we have a tendency to over-focus on the costs we can easily list and pin down, as opposed to the ones that are more difficult to estimate.

In general, hardware costs – the BOM (Bill of Materials) – are easy to estimate. If your estimate is off, time is on your side; parts will generally be cheaper six months from now than they are today. Other things that are much harder to estimate are software development costs and the time value of rapid completion. Delay is not your ally there; software development does not necessarily get cheaper inside your planning horizon, and being later to complete is a bad thing.

The streetlight effect means, therefore, that when doing cost and development-complexity analysis for a whole system, and trying to optimize out costs, we’re going to have a strong tendency to chisel away at BOM while neglecting attempts to lower the software-dev costs. We’re likely to end up writing a procurement strategy that trades small gains in the former for large losses in the latter, simply because we’re not allocating our attention as we should.

What makes this worse is that the zero-sum conflict is not just for the attention of the planner’s brain. The easy- and hard-to-estimate costs can affect each other. Going cheap on the hardware often increases the software-development friction and lengthens the product timeline.

In the specific case that nudged me into consciousness, when I had to choose a main controller for UPSide I reached for an SBC running Unix on board rather than an Arduino-class microcontroller that requires custom firmware all the way down. Various EE types complained that my choice was overkill. I knew in my gut that they were wrong, but I had to think about it to realize why.

The smart whole-systems engineer counter-biases against the streetlight effect. One of the ways is to plan on the assumption that software development costs you have no clear idea how to estimate are likely to blow up on you horribly, and that if there are hedges you can buy against that by taking a hit somewhere else (in the BOM, or even your raw revenues) it’s probably smart to go for at least some of them.

Twenty years ago I was the first to observe that making the software of your whole system open-source is an effective way to spread your development costs and mitigate the effects of your own experts moving on to other things. That’s one kind of hedge against large risks that are difficult to estimate – you’re trading away the expected benefits of collecting rent on the software’s secrecy, but (with rare exceptions) these were doubtful to begin with. Way easy to overestimate.

Using a Unix engine instead of a no-OS microcontroller or PIC in your embedded deployment is another long bet. You’ll pay for it where you can see, but the benefits in reduced costs and risks are future indefinite.

A smart systems engineer knows that he should counterbias against the streetlight effect by making some of those long bets anyway. Sometimes this will succeed, sometimes it will fail. The only thing you know for sure is that the “safe” strategy of never long-betting at all is suboptimal, exactly because the streetlight effect messes with your judgment.

49 comments

  1. This is a great systems-engineering call. I think that the BOM cuts and skimps are a large part of the core problem of why UPSes royally suck.

    Fighting against that really does seem the right way to go.

  2. You see something quite similar in over-the-wire protocols. The cost per byte shipped down the wire is quantifiable, while the cost of writing and maintaining software that reads and writes the protocol is less so.

    You still see a lot of people writing code that does nasty stuff like using packed bitfields in structs that are then shipped over the network, when using JSON or XML would a) not raise the cost of bandwidth to any significant degree and b) make both debugging *and* persuading other people to speak your protocol orders of magnitude easier (not to mention writing code that speaks the protocol in another language).
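
    To make the contrast concrete, here is a minimal Python sketch of the same small status record shipped both ways (the field names and layout are invented for illustration):

        import json
        import struct

        # A hypothetical three-field status record: device id, state flag, voltage.
        record = {"device_id": 42, "on_battery": True, "millivolts": 11870}

        # Packed binary: compact (7 bytes), but opaque on the wire and brittle
        # across compilers, endianness conventions, and protocol revisions.
        packed = struct.pack("<HBI", record["device_id"],
                             record["on_battery"], record["millivolts"])

        # Text: bigger, but self-describing, debuggable with anything that can
        # print a string, and trivially extensible by adding keys.
        text = json.dumps(record).encode("utf-8")

        print(len(packed), len(text))   # something like 7 versus ~60 bytes

    The size difference is real but, as argued above, it rarely matters next to the debugging and interoperability costs of the packed form.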

    Yes, use cases still exist where husbanding every bit sent over the wire is important, but those cases aren’t nearly as common as people seem to think they are. HTTP is essentially text, and I seem to have heard somewhere that it’s been reasonably successful nonetheless. :-)

    Likewise RFC-822 email.

    1. >You see something quite similar in over-the-wire protocols.

      Yes. I’ve called out that particular mistake before.

        1. >This lesson brought to you by the letters G, P and S?

          Oh no. Being an old-school Unix guy I grokked it long before gpsd.

    2. >HTTP is essentially text, and I seem to have heard somewhere that it’s been reasonably successful nonetheless. :-)

      You do know that HTTP/2 is very much a thing, and very much in use especially by high traffic web sites, don’t you? HTTP/2 is explicitly a binary protocol — because at large enough scales the overhead of parsing and unparsing text protocols becomes significant.

      When it comes to over-the-wire protocols, a good rule of thumb is: if you cannot build an ironclad case not to use a binary protocol, use a binary protocol. It’s 2018, and we have open-source tools to make this easy, the best of which is probably Cap’n Proto. The format of Cap’n Proto is designed to be read and written at zero parsing/unparsing cost on any CPU likely to be used in 2018. And it comes with tools that let you dump the messages in human-readable text format.

      1. Cap’n Proto is nice, but it has a big pitfall from a systems-engineering perspective, compared to text-based formats – it doesn’t allow for _independent_ extensions of some existing wire format. If a format is entirely defined by a central standard, new versions of it can simply tack on extra fields to the end of a record – but this makes _non-standard_ extensions impossible. There is an easy way to fix this – provide for extension chunks explicitly and require them to be identified by some tag, similar to what e.g. the PNG spec does (taken originally from the IFF format) – but the Cap’n Proto folks don’t bother to warn users that they should be doing this if they want to avoid future problems with their nifty binary protocol/format. And that’s only one of many issues – for another example, text formats are simply more _robust_ to mistakes in implementation, which are sadly very common in the typical programming project. That ‘expensive’ parsing step is also a verification step which ensures you aren’t reading totally-garbage data as something that’s valid in your format. I might agree that HTTP/2 was a good idea, but only because HTTP itself is a _very_ special case.

        1. >compared to text-based formats – it doesn’t allow for _independent_ extensions of some existing wire format.

          Special case of “binary wire protocols are horribly brittle”.

        2. >Cap’n Proto is nice, but it has a big pitfall from a systems-engineering perspective, compared to text-based formats – it doesn’t allow for _independent_ extensions of some existing wire format.

          YAGNI. Or rather, again — build an ironclad case why you do need it before you demand it because in so doing you will be incurring complexity, confusion, and brokenness later on down the line. The harsh lesson of HTML, OpenGL, and X11 is that allowing third parties to extend protocols willy-nilly breeds its own nest of bugs and failure cases which you may not see until well into your development lifecycle. The first problem is an explosion of code paths to check for the presence of various extensions. For example, what if company A extends the protocol one way, company B extends it in a completely different way, and they both do roughly the same thing? Your choices are either to settle for the unextended protocol, which may not be adequate for your needs, or to check for and make use of each extension. This bit goes a long way toward explaining why virtually nobody outside of id Software was developing AAA game releases based on OpenGL renderers. In order to get OpenGL’s capabilities up to the level of the latest Direct3D, OpenGL had to be extended — and NVIDIA and ATI each had different ideas of how to extend it.

          Then there’s the problem of namespace collisions — what happens when multiple vendors provide extensions with the same name, or invoked the same way, but they take different parameters? XML namespaces, for instance, did a lot to address this issue — but then again, complexity is one of the most-hated aspects of XML and one of the reasons why it’s starting to lose to JSON for new development.

          There are many cases in which Jon Postel got it wrong — you do not want to be liberal in what you accept but, for the sanity of everyone everywhere, you want to force your clients to conform to a well-specified, auditable spec. Cap’n Proto punts on the issue of whether to allow extensibility or not — much like plain text or JSON. It’s certainly possible within the framework, but explicitly allowing it brings in a whole host of problems that are out of scope for Cap’n Proto.

          1. As someone who at a previous job actually dealt with this kind of thing, this is a key insight in both directions.

            Sometimes marshaling overhead *is* important. When you have a budget to respond to a request in fractions of a millisecond, the additional overhead actually matters because every bit of timeslice you can avoid in one location allows you a little bit extra time to do more clever things which add actual value in another.

            This doesn’t mean that binary protocols need to be awful, though. My experience of EEs in school is that they knew just enough to think that they knew what they were doing. These are the people who think that new protocols should use CRC because “it’s easy to implement”.

            Ideally, you should have a way of defining your wire protocol which is easily human-understood. Then some build process converts that to the structs/classes/whatever which will be used/compiled in code. Concurrently, the definition should also generate a wireshark plugin or similar.
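
            A toy sketch of that idea in Python, where one declarative description is the single source of truth for everything generated downstream (the message, fields, and emitted C are all invented for illustration):

              # One declarative wire-format description; structs, dissector tables,
              # and docs are generated from it rather than written by hand.
              MESSAGE_NAME = "status_report"
              FIELDS = [
                  ("device_id",  "uint16"),
                  ("on_battery", "uint8"),
                  ("millivolts", "uint32"),
              ]

              C_TYPES = {"uint8": "uint8_t", "uint16": "uint16_t", "uint32": "uint32_t"}

              def emit_c_struct(name, fields):
                  """Emit a packed C struct for the firmware side."""
                  body = "\n".join(f"    {C_TYPES[t]} {n};" for n, t in fields)
                  return f"struct {name} {{\n{body}\n}} __attribute__((packed));"

              def emit_dissector_table(name, fields):
                  """Field table a Wireshark/Lua dissector generator could consume."""
                  return [{"message": name, "field": n, "type": t} for n, t in fields]

              print(emit_c_struct(MESSAGE_NAME, FIELDS))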

            And I’d add that I’m unhappy with protobuf because it’s too slow and adds too much overhead. Though I suppose it’s an unhappy medium between the two ends of the spectrum.

            1. >Sometimes marshaling overhead *is* important. When you have a budget to respond to a request in fractions of a millisecond, the additional overhead actually matters because every bit of timeslice you can avoid in one location allows you a little bit extra time to do more clever things which add actual value in another.

              Yes. For any one rule of good engineering, you can generally find extreme cases to falsify it. That doesn’t mean the rule is bad, it means the cases are extreme.

      2. >the best of which is probably Cap’n Proto.

        Right. You used to utter this rant about Protobufs. Which had to be replaced by Cap’n Proto because it sucked in some specific way I can’t be arsed to figure out because I already know binary application protocols are terminally brain-damaged. So, how many weeks will it be before you’re raving about Bride of Protobufs? Or Cap’n Proto Battles Godzilla, or term N in a semi-infinite series of descendants that all have the value of rancid camel vomit because they inherited the same fscking wrong premise?

        Some people never learn.

        1. Foo is better than Bar does not imply that Bar “sucks”. Surely you know that, Eric.

          I don’t see any evidence that Jeff said Protobuf sucks.

          1. >I don’t see any evidence that Jeff said Protobuf sucks.

            I never said he did. Reread more carefully, please.

        2. Well, it turns out that Cap’n Proto improves on protobufs in one key aspect: Protobufs take too long to parse and unparse. In other words, they’ve reduced the complexity and wasted CPU cycles incurred by textual formats, but they haven’t eliminated those completely. The author of Cap’n Proto (who did Google’s protobuf open source release) reports that servers inside Google could spend up to 30% of their CPU time parsing and unparsing protobuf messages. Enter Cap’n Proto, whose messages can be read or written without any parsing or unparsing step on every relevant CPU architecture.

          1. This is a case where someone has had a requirement to optimize and found an opportunity to do so.
            That doesn’t mean it’s right for everyone and it definitely doesn’t mean that anyone should *start* there.

            Premature optimization is the root of all evil.

            [rant]
            Knowing when to break the rules is *very* important. I was brought up as a C programmer by old C programmers and “learning to play by the rules before you break them” was continuously drilled into me. It’s not a phrase I’ve heard a programmer utter in nearly 2 decades tho’!

            Recently I had to do a proof-of-concept implementation of a protocol that we’d designed. I’ve read ESR’s The Art of Unix Programming and (before and since) I’ve been a die-hard text-wire-format person for all the reasons we know and love. I’ve been bitten before by binary protocol complexity. I’ve used text protocols without enough tooling.

            Anyway, in this instance I was under time pressure and the code was supposed to be suitable for use in security critical contexts. So I reached for Protobufs. The reason I reached for that *wasn’t* “because it was binary”. That was, in fact, the big “con” on my list. I reached for it 1) because it had some type safety; parsing numbers out of JSON consistently on different platforms is a *pain*. 2) it had a high quality implementation already available. 3) That implementation had been deployed widely elsewhere. 4) (they claim) it had had a security review. I could certainly believe it had had a reasonable amount of attention. 5) I didn’t want to write a watertight, safe parser in the time I had available.
            I looked at Cap’n Proto as well. It looked really interesting but it had all the cons and not all of the pros of Protobufs.
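
            For what it’s worth, the round trip I mean looks roughly like this; the message and field names are hypothetical, and the status_pb2 module has to be generated from the .proto by protoc first, so this is a sketch rather than something you can paste and run:

              # status.proto (compiled with `protoc --python_out=. status.proto`):
              #
              #   syntax = "proto3";
              #   message Status {
              #     uint32 device_id  = 1;
              #     bool   on_battery = 2;
              #     uint32 millivolts = 3;
              #   }

              import status_pb2   # generated by protoc, not hand-written

              msg = status_pb2.Status(device_id=42, on_battery=True, millivolts=11870)
              wire = msg.SerializeToString()       # compact, schema-typed binary

              decoded = status_pb2.Status()
              decoded.ParseFromString(wire)
              assert decoded.millivolts == 11870   # an integer, not "maybe a float"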

            I had to write a whole bunch of tooling but I’d have had to do a bunch of tooling for any encoding anyway.

            It allowed me to focus on my problem domain and do the proof of concept that was required. It was definitely a tradeoff. I’d much rather have used something like Scheme’s read/write but I didn’t have readers and writers in the languages the client was interested in.

            Remember: engineering is about trade-offs and premature optimization is the root of all evil.
            [/rant]

        1. The tools support back ends that can emit code in a variety of languages. If you don’t want to use Ada’s C interop (which is excellent, and how e.g. GTK applications get written in Ada), the best way would probably be to write a plugin for the capnp tool that emits Ada code.

          Cap’n Proto itself is really just a meta-protocol which, if followed closely, would allow you to develop a pure Ada stack that can communicate with any other Cap’n Proto implementation.

    3. XML and JSON might be going too far, as the basic formats allow arbitrarily-deep hierarchies of arbitrary data, and that detail distorts APIs for consuming them. Well, unless the language’s underlying data type is arbitrarily nested dictionaries to begin with.

  3. Good post, but I’d say that the issue is even more obvious here. I mean, using a Unix-capable SBC is pretty much a no-brainer once you have some sort of network communication in the project requirements – and that’s true for many reasons, too many to list actually. Moreover, while the RaspPi may in some sense be “overkill” in that some slightly-cheaper board may be a better fit in the abstract, the amount of community support you get with the RaspPi is very important too, and the only way to match that would be to choose some other mainline-kernel-supported board – no BSP-only junk – and I don’t know of many of these that are _cheaper_ than the Pi.

    The biggest problem with the Pi is its painfully slow startup, and that _might_ be partially addressed simply by building a kernel with custom compile options, and running from a simple ramdisk-like environment.

      1. You’re looking at what? That link goes to the Olimex front page.

        And Olimex is a good choice. They’re well known in the embedded systems world, and make good solid stuff.

  4. While I can’t argue against it (plus or minus boot time issues), I can’t resist one bit of snark: you’re going to have to make the display say “Booting Unix v.8.3…” at startup.

    You may also wish to consider a custom RPi variant with exactly the hardware you need and no more, if the Pi Zero won’t do the job for you. The RPi hardware design is also open source.

    1. Well, some Raspberry Pi hardware is open source…

      The CM3L looks interesting…integrated flash, all the ports you want, edit and build your own custom interface board, industrial orders in batches of thousands accepted. It also apparently hit the market four years ago.

  5. “One of the ways is to plan on the assumption that software development costs you have no clear idea how to estimate are likely to blow up on you horribly…”

    I would assert that the only way to truly learn this discipline is to have things blow up on you horribly a few times. Or more than a few times, if you’re thickheaded. We all know more than a few young whippersnappers who have brilliant engineering skills, but only time and experience will give them engineering discipline.

  6. >The smart whole-systems engineer counter-biases against the streetlight effect. One of the ways is to plan on the assumption that software development costs you have no clear idea how to estimate are likely to blow up on you horribly, and that if there are hedges you can buy against that by taking a hit somewhere else (in the BOM, or even your raw revenues) it’s probably smart to go for at least some of them.

    I suspect that a large part of this is because software engineering is a much less mature discipline than the various kinds of hardware engineering (electrical, mechanical, civil, etc.) Some of this is due to their relative ages, but I think that an important part that gets missed (perhaps its own kind of meta-streetlight effect) is that the various hardware engineering disciplines deal with actual mass – real physical things, about which you can have some “gut-level” idea that they cost something and take space and time to store, assemble, test, etc.

    Whereas software is “just thinking” – and we seriously underweight the complexity of the refinement necessary to get from the handwavium level of describing a problem to be solved to an actual tested, supportable, documented piece of code.

  7. What most people also miss is that with software you don’t just have the upfront costs of developing it, but also the costs of maintaining it (and here overcomplication bites hard).

  8. Besides value in time to release, I see tremendous value in future iterations. I can foresee future versions with a host of easy-to-add features like environmental monitors (e.g. temperature and moisture sensors) that should become natural additions once you have a working v1 with an overpowered system board. But the ones I am not thinking of are even more exciting, and with a standard part like you have chosen, you have avoided artificial limits.

    This had been sitting on the tip of my mind.

  9. The other bias that pushes this way is that BOM cost is per unit, while software cost is amortized over all units. Modeling this is simple math, but it’s far more common to vastly overestimate volume than underestimate it. Also, it’s easy to forget to factor in long term maintainability.
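
    A back-of-the-envelope version of that math, with every number invented for illustration:

      # Hypothetical trade: shave $5 off the BOM at the cost of an extra
      # $60,000 of software development to cope with the cheaper hardware.
      bom_saving_per_unit = 5.00
      extra_software_cost = 60_000

      for volume in (1_000, 10_000, 100_000):
          net = bom_saving_per_unit * volume - extra_software_cost
          print(f"{volume:>7} units: net {net:+,.0f}")

      # Output: -55,000 at 1,000 units, -10,000 at 10,000, +440,000 at 100,000.
      # The trade only wins at volumes that are easy to overestimate.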

  10. Sigh.

    You haven’t even done the middle level of the design, but you know with metaphysical certainty that an IoT Xnix box, which might have the usual botnet security flaws and difficulty with GPIO (which would then require an Arduino-like front end), is the absolute best choice?

    You don’t know where either the keys or the light is.

    You haven’t asked, much less answered, the “smart computer peer” versus “peripheral device” question. Which is the key question, at least if we worry about cost. If the UPS can cost $1000 before batteries, it is easy.

    The error in applying “But the light is better here” is that you don’t apply it to the unix/embedded choice, because you “just know” unix will be better than embedded.

    But Unix/Linux/BSD is often a pain in accessing unusual hardware. You can’t get to I2C, SPI, or other things easily, or they impose constraints where you are rewriting drivers to the point that a peripheral processor would have been easier.

    1. >But Unix/Linux/BSD is often a pain in accessing unusual hardware. You can’t get to I2C, SPI, or other things easily

      Looked at a Pi lately? It has those connectors, and the onboard Unix has drivers for them. So does the OLinuXino line I’m looking at. I think your knowledge might be a little out of date. I’ve used the GPIO pins on the RasPi myself from Linux, so I know you’re out to lunch on that one.

      >You haven’t asked much less answered “smart computer peer” or “peripheral device”

      You might want to read the design wiki before you go saying things like that.

      >you “just know” unix will be better than embedded.

      I don’t “know” any such thing and certainly have not asserted it – Arduino might be sufficient. What I “know” is that if we go with Arduino-class hardware the pool of potential devs shrinks by an order of magnitude or more.

    2. If the SBC can run a mainline Linux kernel then all the hardware access you could want is a matter of C programming, and if it doesn’t need more than a few hundred kHz on digital IO pins then it’s a matter of Python programming.
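
      For instance, blinking a status line from Python on a Pi is about this much code (the pin number is arbitrary, and this assumes the stock RPi.GPIO library that ships with Raspbian):

        import time
        import RPi.GPIO as GPIO   # stock library on Raspbian; no custom drivers

        PIN = 18                   # arbitrary BCM pin number, for illustration

        GPIO.setmode(GPIO.BCM)
        GPIO.setup(PIN, GPIO.OUT)
        try:
            for _ in range(10):              # a few Hz is plenty for UPS-type
                GPIO.output(PIN, GPIO.HIGH)  # control and status lines
                time.sleep(0.1)
                GPIO.output(PIN, GPIO.LOW)
                time.sleep(0.1)
        finally:
            GPIO.cleanup()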

      If the SBC runs something that is not a mainline Linux kernel, then you have to maintain that kernel, its security, and your custom drivers, and all the custom drivers in a UPS will be less than 1% of that job.

      There are some annoying limitations for some applications–Linux can’t be a SPI slave with the generic Linux SPI framework last time I checked; however, nothing here blocks a UPS-type application.

      The complaint about a RPI 1 or 2’s limited IO capability is valid. Early Pi models had very few IO pins. Designing for early Pi models was something of a challenge–“can I make my design work with only 8 digital pins? And if I can, where can I find a peer group to appreciate my engineering prowess?”

      Later Pi models have many more, though, and so do a number of more industrial-oriented Unix SBCs. I’m pretty sure that if you’re wiring hundreds of digital IO pins to an RPi in a UPS design then you’re doing something wrong though – better to put in a few multiplexers or downstream micros just so you don’t have big ribbon cables everywhere.

  11. I’d perhaps lean towards the “c.h.i.p.” as an 8 dollar baseboard to leverage. It does usb and wifi networking, and has a built-in charge controller (though it works only with lithium batteries, at 5V). That said, I’d look into the chips in modern solar and AC charge controllers…

    You really don’t need much cpu for this, an MCU IS cheaper; I’d earlier suggested something derived from the ESP32. I’ve had a reluctance of late to embrace a full linux stack in IoT (post Spectre), and am looking into seL4 for a project. It’s not that it’s wrong to select linux, it’s the ongoing maintenance of any OS that everyone is falling down on, and linux has a large attack surface. The idea of being able to formally verify a product hooked up to mains power has some appeal.

    As for ARM OSes, Armbian has been the best for many uses, for me, followed by LEDE. Olimex’s stuff works well with Armbian, and more modern kernels are more generally available for anything Armbian supports.

    linuxgizmos.com is a good resource.

    1. >The idea of being able to formally verify a product hooked up to mains power has some appeal.

      Yes, it does, but if we go with something like this where are we going to get devs?

      Personally, I’d be up for learning seL4, but this thing ain’t going to fly if I have to do all the software myself. I have too many other calls on my time.

      1. > where are we going to get devs?

        There are lots of them around. It’s my core training. How much are you paying? Welcome to industry.

        I have to second Dave Taht’s comments here. One of the key benefits of using an embedded microcontroller, something small like a PIC/8051 is that you only need to worry about the software which you are writing for the thing. If you scale up a tad to something that runs QNX you get an OS which is designed for this kind of thing (and looking at their web page, they’ve added a *lot* of features since I last looked at it).

        A full OS is a lot nicer to develop on, and for a personal project it’s great. But having to worry about end-users upgrading the OS (and possible bricking implications) is worrisome.

        One way larger systems handle this is to have two separate controllers. You have a micro system which ensures all of the core safety stuff and runs very limited code. That then talks via an isolated channel such as an I2C bus to a larger system internally which is responsible for handling e.g. SNMP services. This way, even if someone managed to hack the UI server, the only thing it would be allowed to do is to send commands to the safety-critical system. If the specified protocol is very limited and exception handling is well tested, and the hardware prohibits doing stupid things, you have something which can’t fail dangerously. At worst, an attacker can e.g. switch your UPS on and off at whatever rate the safety controller and hardware allows. This is very annoying and disruptive to use, but not actually dangerous.

        1. >One way larger systems handle this is to have two separate controllers. You have a micro system which ensures all of the core safety stuff and runs very limited code. That then talks via an isolated channel such as an I2C bus to a larger system internally which is responsible for handling eg. SNMP services. This way, even if someone managed to hack the UI server, the only thing it would be allowed to do is to send commands to the safety-critical system.

          Congratulations, you just described the UPSide design exactly, down to the choice of bus. The main item in the project repository is a state diagram describing UPSide’s behavior, with a specification of the messages passing between the Unix SBC and the microcontroller at each transition.

          I even have a plan to generate a lot of the microcontroller code. The state diagram is written in cpp macros that expand to dot markup (dot is the graphviz toolsuite’s language for specifying graphs). Another use for those macros will be to write a mini-compiler, probably in Python, that compiles them to state-machine code in C. (I love, love, love this kind of DSL hacking.)
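
          A minimal sketch of the shape of that mini-compiler (the states and events below are placeholders, not the real UPSide state diagram):

            # Placeholder transition table: (current state, event, next state).
            # The real table lives in the cpp-macro state diagram; this only
            # shows the sort of dot and C a mini-compiler could emit from it.
            TRANSITIONS = [
                ("IDLE",       "MAINS_FAIL",   "ON_BATTERY"),
                ("ON_BATTERY", "MAINS_RETURN", "IDLE"),
                ("ON_BATTERY", "BATTERY_LOW",  "SHUTDOWN"),
            ]

            def emit_dot(transitions):
                """Render the state machine as graphviz dot markup."""
                edges = "\n".join(f'    {a} -> {b} [label="{ev}"];'
                                  for a, ev, b in transitions)
                return "digraph upside {\n" + edges + "\n}"

            def emit_c(transitions):
                """Render the same table as a C transition function."""
                clauses = "\n    else ".join(
                    f"if (state == {a} && event == {ev}) state = {b};"
                    for a, ev, b in transitions)
                return ("enum state step(enum state state, enum event event) {\n"
                        "    " + clauses + "\n    return state;\n}")

            print(emit_dot(TRANSITIONS))
            print(emit_c(TRANSITIONS))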

          Of course, the microcontroller (what we call the “midbrain”) retains no stored data between powerups. I think it may have only one piece of any kind of stored data outside the I2C message buffering, and that is the current state of the state machine. Obvious consequences for provability and testability follow.

  12. The project now has a mission statement:

    https://gitlab.com/esr/upside/wikis/mission-statement

    I’m posting this here because it may help head off unproductive rambling.

    I’d like a well-justified figure for our capacity goal, but lack the technical chops to compute it. What do people here think is a typical wattage draw for a contemporary desktop system with one 4K monitor attached? How would you translate that into watt-hours or a VA rating? What am I missing, if anything?

    1. The word “read” in “You should probably requirements next.”? :-)

      My main desktop system, a dual-hex 3.3 GHz Mac Pro (the 5,1 version, the last of the conventional towers) with 48 GB of RAM, an ATI 5770 graphics card, and two 4TB hard disks, with an AOC 23-inch 1080p monitor, draws about 240W if it’s just sitting here showing this page. Heavy activity pushes that to about 350W, with peaks over 400. I use an APC Back-UPS Pro 1500, and get 15 minutes of runtime or so. (It has a nice display that shows how much power is being drawn.)

      The 1500 VA UPS is a pretty good target to aim for, IMAO. You want lots of headroom.
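
      The arithmetic behind that recommendation, roughly, using the figures above (the power factor is an assumption; consumer line-interactive units are often rated somewhere around 0.6):

        load_watts   = 350        # heavy-activity draw reported above
        va_rating    = 1500       # candidate UPS size
        power_factor = 0.6        # assumed; check the specific model's watt rating

        max_watts = va_rating * power_factor       # ~900 W deliverable
        headroom  = max_watts / load_watts         # ~2.6x margin over the load

        runtime_hours = 15 / 60                    # observed runtime at that load
        battery_wh    = load_watts * runtime_hours # ~88 Wh delivered to the load
        print(round(max_watts), round(headroom, 1), round(battery_wh))

      Battery draw will be somewhat higher than that, since the inverter is not 100% efficient.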

      1. Agreed with this.

        As I understand the main use cases of customer UPS’s, the main goals of a UPS are:

        1) Get “Thing” over minor blip in power. For example, every time I go to print, for whatever odd reason, the (4) UPS(s) flip on until the print job is done.
        2) Keep “Thing” alive long enough for a stable shutdown.

        At which point 10-15 minutes is probably a sane goal for a V1.

        /Though I’m using what is probably an 8-900W setup right now (and indeed before I ripped out the Dual 5970’s and replaced them with a single 980, could quite trivially flip the circuit in my apartment).

    2. Under the use cases, for SOFTLANDER you refer to “host machine support for accepting a clean-shutdown command”. I don’t think that’s really the right model for SOFTLANDER. I want the host to be able to accept a Hibernate to Disk command, so that I don’t lose state. I’m fine with not being able to see my monitor during the outage, provided that I can be 100% certain the OS is receiving and responding to that Hibernate command.

      1. >I want the host to be able to accept a Hibernate to Disk command, so that I don’t lose state

        That’s a good idea. I don’t think I’ve seen that exposed on a desktop system, though.

        1. In Windows, it’s shutdown /h. I have my work laptop set to do that on [Ctrl][Alt][Z], for the obvious mnemonic reason.
          In Ubuntu, it’s systemctl hibernate. I assume other distros have their equivalents. If not, there will be increased pressure for them to do so for UPSide integration.

          A Windows service or *nix daemon that connects via USB and/or UDP to the UPS could initiate hibernation trivially by issuing the appropriate command.
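
          A minimal sketch of such a daemon (the UDP port and the one-byte “power lost” message are invented here; a real UPSide protocol would define its own):

            import socket
            import subprocess

            UPS_PORT   = 40420      # hypothetical port the UPS broadcasts status on
            POWER_LOST = b"\x01"    # hypothetical "on battery, hibernate now" message

            def main():
                sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                sock.bind(("", UPS_PORT))
                while True:
                    data, _addr = sock.recvfrom(64)
                    if data == POWER_LOST:
                        # systemd path; Windows would use `shutdown /h` instead.
                        subprocess.run(["systemctl", "hibernate"], check=False)

            if __name__ == "__main__":
                main()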

  13. @esr how do I ask to be added as a developer on gitlab?

    Also, the APC article on LiFePO4 batteries refers to “the standard 12 Volt 7.2Amp Hour sealed lead acid batteries normally included with the UPS model”. Basically, a fire alarm battery.
