Jan 29

RISC-V is doing disruption right

I’ve recently become aware of RISC-V.

Verry innterresting.

Technical introduction here (somewhat out of date; hardware support is broader and deeper now, and I have seen video of a full Linux port running Doom), but the technicalia is mostly not where I’m going with this post.

I’m seeing a setup for a potentially classic disruption from below here. And not mainly because this instruction set is well designed, though it is. Simple, clean, orthogonal – it makes my compiler-jock heart happy; writing a code generator for it would be fun, if I needed to – but there’s already an LLVM back end for it.

And that points at what’s really interesting about RISC-V. Whoever is running their product strategy has absorbed the lessons of previous technology disruptions and is running this one like a boss.

Continue reading

Dec 24

Pessimism about parallelism

Massive concurrency and hardware parallelism are sexy topics in the 21st century. There are a couple of good reasons for this and one rather unfortunate one.

Two good reasons are the combination of eye-catching uses of Graphics Processing Units (GPUs) in games and their unexpected secondary uses in deep-learning AI – these exploit massive hardware parallelism internally. The unfortunate reason is that single-processor execution speeds hit a physics wall in about 2006. Current leakage and thermal runaway issues now sharply limit increases in clock frequency, and the classic way out of that bind – lowering voltage – is now bumping up against serious quantum-noise issues.

Hardware manufacturers competing for attention have elected to do it by putting ever more processing cores in each chip they ship and touting the theoretical total throughput of the device. But there have also been rapidly increasing amounts of effort put into pipelining and speculative execution techniques that use concurrency under the hood in attempts to make the serial single processors that programmers can see crank instructions more rapidly.

The awkward truth is that many of our less glamorous computing job loads just can’t use visible concurrency very well. There are different reasons for this that have differing consequences for the working programmer, and a lot of confusion abroad among those reasons. In this episode I’m going to draw some distinctions that I hope will help all of us think more clearly.

First, we need to be clear about where harnessing hardware parallelism is easy and why that seems to be the case. We look at computing for graphics, neural nets, signal processing, and Bitcoin mining, and we see a pattern: parallelizing algorithms work best on hardware that is (a) specifically designed to execute them, and (b) can’t do anything else!

We also see that the inputs to the most successful parallel algorithms (sorting, string matching, fast-Fourier transform, matrix operations, image reverse quantization, and the like) all look rather alike. They tend to have a metric structure and an implied distinction between “near” and “far” in the data that allows it to be carved into patches such that coupling between elements far from each other is negligible.

In the terms of an earlier post on semantic locality, parallel methods seem to be applicable mainly when the data has good locality. And they run best on hardware which – like the systolic-array processors at the heart of GPUs – is designed to support only “near” communication, between close-by elements.

By contrast, writing software that does effective divide-and-conquer for input with bad locality on a collection of general-purpose (Von Neumann architecture) computers is notoriously difficult.

We can sum this up with a heuristic: Your odds of being able to apply parallel-computing techniques to a problem are inversely proportional to the degree of irreducible semantic nonlocality in your input data.
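To make the easy case concrete, here’s a minimal Go sketch (my illustration, not from the original post) of a workload with good locality: the input carves cleanly into patches that never need to talk to each other, so each one can go to its own goroutine.

```go
// Minimal sketch of the "good locality" case: the data carves into
// independent patches, so each chunk is processed by its own goroutine
// with no cross-patch communication at all.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	data := make([]float64, 1<<20)
	for i := range data {
		data[i] = float64(i)
	}

	workers := runtime.NumCPU()
	chunk := (len(data) + workers - 1) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if lo >= len(data) {
			break
		}
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(patch []float64) {
			defer wg.Done()
			for i := range patch {
				patch[i] = patch[i]*patch[i] + 1 // purely local update
			}
		}(data[lo:hi])
	}
	wg.Wait()
	fmt.Println("first:", data[0], "last:", data[len(data)-1])
}
```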

Another limit on parallel computing is that some important algorithms can’t be parallelized at all – provably so. In the blog post where I first explored this territory I coined the term “SICK algorithm”, with the SICK expanded to “Serial, Intrinsically – Cope, Kiddo!” Important examples include but are not limited to: Dijkstra’s n-least-paths algorithm; cycle detection in directed graphs (with implications for 3-SAT solvers); depth-first search; computing the nth term in a cryptographic hash chain; network-flow optimization.

Bad locality in the input data is implicated here, too, especially in graph- and tree-structure contexts. Cryptographic hash chains can’t be parallelized because their entries have to be computed in strict time order – a strictness which is actually important for validating the chain against tampering.
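The hash chain makes the point concrete. Here’s a minimal Go sketch (mine, for illustration): term n simply cannot exist before term n-1 has been hashed, so extra cores buy you nothing.

```go
// Minimal sketch of a SICK workload: each link in the chain is the hash
// of the previous link, so the nth term can only be reached by grinding
// through all n-1 predecessors in strict order.
package main

import (
	"crypto/sha256"
	"fmt"
)

func hashChain(seed []byte, n int) []byte {
	h := sha256.Sum256(seed)
	for i := 1; i < n; i++ {
		h = sha256.Sum256(h[:]) // depends on the immediately previous term
	}
	return h[:]
}

func main() {
	fmt.Printf("%x\n", hashChain([]byte("seed"), 1000000))
}
```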

There’s a blocking rule here: You can’t parallelize if a SICK algorithm is in the way.

We’re not done. There are at least two other classes of blocker that you will frequently hit.

One is not having the right tools. Most languages don’t support anything but mutex-and-mailbox, which has the advantage that the primitives are easy to implement but the disadvantage that it induces horrible complexity explosions and is nigh-impossible to model accurately in your head at scales over about four interacting locks.

If you are lucky you may get some use out of a more tractable primitive set like Go channels (aka Communicating Sequential Processes) or the ownership/send/sync system in Rust. But the truth is, we don’t really know what the “right” language primitives are for parallelism on von-Neumann-architecture computers. And there may not even be one right set of primitives; there might be two, three, or more different sets of primitives appropriate for different problem domains but as incommensurable as one and the square root of two. At the present state of the art in 2018 nobody actually knows.
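For flavor, here’s a minimal sketch of the channel style in Go (illustrative only, not a claim that these are the “right” primitives): workers never share memory behind a lock, they just pass values.

```go
// Minimal sketch of the CSP/channel style: work and results flow over
// channels; no mutexes, no shared mutable state.
package main

import (
	"fmt"
	"sync"
)

func main() {
	jobs := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- j * j // a worker owns a job while it holds it
			}
		}()
	}

	go func() {
		for i := 1; i <= 10; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	go func() {
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("sum of squares 1..10:", sum) // 385
}
```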

Last but not least, the limitations of human wetware. Even given a tractable algorithm, a data representation with good locality, and sharp tools, parallel programming seems to be just plain difficult for human beings even when the algorithm being applied is quite simple. Our brains are not all that good at modelling even the simpler state spaces of purely serial programs, and are much worse at parallel ones.

We know this because there is plenty of real-world evidence that debugging implementations of parallelizing code is worse than merely _difficult_ for humans. Race conditions, deadlocks, livelocks, and insidious data corruption due to subtly unsafe orders of operation plague all such attempts.
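If you want to see how cheap it is to get this wrong, here is about the smallest race one can write in Go (illustrative, not from the post): two goroutines bumping one counter with no synchronization. Running under `go run -race` flags it immediately; without the detector it just silently loses increments.

```go
// Two goroutines increment a shared counter with no synchronization.
// The unprotected read-modify-write is a classic data race; the final
// total comes up short on most runs.
package main

import (
	"fmt"
	"sync"
)

func main() {
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000000; j++ {
				counter++ // racy: no lock, no atomic
			}
		}()
	}
	wg.Wait()
	fmt.Println("expected 2000000, got", counter)
}
```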

Having a grasp on these limits has, I think, been growing steadily more important since the collapse of Dennard scaling. Due to all of these bottlenecks in the supply of code that can use multiple cores effectively, some percentage of the multicore hardware out there must be running software that will never saturate its cores; or, to look at it from the other end, the hardware is overbuilt for its job load. How much money and effort are we wasting this way?

Processor vendors would love you to overestimate the functional gain from snazzy new silicon with ever larger multi-core counts; however else will they extract enough of your money to cover the eye-watering cost of their chip fabs and still make a profit? So there’s a lot of marketing push out there that aims to distract capacity planners from ever wondering when those gains are real.

And, to be fair, some places they are. The kind of servers that live in rack mounts and handle hundreds of thousands of concurrent transactions per second probably have their core count matched to their job load fairly well. Smartphones or embedded systems, too – in both these extreme cases a lot of effort goes into minimizing build costs and power budgets, and that’s going to exert selective pressure against overprovisioning.

But for typical desktop and laptop users? I have dark suspicions. It’s hard to know, because we’ve been collecting real performance gains due to other technology changes like the shift from spinning-rust to solid-state mass storage. Gains like that are easy to mistake for an effect of more CPU throughput unless you’re profiling carefully.

But here’s the shape of my suspicion:

1. For most desktop/laptop users the only seriously parallel computing that ever takes place on their computers is in their graphics chips.

2. More than two processor cores is usually just wasteful hotrodding. Operating systems may be able to parcel out applications between them, but the general run of application software is unable to exploit parallelism and it is rare for most users to run enough different processor-hungry applications simultaneously to saturate their hardware that way.

3. Consequently, most of the processing units now deployed in 4-core-and-up machines are doing nothing most of the time but generating waste heat.

My regulars include a lot of people who are likely to be able to comment intelligently on this suspicion. It will be interesting to see what they have to say.

UPDATE: A commenter on G+ points out that one interesting use case for multicores is compiling code really quickly. Source for a language like C has good locality – it can be compiled in well-separated units (source files) into object files that are later joined by a linker.
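That file-level independence is exactly what `make -j` exploits. Here’s a toy Go sketch of the same shape (the file names and the compileUnit stub are placeholders, not a real build system): fan the units out over the cores and collect the objects.

```go
// Toy sketch of parallel compilation: translation units are independent,
// so they can be fanned out across NumCPU workers. compileUnit is a
// stand-in for invoking a real compiler on one source file.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func compileUnit(src string) string {
	return src + ".o" // placeholder for the real compile step
}

func main() {
	sources := []string{"lexer.c", "parser.c", "codegen.c", "main.c"}

	work := make(chan string)
	var wg sync.WaitGroup
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for src := range work {
				fmt.Println("built", compileUnit(src))
			}
		}()
	}
	for _, s := range sources {
		work <- s
	}
	close(work)
	wg.Wait()
}
```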

Nov 22

Contemplating the cute brick

Some years ago I predicted that eventually the core of your desktop PC would morph into a physically tiny compute engine that would merge with your smartphone, talking through standard ports and cables to full-sized peripherals like a keyboard and a flatscreen too large to be portable.

More recently I examined the way that compute bricks – small-form-factor fanless PCs running low-power chips – have been encroaching on the territory of traditional tower PCs. Players in this space include Jetway, Logic Supply, Partaker, and Shuttle. Poke a search engine with “fanless PC” to get good hits.

I have a Jetway running production in my basement; it’s my Internet-facing mail- and web-server. There’s a second one I have set up with Devuan that I haven’t assigned a role to yet; I may use it as a backup host.

These compute bricks are a station on the way to my original prediction, because they get consumers used to thinking of their utility machines as small compute nodes attached to human-sized peripheral hardware that may have a longer lifetime than the compute node itself.

At the lowest end of the compute-brick class are little engines like the Raspberry Pi. And right above it is something slightly different – bricks with a fan, active cooling enabling them to run the same chips used in tower PCs.

Of course the first machine in this class was the Apple Mac Mini, but it dead-ended years ago for reasons that aren’t Apple’s fault. It was designed before SSDs were really a thing and has spinning-rust-centric design assumptions in its DNA; thus, it’s larger, noisier, and waaay more expensive than a Jetway-class brick. Apple must never have sold very many of them; we can tell this by the fact that the product went four years between refreshes.

On the other hand, a couple days ago I dropped in a replacement for my wife’s aging tower PC. It’s an Intel NUC, a brick-with-fan, but unlike the Mac Mini it seems to have been designed from the start around the assumption that its mass storage would be SSD. As such, it achieves what the Mac Mini didn’t quite; it opens a new front in the ephemeralization wars.

Continue reading

Aug 03

How to get a reliable home router/WiFi box in 2018

My apprentice and A&D regular Ian Bruene had bad experiences with a cheap home router/WiFi recently, and ranted about it on a channel where I and several other comparatively expert people hang out. He wanted to know how to get a replacement solid enough to leave with non-techie relatives.

The ensuing conversation was very productive, so I’m summarizing it here as a public-service announcement. I’ve put the year in the title because some of the information in it could go stale quickly. I will try to mark each element of the advice with an expected-lifetime estimate.

Even before seeing any of the comments on this post I’m going to say you should read them too. Some of my regulars are more expert than I am about this area.

Continue reading

May 09

Embrace the SICK

There’s a very interesting article just out, C Is Not a Low-level Language, in which David Chisnall punctures the comforting illusion that C is really a “close-to-the-metal” language and relates this illusion to the high costs of Spectre and other processor-level bugs.

Those of us who think seriously about language design have long been aware that C’s flat-address-space model is increasingly at odds with the real world of memory-caching hierarchies. Chisnall’s main contribution is to notice that speculative execution, the feature at the bottom of the Spectre and Meltdown bugs, is essentially a hack implemented to allow C programmers to maintain the illusion that they’re running on a really fast serial machine.  But he has other interesting points as well.

I recommend reading Chisnall’s article before you go further with this post.

It’s no news to my regulars that I’ve been putting increasing investment into the Go language and now believe it a plausible candidate to replace C and C++ over most of C/C++’s range – that is, outside  of kernels and hard realtime.  So the question that immediately occurred to me upon reading the article was: Is Go necessarily productive of the same kind of kludge that Chisnall is calling out?

Because if it is – but something else isn’t – that could be a reason not to overcommit to Go. The twin pressures of demand for fewer security defects and the increasing complexity costs of speculative execution are bound to tell heavily against Go if it does demand massive speculative execution and there’s any realistic alternative that does not. Do we need something much more divergent from C (Erlang? Ocaml? Even perhaps Haskell?) for systems programming to follow where the hardware is going?

So let’s walk through Chisnall’s discussion points, bounce Go off each one, and see what we can see.  What we’ll find implies, I think, some more general conclusions about what will and won’t work in matching language design to real-world workloads and processor architectures.

Continue reading

Mar 16

UPSide needs a battery technologist

The design of UPSide is coming together very nicely. We don’t have a full parts list yet, but we do have a functional diagram of the high-power subsystem, most of which can be expanded into a schematic in a pretty straightforward way.

If you want to see what we have, clone the repo, cd to design-docs, make transactions.html, and view that in a browser. Note that the bus message inventory is out of date; don’t pay a lot of attention to it – one of the design premises has changed and I haven’t had time to rewrite that section yet.

We’ve got Eric Baskin, a very experienced power and signals EE, to do the high-power electronics. We’ve got me to do software and systems integration. We’ve got a lot of smart kibitzers to critique and improve the system design, spotting problems the two Erics might have missed. It’s all going well and smoothly – except in one key area.

UPSide needs a battery technologist – somebody who really understands all the tradeoffs among battery chemistries, how to spec battery types for different applications, and especially the ins and outs of battery management systems.

Eric Baskin and I are presently a bit out of our depth in this. Given time we could educate ourselves up to the required level, but the fact that that portion of the design is lagging the rest tells me that we ought to recruit somebody who already knows the territory.

Any takers? No money in it, but you get to maybe disrupt the whole UPS market and certainly work with a bunch of interesting people.

Mar 11

How to get started on the UPSide project

The current state of play is: We have a high-level system design and a map of the behavior states. We have a capacity target (300W for 15 mins) and a peak-continuous-load spec (400W). We know we’re going to build a double-conversion design and we’re considering a couple of alternative topologies. We pretty much know the external-interface specs (some details may change).
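For anyone doing back-of-the-envelope math on that target: 300W for a quarter hour is 75Wh at the load, and the pack has to be sized larger once conversion losses and usable depth of discharge are factored in. The 85% and 80% figures in this little Go sketch are my own placeholder assumptions, not UPSide specs.

```go
// Rough sizing implied by the 300W / 15-minute capacity target.
// Efficiency and depth-of-discharge numbers are placeholder assumptions.
package main

import "fmt"

func main() {
	const (
		loadWatts    = 300.0
		runtimeHours = 0.25 // 15 minutes
		inverterEff  = 0.85 // assumed, not a project spec
		usableDoD    = 0.80 // assumed, not a project spec
	)
	loadWh := loadWatts * runtimeHours
	packWh := loadWh / (inverterEff * usableDoD)
	fmt.Printf("energy at the load: %.0f Wh; nominal pack: ~%.0f Wh\n", loadWh, packWh)
}
```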

I’m expecting both my prototype copy of the forebrain Unix SBC (an Olimex LIME2) and the interface contract for the high-power subsystem to land on my desk tomorrow.

Interest in this project continues to be huge. Another company wants in as of this morning. The volume of feature requests is high enough that I’m buckling under the editing load.

The rest of this post is instructions to potential contributors about how to get on board.

1. Get an ID on GitLab. Tell me what it is so I can add you to the project group.

2. If you have a feature request, please don’t post it on this blog. Add it to the “General feature request thread” on the tracker.

3. Read the wiki. Read the tracker issues. I try to keep both pruned so the volume is not overwhelming. Read the Rejected Ideas page on the wiki, too.

4. Read the design documents in the project wiki. The important one is the transaction design; the I2C message inventory will change, but the basic state diagram probably won’t.

5. Participate in the design discussion. This takes place in tracker threads.

6. When we’re ready to breadboard a prototype, throw some parts money in the tip jar we don’t have yet. If you must contribute before then, the PayPal blog button works fine.

7. Prototype builds will probably go down at PA Makerspace in Phoenixville, PA. If you are within driving distance and a competent electrics tech, consider joining us for a build.

8. Once we have a full design with a PC board and enclosure: if you have shop facilities for it, try to replicate the build. We’ll know we have the build recipe debugged when other people can do it.

9. If your favorite hardware feature request doesn’t appear in the version 1 prototype, relax. We may think it’s a good idea but be holding off till v2 out of a desire to keep v1 simple and launch fast.

10. If your favorite software feature request doesn’t appear in the version 1 prototype, pitch in and make it happen. A Unix SBC is not a difficult programming environment – the OS on this one is a Debian port.

After step 10 and a couple of design iterations the future becomes less clear. Maybe we’ll try to get it into volume manufacturing through a partnership with an established vendor.

Feb 18

In the face of uncertainty, buy options.

Yesterday I posted about how the streetlight effect pulls us towards bad choices in systems engineering. Today I’m going to discuss a different angle on the same class of challenges, one which focuses less on cognitive bias and more on game theory and risk management.

In the face of uncertainty, buy options. This is a good rule whether you’re doing whole-system design, playing boardgames, or deciding whether and when to carry a gun.

Continue reading

Feb 16

Announcing: The UPSide project

A week ago I argued that UPSes suck and need to be disrupted. The response to that post was astonishing. Apparently I tapped into a deep vein of private discontents – people who had been frustrated and pissed off with UPS gear for years or decades but never quite realized it wasn’t only their problem.

Many people expressed an active desire to contribute to a kickstarter aimed at this problem. I got one offer from someone actually willing to hire an engineer to work on it. Intelligent feature suggestions – often framed as gripes about the deficiencies of what you can buy out there – came flooding in.

Perhaps most remarkably, the outlines of a coherent design began to emerge. We identified a battery technology we could buy COTS that would improve on the performance and lifetime of lead-acid but without the explosion risk of lithium-ion. The way that safety and regulatory requirements would require a partition between low- and high-power electronics became clearer. A feature list solidified. We took in good ideas and rejected some not-so-good ones.

Therefore, even though we don’t yet have a lead hardware engineer, I have initiated Project UPSide. There’s no code or schematics yet; we’re still developing requirements and architecture. By “architecture” I mean, for example, what specific kinds of information the hardware subsystems need to exchange.

All interested parties are welcome to browse the wiki and apply for write access. Roles we are especially looking for:

* Lead hardware engineer – needs to be able to do overall design and systems integration.

* Someone who knows how to program USB endpoints. (It will land on me to learn this if we can’t find someone with experience.)

* Someone who understands battery-state modeling. (Again, I’ll learn this if nobody steps up.)

My own job is, basically, product manager – keeper of the requirements list and recruiter of talent.

UPDATE: If you want to request features or changes to the design wiki, the best way to do that is by opening an issue in the tracker. That way the discussion stays on record for later viewers.

Feb 08

UPSes suck and need to be disrupted

Warning: this is a rant.

I use a UPS (Uninterruptible Power Supply) to protect the Great Beast of Malvern from power outages and lightning strikes. Every once in a while I have to buy a replacement UPS and am reminded of how horribly this entire product category sucks. Consumer-grade UPSes suck, SOHO UPSes suck, and I am reliably informed by my friends who run datacenters that no, you cannot ascend into a blissful upland of winnitude by shelling out for expensive “enterprise-grade” UPSes – they all suck too.

The lossage is extra annoying because designing a UPS that doesn’t suck would be neither difficult nor expensive. These are not complicated devices – they’re way simpler than, say, printers or scanners. This whole category begs to be disrupted by an open-hardware design that could be assembled cheaply in a makerspace from off-the-shelf components, an Arduino-class microcontroller, and a PROM.

How badly do UPSes suck? Let me count the ways…

Continue reading

Mar 28

Odlyzko-Tilly-Raymond scaling

I’ve been ill with influenza and bronchitis for the last week. Maybe this needs to happen more often, because I had a small but fundamental insight into network scaling theory a few minutes ago.

I’m posting it here because I think my blog regulars cast a wide enough net to tell me if I’ve merely rediscovered a thing in the existing literature or, in fact, nobody quite got here before.

Continue reading

Sep 13

Trials of the Beast

This last week has not been kind to the Great Beast of Malvern. Serenity is restored now, but there was drama and (at the last) some rather explosive humor.

For some time the Beast had been having occasional random flakeouts apparently related to the graphics card. My monitors would go black – machine still running but no video. Some consultation with my Beastly brains trust (Wendell Wilson, Phil Salkie, and John D. Bell) turned up a suitable replacement, a Radeon R7 360 that was interesting because it can drive three displays (I presently drive two and aim to upgrade).

Last Friday I tried to upgrade to the new card. To say it went badly would be to wallow in understatement. While I was first physically plugging it in, I lost one of the big knurled screws that the Beast’s case uses for securing both cards and case, down behind the power supply. Couldn’t get it to come out of there.

Then I realized that the card needed a PCI-Express power tap and oh shit the card vendor hadn’t provided one.

Much frantic running around to local computer stores ensued, because I did not yet know that Wendell had thoughtfully tucked several spares of the right kind of cable behind the disk drive bays when he built the Beast. Which turns out to matter because though the PCI-E end is standardized, the power supply end is not and they have vendor-idiosyncratic plugs.

Eventually I gave up and tried to put the old card back in. And that’s when the real fun began. I broke the retaining toggle on the graphics card’s slot while trying to haggle the new card out. When I tried to boot the machine with the old card plugged back in, my external UPS squealed – and then nothing. No on-board lights, no post beep, no sign of life at all. I knew what that meant; evidently either the internal PSU or the mobo was roached.

Continue reading

Apr 15

The midrange computer dies

About five years ago I reacted to a lot of hype about the impending death of the personal computer with an observation and a prediction. The observation was that some components of a computer have to be the size they are because they’re scaled to human dimensions – notably screens, keyboards, and pointing devices. Wander outside certain size extrema and you get things like smartphone keyboards that are only good for limited use.

However, what we normally think of as the heart of a computer – the processing and storage – isn’t like this. It can get arbitrarily small without impacting usability at all. Consequently, I predicted a future in which people would carry around powerful computing nodes descended from smartphones and walk them to docking stations bundling a screen, a pointing device, and a real keyboard when they need to get real work done.

We’ve now reached an interesting midway point on that road. The (stationary) computers I use are in the process of bifurcating into two classes: one quite large, one very small. I qualify that with “stationary” because laptops are an exception for reasons which, if not yet obvious, will be in a few paragraphs.

Continue reading

Apr 07

Too clever by half

The British have a phrase “too clever by half”. It needs to go global, especially among hackers. It can have any of several closely related meanings: the one I mean to focus on here has to do with overconfidence in one’s intelligence or skill, and the particular bad consequences that can have. It’s related to Nassim Taleb’s concept of a “fragilista”.

Continue reading

Apr 03

Sometimes I should give in to my impulses

For at least five years now I’ve been telling myself that, as nifty as it would be to play with the hardware, I really shouldn’t spend money on a small-form-factor PC.

This was not an easy temptation to resist, because I found little systems like the Intel NUC fascinating. I’d look over the specs for things like that in on-line stores and drool. Replacing a big noisy PC seemed so attractive…but I always drew back my hand, because that hardware came with a premium pricetag and I already have working kit.

Then, tonight, I’m over at my friend Phil Salkie’s place. Phil is a hardware and embedded-programming guy par excellence; I know he builds small-form-factor systems for industrial applications. And tonight he’s got a new toy to show off, a Taiwanese mini-ITX box called a Jetway.

He says “$79 on Amazon”, and I say “I’ve thought about replacing my mailserver with something like that, but could never cost-justify it.” Phil looks at me and says “You should. These things lower your electric bills – it’ll pay itself off inside of a year.”

Oh. My. Goddess. Why didn’t I think of that?

Continue reading

Oct 07

The FCC must not lock down device firmware!

The following is a comment I just filed on FCC Docket 15-170, “Amendment of Parts 0, 1, 2, 15, and 18 of the Commission’s Rules et al.”

Thirty years ago I had a small hand in the design of the Internet. Since then I’ve become a senior member of the informal collegium that maintains key pieces of it. You rely on my code every time you use a browser or a smartphone or an ATM. If you ever ride in a driverless car, the nav system will critically depend on code I wrote, and Google Maps already does. Today I’m deeply involved in fixing Internet time service.

I write to endorse the filings by Dave Taht and Bruce Perens (I gave Dave Taht a bit of editorial help). I’m submitting an independent comment because while I agree with the general thrust of their recommendations I think they may not go far enough.

Continue reading

Mar 11

The Great Beast is armored!

All my readers should be aware of the Rowhammer attack by now.

It gives me great pleasure to report that thanks to our foresight in specifying ECC memory for the design, the Great Beast of Malvern has armor of proof against this attack. The proof being over a thousand runs of the Rowhammer test.

Thank you, everyone who threw money into the Beast’s build budget. If y’all hadn’t been so generous, the build team might have had to make compromises. One of the most likely items to be cut would have been ECC…because registered ECC DRAM at the Beast’s speeds is so freaking expensive that the memory was about a third of the entire build budget. And now we’d have a vulnerable machine.

As it is, the Beast roars in triumph over the Rowhammer.

Oh, and what I’m currently doing with the Beast? Why, I’m repairing the very fabric of time…itself! Explanation to follow, probably early next week.