Pessimism about parallelism

Massive concurrency and hardware parallelism are sexy topics in the 21st century. There are a couple of good reasons for this and one rather unfortunate one.

The two good reasons are the eye-catching uses of Graphics Processing Units (GPUs) in games and their unexpected secondary uses in deep-learning AI – both exploit massive hardware parallelism internally. The unfortunate reason is that single-processor execution speeds hit a physics wall in about 2006. Current leakage and thermal runaway issues now sharply limit increases in clock frequency, and the classic way out of that bind – lowering voltage – is now bumping up against serious quantum-noise issues.

Hardware manufacturers competing for attention have responded by putting ever more processing cores in each chip they ship and touting the theoretical total throughput of the device. But there has also been rapidly increasing effort put into pipelining and speculative-execution techniques that use concurrency under the hood, in attempts to make the serial single processors that programmers can see crank instructions more rapidly.

The awkward truth is that many of our less glamorous computing job loads just can’t use visible concurrency very well. There are different reasons for this that have differing consequences for the working programmer, and a lot of confusion abroad about those reasons. In this episode I’m going to draw some distinctions that I hope will help all of us think more clearly.

First, we need to be clear about where harnessing hardware parallelism is easy and why that seems to be the case. We look at computing for graphics, neural nets, signal processing, and Bitcoin mining, and we see a pattern: parallelizing algorithms work best on hardware that (a) is specifically designed to execute them, and (b) can’t do anything else!

We also see that the inputs to the most successful parallel algorithms (sorting, string matching, fast-Fourier transform, matrix operations, image reverse quantization, and the like) all look rather alike. They tend to have a metric structure and an implied distinction between “near” and “far” in the data that allows it to be carved into patches such that coupling between elements far from each other is negligible.
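
To make the “patches” idea concrete, here is a minimal Go sketch (illustrative only; the function and variable names are invented) of the embarrassingly-parallel case, where each worker owns one contiguous patch of the data and never needs to talk to the others:

```go
// Illustrative sketch: data with perfect locality can be carved into
// patches and handed to workers that never communicate with each other.
package main

import (
    "fmt"
    "sync"
)

// parallelScale multiplies every element by factor, one contiguous patch
// per worker. No locks are needed because no two workers share elements.
func parallelScale(data []float64, factor float64, workers int) {
    var wg sync.WaitGroup
    chunk := (len(data) + workers - 1) / workers
    for w := 0; w < workers; w++ {
        lo := w * chunk
        if lo >= len(data) {
            break
        }
        hi := lo + chunk
        if hi > len(data) {
            hi = len(data)
        }
        wg.Add(1)
        go func(patch []float64) { // each goroutine touches only its own patch
            defer wg.Done()
            for i := range patch {
                patch[i] *= factor
            }
        }(data[lo:hi])
    }
    wg.Wait()
}

func main() {
    xs := []float64{1, 2, 3, 4, 5, 6, 7, 8}
    parallelScale(xs, 10, 4)
    fmt.Println(xs) // [10 20 30 40 50 60 70 80]
}
```

The moment elements start depending on distant elements, that clean decomposition – and the easy speedup – goes away.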

In the terms of an earlier post on semantic locality, parallel methods seem to be applicable mainly when the data has good locality. And they run best on hardware which – like the systolic-array processors at the heart of GPUs – is designed to support only “near” communication, between close-by elements.

By contrast, writing software that does effective divide-and-conquer for input with bad locality on a collection of general-purpose (Von Neumann architecture) computers is notoriously difficult.

We can sum this up with a heuristic: Your odds of being able to apply parallel-computing techniques to a problem are inversely proportional to the degree of irreducible semantic nonlocality in your input data.

Another limit on parallel computing is that some important algorithms can’t be parallelized at all – provably so. In the blog post where I first explored this territory I coined the term “SICK algorithm”, with the SICK expanded to “Serial, Intrinsically – Cope, Kiddo!” Important examples include but are not limited to: Dijkstra’s n-least-paths algorithm; cycle detection in directed graphs (with implications for 3-SAT solvers); depth-first search; computing the nth term in a cryptographic hash chain; network-flow optimization.

Bad locality in the input data is implicated here, too, especially in graph- and tree-structure contexts. Cryptographic hash chains can’t be parallelized because their entries have to be computed in strict time order – a strictness which is actually important for validating the chain against tampering.
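
A minimal Go sketch of that strict ordering (purely illustrative; no particular chain format is implied):

```go
// Illustrative sketch: link n of a hash chain is the hash of link n-1,
// so there is no way to start on link n before link n-1 exists.
package main

import (
    "crypto/sha256"
    "fmt"
)

// hashChain returns the nth link of a chain seeded with seed.
func hashChain(seed []byte, n int) [32]byte {
    h := sha256.Sum256(seed)
    for i := 1; i < n; i++ {
        h = sha256.Sum256(h[:]) // loop-carried dependency: inherently serial
    }
    return h
}

func main() {
    fmt.Printf("%x\n", hashChain([]byte("genesis"), 1000))
}
```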

There’s a blocking rule here: You can’t parallelize if a SICK algorithm is in the way.

We’re not done. There are at least two other classes of blocker that you will frequently hit.

One is not having the right tools. Most languages don’t support anything but mutex-and-mailbox, which has the advantage that the primitives are easy to implement but the disadvantage that it induces horrible complexity explosions and is nigh-impossible to model accurately in your head at scales over about four interacting locks.

If you are lucky you may get some use out of a more tractable primitive set like Go channels (aka Communicating Sequential Processes) or the ownership/send/sync system in Rust. But the truth is, we don’t really know what the “right” language primitives are for parallelism on von-Neumann-architecture computers. And there may not even be one right set of primitives; there might be two, three, or more different sets of primitives appropriate for different problem domains, but as incommensurable as one and the square root of two. At the present state of the art in 2018 nobody actually knows.
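
For concreteness, here is a minimal sketch of the channels-and-goroutines style (names and numbers invented for the example); the point is that workers coordinate by passing messages rather than by juggling shared locks:

```go
// Illustrative sketch of the CSP-ish style Go encourages: a fixed pool of
// workers pulls jobs from one channel and pushes results to another.
package main

import (
    "fmt"
    "sync"
)

func main() {
    jobs := make(chan int)
    results := make(chan int)
    var wg sync.WaitGroup

    for w := 0; w < 4; w++ { // four workers, no mutexes in sight
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := range jobs {
                results <- j * j
            }
        }()
    }
    go func() { // close results once every worker has finished
        wg.Wait()
        close(results)
    }()
    go func() { // feed the work in and signal that there is no more
        for i := 1; i <= 10; i++ {
            jobs <- i
        }
        close(jobs)
    }()

    for r := range results { // squares arrive in whatever order they finish
        fmt.Println(r)
    }
}
```

Changing the worker count changes nothing about the program’s structure, which is a large part of why this primitive set is easier to reason about than raw locks.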

Last but not least, there are the limitations of human wetware. Even given a tractable algorithm, a data representation with good locality, and sharp tools, parallel programming seems to be just plain difficult for human beings, even when the algorithm being applied is quite simple. Our brains are not all that good at modelling the simpler state spaces of purely serial programs, and much less so at parallel ones.

We know this because there is plenty of real-world evidence that debugging implementations of parallelizing code is worse than merely _difficult_ for humans. Race conditions, deadlocks, livelocks, and insidious data corruption due to subtly unsafe orders of operation plague all such attempts.

Having a grasp on these limits has, I think, been growing steadily more important since the collapse of Dennard scaling. Due to all of these bottlenecks in the supply of code that can use multiple cores effectively, some percentage of the multicore hardware out there must be running software that will never saturate its cores; or, to look at it from the other end, the hardware is overbuilt for its job load. How much money and effort are we wasting this way?

Processor vendors would love you to overestimate the functional gain from snazzy new silicon with ever larger multi-core counts; however else will they extract enough of your money to cover the eye-watering cost of their chip fabs and still make a profit? So there’s a lot of marketing push out there that aims to distract capacity planners from ever wondering when those gains are real.

And, to be fair, some places they are. The kind of servers that live in rack mounts and handle hundreds of thousands of concurrent transactions per second probably have their core count matched to their job load fairly well. Smartphones or embedded systems, too – in both these extreme cases a lot of effort goes into minimizing build costs and power budgets, and that’s going to exert selective pressure against overprovisioning.

But for typical desktop and laptop users? I have dark suspicions. It’s hard to know, because we’ve been collecting real performance gains due to other technology changes like the shift from spinning-rust to solid-state mass storage. Gains like that are easy to mistake for an effect of more CPU throughput unless you’re profiling carefully.

But here’s the shape of my suspicion:

1. For most desktop/laptop users the only seriously parallel computing that ever takes place on their computers is in their graphics chips.

2. More than two processor cores is usually just wasteful hotrodding. Operating systems may be able to parcel out applications between them, but the general run of application software is unable to exploit parallelism and it is rare for most users to run enough different processor-hungry applications simultaneously to saturate their hardware that way.

3. Consequently, most of the processing units now deployed in 4-core-and-up machines are doing nothing most of the time but generating waste heat.

My regulars include a lot of people who are likely to be able to comment intelligently on this suspicion. It will be interesting to see what they have to say.

UPDATE: A commenter on G+ points out that one interesting use case for multicores is compiling code really quickly. Source for a language like C has good locality – it can be compiled in well-separated units (source files) into object files that are later joined by a linker.

136 comments

  1. What about fighting to adopt another architecture than von Neumann’s in the long term, perhaps favouring a functional programming approach?

    1. There certainly are advantages (and disadvantages) to the functional approach, especially in a purer form that, for example, enforces immutable data structures and eschews side effects. However, I don’t think that would get at the issues that ESR has enumerated regarding the difficulties of parallelism in particular.

      Obviously, it does nothing for algorithms that must be inherently serial, so I’m guessing you’re getting at the human element and language design elements. On the latter, I’d say that the functional paradigm does little that is obvious to improve the lack of clearly thought out parallel language primitives – that is to say, the best way to approach it is not more clearly obvious in a functional setting than in an imperative one. And this is borne out by the lack of functional languages that even attempt to do this decently.

      Actually a better case might be made for a constraint-based/logical language paradigm like that of PROLOG. It may be more natural to consider what interweaving processing streams should or should not be allowed to do. But this also runs squarely into the other problem ESR mentioned – human tractability. It is, I think, no coincidence that the best-known constraint/logic-based programming language is still an academic toy, and that asserts and constraints are underutilized even in other contexts. I believe it is because people really have difficulty thinking that way.

      Even people who should be thinking like this, like people who program test cases, often see their tests more like FSAs and look for accept responses, and less like constraint engines.

      Now this way of thinking is something that could, and I think arguably _should_, be trained much more into programmers. But this would still be nibbling around the edges, I think. The ability to think in terms of constraints would be necessary, but not sufficient. Because human brains have certain of the same issues computers have – we have massive parallelism in terms of things like visual pattern recognition, but conscious thought and even things like memory retrieval in humans appears to be almost wholly serial – and probably necessarily so (just try to picture two scenes from your memory in your mind at the same time!). And this brings with it a difficulty in trying to think about concurrency, especially if there are too many threads.

      1. Thank you David, I have heard of Prolog but never paid much attention to it until now; will surely check it later.

        Greetings.

        1. Fact-based languages like Prolog are gaining traction as deployment shifts from functional models to logical models (Ansible, Chef, Puppet, etc.). Fact-based languages simplify the task of verifying that an input matches certain patterns. For a concrete example, see Open Policy Agent — it uses a fact-based language to do authorization and policy enforcement for HTTP APIs, Docker, Kubernetes, Kafka, and sudo.

        2. David, Cleverson, et al.:

          Have any of you heard of a Pique engine? This is an abstract engine, capable of computing logical queries in a manner similar to that of a Warren Abstract Machine (WAM), the mathematical core of any Prolog compiler, one key distinction being that a Pique engine was designed to work on parallel architectures.

          A previous company of mine did significant development constructing an actual implementation, but I’m having a devil of a time finding a link to the original paper…

          1. I had not heard of that, but color me intrigued. I’ll look into that, and thanks for mentioning it!

    2. Doesn’t do much for the sequentiality inherent to most programs. You may get a speed up if there’s some pipelining that’s available, maybe, and the programming paradigm in your language allows streaming. But that’s not necessarily requiring non-von Neumann machines, I think.

  2. Eric, I agree overall. I’m in a slightly different field, but have seen what you describe here.

    The only time my computers load all of their cores at 100% at once, is when I am performing 3D rendering for architectural visualization.

    I’ve found that the modern “MegaDoomDestroyer” video cards ray trace *much* faster per dollar than even the $1,000-and-up CPUs like Threadripper and Skylake-X. Rendering software (Octane Render and Unreal Engine 4) now can use the video card to calculate lighting, meaning I spent $300 on my CPU and several times that on cards–and that setup performs faster than a CPU-heavy one. Rendering software is now very similar to computer game technology, so this isn’t surprising.

    1. Rendering software is now very similar to computer game technology, so this isn’t surprising.

      Which is the cause, and which the effect?

      1. Computer game technology *is* rendering, so it’s only natural that non-game rendering would follow suit.

        It’s my understanding that GPUs became more generalized, and non-graphic GPUs were developed, after someone demonstrated the ability to use GPUs to do matrix multiplication and (at the time necessarily) outputted the results onto video memory. It became clear shortly after that, that GPUs would be a fantastic plaything for engineers and meteorologists doing physical simulations (among other things).

  3. My multi-core CPU work use case is, I think, very common today — virtual server systems. We commonly have about 50 virtual servers running across 6 physical hosts 24×7. We currently have 80 to 88 physical cores per physical host, and are working our way past 10% utilization (we’ve barely started migrating off the old VM system). We are also working to transition GPU-heavy users to virtual machines with the NVIDIA GRID system. Due to RAM configuration, I don’t think we’ll get past about 25% core utilization on these systems.

    Lesson learned here is that you have to scale RAM much faster than CPU on a per VM basis, I think due to the low per-VM core usage in our use case, and the much better capability to share CPU cores vs no capability to share RAM (we do not allow ANY swapping!).

    1. Virtualization was the savior of Enterprise multi-core data center CPU vendors. It allowed racks of low-end servers running at ~<5% utilization to be consolidated into far fewer high-end servers running at whatever utilization Management would pay for, typically 50% or so, though it could be driven higher – above 75% there is often not enough headroom for peak loads. As you point out, this requires enough physical memory to hold the workloads.

      AFAIK that trend has been petering out. Current high-end machines with ~32-64 cores and 6TB RAM should be running 500+ VMs, but last I checked most Enterprise customers were topping out at ~75-100 VMs / box. This was due to several issues:
      (a) fear of too many eggs in one basket
      (b) cost of low-end 1U and 2U servers vs 4U high-end machines
      (c) moving workload to "cloud"
      (d) recommendation from dominant virtualization software vendor which is deathly afraid that further consolidation will kill license revenue

      With some work on the virtualization system's scheduler there is no technical reason current x86 server hardware (like a fully-configured HP DL580G10) can't run a typical enterprise mix of 500+ Linux and Windows VMs other than laziness, fear, and greed.

      Virtualization fits esr’s model of non-locality allowing efficient parallelism.

      1. AFAIK that trend has been petering out. Current high-end machines with ~32-64 cores and 6TB RAM should be running 500+ VMs, but last I checked most Enterprise customers were topping out at ~75-100 VMs / box.

        In our data centers (which run a mix of single and multi tenant cloud environments) we find that RAM is always the constraint. This could be mitigated by memory page deduplication, which as far as I can tell is in its infancy right now. The storage people have dedupe just about perfected but the virtualization people still seem to be trying to figure it out.

        Once they’ve got memory page deduplication perfected … it has all sorts of uses. The obvious one is that all of those separate copies of Linux or Windows that are running on the same node will share a single set of memory pages, making full virtualization almost as memory-efficient as containers. (Yeah … the container people will love that.) Even without virtualization, memory dedupe could put an end to shared libraries, since everyone can static link to the same libraries and the operating system will merge the shared pages on its own. Think of the kind of dependency-hell-ending benefits that could produce.

        1. Once they’ve got memory page deduplication perfected … it has all sorts of uses. The obvious one is that all of those separate copies of Linux or Windows that are running on the same node will share a single set of memory pages, making full virtualization almost as memory-efficient as containers.

          Isn’t that considered to be poor form while we’re still in the age of kernels informally written in C, where address space layout randomization is one of many techniques to raise the barrier of attacks? Although per Wikipedia and a couple of references I followed from it, side channel attacks on hardware may be making this technique obsolete.

          1. You can presumably use PIC code and PC-relative addressing (which is available on x64) to avoid having a lot of the pages refer directly to randomized addresses. That saves memory for normal shared libraries as well, even without fancy memory dedup going on.

  4. Living out in the industrial control engineer space, I gave up on new machines delivering measurable performance improvements years ago. More cores at the same speed do stuff all for processing real time, asynchronous data that has to be made serial.

    GPU off loading for decompressing video streams is pretty cool for CCTV workstations. Although that’s another good argument that increasing the number of cores in the CPU is not really helping.

    More than two processor cores is usually just wasteful hotrodding. Operating systems may be able to parcel out applications between them, but the general run of application software is unable to exploit parallelism and it is rare for most users to run enough different processor-hungry applications simultaneously to saturate their hardware that way.

    I’d say three, one for the OS, one for applications, one for cat videos!

    1. You’re on the right track. ESR is complaining that there’s little point parallelizing certain programs, but that’s not the point of multiple cores on a workstation. The point is that I might be interacting with one app with my fingers, while having several other apps running at the same time.

      The ability for the OS to have its own core (and hyperthreading to allow that core to run two threads concurrently) means there’s no excuse for any latency at all when you use the keyboard or mouse to manipulate an interface. There should be immediate feedback from the system acknowledging that action, even if it takes a bit more time before the command can be fulfilled.

      I want to be able to support a thread per device, so that every hardware interrupt is immediately serviced, with parallel memory buses to reduce contention issues.

      1. >I want to be able to support a thread per device, so that every hardware interrupt is immediately serviced, with parallel memory buses to reduce contention issues.

        I don’t think this will help as much as you seem to expect. One reason for doubt is USB polling speed; this puts a lower bound on keyboard or mouse latency of 8msec, which is large compared to any hardware interrupt time you’re likely to see on current hardware.

        While, in theory, higher polling rates could pull USB latency down as low as 1msec, good luck getting the vendors to upgrade. They like their cheap, well-amortized USB 1.1 designs and they’re not going to spend the NRE to even go to USB 2’s higher polling speeds without market pressure. Which it seems even the twitch-gamers aren’t actually delivering – otherwise we’d see gamer keyboards being competitively marketed with low link-latency claims, a thing which is not happening.

        Another issue is that delays between click/press and visible response are likely (I think) to be dominated by stages in the pipeline much later than registration of the hardware event. Dan Luu’s measurements seem to support this.

        UPDATE: What I meant by the last sentence is that if you look at Luu’s measurements of increasing latency they are clearly not dominated by USB polling times. They’re much too large for that.

        1. The main thing I’d like to see isn’t so much very short best-case latencies, as it is shortish and bounded(*) worst-case latencies. I can live with 15ms from keypress to response. I can live with a lot worse than that. What I can’t abide is no response to any input for 2000+ms. It takes a bit longer than that for me to decide that a program has definitely taken a detour off into east hyperspace of course, but there’s really no good reason I can think of for a UI to not respond at all for human-relevant periods. With, of course, the giant exception for known-long operations, non-interactive apps and so forth, but even those should probably generally show the user that something is going on and still respond to such UI elements as are still relevant at the time.

          (*) – by “bounded”, I don’t mean true hard-realtime deadlines, but things like waiting for network round-trips before acknowledging a keypress or mouseclick should be avoided. Call it really squishy realtime.

  5. “1. For most desktop/laptop users the only seriously parallel computing that ever takes place on their computers is in their graphics chips.”

    Absolutely.

    The bad part is that the hardware we have now is hitting that wall pretty much head-on. The most recent GPU cards, like the RTX series, aren’t seriously faster for standard games than the previous GTX models. The only potential large speedup is in raytracing – and a big part of that is the denoising part, done with Deep Learning Super Sampling, which in turn relies on handing some of the machine learning preprocessing over to an NVIDIA supercomputer(!).

    So the lesson is “massive parallelism is easy to program when someone else does it.”

  6. @ESR “… but the general run of application software is unable to exploit parallelism and it is rare for most users to run enough different processor-hungry applications simultaneously to saturate their hardware that way.”

    You are definitely correct. But I think this particular statement somewhat misses the point. Or, that is, what *ought* to be the point.

    I had high hopes that multi-core desktop processors would allow all non-UI-essential processes to run unnoticed in the background, leaving the main UI thread free to be super responsive to my every click and keystroke. And in that world, I wouldn’t care that my system was over-provisioned in cores and wasting a bit of power. (e.g. I bought an i7 when an i5 or even an i3 would prolly have been adequate.) So, saturating the cores is mostly a non-goal.

    Unfortunately that doesn’t seem to be the case at all. I can’t tell that the WIMP UI is any more consistently responsive on any platform than it was 15 years ago. Prolly relates to the intractable human problem again – we just don’t know how.

    1. Mouse and keyboard input usually go through a shared USB bus. CPU core count does not affect the busy little USB controller.

    2. > Unfortunately that doesn’t seem to be the case at all. I can’t tell that the WIMP UI is any more consistently responsive on any platform than it was 15 years ago.

      Really? I think responsiveness has _vastly_ improved over what it was 10 years ago, let alone 15. Granted, this is due to only a handful of factors, chiefly overprovisioning of RAM and the use of SSD’s. 10 or 15 years ago, the average machine would be swapping out to spinning rust on a constant basis (even while running Linux!) which killed responsiveness. Today I can install a _very_ lightweight desktop environment like Xfce and LXDE (and still enjoy good usability – they’re nothing like the “light” wm’s/desktops of 10 years ago), and a reasonably modern machine will not only _not_ swap, but plenty of RAM will be available for caching. And dual+ cores (compared to single-core CPUs 10 or 15 years ago) also have an impact, though ESR may be right to some extent that more than 2 cores is overkill.

      1. I agree with Michael here. Even on new machines I find that aggravatingly slow response to UI actions is common. You’d think that a modern OS would pay some attention to making sure that *something* visibly happens as soon as you press a key or click a mouse button. What makes it doubly aggravating is that I generally can’t find an explanation for the torpid behavior — it happens even when CPU usage and memory usage are moderate.

        1. Dan Luu did a couple of interesting blog posts on keyboard latency last year.

          https://danluu.com/input-lag/
          https://danluu.com/keyboard-latency/

          tl;dr: it really is dramatically worse on most modern machinery. An Apple IIe from 1983 was the clear winner, largely because a keystroke more or less gets directly written to display memory instead of going through however many intermediate processes, buffers & transformations.

          1. The observation about input latency is interesting, but I think beside the point: the objectionable behavior of modern UIs, to me, is not that there’s a longer delay between keystroke and screen response in the optimal case than there was in the C64 days. It’s that, when I press a key, the UI does not respond at all for a human-relevant period, and then suddenly responds to all the queued keystrokes at once. (This is more noticeable on smartphones than desktops/laptops.)

            There are a few forces at work here – back in the single-tasking days, if the UI of your program went catatonic, it was either blocked on a long computation (in which case, a faster computer would make the UI more responsive by shortening these delays; it could be blocking on IO too, but a “faster” machine probably had faster IO also) or it’s gone off in the weeds somewhere, probably never to return. You could eventually get a feel for a particular app to know which of these two bogosities you were encountering at the time. Modern desktops seem to be pretty good at keeping one errant app from freezing all input (usually), but it’s still distressingly common to have apps go off into la-la land for a few seconds while they wait for *something* to happen in the background. That something is not usually a long calculation for which either more cores or faster cores would help much, and it’s rarely clear what’s going on. It’s really annoying on smartphones, where a foreground app misbehaving can make it seem like the entire device has become wedged (with no response whatsoever – more than a few times I’ve had my phone suddenly wake back up right as I was prying the back cover off to yank out the battery!)

            What I’d like to see is for UI interaction to be handled more-or-less as a soft realtime system, where just “freezing” and not responding in any way is emphatically not acceptable. The system must either respond to input, give some clear indication of the fact that it cannot and why not (an “I’m thinking” light of some kind, to be used sparingly), or, if catatonia absolutely cannot be avoided, then it should throw away input older than a second or so to avoid the “sudden flurry of unintended actions caused by sudden PLOKTA processing” misfeature.
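
            Something like this illustrative Go sketch (not any real toolkit’s API; names invented) is all I mean by throwing away stale input:

            ```go
            // Illustrative sketch: when the handler finally wakes up, events
            // older than a cutoff are dropped instead of replayed as a flurry.
            package main

            import (
                "fmt"
                "time"
            )

            type event struct {
                key  rune
                when time.Time
            }

            func drain(queue <-chan event, maxAge time.Duration, handle func(event)) {
                for ev := range queue {
                    if time.Since(ev.when) > maxAge {
                        continue // too old; the user has moved on
                    }
                    handle(ev)
                }
            }

            func main() {
                queue := make(chan event, 64)
                queue <- event{'a', time.Now().Add(-2 * time.Second)} // stale, dropped
                queue <- event{'b', time.Now()}                       // fresh, handled
                close(queue)
                drain(queue, time.Second, func(ev event) { fmt.Printf("handled %q\n", ev.key) })
            }
            ```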

      2. The key word in my quote is consistently.

        Yes, RAM and SSDs have overall sped up many things. Tho most gains soon enough get chewed up by software bloat, methinks.

        But none of it explains the slow downs, near lock-ups, and click-click and nothing happens behaviors that are still common.

        So, empirically, I guess we say it was never due to CPU speed, CPU cores, RAM, hard drive speed, or network speed. None of those are a limiting factor nowadays.

        1. I think some of that is that sometimes stuff gets swapped despite *no memory pressure*. Windows is especially bad in this regard, as, from what I’ve read, it seems to require that backing store exist before it will allocate memory. Linux is generally a lot better, but I find that running backups on my system tends to fill all available memory with dentry and inode cache data, which forces out page cache for files, and even causes some application memory to be swapped out. My backup software uses the nocache wrapper to prevent memory from being filled with useless pagecache of files touched during the backup, but unfortunately that doesn’t seem to prevent the kernel from spamming memory with the dentry and inode caches.

          1. I’m tending to blame it on the single-threaded nature of the event loop that is the basis of (all?) GUI toolkits. In that environment most anything can block it long enough to be noticeable to the user.
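
            A minimal Go sketch (illustrative only, not any toolkit’s real main loop) of the usual workaround – ship the slow work off the loop and deliver its result back as just another event:

            ```go
            // Illustrative sketch: the loop itself never blocks; slow work runs
            // in a goroutine and its result arrives on a channel like any event.
            package main

            import (
                "fmt"
                "time"
            )

            func slowQuery() string { // stands in for disk, network, whatever
                time.Sleep(300 * time.Millisecond)
                return "result"
            }

            func main() {
                events := make(chan string, 16)

                go func() { // pretend UI events keep arriving
                    for i := 0; i < 5; i++ {
                        events <- fmt.Sprintf("click %d", i)
                        time.Sleep(100 * time.Millisecond)
                    }
                    close(events)
                }()

                done := make(chan string, 1)
                go func() { done <- slowQuery() }() // blocking work kept off the loop

                for {
                    select {
                    case ev, ok := <-events:
                        if !ok {
                            return
                        }
                        fmt.Println("handled", ev) // loop stays responsive
                    case r := <-done:
                        fmt.Println("slow work finished:", r)
                    }
                }
            }
            ```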

  7. Interestingly enough, my first published algorithm was a parallel version of Dijkstra’s shortest path. The trick is to use really massive parallelism, i.e. content addressable memory. My dissertation gives a good overview of the landscapes of architectures and algorithms.
    The major advantage of CAM over CSP-style parallelism is that the algorithms are MUCH simpler and easier to understand.

    1. >The trick is to use really massive parallelism, i.e. content addressable memory

      Yeah, that kind of dodge is why I was careful to specify Von Neumann machines. Wisenheimer. :-)

      1. Wouldn’t a hash table allow you to get the needed properties of content-addressable memory via a Von Neumann machine?

        1. You can add layers of indirection, in the worst case implementing a logical non-Von Neumann machine. The problem is that adds overhead to the algorithm and complexity (and errors) to the program. Depending on the size of the dataset, gains from parallelism are often swamped out by losses due to complexity. In the best case, the wall time decreases, but the total computer time increases.
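
          As an illustration of that indirection, here is a minimal Go sketch (invented names) of a hash-map-backed “poor man’s CAM”; note the bookkeeping every write now has to do:

          ```go
          // Illustrative sketch: ordinary indexed storage paired with an
          // inverted index, so "which cells hold value v?" is one map lookup.
          package main

          import "fmt"

          type cam struct {
              cells map[int]int   // index -> value
              where map[int][]int // value -> indices currently holding it
          }

          func newCAM() *cam {
              return &cam{cells: map[int]int{}, where: map[int][]int{}}
          }

          func (c *cam) write(i, v int) {
              if old, ok := c.cells[i]; ok {
                  // Bookkeeping on every write: the overhead mentioned above.
                  s := c.where[old]
                  for k, idx := range s {
                      if idx == i {
                          c.where[old] = append(s[:k], s[k+1:]...)
                          break
                      }
                  }
              }
              c.cells[i] = v
              c.where[v] = append(c.where[v], i)
          }

          func (c *cam) match(v int) []int { return c.where[v] }

          func main() {
              c := newCAM()
              c.write(3, 42)
              c.write(5, 42)
              c.write(3, 7)
              fmt.Println(c.match(42)) // [5]
              fmt.Println(c.match(7))  // [3]
          }
          ```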

          1. Complexity can be managed by well-defined interfaces and practices.
            Between large-scale datasets, hash tables (implemented as SSTables) and a few other blocks, you’ve basically described Google from about a decade ago.

            1. So complexity doesn’t just make avoiding errors more difficult (which can be offset by good practices). It increases the constant term (O), or adds additional terms to scaling. For example, the pure linear implementation of Dijkstra’s shortest path scales as O(n*n) where n is the number of nodes. The best parallel algorithm for it, on Von Neumann machines, scales as O(n*n/p + n*log(p)), where p is the number of parallel workers. Unmentioned is the size of O increases noticeably too. For square mesh networks with fewer than 2,500 nodes, the linear algorithm is faster, and with the parallel algorithm, each additional node slows it down. Somewhere between 2,500 and 10,000 nodes was the break even point, as of 2012. If you look for parallel Dijkstra’s, the presentation from buffalo.edu is pretty interesting.

              Another real world example is the pypy-stm branch, which attempted to remove the global interpreter lock from python. The whole idea was to use speculative execution to let multiple python (pypy) threads run in parallel. Each thread would run a reasonable section of bytecode, then try to commit its changes to the shared memory. If there were no collisions with another thread’s changes, the changes would get committed. If there were conflicts, the thread state would get rolled back and it would try again. It was successful, sorta. They got it working, and for certain workloads it was faster than the normal pypy version. Unfortunately, the added overhead meant that a single thread would execute at about 80% of the speed of a normal pypy version. That means, assuming no collisions, and a perfectly parallel workload, adding a second thread pegs a second CPU core at 100%, and only gets you 60% faster than straight pypy wall time.

      2. I would argue that actually the CAM (SIMD) model is a lot closer to the original von Neumann idea than virtually any MIMD model is. There’s one processor, one memory, one instruction pointer in the program. The main difference is in the kinds of things the processor can ask the memory: is there something with value X, how many between X and Y, what’s the sum or max or whatever of the values. But it’s all single-thread, single-value operations as far as the CPU is concerned.
        Any MIMD machine has a more complicated top-level architecture and more (as you note) complex programming that is much less intuitive.

  8. If you are going to be keeping the hardware for a long time, extra cores may provide redundancy if the failures are independent. Which might let you nurse the thing along longer before needing to scare up a replacement.

    First takeaway, designing a parallel computing solution for minimum complexity, and using that to reduce software maintenance costs for long term use may be difficult. There may be some fundamental work to do to make that process easier, if it can be made easier. Especially if you want a high level of verifiable reliability and security.

    Second, understanding how to apply parallel computing to a distributed controls problem may also be difficult in that it requires understanding both parallel computing and an unusual part of optimal control theory.

  9. A few thoughts from experience:

    It appears to me that the complexity of the locking system which can be handled by humans is proportional to how well it can be modeled formally. The most complex system I dealt with had a ~10 layer R/W lock hierarchy. But it was strictly-defined and guaranteed to be dining-philosopher free. So when something needed to be implemented you simply went down the list, in order, of the locks you needed to acquire and acquired them. Releasing was done automatically in reverse order. Systems where all of the possible lock contention is implied, undocumented or undocumentable, are nearly impossible to work with and have large amounts of “WTF” issues which are difficult to resolve.
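
    A minimal Go sketch of that discipline (illustrative only; plain mutexes standing in for the R/W locks, names invented):

    ```go
    // Illustrative sketch: locks are always taken in ascending rank and
    // released in reverse, which rules out the circular waits behind deadlock.
    package main

    import (
        "fmt"
        "sort"
        "sync"
    )

    // rankedLock carries its position in the documented hierarchy.
    type rankedLock struct {
        rank int
        mu   sync.Mutex
    }

    // withLocks sorts the requested locks by rank, acquires them in order,
    // runs fn, then releases them automatically in reverse order.
    func withLocks(fn func(), locks ...*rankedLock) {
        sort.Slice(locks, func(i, j int) bool { return locks[i].rank < locks[j].rank })
        for _, l := range locks {
            l.mu.Lock() // always ascending rank
        }
        defer func() {
            for i := len(locks) - 1; i >= 0; i-- {
                locks[i].mu.Unlock() // reverse order on the way out
            }
        }()
        fn()
    }

    func main() {
        accounts := &rankedLock{rank: 1}
        journal := &rankedLock{rank: 2}
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                // Callers may name the locks in any order; withLocks imposes the hierarchy.
                withLocks(func() { fmt.Println("transfer posted") }, journal, accounts)
            }()
        }
        wg.Wait()
    }
    ```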

    At least some of the SICK algorithms are likely able to be parallelized in a data-driven way. For example, if you gather statistical data on your depth-first search queries, it’s possible to split the search some fraction of the way up the tree so that significant sections of subtrees are processed in parallel.

    Others, such as cycle detection, might be able to be parallelized with additional algorithm research. I don’t claim it’s easy so much as I don’t have an intuitive sense that it’s impossible as with crypto chain calculations.

  10. “…Consequently, most of the processing units now deployed in 4-core-and-up machines are doing nothing most of the time but generating waste heat.”

    So what?

    If even a small fraction of time most or all of the processors are utilized, then the cost may well be worth it.

    For example, as someone above pointed out, multiple cores often mean faster compile and build. Even if the $200/hr (cost) programmer only is compiling/building a fraction of the time, the extra CPU cost and electricity quickly pays for itself.

    For example, I build robots for agricultural applications and 95+% of the time the cores are below 50% average utilization. But if for even a fairly short period of time the average utilization approaches 100%, the application falls behind and fails and can’t be used. And, these applications are pretty straightforward to pipeline such that all of the cores are utilized. I can always use more cores. And faster GPUs too.

    Even for computers that are never pushed to their limits, so what? We’re talking a few hundred dollars that might be wasted, but might be utilized in the future. Not a big waste, in my opinion, for potential usefulness.

    1. >So what?

      I wasn’t objecting to the waste so much as to the widespread expectation (or possibly wistful hope) that More Cores Will Solve Everything.

  11. I got my brain fried one day when I learned a task I *thought* was inherently serial was actually parallelizable – computing the digits of pi. Surely, you have to know digits 1-748 to compute digit 749, right? So, I am leery of adding blockchain and crypto computations to the inherently SICK list.
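
    (For the curious: the digit-extraction trick here is presumably the Bailey–Borwein–Plouffe formula, which yields the nth hexadecimal digit of pi without computing the earlier ones:)

    ```latex
    \pi = \sum_{k=0}^{\infty} \frac{1}{16^{k}}
          \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)
    ```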

    1. Consider that blockchain and crypto computations are designed and mathematically proven to require serial execution.

      1. Until the proof came out – would you have dreamed pi could be computed in parallel? Designed, yes, I’ll buy that. Mathematically proven? I doubt it. I expect that there’s at least one hidden assumption that will come back to bite those things.

        1. I can think of one cipher system (technically, an entire class of systems) that can be decrypted in parallel: block ciphers. [Technically some hashes can work in parallel too, as that’s what custom bitcoin chips are all about.] However, I don’t believe this will ever apply to encrypting block ciphers, as it’s been known for 40 years now — at least within the NSA — that such ciphers can be susceptible to a process known as “differential cryptanalysis”. I’m not a true expert on this, but here’s my best shot at explaining this for a non-cryptographer.

          The idea behind differential analysis is that because block ciphers are symmetrical, they can be treated as reversible mathematical functions and composed accordingly. Therefore, taking two cyphertexts (encrypted by the same key) and merging them with XOR, another reversible function, reveals information regarding the encoding of the XOR’d plaintexts (hence, “differential”). The more the XOR of the two plaintexts should be just zeroes — say, because both blocks contain similar file formatting or network header data — the faster you can eliminate possible keys used.

          In order to reduce the number of vulnerable blocks, most modern block cypher systems “chain” blocks together: the encrypted result of block 1 is XOR’d (before encryption) with the plain text of block 2, then the result of block 2 XOR’d with block 3, and so on. [Block 1 must be XOR’d with a pseudorandom “block 0”, which also must be unique among all encrypted messages lest differential attack become possible again.] As a result encryption is an entirely serial process (as block n cannot be encrypted without the result of block n-1).
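
          A minimal Go sketch of that chaining (CBC mode; demo key and IV hardcoded, padding omitted) makes the serial dependency explicit:

          ```go
          // Illustrative sketch of CBC chaining: each plaintext block is XORed
          // with the previous ciphertext block before encryption, so block n
          // cannot be encrypted until block n-1 is done.
          package main

          import (
              "crypto/aes"
              "fmt"
          )

          func encryptCBC(key, iv, plaintext []byte) []byte {
              block, err := aes.NewCipher(key)
              if err != nil {
                  panic(err)
              }
              bs := block.BlockSize() // 16 bytes for AES
              out := make([]byte, len(plaintext))
              prev := iv // the pseudorandom "block 0"
              for i := 0; i+bs <= len(plaintext); i += bs {
                  chunk := make([]byte, bs)
                  for j := 0; j < bs; j++ {
                      chunk[j] = plaintext[i+j] ^ prev[j] // mix in previous ciphertext
                  }
                  block.Encrypt(out[i:i+bs], chunk)
                  prev = out[i : i+bs] // the serial dependency
              }
              return out
          }

          func main() {
              key := []byte("0123456789abcdef")                    // 16-byte AES-128 key (demo only)
              iv := []byte("unique-per-msg!!")                      // must be unique per message
              msg := []byte("sixteen byte msgsixteen byte msg")     // two full blocks
              fmt.Printf("%x\n", encryptCBC(key, iv, msg))
          }
          ```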

          As far as anything other than hashing or block ciphers having practical parallel implementations? I doubt it: either best practices suggest doing encryption only once [using asymmetric public keys to secure an “ephemeral” single-use block cipher key, then using the ephemeral key for the full message] or parallelization would imply a break of the system [such as a being able to predict future states of a stream cipher’s key].

  12. >Our brains are not all that good at modelling the simpler state spaces of purely serial programs, and much less so at parallel ones.

    I think this may be exacerbated by the fact that (among males, at least), programming ability seems to correlate with ADHD/autism spectrum brain types, which tend to have deficiencies in parallel processing and poor interrupt response.

  13. @esr: I concur, and I see several things going on.

    Your comment about easy performance gains coming from converting from spinning platter to solid state drives points to one of them. The sort of tasks most computers perform are I/O bound, not compute bound, and faster processors alone don’t improve performance, because “all machines wait at the same speed”. Anything that speeds I/O boosts perceived performance.

    (IIRC, you ran into the problem of being I/O bound trying to tweak your Python code for reposurgeon, but the symptoms were such that the fact your code was I/O bound was hidden below the level at which your diagnostics could detect it.)

    For things that are compute bound, parallel processing is not necessarily a fix, because the work must be broken into chunks that can be processed in parallel, and that requires the locality of data you talk about.

    And as you point out, parallel processing is intrinsically harder for human programmers to code for. I saw studies years back talking about the average developer being able to consciously track 5 to 7 things at a time. Beyond that, bugs appeared because the programmer simply lost track of something – the developer’s wetware stack wasn’t deep enough. And this happened in purely sequential processing before parallel processing became a thing.

    I see languages like Go and Rust as specifically trying to make parallel processing easier to code, with better primitives and compilers that try to make sure you aren’t stepping on your own toes. But while that helps, it doesn’t do much for the developer’s finite bandwidth in understanding what is going on.

    I have a 4-core i5 processor in my desktop, running at 3.1 – 3.4 GHz. (In actual fact, it usually runs slower by at least a GHz because there’s nothing going on that needs the extra speed, and current CPUs tend to assume laptops where battery is the scarce resource and higher CPU speeds mean higher power consumption.) I think I can count on the fingers of one hand the number of times I’ve seen all four cores maxed out (and I was doing something inherently parallel), let alone all being used at once. And while additional cores theoretically should make it possible to run background tasks on the others and reserve the first core for foreground stuff the user interacts with, we still have a way to go in OSes that can intelligently distribute tasks in that manner. (Some of those “background” tasks may have to deal with non-localized data, or have to coordinate with other background tasks on other cores with the complexity of inter-process messaging.)

    The G+ comment that talked about multi-core benefits in software builds is not a surprise. That’s well suited to parallelization with good locality of data. The question is what other common tasks exhibit those characteristics.

    We are rapidly hitting walls in processor speed, but the shoe doesn’t pinch that hard because the sort of things we normally do don’t need raw speed. We are in effect going wide – with bigger address spaces and multiple cores, instead of deep with faster CPUs.

    The bigger question is where hitting the wall on processor speed will really bite? What sort of tasks will be impeded because they can’t be done fast enough? (With the added question of what is fast enough – I draw a distinction between “I don’t want to wait all day for this to finish” and “If this cannot be done in X amount of time, lives may be at risk.”)

    >Dennis

    1. >For things that are compute bound, parallel processing is not necessarily a fix, because the work must be broken into chunks that can be processed in parallel, and that requires the locality of data you talk about.

      Really though, non-locality of data in parallel algorithms is just I/O boundness within the program’s state machine, rather than between physical components of the computer.

      EDIT: In other words “I/O bound” means “the data isn’t here”.

  14. “like like [sic] the systolic-array processors at the heart of GPUs”

    I realize you like to bandy around domain terminology, but here, as elsewhere, you are quite specifically and unambiguously talking out of your ass. GPUs don’t use systolic arrays. At least one of these is true: a) you don’t know what systolic arrays are b) you don’t understand how GPUs work. Probably it’s both.

    1. >At least one of these is true: a) you don’t know what systolic arrays are b) you don’t understand how GPUs work. Probably it’s both.

      OK, chain of provenance. In fact I do know what a systolic array is, and have since the first descriptions of those architectures were declassified in the early 1980s. I have not forgotten that systolic arrays were originally developed for image processing in military radars. At that time I didn’t know enough about signal processing to have any feel for how they were used; later I learned enough to place a bet that at least one of their applications was de-noising.

      My belief that there are systolic arrays inside GPUs is not personal knowledge, but comes from my old friend Perry Metzger, who mentioned this during a text chat we were having last week. His mention made sense to me because GPUs need to de-noise, too. It wouldn’t be in the least surprising for systolic arrays to be deployed that way.

      I dismiss as wildly unlikely the possibility that Perry was talking out of his ass. The likeliest possibilities, then, are (1) you are dead wrong, or (2) there are minor variations in the semantic reach of the term “systolic array” within relevant expert communities, with you and Perry happening to be from different ones.

      I choose to believe (2), subject to disconfirmation by evidence.

      1. On the contrary, systolic array has a well defined meaning that applies to no GPU designs at all. You are grossly misinformed. Again.

        Hardware architectural reality is not contingent on what you “choose to believe”.

          1. >The NVIDIA RTX GPUs have a tensor processor which is based off of systolic arrays.

            …which detail I did not know, but Perry doubtless does.

            So “Pence’s Chaperone” is either ignorant of the facts, or a troll, or both. Color me shocked.

          2. >The NVIDIA RTX GPUs have a tensor processor which is based off of systolic arrays.

            Pence’s Chaperone may not be entirely wrong, though. It is possible that this is actually a software simulation of a systolic array running on a mesh. That’s what Systolic Optimization on GPU Platforms describes, but I don’t know whether the RTX is an instance of the Compute Unified Device Architecture.

            In this paper there’s an illuminating illustration of three different multiprocessor topologies – systolic array, mesh, and shared-memory – embedded in a critique of Google’s use of a systolic array in their Tensor Processing Unit. It states plainly that systolic arrays are good for large tensor multiplications.

        1. All right, so to me, “systolic” means “the maximum amount of pressure in the arteries at the peak of the heart’s compression cycle”. Just what the hell does “systolic array” mean?

      2. That would be a fixed-function part, though, and fixed function is rapidly becoming negligible in modern GPUs. The commonly-used programming model for GPU compute has nothing to do with systolic arrays; if anything, it’s closer to having barrel processors as the individual “cores” in a multicore system, albeit the “cores” are actually using SIMD execution units under the hood to provide parallelism, not switching tasks as barrel processors did.

      3. Having browsed the design documents and RTL of one of the major current desktop GPUs, I can assure you that it absolutely doesn’t fit the Wikipedia definition of systolic array.

        It’s possible that some people would consider some minor parts of the design systolic arrays (e.g. somebody mentioned tensor units, although I’d say even that is questionable). The overall system design of the graphics part (ignoring the display controller, media codecs, etc.) is much better described as a bunch of cores with graphics-specific features that are connected by a data fabric, plus some fixed function stuff (e.g. rasterizers).

        It’s possible that the term systolic arrays was a better fit for earlier, pre-general purpose GPUs.

        1. >Having browsed the design documents and RTL of one of the major current desktop GPUs, I can assure you that it absolutely doesn’t fit the Wikipedia definition of systolic array.

          I don’t think anyone in this conversation, nor Perry who’s not here, has supposed that an entire GPU looks like a systolic array. Just that systolic arrays are components in these things.

          >better described as a bunch of cores with graphics-specific features that are connected by a data fabric

          …which is sometimes used to simulate a systolic array. That makes sense.

          On the other hand, we now have a cite that establishes the presence of a real, hardware-level systolic array in the Google TPU, so it is possible that Chad Irby was actually thinking of that device when he replied to Pence’s Chaperone, and that Perry was thinking of it during our conversation last week.

          1. >On the other hand, we now have a cite that establishes the presence of a real, hardware-level systolic array in the Google TPU, so it is possible that Chad Irby was actually thinking of that device when he replied to Pence’s Chaperone, and that Perry was thinking of it during our conversation last week.

            No, not really.

            When you dig around a bit, you find that the “Tensor Cores” NVIDIA uses in its newest cards are mostly just systolic processing matrices, buried under the CUDA programming interface.

  15. Perhaps another use for parallelism is transactional processing, where multiple entities race to compute their transaction, then one of them wins and commits the transaction, and the others, whose work is disturbed by this commit, have to recompute and retry. The speculative execution inside the CPU is essentially an example of such a system, so extending it to higher levels should make sense. Perhaps this approach could use some hardware-level support, such as copy-on-write memory, where all the parallel transactions read the common state but write their changes into local memory, which on a successful commit becomes the new master copy.
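
    A minimal Go sketch (illustrative only) of that optimistic race-to-commit idea, with compare-and-swap standing in for the commit step:

    ```go
    // Illustrative sketch: each worker computes its update against a snapshot
    // and retries if some other worker committed first.
    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    func main() {
        var balance int64 // the shared "master copy"
        var retries int64
        var wg sync.WaitGroup

        for w := 0; w < 8; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for i := 0; i < 1000; i++ {
                    for {
                        snapshot := atomic.LoadInt64(&balance) // read the common state
                        proposed := snapshot + 1               // compute the transaction locally
                        if atomic.CompareAndSwapInt64(&balance, snapshot, proposed) {
                            break // commit succeeded
                        }
                        atomic.AddInt64(&retries, 1) // someone else won; recompute and retry
                    }
                }
            }()
        }
        wg.Wait()
        fmt.Println("balance:", atomic.LoadInt64(&balance), "retries:", atomic.LoadInt64(&retries))
    }
    ```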

    Maybe this can also be seen as pre-memoization, where the functions run in advance with the arguments that they might end up getting, and if later the guess matches reality, the memoized results can be used immediately. The problem is of course that the “arguments” might include a good deal of state and the “results” might include the side-effect changes to this state, and how to verify that the state didn’t change underneath. As David Isecke already mentioned, perhaps one way to go about it would be to adopt some sort of functional programming, but that doesn’t really solve the state problem; it just shuffles it to another place where you have to copy the state a lot and get the same issue of checking whether two copies are identical. A hardware solution for verification of the (un)changed state might be a better idea.

  16. One common load that parallelizes well is spreadsheets. This is probably more relevant to me than most given I’m a finance guy, but at work I very frequently use spreadsheets that take multiple seconds to re-calculate even with a modern multi-core machine, and if I ever were to go into business for myself I’d be doing the same on my home PC pretty often. A lot of people use Excel at work, and most standard office machines are mass-market desktops, so that will help use the extra cores. It may sit idle 99% of the time, but it’ll noticeably improve performance when you actually are working on a big sheet.

  17. A couple of random points.

    1. The Windows Runtime (WinRT) API for applications is inherently asynchronous. Which implies parallelism at a level where the basic application programmer can safely exploit it. I assume that iOS and Android present similar application-level interfaces.
    2. C++ 17 specifies parallel versions of appropriate algorithms in the library. I’ve recently been playing with brute force decryption of WW2 era Enigma messages. To make my code exploit all the cores all I had to do was add a parallel policy to a ‘for_each’. Once I’d jumped through the hoops to make this run on Linux (neither GCC nor Clang support this yet, Intel have a standalone library that is expected to be integrated into both chains) my dual 6 core old server (so 24 threads with hyper-threading) was regularly showing 23xx% cpu. Which was quite satisfying.
    3. (I know, but this is 2018 and ternary is the new binary) I recently used the Boost Graph Library to solve a shortest path problem. I see that there’s a parallel version of this library.

  18. If the concern is about wasted electric power, why aren’t multi-core computers designed to shut off power to idle cores? Or are they? If the concern is about slow computers, how much speed is really needed for most tasks? (I doubt your word processor program can’t keep up with your typing speed.) On the other hand, why do 3 GHz computers take just as long to boot up or shut down as 0.001 GHz computers with much smaller instruction sets did?

    I don’t see the mere existence of idle cores as being a bad thing, any more than the existence of thousands of books in my library that I’m not currently reading, or the fact that most cars capable of going 100 miles per hour aren’t currently going that fast.

    1. >If the concern is about wasted electric power, why aren’t multi-core computers designed to shut off power to idle cores?

      Many now are. That’s a pretty obvious move on a laptop, let alone a smartphone.

      >I don’t see the mere existence of idle cores as being a bad thing,

      I answered this once before, but I guess it needs emphasizing. I wasn’t really objecting to the waste but to the widespread fantasy that piling more cores on a typical desktop machine or SOHO-grade server will solve anything. One way to establish that is if present deployments are already overbuilt and wasteful.

    2. On the other hand, why do 3 GHz computers take just as long to boot up or shut down as 0.001 GHz computers with much smaller instruction sets did?

      “The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.”

      Or if you’d like it in Sturgeon Normal Form, 90% of programmers are crap.

  19. The problem is that the complexity explodes later than your head does.

    Co(re)-Dependency!

    The only reason I find a multicore/thread CPU useful is to do a make -j (ncores+1) for something like the Linux Kernel, or whatever GUI library I’m compiling.

    Were I to win the lotto, I would use the <10nm process to create RISC-V, MIPS (both open source) or even some form of ARM – and there was a 68k cycle-correct version… with lots of front/back/side end FPGA stuff so you could reconfigure it to do a specific job and it would run well above 5GHz. Maybe it could be called the serial killer.

    One other thing is most "parallelism" can be addressed at the end of the chain. Throw the list of 3D objects at the renderer. Or the conditions for a blockchain hash.

    An associated problem is the architecture around the CPU. Much serialism would be best served with a HUGE L1 cache with some fast bulk pull from the DRAM. It would be useful if there were a way ANYWHERE to tell the CPU to cache the known critical sections. Instead there is the common problem that a 511-byte loop runs fast, but a 513-byte one drops off. Or the OS ends up filling the cache with junk for its <5% housekeeping, idle-task, or IO stuff, then having to reload the application and its data.

    Then there are the things which caused the Spectre/Meltdown security holes – hey, lets prefetch, anticipate, etc.!

    The whole thing seems to be a series of ugly as opposed to elegant hacks trying to compensate for quantity (vs. quality) programming, like the Microsoft bloatware a few years ago.

    1. > Were I to win the lotto, I would use the <10nm process to create RISC-V, MIPS (both open source) or even some form of ARM – and there was a 68k cycle-correct version… with lots of front/back/side end FPGA stuff so you could reconfigure it to do a specific job and it would run well above 5GHz. Maybe it could be called the serial killer.

      If making ultra-fast CPUs was so easy it would have been done a long time ago. There are modern CPU designs which resemble older ones with smaller transistors, but for the most part they’re used for embedded stuff and aren’t that impressive, due to their very limited exploitation of instruction-level parallelism.

      (As for FPGAs, there’s been some work on that — but FPGAs are hard to program and the uses are limited. They’ve been available for a while. A few places use them, but the advantages over GPUs and ASICs have usually not been much.)

      > Much serialism would be best with a HUGE L1 cache with some fast bulk pull from the DRAM.

      The reason why L1 cache is fast is that it is small. Huge caches are slow caches. That’s why we have L2 and (often) L3.

      > It would be useful if there were a way ANYWHERE to tell the CPU to cache the known critical sections.

      You mean like the prefetch instructions that most processors have supported for many years?

      The thing I’m getting at here is that CPU design is actually hard, and the obvious things have already been done. Several of the people making processors do, in fact, have at least a minimal understanding of their jobs.

  20. For a modern CPU:

    Something like 90% of a core’s die area is used by the L3 cache, which only has 60 cycles less latency than DRAM.

    Each core has 160 actual registers.

    An argument could almost be made for abolishing software context switching: just have ten thousand register banks and switch between them.

    >and that’s going to exert selective pressure against overprovisioning.

    Smartphones have 8 cores now, while I’m typing this on something that only has four logical cores. (although most people are unaware of core counts)

    Although oddly enough smartphones are the least multitasking devices, with only one focused application (sometimes there would be a phone call or music playing in a background application).

    This goes back to what Babbage was asked: “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?”
    Which could be extended to:
    “Will one person see the same results on his machine as mine would produce?”
    It might be argued that the average person simply wants another person to see the same things: the same web page layout, the same text, the same colors. Somehow these evolving standards have forgotten what they were attempting to achieve in the first place.
    This would be bad for the blind, but maybe webpages should just be streaming bitmaps. Long gone are the days of “this web page is best viewed on 1024×768.”

    1. >Smartphones have 8 cores now, while I’m typing this on something that only has four logical cores. (although most people are unaware of core counts)

      I learned something relevant today. Apparently one reason for multicores in smartphones is so you can build high-power/low-power pairs, the low-power one to run while the phone is inactive and the high-power one when you’re actually doing something. That tells me that, unlike desktop and server CPUs, it’s a non-symmetric architecture. Some of those “cores” could be specialized hardware like a GPU or a cellular endpoint.

      1. That’s fairly recent; the iPhone, for instance, has only had such an architecture since late 2016. Some of the cores are designed for power-efficiency, and some for raw speed. Each thread has a priority level, and the lower-priority threads can be scheduled onto low-power cores to save battery. The OS tries to keep track of which threads are important (such as the one responding to button presses) and which aren’t. A lot of work happens in the background, thoroughly invisible to the user. Both kinds of CPUs can operate at the same time, if the chip and kernel are smart enough.

        (As for the non-CPU hardware, there’s plenty of it on the same chips — but it’s specialized and can only be used through very restricted APIs. A processor that can only multiply 16-bit floating-point matrices may be incredibly useful for some jobs, but it won’t be running GCC any time soon.)

      2. Not significantly so. “2.3GHz Quad + 1.6GHz Quad” as one example.

        In some cases all “big.LITTLE” cores can be used, which simply means a smartphone can do more in the background than what my 2 physical core computer can do in the foreground.

        Right now my computer is idling at 1% between keystrokes, and in Geekbench my logical cores come out roughly equivalent to an average smartphone core, so I suppose we both know everything but this?

        We potentially both know that if a CPU is aggressively scheduled to wake from idle less, it would consume even less power.

  21. >I answered this once before, but I guess it needs emphasizing. I wasn’t really objecting to the waste but to the widespread fantasy that piling more cores on a typical desktop machine or SOHO-grade server will solve anything.

    It probably needs emphasizing over and over. Yes, it’s totally useless. There are way too many use cases that require serialization of data for the ‘more cores’ approach to work, even if programmers wanted to try to utilize them. Even the apparently simple UI vs background is harder than it looks.

    I find it interesting that we seem to have come full circle, from submit your program to the mainframe via teletype, to power on your desktop, to thin clients, then to virtualization of servers and desktops. Ok, we’re a lot better at it now, but re-centralisation seems to be a thing. Which is a pain for me – when a host goes down we have a wider fault domain than we used to.

  22. How does the brain compare against microchips these days, in terms of connections per cubic millimeter, and the thermal runaway problem?

    (And, at a tangent, how the fsck did that become an evolutionary advantage… and yes, I know about tool use, symbolic language, the tradeoffs of raising offspring, and savannah life… but still… I keep wondering how we got here incrementally, a la the eye getting here from light sensitive cells to eyespots to being in a little divot to putting a cornea and fluid in front…)

  23. “1. For most desktop/laptop users the only seriously parallel computing that ever takes place on their computers is in their graphics chips.”

    This is increasingly true of supercomputers, of course. Of the Top 500, all of which now run Linux, an ever-increasing number are GPGPU-based, with CPUs there to farm out and collect jobs.

  24. Back up a little here. Where is it guaranteed that computers have to continue “getting faster” forever? At the risk of sounding a little too luddite … perhaps we are at a point where we have perfected the CPU and that’s actually ok. Perhaps the next phase of technology innovation will have to be a rediscovery of the art of writing code efficiently instead of linking in a hundred megabytes of other people’s libraries just to make “Hello World” start up in whatever programming language is hip in Starbucks this week.

    And I don’t want a self-driving car, so pthgpthphtphtphtphthh to you too.

    1. There are two possible limits to technological advancement that don’t often seem to be seriously discussed, at least not in the conversations one sees that have passed through journalists.

      One is the physical limits you mention.

      Another is what culture and society permit to be developed. Remember, there were points in history that had much of what was needed for the industrial revolution but did not have it: Rome, some of the Chinese dynasties. We don’t fully understand the societal underpinnings of what let us get where we are, and can’t be entirely sure that we haven’t screwed them up in some subtle way. Unless we’ve screwed them up in an unsubtle way.

      1. >We don’t fully understand the societal underpinnings of what let us get where we are, and can’t be entirely sure that we haven’t screwed them up in some subtle way. Unless we’ve screwed them up in an unsubtle way.

        I vote unsubtle by now. In a not-so-long-ago Armed and Dangerous topic we were discussing how, in the not-too-distant future, we’re going to have to set up an entirely parallel Internet, down to the wires, if we’re going to be allowed to continue developing our own FOSS projects.

          1. The problem is of course that our ruling class is systematically dismantling our society, using useful idiots like Ehmke as their shock troops, with social terrorist mobs giving the idiots massive power.

            But as long as our society is for example in an r selected time of plenty, where the Left’s support of their violent brethren cannot be overstated, any sort of self-help as you initially posted before revision is out of the question.

            See the link, which does point out tiny, totally cohesive RWDSs are the only form of self-defense that’s currently practical against the Left’s blatant violence, and that’s not exactly a solution to our FOSS problem. The mobs could go leaderless, they frequently are, although I suppose exemplary punishment of a random selection of them might tame them.

            I keep on this violence theme because as far as I know, force is the only thing which stops the Modern Left which started with the French Revolution, or purity spirals in general. That, or running out of people to kill, which I’m told has happened a few times.

            So unless any of us are willing to start down that path of leaderless but possibly effective resistance today or in the near future, we need to plan and start building our own independent communications networks, individually prepare for the days when electrical power becomes very expensive and/or unreliable, and so on.

            Now that the Eye of Soros has been turned on us, we won’t be left in peace this side of the counter-revolution.

            1. The GOP controls all 3 branches of the US government and you still think we’re on the verge of violent communist revolution?

              1. Trump controls only a fraction of the executive, Federal judges in fact control more of it, with the general approval of the Supreme Court’s 4 Democrat-appointed judges and Roberts. You must have missed his virtue signaling on Obamacare when his vote wasn’t even needed, and you evidently haven’t “picked up a newspaper” since before the November election results were announced, or ever learned enough civics to know about the filibuster and what it takes to really control the Senate (well, it could be ended with a majority vote on the rules, Harry Reid started that process and now it has been ended for nominations).

                You are a paradigmatic NPC; I can’t count the number of times I’ve read the “the GOP controls all 3 branches” trope as a bit of rhetoric that depends on theory to dismiss reality.

                Oh, yeah, Trump’s chances of winning reelection are steadily dropping as he screws over more of his base, see gun owners for example, and he’s 2 years too late to start posturing about building the Wall. His best chance is the Democrats nominating a complete loon, a strategy which failed G.H.W. Bush.

                Meanwhile the Left controls the culture, as well as most of the government as outlined above, and their blatant vote stealing in 2018 is signaling that “there is no voting our way out of this” (TINVOWOOT) is going to become a widely accepted fact.

                From Trump on down, those defending themselves from Antifa et. al. are persecuted with the full weight of the government, while Antifa is given a pass, it’s very clear they have the full protection of the DoJ including the FBI, very different from the 1970s as mentioned in the Days of Rage link. While some of the highest figures in the Democratic party openly support violence against their enemies.

                So, yeah, anyone with eyes to see and a knowledge of the Modern Left’s history can discern the possibility of “violent communists” taking over the government, in this case without even needing a formal revolution. And thus we on the other side are arming ourselves to an extent never before seen in American history; we got started after 9/11 when W made it clear we were on our own, and then Obama got elected.

                But, sure, keep what goes for your mind off these facts and developments and spew out the talking points of the moment, that’ll have you fully prepared for “revolution” or counter-revolution if/when it comes.

  25. You don’t mention video games in your article at all. These are often very demanding of both CPU and GPU power and are usually the primary market / raison d’être for high-end desktops.

    1. >You don’t mention video games in your article at all. These are often very demanding of both CPU and GPU power and are usually the primary market / raison d’être for high-end desktops.

      Included under “computing for graphics”.

      I think you’ll find, if you profile, that computing for video games is partitioned such that all of the game computation outside of video rendering on the GPU takes place on just one (1) of the host machine’s processors at a time. The languages/toolchains in which these games are authored just aren’t sophisticated enough to support serious concurrency on the general-purpose CPUs.

      1. That is not really true any more.

        Many game engines for AAA titles are multi-threading their host machine code. Look at the source code for Doom 3 BFG Edition: it splits the game logic into jobs processed by a pool of worker threads. That game was released in 2012.

        For example, Rage, as well as many newer titles, uses additional threads for streaming content from disk (level and texture data, sounds etc), decoding/decompression, uploading to graphics cards, and so on.

        Very commonly, the audio will be rendered in its own thread too, streaming content via an audio API. Again, it will be synthesizing and decoding compressed audio, mixing, and doing large amounts of convolution, among many other operations.

        There are many interesting talks given at GDC about parallelism in the context of non graphics processing. One of my favorites was “Parallelizing the Naughty Dog Engine Using Fibers”.

        1. >Many game engines for AAA titles are multi-threading their host machine code.

          OK, I stand corrected. But it doesn’t follow from this alone that a game can saturate multiple cores. You mention “additional threads for streaming content from disk (level and texture data, sounds etc)” Even with an SSD those threads are going to be waiting on I/O often enough that assigning a processor core to any of them won’t help much. Waits on the graphics card or sound hardware will also have the effect of lowering a thread’s CPU utilization.

          To actually get use out of lots of cores you need lots of threads that are almost purely compute bound. Mixing and convolution might qualify, but again those will be subject to I/O stalls – source data gotta come off the disk, right?

          1. How about updating the game state each frame? Games that track a lot of objects have to perform godzillions of calculations, most of which are not very dependent on one another, almost all of which are very compute-intensive since they depend only on the previous state (already in memory) and, perhaps, a relatively small amount of player-input data.

            The new hotness in game programming is replacing an OO approach with an entity-component system. Roughly speaking, this means representing the entire game state as a sparse matrix with the rows being game objects (entities) and the columns being clusters of related data relevant to an entity (the components). If you write candidate state changes to a fresh area of memory and then commit them at the end of the frame (a simple double buffering scheme may work for this), you can parallelize the shit out of updating it while keeping the update function stable — which may be necessary when you’re at the scale of having to track every blade of grass in a visually realistic environment such as that of Red Dead Redemption 2. Let alone MMOs where you now may have to track millions of player-controlled objects, concurrently.
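
            (A minimal Go sketch of that double-buffering idea, with invented position/velocity components standing in for real game data: every worker reads only the current frame and writes its own disjoint slice of the next one, so no locks are needed inside the frame.)

                package main

                import (
                    "runtime"
                    "sync"
                )

                // Flat component arrays: one entry per entity.
                type World struct {
                    PosX, PosY []float32
                    VelX, VelY []float32
                }

                // step reads `cur` and writes only into `next`; the caller "commits"
                // the frame afterwards by swapping the two buffers.
                func step(cur, next *World, dt float32) {
                    n := len(cur.PosX)
                    workers := runtime.NumCPU()
                    chunk := (n + workers - 1) / workers

                    var wg sync.WaitGroup
                    for w := 0; w < workers; w++ {
                        lo, hi := w*chunk, (w+1)*chunk
                        if hi > n {
                            hi = n
                        }
                        if lo >= hi {
                            break
                        }
                        wg.Add(1)
                        go func(lo, hi int) {
                            defer wg.Done()
                            for i := lo; i < hi; i++ { // this worker owns [lo,hi) of next
                                next.PosX[i] = cur.PosX[i] + cur.VelX[i]*dt
                                next.PosY[i] = cur.PosY[i] + cur.VelY[i]*dt
                            }
                        }(lo, hi)
                    }
                    wg.Wait()
                }

                func main() {
                    const n = 100000
                    cur := &World{make([]float32, n), make([]float32, n), make([]float32, n), make([]float32, n)}
                    next := &World{make([]float32, n), make([]float32, n), make([]float32, n), make([]float32, n)}
                    step(cur, next, 1.0/60)
                    cur, next = next, cur // commit: the freshly written buffer becomes current
                    _, _ = cur, next
                }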

            1. >How about updating the game state each frame? Games that track a lot of objects have to perform godzillions of calculations, most of which are not very dependent on one another, almost all of which are very compute-intensive since they depend only on the previous state (already in memory) and, perhaps, a relatively small amount of player-input data.

              Yeah, that’s a plausible case.

              1. Isn’t most of the heavy lifting for that being done in the GPU?

                And a certain amount of it is being done server-side nowadays as well, as even nominally single-player games have a strong server-side component.

          2. Many current games can saturate multicore CPUs particularly at the highest settings. Ok, I just fired up Fortnite (semi-demanding), maxed out the settings and jumped over to an area with lots of structures. The game is running just over 60 fps and the four cores are all running a bit over 70% utilization. So in this situation, the (mid-range gaming) GPU is maxed out but there is still enough work to use most of a 3.5 GHz four core CPU.

  26. I play a lot of computer games, and some of them will quite happily hammer every single thread available on whatever machine I’m using.

    Admittedly, most don’t need four or six separate cores running at 99%, but the ones that do are the ones that made me go out and get water cooling for my computers, just to drop the fan noise to something lower than a dull roar.

    Some game programmers are finally starting to really use multiple threads, because they have to find some way to keep the game AI and map generation running while the rest of the machine is busy talking to the graphics card.

  27. My approx 7-8 y.o. MacBook Pro was deemed hopelessly obsolete by Apple a year or so ago. It refuses to install the latest Mac OS, and its days of running xcode are numbered. Yet for 99% of my purposes it runs fine. At some point I swapped out the hard drive for a SSD. The OS isn’t at all optimized for SSD, but it definitely helps. I also swapped out the optical drive for a much bigger spin-disk for bulk storage. (Note that current model MBPs also tend to have smallish SSDs and lack bulk data storage.)

    It has essentially 4 areas of potential hardware improvement:
    1. The screen is cracked, but I usually forget to notice after a while.
    2. The battery is awful. I think I might’ve got a counterfeit replacement battery. Thanks, Amazon…
    3. It really needs a better GPU.
    4. Definitely could use a lot more RAM.
    …but there’s nothing wrong with either the CPU speed or core count. Actually a raspberry pi would be sufficient if it had better application program support.

    The real use for 4+ cores and massive RAM would be, in my case, running virtual machines. Sort of a simultaneous dual boot. I’m in the process of fixing up my son’s old windows laptop for that reason. Only things wrong with it are a failed disk and a broken down-arrow key. Again, CPU is fine.

  28. DAWs with multiple software plugins being used on many tracks (multi-track recording software), and AV software in general, can never have enough cores. The theorem prover Isabelle also utilizes all parallel CPU cores available.

    In fact, I always want my CPU meter at 1%, and my RAM use at 10%. This always guarantees that my CPU isn’t at 90%, a result which is commonly called “slow computer”.

    I want it all, 32 cores in a laptop that costs, at most, $700. Point-oh-1-percent showing on the CPU meter, while doing heavy processing, that would be true bragging rights.

    Informative article; talk about parallelism is always interesting.

    But you should consider this law, “The Law of 1998: ‘Why Would Anyone Ever Need 16GB of RAM?’”, a related law to your article here on parallelism.

    1. >But you should consider this law, “The Law of 1998: ‘Why Would Anyone Ever Need 16GB of RAM?’”, a related law to your article here on parallelism.

      There’s a crucial difference, though. We have a couple decades behind us now of people trying to get speed gains on serial programs by automagically distributing their computation to multiple cores, and failing.

      DAW = Digital Audio Workstation?

      1. Yes – Digital Audio Workstation.

        Let’s mix down a recording of a vocal quartet with a keyboard. That’s six input channels (4 x voices + stereo from keyboard). Each of six tracks will be individually compressed, equalized, pitch corrected, and have reverb added. Lots of opportunities to split the work across cores.

        A drum kit can bring in another half dozen or more microphone signals. Add a couple of guitars and now you’re over a dozen individually processed channels all running at once, times several processing plug-ins per channel.

        Overrunning the CPU is so common that there’s a term of art: to “bounce” a track is to perform all the audio processing steps on a particular track and write that track to disc.
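
        (A rough Go-flavored sketch of why this splits so naturally across cores – the “effects” here are trivial stand-ins, not any real plug-in API: each track’s chain runs in its own goroutine, and only the final mixdown is serial.)

            package main

            import "sync"

            type effect func([]float32) []float32 // stand-in for a compressor, EQ, reverb, ...

            type track struct {
                input []float32 // one channel's worth of samples
                chain []effect  // plug-ins applied in order
            }

            // renderAll processes every track concurrently, then mixes serially.
            func renderAll(tracks []track, frames int) []float32 {
                rendered := make([][]float32, len(tracks))

                var wg sync.WaitGroup
                for i := range tracks {
                    wg.Add(1)
                    go func(i int) { // one worker per track; tracks don't interact until the mix
                        defer wg.Done()
                        buf := tracks[i].input
                        for _, fx := range tracks[i].chain {
                            buf = fx(buf)
                        }
                        rendered[i] = buf
                    }(i)
                }
                wg.Wait()

                mix := make([]float32, frames) // the only serial step: sum the tracks
                for _, buf := range rendered {
                    for s := 0; s < frames && s < len(buf); s++ {
                        mix[s] += buf[s]
                    }
                }
                return mix
            }

            func main() {
                halve := func(buf []float32) []float32 { // trivial stand-in "plug-in"
                    for i := range buf {
                        buf[i] *= 0.5
                    }
                    return buf
                }
                tracks := []track{
                    {input: make([]float32, 512), chain: []effect{halve}},
                    {input: make([]float32, 512), chain: []effect{halve, halve}},
                }
                _ = renderAll(tracks, 512)
            }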

  29. We look at computing for graphics, neural nets, signal processing, and Bitcoin mining, and we see a pattern: parallelizing algorithms work best on hardware that is (a) specifically designed to execute them, and (b) can’t do anything else!

    This also applies to the human brain, I believe, in exactly the way you’re describing here. The brain can perform countless parallel computations in real-time when judging the speed of an incoming cricket ball (if you’re Australian).

    But mathematical operations are a different kettle of fish – you have to work on them slowly and serially, untold orders of magnitude more slowly than the brain can handle cases for which it’s been optimised by natural selection.

  30. Probably a dumb question, but why doesn’t having more cores speed up userland? Right now I have 4 Chrome windows (work computer, can’t use anything else) with multiple tabs open, plus Outlook, Excel, and several Cygwin windows. Those are just the applications, not the myriad of other tasks the OS has running. Some of those are dependent on hardware/interrupt stuff, but most of them are not.

    Shouldn’t a dual-CPU, quad-core machine be able to have at least 8 things going on at once? With NUMA, memory should even be accessible by at least 2 processes at a time.

    1. >Probably a dumb question, but why doesn’t having more cores speed up userland

      Well, if userland is bottlenecked by disk or network waits, you can throw all the cores at it you want and speedup will be marginal at best. That would be my first guess.

    2. The workload you describe, William, is not going to be very CPU-bound. I would not expect more cores to speed it up. (Also, insofar as it is CPU-bound, Office shows signs of not always having good parallelizability *even between different programs in the suite*. I can’t tell you how many times I’ve had Outlook lock up when Excel was busy, for instance. That’s just Microsoft writing crappy code.) Frankly, on a work PC, I wouldn’t be surprised if your employer has underspent on RAM and the system is swapping (in which case *everything* will be waiting on disk).

      1. I insisted on 16G of RAM on the last upgrade. I HATE swapping.

        All of my (Windows) systems seem to have this problem though. I have an *old* Dell 690 – the dual-CPU quad-core example from above. It’s got 24G of RAM and 4 hard drives and still gets bogged down from time to time with similar workloads.

        I guess as you note it’s just crappy MS code. When I find another job I’m going to be replacing the desktop with something made within the last 3-4 years :).

  31. One advantage to parallelism being fundamentally hard: one application (usually) shouldn’t be able to run amok and render the workstation totally unusable. Unless there’s a bug in the OS, or if key system management features turn out to be I/O bound. Which I would consider a bug.

    1. Maybe switch to Windows? That’s not a question of parallelism being hard. A properly designed desktop OS will give desktop tasks absolute interrupt priority, allowing for instantaneous response from the UI regardless of what’s going on in the background. Windows seems to do a better job of giving time to user processes while under load than Linux. You fill up your browser with tabs, and Linux can grind to a halt. That doesn’t really happen under Windows (or, near as I can tell, macOS).

  32. A well-meaning friend bought me a shiny new laptop, but it broke because the CPU ran too hot. The heat warped the motherboard. He gave me a six-core beast to run Windows 10, Microsoft Word, and Google Chrome; it turned into a $2000 paperweight.

    This straw broke the camel’s back. I slapped Debian on a ThinkPad from 2011 with a Core 2 Duo and started using it as my daily driver. I became interested in Linux when I found out I could get my work done without moving my hands from the keyboard, and this accident seemed like a good excuse to switch. After using it for a month, I love it! Emacs and LaTeX make writing a pleasure. Sometimes I get distracted because it’s so damn fun to tinker with the system, but on the whole, my productivity has gone way up.

    For casual users like me—folks who just need to browse the web and interact with text—a 10-year-old computer is completely fine.

    1. >For casual users like me—folks who just need to browse the web and interact with text—a 10-year-old computer is completely fine.

      My everyday work laptop – for a mix of software development in Ruby, Java, JavaScript, Clojure, ClojureScript and Common Lisp – is a seven (nearly eight!) year old Lenovo ThinkPad X220, running FreeBSD.

      The only downside about this machine is the 1366×768 screen resolution.

      Everything else is great: the non-chiclet keyboard, matte LCD panel, small form factor, user-maintainable components (my four year old swapped his ThinkPad hard drive for an SSD using a single small screwdriver).

      I bought the lot – an i5 model w/ 16GiB RAM, several docking stations, and a brace of aftermarket power adaptors – for around $600 (Australian).

      I’m probably going to keep running this machine for as long as the 768px screen remains usable in the face of modern UIs that often assume retina-style displays.

      1. Depending on your exact ThinkPad model, aftermarket LCD panel swaps are probably possible to increase your screen resolution. Lots of people do it.

        1. Yeah I’ve been looking at that … it’s a step out of my hardware maintenance comfort zone, which might be a good thing. I think I’d buy a second X220 to perform the upgrade on, though, as insurance :)

          1. >Yeah I’ve been looking at that … it’s a step out of my hardware maintenance comfort zone, which might be a good thing. I think I’d buy a second X220 to perform the upgrade on, though, as insurance :)

            Let me know how that goes. I have an X220 too.

  33. The only way I see to use parallelism is to limit it. The more restrictions you impose, the easier it is to reason about. The best example of this is read-only data. Multiple threads that can modify the same complex data are very hard to handle correctly, but when you ban modifications, everything becomes trivial.
    It won’t always be possible (if everything is read-only, you can’t write your results anywhere), but this is probably the only way to tame the complexity.
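
    (A small Go sketch of that restriction, with made-up numbers: the shared table is never written after setup, and each goroutine writes only its own slot of the result slice, so no locking is needed at all.)

        package main

        import (
            "fmt"
            "sync"
        )

        func main() {
            table := []int{2, 3, 5, 7, 11, 13} // shared, read-only after this point
            results := make([]int, len(table)) // each worker owns exactly one slot

            var wg sync.WaitGroup
            for i := range table {
                wg.Add(1)
                go func(i int) {
                    defer wg.Done()
                    sum := 0
                    for _, v := range table { // concurrent reads of immutable data: always safe
                        sum += v * table[i]
                    }
                    results[i] = sum // the only write, and only this goroutine touches slot i
                }(i)
            }
            wg.Wait()
            fmt.Println(results)
        }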

  34. You’re right about the code-compiling use case for multicore processors. This is why I use a 24-core Opteron from 10 years ago as my build server at work. It can get Android compiled in about 55% of the time taken by the roughly four-years-newer 8-core system I use as my work desktop.

    The other use case is definitely virtual machines and containers. That 24-core Opteron system has 5-6 containers running on it most of the time, and I could probably run dozens of containers on it without saturating the processors. Those containers are probably things that would have run fine on an old Pentium 4 or Athlon processor, but it’s nice not to have to worry about resources when spinning another task up. The only time it ever saturates is when doing a large compile job. It’s absolute garbage for single-threaded workloads, but still holds its own in tasks that can be parallel.

    Games are pushing slowly into multicore, with console games leading the charge now to an extent. It’s still a huge task to figure out what can be broken up, but the lower level GPU apis are helping in that regard by breaking down the biggest monolithic thread that used to be in games. (The Direct3D/OpenGL driver.) We’re now to the point where a 6-core processor is useful if you’re going to be simultaneously playing a game, and streaming it to the internet at the same time. Below 6 cores and the streaming starts to bog down.

    The fact that you have a handful of SICK tasks you might be running, and a bunch of smaller parallel tasks, is part of what led to the big.LITTLE design of some modern phone processors: 2 superscalar, heavily pipelined, fast processors that take tons of die space for those stubborn, linear tasks, with 4+ lesser processors that take significantly less die space and power for anything that doesn’t need that degree of serial power.

  35. >one interesting use case for multicores is compiling code really quickly

    Which also implies that all the users who run apps with JIT compilation, Java or .NET, could get rid of the startup delay / “warm-up time” that way. Granted, I have never heard of this being an actual issue. What I did often see was that when startup delay was caused by something else, non-technical users would click the desktop icon impatiently five times, which obviously only makes performance worse.

  36. I saw several references to saturating cores. I’m going to suggest that this should NOT be the end goal. 8 processors running at 50% are going to generate significantly less heat and have a greater lifespan than 4 cores running at 100%. It’s better to have slack capacity to work with, IMO.

    In my personal experience, I do see benefits from more cores, but more RAM helps more, even on a single OS. I could be fooling myself.

    1. When I’m playing a graphics-intensive game or doing a long video render, I absolutely DO want to saturate the hell out of every core on the machine.

      A single eight hour render time reduction will more than pay for any theoretical loss in CPU hardware life span (not that I’ve ever had a CPU die of old age).

      1. The goal is good graphics, though. Having more capacity than you actually need is great when you can get it. I was mostly thinking in terms of CPU cores though.

        I have had graphics cards go bad on me before. I suspect heat was one of the causes, but of course I can’t prove it.

        1. A lot of rendering still relies on high CPU usage – and some graphics software can use both at the same time to speed up rendering even more.

          Some games will run the CPU flat-out just to keep handing the GPU the information it needs to render the environments it’s creating on the fly.

    2. It’s 2019, too.

      Your CPU will thermally limit itself before it suffers heat damage, and even running at 100% of rated load with a stock cooler, you typically shouldn’t hit throttling.

      You should – haven’t tested – be able to run a modern Intel Core [and presumably AMD Ryzen] without a heatsink, without damaging it. It will run very slowly, and might crash due to flakiness, but it should suffer no permanent damage.

      (Speaking of desktops with actual ventilation; laptops are another matter.)

      1. I think the last time I definitely killed a CPU directly with heat was back around 2000. I have had other parts die for no known reason, sometimes after a fan stopped working. I can’t prove heat was the cause, but it is suggestive.

  37. Individual programs might only be able to use one or two cores, but no one is running just one program. In addition to the background processes of your OS, you’ve probably got a browser going, a music player, maybe something like Steam– and that’s just your baseline before you start actually working on anything. No one wants to shut their browser down just to fire up a game or a productivity application.

    The truth is that modern desktop workloads won’t really run smoothly on dual-cores anymore because multitasking has become so central to the way computers are used now. Unless you’re literally just using your computer for Facebook and nothing else, four cores is the sweet spot for a personal computer right now.

    1. That.

      You don’t need to saturate four cores even once to make them useful for perceived responsiveness, which is the key to smooth UI.

      Simply not having to context-switch as much makes things feel much faster.

      “Computers are made for man, not man for computers”, to rephrase Scott Alexander.

      They are not, in general, for maximizing CPU usage, with idle cores counted as a sin – they’re meant for doing human tasks, and having them at less than 100% usage is super important for that.

  38. When it comes to latency, I often feel I’m (im)patient zero. I’m glad to see others reacting to it here. Dan Luu’s stuff is really excellent.

    I buy 2ms monitors running at 120Hz refresh rates, have the fastest keyboard I can buy, and run Linux real-time kernels on everything… and it’s STILL not enough.

    Another favorite example – VoIP runs at 20ms sample intervals. Anyone who still remembers what the circuit-switched telephony network was like can also remember that a cross-town phone call was like whispering in your lover’s ear – it really was. I’ve got VoIP (Opus codec) below 2.5ms, and it’s still not as good – it goes from shouting 20 feet across the room to 2.5 feet, but it is still nothing like the old phone network. (Try it!) Analog mic pickups remain best.

    Everybody “knows” that humans can’t grok 20ms intervals – it’s a f-ing lie. Playing music, there’s a huge difference between me locked in on the drummer 2 feet away vs. 8.

    I’ve spent a lot of time trying to parallelize workloads, in, for example, the Ardour DAW, in build systems, and in code. I very much like that Go has CSP, as choosing the “right set” and level of abstraction in other languages is really difficult. For example, I’ve spent months now debating whether I should use nng ( https://github.com/nanomsg/nng ) or liburcu ( https://liburcu.org/ ) in a project to try to speed up the babeld daemon. In both cases I need a memory pool and some sort of semi-real-time scheduler, neither of which is in either lib. And lots of threads. Threads in C suck; C++ sucks harder. As soon as I commit to one of these methods I have to rewrite everything else from scratch.

    As for “thinking parallel”:

    Probably the best modern book on the subject is Paul McKenney’s:

    https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.2018.12.08a.pdf

    I mostly fantasize about CPUs that could context/security switch in 5 clocks (like the Mill), with on-chip static RAM for the interrupt service routines.

    Sometimes I try to find a role for the BeagleBone’s PRUs – I see an awful lot of co-processors explicitly designed to handle I/O, but I wish modern processors would handle interrupts in single-digit cycles.

    I don’t know what to do about it all. lowlatency.com? Highlighting products that try to get latency down?

    1. Silly question, but have you done any comparison of the performance characteristics of recent SRAM vs. DRAM? It would seem to me that at some point someone would support using on-board SRAM DIMMs instead of DRAM. Sure, the cost would probably go up by an order of magnitude, but for people who are really looking to get maximal performance I’d think that would be something they’d be willing to pay for.

  39. The cost model for layering enough memory on chip to be genuinely useful (e.g. 32MB) has no appeal in the embedded market. As much as I’d like a single- or dual-core, pure static memory, wireless router – or watch – it seems unlikely to happen unless an Apple drives it, or a cheap memory-cell creation technique arrives, or a CPU arrives that would do the right things in such a small package.

    I think we are beginning to see a resurgence of “scratchpad RAM”, which will help on the determinism front. I hope that MIPS survives long enough to bring these to market: https://www.mips.com/products/warrior/i-class-i7200-multiprocessor-core/

  40. But without lots of extra cores, how will your PC manage to process all your personal data in the background and determine which bits to send off to Microsoft/Apple? That’s critical functionality right there!

  41. My enthusiasm for Go and Rust is about addressing programmers’ mental limits on grokking multi-process stuff. Doing it in Java was perhaps theoretically possible, but in practice even the 1% engineers got it wrong most of the time.

    Fixing UI latency would be a good use for some of those unused cores, but I expect that will require not only a new kernel/OS, but reimplementing all the apps, and that I don’t expect in my lifetime.

  42. Eric,

    This reminds me of a question I had several years ago, but to which I have never figured out an answer.

    I read the book _Godel, Escher, Bach_ by Douglas Hofstadter, which gives a good overview of recursiveness and the halting problem in a format that’s friendly to non-coders. He postulated a computer language called “Bloop”, which looks similar to Pascal but does not permit unbounded loops. It supports syntax such as, “LOOP N TIMES”, but has no equivalent of a DO WHILE construct. This means that any program that can be written in Bloop is guaranteed to terminate, but not all programs that can be executed on Turing machines can be written in Bloop; only those that limit themselves to primitive recursive functions can be.

    He then adds a MU-LOOP: BEGIN END construct and calls the extended language “Floop”. “Floop” is a typical language falling under the Church-Turing thesis, equivalent to ordinary computer languages in power. Floop can implement general recursive functions that cannot be implemented in Bloop, but Floop programs are not guaranteed to terminate.
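
    (Roughly, the distinction in Go terms – these two functions are just my own illustrations: a Bloop-style loop fixes its bound before it starts, so it always halts; a Floop-style mu-loop runs until a condition holds, so it may not.)

        package main

        import "fmt"

        // Bloop-style: the bound is fixed before the loop starts ("LOOP N TIMES").
        // Programs built only from loops like this always terminate.
        func factorial(n int) int {
            result := 1
            for i := 1; i <= n; i++ {
                result *= i
            }
            return result
        }

        // Floop-style: an unbounded mu-loop that runs until a condition holds.
        // Nobody has proved whether this halts for every n > 0 (the Collatz problem);
        // a Bloop-style loop can't express the search without a bound known in advance.
        func collatzSteps(n int) int {
            steps := 0
            for n != 1 {
                if n%2 == 0 {
                    n /= 2
                } else {
                    n = 3*n + 1
                }
                steps++
            }
            return steps
        }

        func main() {
            fmt.Println(factorial(10), collatzSteps(27))
        }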

    He then walks through Cantor’s diagonal argument to show that neither Floop nor any equivalent language can implement a program termination-tester that tells us in finite run-time whether a specified program will terminate or run forever. Following that, he postulates a language “Gloop” that has functionality beyond Floop, hence beyond any language covered by the Church-Turing thesis, which can implement said termination-tester.

    So here is my question:

    Hypothetically, what trait or capability would a language need to implement to become Gloop? Hofstadter never explains just what the specific impossible capability (i.e., something equivalent to the unbounded loops that raise Bloop to Floop) *is* that would allow implementation of his “Red” programs.

    Possible thoughts I had include:
    — Parallel threads (probably not sufficient)
    — Infinite number of parallel threads (sufficient?)
    — Something else?

    But I have no idea what hypothetical capability is really needed to have a Gloop language. And I’d like to understand this, not because I think it’s possible, but because it would increase my understanding of the whole halting problem and exactly where the barrier is that prevents Turing machines from implementing a clean solution.

    Thoughts?

    1. >Hypothetically, what trait or capability would a language need to implement to become Gloop?

      I don’t know. I think his argument was that even if your Gloop implementation has an oracle for termination, you still end up with an unprovability result.

  43. Oh for eff’s sake, disc platters do not have rust on them. I hate that lame joke. BTW, hard discs make for excellent bulk storage for movies and ISO files (for consoles). How wasteful is it to store a movie that doesn’t need more than 3.5MB/s in read speed (tops) on an SSD?

    1. >Oh for eff’s sake, disc platters do not have rust on them. I hate that lame joke.

      Joke? I have never thought “spinning rust” was a joke, just a figure of speech. Iron oxide is iron oxide, after all.

      1. Modern hard disks do not use iron oxide, they use thin film magnetic material vapor-deposited to the platter to ensure a uniform surface that enables high densities.

        Which is why old hard disk platters used to be red or orange in color but modern platters are grey-silverish in color.

        I get the aversion towards using HDDs in the OS partition, but 1TB hard disks are excellent for bulk storage at low cost; it’s the kind of secondary disk you will actually buy without a second thought, unlike that 1TB SSD you might not. Which is a good thing, because you will never have to care about multimedia files or ISOs filling up your drive ever again.

        1. >Modern hard disks do not use iron oxide, they use thin film magnetic material vapor-deposited to the platter to ensure a uniform surface that enables high densities. Which is why old hard disk platters used to be red or orange in color but modern platters are grey-silverish in color.

          Huh. I didn’t know that. Been long enough since I actually saw a naked disk platter that they hadn’t changed color yet. When did it happen?

          >I get the aversion towards using HDDs in the OS partition, but 1TB hard disks are excellent for bulk storage at low cost;

          Yeah – the Beast has one SSD and one spinning drive for that reason.

          1. It’s been at least since shortly after the turn of the century. Sometime between 2000 and 2004 I disassembled a hard drive that was super-dead and the platters were silvery, not reddish. (2 or 3 platters on the spindle, I used the head assembly as a novelty cubewall magnet for a while)

            1. >It’s been at least since shortly after the turn of the century.

              Well, before consumer SSDs, then. Means there was no time at which “spinning rust” was accurate outside of military/aerospace. Interesting.

  44. Here is an idea that I’ve developed after reading your post:
    https://www.tdcommons.org/dpubs_series/2896/

    Basically, when we have spare CPUs, we can schedule multiple threads of one process together, and then use fast, low-overhead inter-thread communication directly in user space. Another way to look at it is as something like superscalar execution at a higher level.

    Not sure if anything useful will happen with it, but at least the idea is out there now.
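
    (A toy Go sketch of the user-space communication half of the idea – not the mechanism from the disclosure, just an illustration: a single-producer/single-consumer ring that both sides poll with atomic loads and stores, never blocking in the kernel. The co-scheduling half – keeping both threads on cores at the same time – is assumed to be someone else’s job here.)

        package main

        import (
            "fmt"
            "runtime"
            "sync/atomic"
        )

        const ringSize = 1024 // must be a power of two

        // spscRing is a single-producer/single-consumer queue: only the producer
        // advances head, only the consumer advances tail, so no locks are needed.
        type spscRing struct {
            buf        [ringSize]int64
            head, tail atomic.Uint64
        }

        func (r *spscRing) push(v int64) {
            for r.head.Load()-r.tail.Load() == ringSize {
                runtime.Gosched() // ring full: spin (politely) in user space
            }
            h := r.head.Load()
            r.buf[h%ringSize] = v
            r.head.Store(h + 1) // publish the slot to the consumer
        }

        func (r *spscRing) pop() int64 {
            for r.head.Load() == r.tail.Load() {
                runtime.Gosched() // ring empty: spin in user space
            }
            t := r.tail.Load()
            v := r.buf[t%ringSize]
            r.tail.Store(t + 1) // hand the slot back to the producer
            return v
        }

        func main() {
            var r spscRing
            const n = 1_000_000
            go func() {
                for i := int64(0); i < n; i++ {
                    r.push(i)
                }
            }()
            var sum int64
            for i := 0; i < n; i++ {
                sum += r.pop()
            }
            fmt.Println("sum:", sum) // 499999500000
        }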
