Embrace the SICK

There’s a very interesting article just out, “C Is Not a Low-level Language”, in which David Chisnall punctures the comforting illusion that C is really a “close-to-the-metal” language and relates this illusion to the high costs of Spectre and other processor-level bugs.

Those of us who think seriously about language design have long been aware that C’s flat-address-space model is increasingly at odds with the real world of memory-caching hierarchies. Chisnall’s main contribution is to notice that speculative execution, the feature at the bottom of the Spectre and Meltdown bugs, is essentially a hack implemented to allow C programmers to maintain the illusion that they’re running on a really fast serial machine.  But he has other interesting points as well.

I recommend reading Chisnall’s article before you go further with this post.

It’s no news to my regulars that I’ve been putting increasing investment into the Go language and now believe it a plausible candidate to replace C and C++ over most of C/C++’s range – that is, outside  of kernels and hard realtime.  So the question that immediately occurred to me upon reading the article was: Is Go necessarily productive of the same kind of kludge that Chisnall is calling out?

Because if it is – but something else isn’t – that could be a reason not to overcommit to Go.  The twin pressures of demand for fewer security defects and the increasing complexity costs of speculative execution are bound to tell heavily against Go if it does demand massive speculative execution and there’s any realistic alternative that does not. Do we need something much more divergent from C (Erlang? OCaml? Even perhaps Haskell?) for systems programming to follow where the hardware is going?

So let’s walk through Chisnall’s discussion points, bounce Go off each one, and see what we can see.  What we’ll find implies, I think, some more general conclusions about what will and won’t work in matching language design to real-world workloads and processor architectures.

On C requiring high “instruction-level parallelism”: So will Go if you write it like C.  Sometimes this is not avoidable.

To write Go in a way that will keep a modern Intel processor’s instruction pipeline full you need, by Chisnall’s argument, to fruitfully decompose each serial algorithm into somewhere around 180 worker threads.  Otherwise you need to do speculative execution to avoid having a significant chunk of your transistor budget simply behaving like a space heater.

Go, in itself, doesn’t solve this problem.  Sure, it lowers the barriers – its implementation of CSP via channels and goroutines is quite elegant and a handier toolkit for writing massively concurrent code than I’ve ever seen before.  But it’s only a toolkit; a naive translation of serial C code to Go is not going to use it and not going to solve your processor-utilization issue.
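
To make the contrast concrete, here’s a minimal sketch – the work function and all the names are invented for illustration – of the two shapes: the naive serial translation, and the same work fanned out over a small pool of goroutines wired together with channels. Go makes the second shape cheap to write; it does nothing to tell you whether your algorithm can tolerate it.

    package main

    import "fmt"

    // process stands in for some per-item computation; it's a placeholder,
    // not anything from a real codebase.
    func process(n int) int { return n * n }

    // serial is the shape a naive C-to-Go translation keeps: one goroutine,
    // one instruction pipeline, every other core idle.
    func serial(items []int) int {
        total := 0
        for _, n := range items {
            total += process(n)
        }
        return total
    }

    // concurrent fans the same work out over a pool of goroutines wired
    // together with channels. This only pays off when the items really are
    // independent of one another.
    func concurrent(items []int, workers int) int {
        jobs := make(chan int)
        results := make(chan int)
        for w := 0; w < workers; w++ {
            go func() {
                for n := range jobs {
                    results <- process(n)
                }
            }()
        }
        go func() {
            for _, n := range items {
                jobs <- n
            }
            close(jobs)
        }()
        total := 0
        for range items {
            total += <-results
        }
        return total
    }

    func main() {
        items := []int{1, 2, 3, 4, 5}
        fmt.Println(serial(items), concurrent(items, 3))
    }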

On the other hand, every other potential competitor to Go has the same problem.  It might be the case that (say) Erlang or Rust or Haskell implies a better toolkit for massively concurrent programming, if one but knew how to use it.  The problem is that the difficulty of moving any given chunk of production C code, or of writing a functional equivalent for it, rises in direct proportion to the ontological gap you have to jump to get from C to your new language.

The real cost of obsolescing speculative execution isn’t moving away from C, it’s what you have to do in your replacement language to go from a naive serial translation of your code to one that seriously jacks up the percentage of your processor clocks doing productive work by using concurrency in some appropriate way.  And the real issue is that sometimes this isn’t possible at all.

This isn’t just a theoretical issue for me.  I’m now scoping the job of moving reposurgeon from Python to Go in order to improve performance on large repositories, and it runs head-on into that wall.  Python’s Global Interpreter Lock makes it C-equivalent for this discussion – yes, it has a richer type ontology than C, and GC, but this turns out to help remarkably little with thread decomposition.  It makes the trivial resource management issues a bit easier, but that just means you hit the question of how to write an algorithm that really exploits concurrency sooner.

Some of reposurgeon’s core operations can’t be factored into things done by a flock of threads at all, because for each commit you might want to vectorize over, they do lookups unpredictably far back in time in the repository metadata. Some other operations could be so factored, because there are no time-order dependencies, but it will require map-reduce-like partitioning of the metadata (sketched below), with a high potential for bugs at the joins.  Bad news: I think this is going to turn out to be a typical transition problem, not an unusually difficult one.
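
For the operations that can be so factored, the shape I have in mind looks roughly like this sketch (the commit type and the per-commit work are invented stand-ins, not reposurgeon internals); the “joins” I’m worried about are the reduce step at the end, which is exactly where the bugs will live.

    package main

    import (
        "fmt"
        "sync"
    )

    // commit is a stand-in for repository metadata, not reposurgeon's real
    // data structure.
    type commit struct{ id, size int }

    // summarize is a placeholder for per-commit work that has no time-order
    // dependency on other commits.
    func summarize(c commit) int { return c.size }

    // mapReduce strides the metadata across p goroutines (the map phase),
    // then recombines the partial results (the reduce phase) -- the "join"
    // where the subtle bugs tend to hide.
    func mapReduce(commits []commit, p int) int {
        partials := make([]int, p)
        var wg sync.WaitGroup
        for w := 0; w < p; w++ {
            wg.Add(1)
            go func(w int) {
                defer wg.Done()
                // each goroutine owns one stride, so no locking is needed
                for i := w; i < len(commits); i += p {
                    partials[w] += summarize(commits[i])
                }
            }(w)
        }
        wg.Wait()
        total := 0
        for _, v := range partials {
            total += v
        }
        return total
    }

    func main() {
        commits := []commit{{1, 10}, {2, 20}, {3, 30}, {4, 40}}
        fmt.Println(mapReduce(commits, 2)) // prints 100
    }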

The implication is that the C-like requirement to look like a superfast serial machine (a) is not going away even partially without a lot of very hard work changing our algorithms, and (b) is never going to go away entirely, because there is an awkward residuum: some serial algorithms don’t have an equivalent that exploits concurrency.  I dub these SICK algorithms, for “Serial Intrinsically; Cope, Kiddo!”

SICK algorithms include but are not limited to: Dijkstra’s n-least-paths algorithm; cycle detection in directed graphs (with implications for 3-SAT solvers); depth first search; computing the nth term in a crypto hash chain; network-flow optimization…and lots of other problems in which you either have to compute sub-results in a strict time order or you have wickedly bad lookup locality in the working set (which can make the working set un-partitionable).
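
The hash-chain entry is the easiest one to make concrete: each link needs the previous link’s output, so no number of spare cores helps. A toy sketch, with SHA-256 chosen purely for illustration:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // hashChain computes H(H(...H(seed))), n applications deep. Step i can't
    // start until step i-1 finishes, so the loop can't be spread across
    // cores; the spare cores just idle while one of them grinds serially.
    func hashChain(seed []byte, n int) []byte {
        h := seed
        for i := 0; i < n; i++ {
            sum := sha256.Sum256(h)
            h = sum[:]
        }
        return h
    }

    func main() {
        fmt.Printf("%x\n", hashChain([]byte("seed"), 1000000))
    }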

There’s actually another Go-specific implementation issue here, too.  Theoretically Go could provide a programming model that exposes hardware-level threading in a tractable way.  Whether it actually does so – how much heavier the runtime cost of actual Go threads is –  is not clear to me.  It depends on details of the Go runtime design that I don’t know.

This generalizes. To jack up processor utilization to a degree that makes speculative execution unnecessary, we actually need two preconditions.  (1) We need a highly concurrent algorithm (not just 1 or 2 threads but in the neighborhood of 180), and (2) we need the target language’s threading overhead to be sufficiently low on the target architecture that it doesn’t swamp the gains.

While Go doesn’t solve the problem of SICK algorithms, it doesn’t worsen the problem either.   The most we can say is that relative to its competitors it somewhat reduces the friction cost of implementing an algorithmically-concurrent translation of code  if there is one to be found.

Chisnall also argues that  C hides the cache memory hierarchy, making elaborate and sometimes unsuccessful hackery required to avoid triggering expensive cache misses.  This is obviously correct once pointed out and I kind of kick myself for not noticing the contextual significance of cache-line optimization sooner.

Missing from Chisnall’s argument is any notion of how to do better here. Are we supposed to annotate our programs with cache-priority properties attached to each  storage allocation?  It’s easy to see how that could have perverse results. Seems like the only realistic alternative would be for the language to have a runtime that behaves like a virtual memory manager, doing its best to avoid cache misses at each level by aging out allocation blocks to the next lower level on an LRU or some similar scheme.

Again, Go doesn’t solve this problem, but no more do any of its let’s-replace-C competitors. You couldn’t even really tackle it without making every pointer in the language a double indirect through trampoline storage.  (IIRC there was a language on the pre-Unix Mac that actually did this, long ago.)

We do get a little help here from the absence of pointer arithmetic in Go. And a little more from the way slices work; they are effectively trampoline storage pointing at elements that could be transparently relocated by a GC – or a VM embedded in the language runtime.
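
A small illustration of the slice point: Go code only ever holds the descriptor (pointer, length, capacity), and append already relocates the backing array out from under it when capacity runs out – which is exactly the freedom a relocating GC, or a cache-aware VM in the runtime, would need.

    package main

    import "fmt"

    func main() {
        s := make([]int, 0, 2) // the slice value is just pointer + length + capacity
        s = append(s, 1, 2)
        old := s // old shares s's backing array

        // capacity exceeded: the runtime allocates a bigger backing array
        // and copies the elements; s now points somewhere else entirely
        s = append(s, 3)
        s[0] = 99

        fmt.Println(old[0], s[0]) // prints "1 99": the element storage moved
    }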

Combining those, though?  I don’t think it’s been done yet, in Go or anywhere else – and excuse me, but I don’t want to be the poor bastard who has to debug a language runtime implementing that hot mess.

Next, Chisnall has some rather penetrating things to say about how the C language model impedes optimization.  Here for the first time we get some serious help from Go that is already implemented – it doesn’t depend on a hypothetical property of the runtime.  The fact that for loops in Go naturally use a dedicated iterator construct rather than pointers is going to help a lot with loop-independence proofs in the typical slice or array case.  (Rust makes the same call for the same benefit.)

Unlike C, Go does not guarantee properties that imply structure field order is fixed to source order.  The existing Go implementations don’t reorder to optimize, but they could. There are no exposed pointer offsets, so a compiler is allowed to insert padding to speed up stride access. In general, the C features that Chisnall notes would impede vectorization and SROA have been carefully omitted from Go.
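
A quick way to see the freedom Go reserves for itself (the struct here is just an illustration): the compiler already inserts alignment padding, and ordinary Go code has no pointer-offset arithmetic with which to depend on the layout.

    package main

    import (
        "fmt"
        "unsafe"
    )

    type record struct {
        flag  bool  // 1 byte, then (on a 64-bit target) 7 bytes of padding
        count int64 // must be 8-byte aligned
        done  bool  // 1 byte, then 7 more bytes of trailing padding
    }

    func main() {
        // Typically prints 24 on amd64: 10 bytes of data, 14 of padding.
        // Only unsafe.Sizeof (or reflection) can even see this; there is no
        // pointer arithmetic with which ordinary code could depend on it.
        fmt.Println(unsafe.Sizeof(record{}))
    }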

Chisnall then describes the problems with loop unswitching in C. I don’t see Go giving any additional optimization leverage here, unless you count the effects of uninitialized variables always being zeroed.  That (as Chisnall points out) also guarantees that you can’t have program behavior that is randomly variable depending on the prior contents of memory.

Go sweeps away most of the issues under “understanding C”.  Padding is never visible, bare pointers and pointer offsets are absent, and pointers are strongly typed and cannot be interconverted with integers (well, trivially they can, but the specification does not guarantee they will round-trip).

Integer overflow is the exception. To reduce the runtime overhead of arithmetic Go opts not to overflow-check each operation, the same choice as C’s.  That choice could start a whole other argument about speed-vs-safety tradeoffs, but it’s an argument that would wander some distance from Chisnall’s principal concerns and mine, so I’m not going to pursue it.
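
Just to make the behavior concrete before moving on – Go arithmetic wraps silently on overflow, much as unchecked C arithmetic does in practice, though Go at least defines the wrapping for signed types instead of leaving it undefined:

    package main

    import "fmt"

    func main() {
        var i int8 = 127
        i++ // wraps to -128; no runtime check, no panic
        var u uint8 = 255
        u++ // wraps to 0
        fmt.Println(i, u) // prints "-128 0"
    }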

Instead I want to return to the initial question about Go, and then consider in more detail what the existence of SICK algorithms means for processor design.

So, is Go a “low-level language” for modern processors?  I think the answer is really “No, because there are very good reasons we’ll never have a ‘low-level’ language again.”  While the ability to manage lots of concurrency fairly elegantly does pull Go in the direction of what goes on on a modern multi-core processor, CSP is a quite abstract and simplified interface to the reality of buzzing threads and lots of test-and-set instructions that has to be going on underneath.  Naked, that reality is unmanageable by mere human brains – at least that’s what the defect statistics from the conventional mutex-and-mailbox approach to doing so  say to me.

Chisnall begins the last section of his argument by suggesting “Perhaps it’s time to stop trying to make C code fast and instead think about what programming models would look like on a processor designed to be fast.”  Here, where he passes from criticism to prescription, is where I think his argument gets in serious trouble.  It trips over the fact that there are lots of important SICK algorithms.

His prescription for processor design is “Let’s make flocks of threads much faster”.  Well, OK – but I think if you interpret that as a bargain you can make to eliminate speculative execution (which is what Chisnall wants to do, though he never quite says it) there’s a serious risk that you could wind up with an architecture badly matched to a lot of actual job loads.

In fact, I think there is a strong case that this has already happened with the processors we have now.  My hexacore Great Beast already has more concurrency capacity than reposurgeon’s graph-surgery language can use effectively (see ‘wickedly bad lookup locality’ above), and because some operations bottleneck on SICK algorithms it is certain that getting out from under Python’s GIL won’t entirely fix the problem.

Underlying my specific problem is a general principle: you can reduce the demand for instruction-level parallelism to zero only if your job load never includes SICK algorithms.  A GPU can pretty much guarantee that.  A general Turing machine cannot.

Chisnall seems to me to be suffering from a bit of a streetlight effect here. He knows concurrency can make some things really fast, so he recommends making the streetlight brighter even though there are lots of SICK problems we can’t drag to where it shines.  To be fair, he’s far from alone in this – I’ve noticed it in a lot of theoretically-oriented CS types.

He finishes by knocking down a strawman about parallel programming being hard for humans, when the real question is whether it can be applied to your job load at all.

There is, for some real problems, no substitute for plain old serial speed, and therefore we seem to be stuck with speculative execution. There’s nothing for it but to embrace that SICK.

90 comments

  1. There’s actually another Go-specific implementation issue here, too. Theoretically Go could provide a programming model that exposes hardware-level threading in a tractable way. Whether it actually does so – how much heavier the runtime cost of actual Go threads is – is not clear to me. It depends on details of the Go runtime design that I don’t know.

    I’m not close – by orders of magnitude – to your level as a programmer, but I’m wondering whether the real answer to this is not to make a particular prescription, but for someone to provide a library which allows some very direct control over the issues you’re discussing: which processor does which work, how threads will run, order of execution, etc., with the idea that people can experiment with these issues. Then the improved code can be fed back into the main branch of Go (or Rust, or whatever.) Hopefully I’m not too far from the mark here. ;-)

    Also, your link is bad. (I think the problem is that you used two different kinds of quotes.)

  2. It looks like you edited this in WYSIWYG mode, but put in the entries as if you were editing in text mode, so we’re seeing lots of faux HTML tags, instead of getting cites and links.

  3. Of course, multithreaded designs come with their own exploitation risks and Hard Problems. I’m skeptical that they are quite the security panacea that Chisnall is implying.

    Erlang (and others) try to avoid some of the worst problems with multithreading by making stuff immutable. That’s certainly helpful, but the pure functional style comes with an upfront cost for most programmers (where “most programmers” == “programmers not named Joe Armstrong” :-)). Would we all get better at it if we were using an Erlang-style language for everything? Probably. How much better remains to be seen. Sometimes a design that maintains state is so much cleaner than a pure functional design that it’s not even funny.

    > when the real question is whether it can be applied to your job load at all.

    Yes. The old joke about a thousand chickens being stronger than one ox comes to mind (good luck hitching them to a wagon). As does the equally old joke about producing a baby in one month by putting nine women on the job.

    1. Erlang looks really awesome. Unfortunately, I’ve never found a good tutorial (by which I mean a good tutorial for someone at my quite amateurish level, who has problems understanding how to write a useful program given the way Erlang uses variables.)

  4. There is, for some real problems, no substitute for plain old serial speed, and therefore we seem to be stuck with speculative execution.

    Speculation can’t help SICK algorithms either (except to the extent that speculation necessarily includes pipelining, which is helpful).

    The thing that makes speculation an improvement over pipelining is that it runs two (extremely low overhead) threads per branch decision: one for if the branch was taken, one for if it was not (possibly applied recursively to multiple branches in processors with really deep pipelines). If the algorithm isn’t parallelizable at all, then half the speculative execution results will be discarded, and the speculation is just guaranteeing you spend as much as 50% of your computing power in waste heat (though that might still be a reasonable tradeoff if there are order-of-magnitude savings in other parts of the CPU, e.g. less idle time due to more complete memory bus utilization).

    Modern CPUs can do parallel execution optimizations for you in real time given only legacy x86 machine code to work with. That sets the floor for what human programmers should be worried about at the human-readable language level. Even worrying about it at the compiler and toolchain level is a dubious exercise–didn’t Intel already try that with Itanium?

    C hides the cache memory hierarchy, making elaborate and sometimes unsuccessful hackery required to avoid triggering expensive cache misses.

    Those cache misses show up in cache-line profiling, and C makes it easy(er) to apply the advice of the profiler to avoid hot spots compared to other languages which don’t have predictable structure packing behavior. Rumor has it that some languages with unspecified structure packing have toolchains that can automate structure layout reorganizations based on profiling data (I have yet to see a useful one in real life though). There’s still a fairly low limit on what can be done without changing the algorithms, and so far only humans can do that in non-trivial cases (and only when a non-SICK algorithm exists).

  5. Amdahl’s law has linear scaling of performance in the number of cores as an upper bound for good reasons; one of them is SICK algorithms.
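
    To put a rough number on it: Amdahl’s bound is speedup = 1 / ((1 - p) + p/N) for a parallelizable fraction p on N cores. A toy calculation (the 95% and 180-core figures are just illustrative) shows how quickly the serial residue dominates:

        package main

        import "fmt"

        // amdahl returns the best-case speedup on n cores when only a
        // fraction p of the work can be parallelized at all.
        func amdahl(p float64, n int) float64 {
            return 1 / ((1 - p) + p/float64(n))
        }

        func main() {
            // Even with 95% of the work parallelizable, 180 cores top out
            // around 18x; the serial 5% dominates.
            fmt.Printf("%.1fx\n", amdahl(0.95, 180))
        }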

    I was watching some marketing videos for the Transputer last night. They claimed that the Transputer always scales linearly in the number of CPUs and demonstrated this with–er, ray tracers, Mandelbrot set renderers, and programs that drew Mandelbrot sets into ray-traced scenes.

    The Transputer was once hailed as the next big thing, arguably the greatest thing to happen to computing since the microprocessor itself. It failed in part because the system architecture falls down hard on SICK algorithms, particularly those with large working sets. Not only would runtime of such an algorithm be constrained by the speed of a single Transputer, but Transputers were inherently NUMA — each CPU came with a sliver of RAM on board, and system memory was the sum of all the RAM slivers on each CPU die. Accordingly, to work with a large dataset, Transputers had to divide the working set across all CPUs, and then coordinate with one another to either fetch needed data to a CPU that had it, or hand off control to another CPU, feeding it the data it needs.

    Sure made for some sweet, sweet demos though.

  6. How does someone get to the level where they can write an article on this – that gets noticed no less – without having had their faces rubbed in Amdahl’s Law already?

  7. “There is, for some real problems, no substitute for plain old serial speed, and therefore we seem to be stuck with speculative execution.”

    Maybe the solution is not in a new general language for all workloads on “big” CPUs. Maybe the solution is in developing processors for specific workloads.

    This is already done with GPUs and TPUs in machine learning. Move parallelizable loads to massively parallel GPUs etc and use traditional CPUs for the SICK stuff.

    1. >Move parallelizable loads to massively parallel GPUs etc and use traditional CPUs for the SICK stuff.

      We’re already doing this. It’s why graphics cards exist. Duh. :-)

      1. I know. But why then try to shoehorn massively parallel loads onto a serial CPU?

        This sounds like you have a hammer, so you treat every problem as a nail.

        1. >But why then try to shoehorn massively parallel loads onto a serial CPU?

          Jobs like reposurgeon don’t have any other choice. Many of its algorithms won’t run on a GPU.

        2. > I know. But why then try to shoehorn massively parallel
          > loads onto a serial CPU?

          Other way around. Running VERY serial workloads on increasingly parallel CPUs.

          Several reasons:

          0) “We” are pretty much at the limit of how fast ‘we’ can make a single core/cpu run. All the other stuff “we” are adding is to make cores/dies/whatever do more work in the same number of clock cycles.

          1) Commodity (mass market desktop/server) CPUs are marketed, like everything else, on a “checkbox” basis. This CPU has 4 cores and 8 threads, that CPU has 1 core and 4 threads, therefore THIS ONE is better. Same with all the other crud. That it’s going in a desktop that sits idle for 8 to 20 hours a day, and almost never uses anything near its real capacity, is irrelevant. Checklists win.

          2) Regardless of a specific workload, modern *operating systems* can take advantage of multicore/multithread CPUs because they have lots of discrete tasks running at the same time. Things like browsing the web, watching porn, having a video game running in the background, having your mail client checking the mail server, IRC window, etc. all running at the same time means you WANT at least 2 cores and 4 threads.

          For “SICK” workloads you’d want almost the opposite of a GPU–something to stick in the backplane of a commodity server/desktop box with an insanely fast single core processor and gobs of local memory.

          But you’d have maybe 200 sales inquiries a year, and because each CPU would have a MASSIVE cost (minimal amortization), your only sales would be to TLAs who have very specific needs.

          Except it wouldn’t, because modern multi-core chips will always be a little slower, but MUCH cheaper.

  8. The message-passing ways to “avoid synchronization problems” have been around for a long time but they have two problems: First, outside of the problems with the “natural pipelining” they’re a pain to program, and sometimes it’s really hard to make sure that these programs work right (Veritas High Availability Cluster of yore comes to mind as an extremely buggy example). Second, once the problem is not obvious, such as if you need to lock _two_ locks at once, doing it with message passing is very difficult and still has all the pitfalls of the usual locking (you can find an example of this in the section “Queues as the sole synchronization mechanism” of my book “The practice of parallel programming”, the free text available at http://web.newsguy.com/sab123/tpopp/06odata.txt ). I think the first problem can be at least partially solved with the better compiler support: to avoid the manual management of the context for the messages, it would be nicer to just write the code the old-fashioned way and then have the function’s stack frame placed in the dynamic memory, so that the execution can be picked up and continued on it when the reply message comes back. The Promises might be a step towards this, and Microsoft had also been experimenting along these lines in C#.

    The goroutines also have another problem, with the channels they use for communications (unless I’m out of date on this). Basically, the channels in Go are either with delivery guarantee and flow control, or without flow control and lossy. Once you start doing cyclical topologies, you’re stuck with either a possibility of deadlock if you use all reliable channels, or the possibility of a loss if you use an unreliable channel. A better solution would be to use the “reverse” channels with unlimited buffering and a higher priority, so that if a thread has a choice of what channel to read, it would always read the “reverse” channel first. With this approach, it’s also fairly easy to detect the defective topologies when they get constructed, as I’ve done in the Triceps CEP project.
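
    Go’s select has no priorities, so the usual way to approximate the “always read the reverse channel first” discipline I’m describing is a nested select with a default case. A rough sketch (the channel names and handlers are invented; this isn’t Triceps code):

        package main

        import (
            "fmt"
            "time"
        )

        // drain services two channels, always preferring the high-priority
        // "reverse" channel when it has something pending.
        func drain(reverse, forward <-chan string, done <-chan struct{}) {
            for {
                select {
                case m := <-reverse: // reverse traffic goes first...
                    fmt.Println("reverse:", m)
                    continue
                default: // ...but don't block when there is none
                }
                select {
                case m := <-reverse:
                    fmt.Println("reverse:", m)
                case m := <-forward:
                    fmt.Println("forward:", m)
                case <-done:
                    return
                }
            }
        }

        func main() {
            reverse := make(chan string, 8)
            forward := make(chan string, 8)
            done := make(chan struct{})
            forward <- "data 1"
            reverse <- "ack 1" // queued later than "data 1" but read first
            forward <- "data 2"
            go func() {
                time.Sleep(100 * time.Millisecond) // crude: let the buffers drain
                close(done)
            }()
            drain(reverse, forward, done)
        }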

    As for speculative execution, perhaps it’s not to be shunned but to be embraced: basically, you trade some otherwise unused computational power for a chance to make faster progress and return it to the main task. I wonder if the cycle detection in the graphs can make use of it: do the growth in multiple directions in parallel, then after a few steps stop and reconcile this progress, do the next wave, and so on. If the reconcilement is cheaper than the expansion, this would result in faster overall progress. Of course, some of the results of the expansion will be duplicates and will be thrown away but that’s to be expected with the speculative execution. So the real problem is how to make the reconcilement cheap.

  9. The problem is that Moore’s law took a detour. Originally processing speed was linear, but the worst legacy is the Intel “compatibility back to the original 4004” that the x86 abomination represents. If we can get those to go 4GHz, we probably could have gotten a PPC or some other single core RISC processor to go 20GHz. Arm has been handicapped by its focus on low power but is close to the simplicity, as was the 68k series. The x86 instruction set is like Chinese pictogram characters vs. a phonetic alphabet. Remember, the huge real estate used by the extra complex cores could be turned into a minimal-cycle RAM cache or even some kind of local memory.

    So this argument is that, because the less expensive hardware processors are not optimal except with many threads – think of requiring a V12 with multiple valves and a 10-speed transmission – we have to adapt the software and languages to use them, instead of something like a 2-rotor Wankel that runs at 20k RPM.

    I’m actually shocked that there is no embedded core processor I can think of that goes above 2GHz, and even ARM, which should be easy to the point of trivial to do, barely exists – most top out around 1.8GHz with 4 cores.

    1. >I’m actually shocked that there is no embedded core processor I can think of that goes above 2GHz, and even ARM, which should be easy to the point of trivial to do, barely exists – most top out around 1.8GHz with 4 cores.

      Well, instead of being shocked you should consider this a clue about the economics driving processor design. Your report says there’s lots of market for low power consumption and not so much for really high-speed serial CPUs.

      1. Mostly I consider it a clue about the speed of light. Half a light-nanosecond is six inches.

    2. > Arm has been handicapped by its focus on low power but is close to
      > the simplicity, as was the 68k series.

      It’s not handicapped by it, it still exists BECAUSE of it. Otherwise the Intel world would have eaten it up too.

      MIPS is still hanging on in many of the same markets. Well, it’s probably on life support.

    3. Anyone who thinks ARM is simple should spend some time looking over the Cortex-M7 user manual.

      1. I was going to reply saying something extremely similar. The original ARMv1, while not without its quirks, was a pretty svelte design. By the time they added on floating point, deep pipelining, vector instructions, other accelerations, etc., it’s a mess. Not nearly as bad as x86-64, but a mess all the same.

    4. x86 isn’t executed natively anymore by most processors. It’s translated to a risc-like instruction set internally to the processor. That is a lot of die space wasted on the silicon that translates x86 instructions, but that isn’t what’s holding down our clock speeds.

      IBM continued pushing Power chips forward to higher clock speeds but still only managed about the same speeds we’re getting with modern x86-64 chips. The best they’ve done was Power6, which managed about 5GHz on a dual core. This is right about where Intel Coffee Lake 4/6-core processors are now. Their recent Power9 chips have gone down to 4GHz, mostly because they’re pushing for far more cores now (12-24).

      Snapdragon 800-series ARM processors have definitely gone up to at least 2.3GHz in released products.

    5. No, nobody can get to 20GHz. It’s not the backwards compatibility, it’s more fundamental. Everybody hit the 4GHz limit and bounced, whatever design they tried. Oh, IBM and Sun/Oracle managed 5GHz versions of POWER in 2007 and SPARC in 2017, respectively, but those were serious stretches at the edge of possibility, not potential mass-market speeds promising future improvements.

      If anybody had any good idea how to improve clock speeds, the whole industry wouldn’t have been sitting pat for the last ten years as Intel managed to squeeze out nothing more than a mere tripling of single-thread performance from the Pentium IV at 3.8GHz to the Intel Core i7-8700K at 3.70GHz. Anybody who had a plausible plan could certainly get funding; Google could certainly use some 20GHz chips for SICK workloads.

      We’ve still got increases in transistor density on a Moore’s Law curve. Per-clock single-threaded performance is still making gradual gains. It’s just that we hit a clock-speed wall ten years ago that nobody expected, and that nobody knows how to beat. It’s not even a conspiracy by the hardware people to blame the software people; it’s just that they don’t have much of anything better to do with die space than add cores and hope the software people can figure out how to make use of them.

      REST IN PEACE
      Microprocessor Clock Speed Gains
      $273.9 billion
      Killed by the laws of physics, or a good imitation thereof
      1971-2007

      1. >It’s just that we hit a clock-speed wall ten years ago that nobody expected, and that nobody knows how to beat. It’s not even a conspiracy by the hardware people to blame the software people; it’s just that they don’t have much of anything better to do with die space than add cores and hope the software people can figure out how to make use of them.

        This matches my understanding of the situation. No more single-thread speed gains for us until we find end-runs around some pretty harsh physical limits. Spintronics might do it – nothing less fundamental will.

        The processor manufacturers are a bit like Wile E Coyote at this point, frantically running in midair until the market notices that adding more compute cores per CPU has reached zero or even negative returns for SICK-dominated job loads – which is to say “most of them.”

        Gonna be economically ugly for Intel and the like when that hits home; their whole business model and institutional structure is wrapped around the assumption that CPUs go obsolete on the timescale of 18 months (a Moore’s Law doubling time back when it applied to speed). Ain’t true no more.

      2. >Everybody hit the 4GHz limit and bounced, whatever design they tried.
        >It’s just that we hit a clock-speed wall ten years ago that nobody expected, and that nobody knows how to beat.

        This is true, and it IS Moore’s Law, but the slowdown is not due to the “cores”; it was RAM.
        The things designed into the core to make C programs work faster, all mentioned in this post or the article this post is in response to, that are invisible to the programmer, relied on main memory (good ol’ RAM) being just as fast as the core. This NEVER happened. Google “Memory Wall”. So, chip designers put small amounts of memory on die, “cache” memory. Then they put multiple levels of cache on die to keep single thread/ILP/SICK workloads going strong. It is this part of the chip, the cache, that has the most thermal density, because it is always on (if the core’s on, it’s on). And by on, I mean that every “bit” of that cache memory must be refreshed some millions of times a second, depending on the RAM type, but taking all kinds of current and losing energy as heat…

        Until a form of “stable when off AND off until needed” memory can be mass produced economically (or more economically than what’s proposed for less than 7nm) and placed on die or near to it (2.5D or 3D chip?), Moore’s Law and the “thermal envelope” derived thereof will rule the roost.

        >it’s just that they don’t have much of anything better to do with die space than add cores…
        Or add functions that push the CPU more toward being a SoC. Check a die shot of the Coffee Lake i7 you mention above. 40ish% is graphics processing, and I’d say 10ish% is L3$, tho it’s hard to tell because it’s “spread out” compared to a Ryzen die shot…

        Anecdotally…

        1) 15ish years ago, my mother had a PC with a Celeron processor, which is a P IV with cache DISABLED… it took minutes to boot into XP
        2) Not realizing the need to design within a “thermal envelope”, Pentium IVs became known for being quite the “space heater”

    6. Shrinking process size is what has enabled the clock speed increases, because electrical signals move at the speed of light and the transistors inside the processor need to be closer together to go faster.
      Getting things shrunk down further is Really Hard(tm). The Wikipedia entries on process size are pretty interesting reading. Search “7 nanometer”.

  10. The thing is, Chisnall talks about Spectre and Meltdown being a result of using speculative execution to preserve the “fast PDP-11” model, but for PDP-11s that had MMUs, modern machines have a flatter address space than the PDP-11, and using the PDP-11 model would eliminate Meltdown without a performance penalty: The PDP-11 had separate user and kernel address spaces, so userspace could not even attempt to read from kernel space.

    1. >The PDP-11 had separate user and kernel address spaces, so userspace could not even attempt to read from kernel space.

      Well, of course he’s thinking of the later revs with the 22-bit extension.

      1. I don’t believe that any PDP-11 with any MMU, 18-bit or 22-bit, lacked separate kernel and user spaces. What wasn’t present till late in the game was separate instruction and data spaces. The pre-MMU 11s with 16-bits of physical address space addressed directly were, IIRC, the only ones that had no distinction between kernel and user space.

    2. >Chisnall talks about Spectre and Meltdown being a result of using speculative execution to preserve the “fast PDP-11” model

      I withdraw my earlier comment after having seen a picture of the guy. He’s too young to remember either machine directly (no blame; the PDP-11 was just before my time).

      I now think what he really means is “the VAX model”, and isn’t aware of the PDP-11 MMU at all.

      1. For full disclosure, I’m also too young to remember it directly, but I’m keenly interested in memory management schemes and have a bit of a retrocomputing obsession, so I’m probably in the 90th percentile among geeks my age (that even know what a PDP-11 is) for knowledge of PDP-11 memory management.

    3. There is nothing preventing an x86 processor from having such separation. The much-maligned segmentation system could easily be used to enforce it. A processor can be designed with independent cache systems for each privilege ring (allowing higher-privilege threads to access the lower-privilege caches, of course) including the “negative rings” that virtualization creates. Of course, with multi-core processors as a given, we could even envision the host system owning one core for its supervisory functions, and VMs being limited to specific cores to better isolate them from each other.

      1. Back in the late 1970s/early 1980s, Intel explicitly talked up segmentation in the 8086 as something that would allow future members of the processor family to handle multiuser operating systems, and that the flat address space approach of rivals was thus an inferior design choice.

        (I fairly recently re-read a lot of old issues of Byte and Interface Age.)

  11. No matter what you pick, you are going to end up with something for systems programming that involves raw pointers. For many classes of device-driver programming, etc., you need to be able to reliably bit-bang, including specifying memory barriers, etc. C allows you to do this with some degree of ease, as long as you’re willing to wrap a tiny bit of assembly into a macro. I’m not sure how the other languages support this.

    At one of my previous jobs, where tasks were executed in parallel and processing time was measured in microseconds, one of my co-workers was given a job to implement a complex state machine with a budget of 2 cache lines. This was turned into (one of) the unions from hell. But by fitting everything into 2 cache lines it was possible to avoid cache stalls and get much higher throughput.

    1. >No matter what you pick, you are going to end up with something for systems programming that involves raw pointers.

      C will not soon be dislodged from the use cases you’re talking about, but I think they’re a shrinking niche. Much as assembler was in my beginning years as a programmer, and for the same reasons.

  12. Here’s a cynical answer … just figure out what Intel uses as the bottom layer of microcode and then we’ll all write our programs in whatever that is. Does anyone know how many layers of translation an x86 instruction goes thru before it is ever actually executed by whatever sort of processor actually underlies Intel’s “x86” chips?

    Oh, and it’s reported they have MINIX running in there somewhere – entirely invisible to the user of their “x86” chip.

    My prediction: Meltdown and Spectre are just the first droplets of a coming tsunami that will provide a painful lesson that complexity is the biggest enemy of security.

    Thus spake the cynic.

    1. > Oh, and it’s reported they have MINIX running in there somewhere – entirely invisible to the user of their “x86” chip.

      The version of the rumor I remember is that the “magic manager” part of the chip ran a modified version of MINIX at some point. (You remember, the thing that combined with a specific chipset and network interface chip gave an untraceable/unblockable back door into your system?)

      1. Intel Management Engine… *shudders* Now that’s a security nightmare. It’s enough to make one use AMD chips instead of Intel. (I wonder what hidden security bugs are lurking in AMD’s CPUs…)

    2. Yeah, the MINIX thing is part of the AMT thing that Intel does in their chipsets, not (AFAIK) in the CPU itself.

      Not that that diminishes Michael’s point or makes AMT any less awful.

      1. The impression I get is that the MINIX instance is part of AMT, but is stored in the CPU firmware. However, my understanding is that it’s perfectly normal x86 code running in a special processor mode, not implemented in the language of the chip’s own microcode.

        1. Now that I look into it more, different sources are saying different things. There was a flurry of articles after the ME/AMT vulnerability last year. Probably nobody outside Intel really knows.

          The most interesting one I found was about Dell selling special custom computers with ME disabled.

          1. The most interesting one I found was about Dell selling special custom computers with ME disabled.

            They are very expensive, and only available to TLAs or other organizations with very special security concerns.

            As will be any computer without active surveillance built in in the future.

          2. Different sources say different things because there are multiple versions of ME in different Intel products. It used to be a low-power RISC design, more recently it’s a stripped-down 486 core (or something compatible enough to run Minix with only light modifications).

        2. My information has it that the Intel ME (and MINIX OS) runs on a special, Pentium-class SoC that resides within the motherboard chipset. It’s not onboard the main CPU, but in most configurations the main CPU will not run unless it detects working ME. (Again, the CIA, NSA, etc. get special dispensation on this front — but it’s pricey.)

          Note that this means the ME processor is not vulnerable to Meltdown or Spectre, being based on a non-speculating processor.

    3. At the very least, until we get access to that layer, no programming language implementers can get a good sense of what the hardware can actually do, and start building languages that can leverage it. We’re stuck with languages that look like C because the lowest layer we can drop to looks like C.

    4. I saw this several months ago, and am leaving the link here because the info is about Michael’s point and the (partial?) fix involves golang and Linux…

      https://www.youtube.com/watch?v=iffTJ1vPCSo

      I’ve not watched it, but I’ve read the slides from

      http://u-root.tk/

      it’s involved….

      When I found the video above, this video was cued up:

      https://www.youtube.com/watch?v=MujjuTWpQJk

      I don’t pretend to understand all of it, but I AM hoping it can shed some light on both the security issue and the use of Golang as a systems language.

    5. The trick to that would be compile-on-demand, like it’s done for JavaScript now. Otherwise there will be major compatibility issues, because the microcode is probably different between the CPU models. They’ve actually tried this with VLIW, hit this exact problem, then tried to define a new high-level instruction set for it (Itanium) that would still be translated internally into microcode; it ended up worse than x86, and they gave up on that.

      Hm, I wonder if it’s possible at all to produce better code with the static compilation. The current approach takes into account the properties of the code that get discovered at run-time, driven by the data. This is probably not something that can be done statically.

  13. Sidenote: while depth-first search is hard or maybe even impossible to run in parallel, breadth-first search can and is done in parallel, even parallel on GPU (though fast parallelization on GPU needs some tricks and/or extra support from hardware for fast atomics).

    1. Why wouldn’t it be possible to run depth-first search in parallel? Store nodes to visit in a concurrent stack (e.g., Java’s ConcurrentLinkedDeque), flag already visited nodes in a concurrent map (e.g., Java’s ConcurrentHashMap) and have multiple worker threads pulling tasks from the shared stack. Total memory consumption is limited by the number of workers.
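
      A rough Go translation of that scheme (toy graph type, crude busy-wait termination): it does visit every reachable node with several workers, but note that once more than one worker is running, the strict depth-first visit order – and anything built on it, like discovery and finish times – is no longer preserved, which is the usual objection to calling this parallel DFS.

          package main

          import (
              "fmt"
              "sync"
              "time"
          )

          // graph is a toy adjacency-list representation.
          type graph map[int][]int

          // parallelExplore pulls nodes from a shared LIFO work list with
          // several workers, tracking visited nodes in a shared set.
          func parallelExplore(g graph, start, workers int) int {
              var (
                  mu      sync.Mutex
                  stack   = []int{start}
                  visited = map[int]bool{start: true}
                  pending = 1 // nodes queued or currently being expanded
                  count   int
                  wg      sync.WaitGroup
              )
              worker := func() {
                  defer wg.Done()
                  for {
                      mu.Lock()
                      if pending == 0 { // nothing queued, nothing in flight: done
                          mu.Unlock()
                          return
                      }
                      if len(stack) == 0 { // in-flight nodes may push more; wait
                          mu.Unlock()
                          time.Sleep(time.Millisecond)
                          continue
                      }
                      n := stack[len(stack)-1]
                      stack = stack[:len(stack)-1]
                      mu.Unlock()

                      // the real per-node work would happen here, off the lock

                      mu.Lock()
                      count++
                      for _, m := range g[n] {
                          if !visited[m] {
                              visited[m] = true
                              stack = append(stack, m)
                              pending++
                          }
                      }
                      pending--
                      mu.Unlock()
                  }
              }
              for i := 0; i < workers; i++ {
                  wg.Add(1)
                  go worker()
              }
              wg.Wait()
              return count
          }

          func main() {
              g := graph{0: {1, 2}, 1: {3, 4}, 2: {4, 5}, 3: {}, 4: {5}, 5: {}}
              fmt.Println("visited", parallelExplore(g, 0, 4), "nodes")
          }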

  14. I think that drawing any kind of wider conclusion about speculative execution from the existence of Spectre and Meltdown is a mistake. The problem of preventing a killed speculation from affecting state is already a solved one — processors do it just fine for architecturally-visible state. Spectre-v1 and Meltdown just demonstrated that architecturally-invisible state can be inferred by user code and that therefore for security reasons it has to be protected in the same way. That is, treat cacheline fills like register writes, forwarding them along the pipeline and only committing them when (and if) the branch/speculation retires. That shouldn’t have much performance cost, since memory reads in incorrectly speculated branches are likely to be through bad or unwanted pointers that you don’t want polluting your cache anyway.

    The same applies even more strongly to Spectre-v2, where indirect branches performed in one address space can train the branch predictor to affect predictions in another address space. Even when nothing malicious is going on, this is polluting the prediction statistics with irrelevant data, reducing the accuracy of predictions. Tagging BTB entries with the address space isn’t just a security thing, it’s the Right Thing.

    The conclusion that can be drawn from Spectre and Meltdown is that “architecturally invisible” processor optimisations aren’t invisible, and thus in the same way that certain kinds of secure code (e.g. crypto) have to take care to avoid side-channel creation by compiler optimisations, they will also have to take care to avoid side-channel creation by processor optimisations. In order to do this, it will be necessary for chip makers to start documenting their architecturally invisible behaviour (I’m sure that will give them a fit — having to tell the world all about their secret sauce!) so that these kinds of vulnerabilities can be prevented in the future.

    But as I say, this has nothing to do with speculation, and there is no reason to think that speculative/OoO execution is obsolete or a busted flush.

    1. As Edward Cree notes, there is a way to avoid the problem, that of checking the permissions to do something before letting it complete. Or in this case, become observable.
      It’s just another bug, of the “oh darn, we didn’t realize this could happen” sort, and it’s one the Multicians /didn’t/ make.
      He who knows no history is doomed to repeat it (;-))

  15. Naive questions here, but what would happen if you took speculation all the way out? Would it eliminate the Spectre and Meltdown vulnerabilities? What performance tradeoff would we be looking at? I assume slower performance but less energy. This could be a selling point, say, for any properly security-conscious need. Also, eco-friendly if you want to go green.

    Would it just be apps that would need to be rewritten? The OS itself? Something in between them? All of the above?

    1. >Naive questions here, but what would happen if you took speculation all the way out?

      We don’t have to speculate. Spectre and Meltdown fixes have been applied in the Linux kernel. This article reports a minimal hit on one network-heavy workload, but we should expect it to be worse when throughput is not bottlenecked on network stalls.

      This article says < 5% for current Intel chips, but admits to up to a 31% latency hit on older ones like i7s.

      >I assume slower performance but less energy.

      I don’t think we’ll get energy savings. The whole die is still powered up.

      >Would it just be apps that would need to be rewritten? The OS itself? Something in between them? All of the above?

      Nothing needs to be rewritten. Speculative execution is a processor-level optimization that is invisible to code above it except through side channel monitoring (that’s the leak these exploits use).

        > I don’t think we’ll get energy savings. The whole die is still powered up.

        Fewer gates are actively transitioning, which means less energy lost to capacitance in the die, so there is a non-zero power saving per cycle; however, there is a huge gap between “non-zero power saving” and “enough power saving to pay for the 5% more CPU cycles required to run the same code.”

  16. The sense I’m getting from this whole conversation (and once again, my programming skills are pretty amateurish) is that we really need a new generation of assembly code hackers (or similar) who can actually tell us what a chip is doing at a very fine level. As I understand the problem, Go, Rust, and even C don’t necessarily take us down to bare metal anymore, and possibly even the compiler mavens are no longer clued in to chip processes in an appropriate way.

    Am I correct, and does a version of assembler which leverages the full capabilities of a chip even exist any more?

    1. >Am I correct, and does a version of assembler which leverages the full capabilities of a chip even exist any more?

      If you mean, for example, making the cache hierarchy visible – the answer is no.

      On modern hardware you’re basically at the mercy of a whole bunch of microcode-level hacks that are not only outside your control, they’re not even readily understandable. Go in, try to find a good primer on cache-line optimization (for example).

      1. On modern hardware you’re basically at the mercy of a whole bunch of microcode-level hacks that are not only outside your control, they’re not even readily understandable.

        Maybe rebuilding a workable assembly language should be your next project. (Runs away quickly!)

        1. There is a tantalising possibility of Intel exposing a more ‘fluent’ instruction set. Each core has multiple execution units; Intel microcode dispatches instructions to them on the basis of what it can see. A compiler (probably…) might be able to do a rather better job because it can operate outside real time and see the big picture.

          But I don’t suppose they will, and probably for sound paranoia reasons.

            1. But Itanium is a jump; why not as (another) extension to x86_64? Rather than giving hyper-threading to the OS as virtual cores, allow micro-threading at the program level.

              Way OT to either vanilla C or Reposurgeon…

          1. You mean prefetch instructions? We had those. They were all the rage about 15 years ago.

            They sucked. Every CPU model needed very different hinting, and wrong hinting was worse than no hinting. Nobody wanted to release Wintel binaries with JIT compiler optimization.

            Then Intel got speculation running fast enough that they could figure out prefetches from unhinted machine code, and nobody looked back.

        2. I think you and esr are slightly talking past each other. “Assembly” is a thin human-readable layer on top of a processor’s machine code, which is the API through which the processor is given commands. That API is public and documented, or else nobody would be able to write software for the processor. Popular assemblers are routinely kept up to date with revisions to the opcodes.

          It used to be that these opcodes were baked directly into the silicon. Nowadays (meaning, at least the last two decades), the opcode API is actually emulated by a software layer sitting on a deeper hardware layer. That software layer (the “microcode”) and its underlying hardware layer are usually trade secrets, and totally internal to the processor.

          This description is simplified, but it’ll do, and in any case we’ll never know the full story without insider knowledge we couldn’t talk about anyway (open designs excepted).

          The point is that there’s not really anything to be done at the assembler level in the way of improving software. Even if somebody reverse-engineers a clever microcode hack (this has been done), it’s not portable or reliable, and so compilers wouldn’t dare make use of it.

          Incidentally, esr gave a slightly bad example about exposure of the cache hierarchy. Intel, at least, documents theirs and provides opcodes to manipulate it. I’d be interested in knowing whether those are used in kernels and compilers.

          1. >Incidentally, esr gave a slightly bad example about exposure of the cache hierarchy.

            I did not know this. When were those instructions introduced?

            1. *does research*

              All the ones listed in section 11.1.5 of the SDM have been available since the Pentium 4, in 2000. Chapter 11 as a whole describes the cache system.

      2. I am developing a user-mode virtual memory C++ library whose performance depends to a large extent on cache optimization. To give an idea of the level of performance tuning I’m talking about, part of this same code was at one time spending about 10% of its time doing a single “quotient and remainder” operation per data item. I got rid of that particular bottleneck by redoing the data structures so that the most common cases used a shift instead.

        It is possible to figure these things out, but you have to understand the CPU pretty well. :-)
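
        A hypothetical illustration of that kind of rework (not the actual library code): when the bucket size is forced to a power of two, the per-item quotient-and-remainder pair collapses into a shift and a mask, which is much cheaper than an integer division on most CPUs.

            package main

            import "fmt"

            // If the bucket size is forced to a power of two, the per-item
            // quotient-and-remainder pair collapses into a shift and a mask.
            const (
                bucketShift = 6 // a bucket holds 2^6 = 64 items
                bucketMask  = (1 << bucketShift) - 1
            )

            func locate(i uint64) (bucket, slot uint64) {
                return i >> bucketShift, i & bucketMask // replaces i/64, i%64
            }

            func main() {
                fmt.Println(locate(1234567)) // prints "19290 7"
            }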

    2. It cannot exist. Until Intel lets us write microcode of course…

      None of this helps Eric: if we make a processor that executes sequentially it will be slower. If we remove caching it will be slower. We could make a multi- or many-core processor that ran a different style of code very efficiently, something like a GPU, and we’d still be programming it in a dialect of C/C++ – or rather, doing so wouldn’t hurt performance. I’m not convinced Chisnall has a good point unless you exclude consideration of the implementation-dependent layers of C/C++ (or any other similar language).

      And still Eric is stuck with SICKles.

      I wonder if an automatic translation to Go will then allow finer grained profiling, because I think one possibility is to burn CPU cycles to reduce memory cycles, bit bashing to use all the spare bits in a structure for example.

  17. Achieving high speed without speculation: part 1 of 2.

    Imagine, if you would, a processor with a number of decoding pipelines and a wodge of adders, multipliers, branch evaluators and the like, enough that at any given time, if you have enough separate jobs to keep the pipelines all busy, then you have the opportunity to allocate enough part-ALUs to each pipeline to be able to execute any instructions that do not have dependencies in parallel. Pipeline 0 might get 3 adders, 1 might get a branch, 2 might get a multiply and so on. In the next cycle, the allocations might well be completely different.

    Overall, with a typical workload of independent programs, you can ensure they all make good progress despite branches and cache delays, all without speculation.

    And yes, this is substantially what Sun’s T5 is/was aimed at: throughput despite blocking delays.

    1. Achieving high speed without speculation: part 2 of 2.

      Now address SICK: how much performance can you get on sequential codes if you’re designing a T5-like processor?

      On average, there are about 5 instructions between branches in common code. If you have a compiler that understands the dependencies of these instructions, you can partition them into a sequence like

      ABCD followed by
      E (requires A), F (requires D), and a branch

      If I tell the CPU that this program is “important”, I can get four ALU-components in the first cycle, and then two ALU-components plus a branch processor in the second. Seven instructions in two cycles.

      This suggests that you only need a few parallel threads of execution in a SICK situation to speed the sequential part up significantly.

      This will fail if you have only ordered instruction sequences like A->B->C->D, but at the machine instruction level, that’s not the common case.

      I would suggest that we’re looking at too high a level: Amdahl’s law applies at the level of LI 1,42; LD 3,0xfff215; ADD 2,4; and not just at the level of init() followed by main() followed by for i:=0; i <12; i++ { go foo(i) }

  18. This reminded me of something Knuth said during an interview, where he expressed his skepticism about multicore architectures, which basically springs from the fact that most computations are inherently serial. Here is the relevant quote:

    I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won’t be surprised at all if the whole multithreading idea turns out to be a flop, worse than the Itanium approach that was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write.
    Let me put it this way: During the past 50 years, I’ve written well over a thousand programs, many of which have substantial size. I can’t think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading.

  19. Aren’t compilers as they exist now a dead end for the kind of code you are talking about? An interpreter can, in theory: 1. Detect hot areas of code, 2. Recompile sections of code based on changing operations, 3. Allocate extra resources based on need. I have always assumed the future for extremely demanding code would be a tightly integrated processor/interpreter where the interpreter, using significant resources on its own, had access to chip-level counters and diagnostics which feed back into the code to influence future execution.
    Humans have no hope of understanding the execution of complex code on a complex machine and grasping the details necessary to optimise it themselves. Only the processor has the necessary insight into what is happening now on the chip. But the processor then does not have a high-level view of how the pieces fit. This is the interpreter’s job, or perhaps some kind of hypervisor sitting above it. I think this is how the previously mentioned GPUs will be integrated into the execution environment and how the newish Big/Little model popular in phones will break upwards into more and more powerful systems.

    Incidentally, with reposurgeon, can you do a multi-pass approach, doing well-defined transforms to remove horrible edge cases and easing the work your main code then has to do?

    1. >An interpreter can, in theory: 1. Detect hot areas of code, 2. Recompile sections of code based on changing operations, 3. Allocate extra resources based on need.

      In the real world, where would I find such a magical beast?

      I’ve tried JIT via PyPy. It gave me some speedup. Not enough. Eventually, after optimizations have been exhausted, one is bound to hit the fact that Python’s interpreter overhead is really high, to the point where it is 10-15x slower than equivalent C code. I think we’ve reached “eventually”.

      >Incidentally, with reposurgeon, can you do a multi-pass approach to do well defined transforms to remove horrible edge cases, easing the work your main code then has to do?

      In some cases – that’s what repocutter is for. In most cases, not possible.

        1. >PyPy2 or PyPy3?

          esr@snark:~$ pypy --version
          Python 2.7.13 (5.10.0+dfsg-3build2, Feb 06 2018, 18:37:50)
          [PyPy 5.10.0 with GCC 7.3.0]
          
          1. Given that reposurgeon involves its own interpreted language, do you have any thoughts on using the RPython toolchain?

            1. >Given that reposurgeon involves its own interpreted language, do you have any thoughts on using the RPython toolchain?

              I don’t understand it well enough to use it. Except insofar as I do use pypy to interpret reposurgeon.

  20. I have followed this line of reasoning for a good while.

    I think C can be said to be low-level in certain respects; for example, the native types in a C compiler are matched by CPU types, and it doesn’t add much of an abstraction layer on top of what the CPU can do.

    Along an axiomatic line of reasoning, C is still low-level, because the axioms at the base of the language are the same axioms that exist at the CPU level. I think that is convenient, because it’s still relatively easy to know what kind of machine code a C compiler will generate from a given piece of C code.

    But when it comes to many other things, C is clearly deficient. Even beside the multithreading and cache issues you mention, it doesn’t really support interrupt/exception handling, for example.

    I tried to get into microkernel coding a while back to see how far from the hardware C actually is: ACPI/APIC, the PCI bus, and all those things that sit below the multi-core/cache handling, how it gets done at the bare-metal level, and how much assembly code it takes to make it work.

    It’s clear that doing this in C necessarily implies writing code that manually takes into account how the hardware and multithreading work, going over the compiler’s head. That can become very dicey.

    There are many writings from Linus railing at GCC developers on those issues: how to manage instruction ordering, memory fences, and all kinds of issues arising from hardware that the compiler is not aware of at all.

    The problem, I think, is also largely due to the PC architecture, where the CPU is not the only thing to take into account: at the hardware level the PCI bus already manages memory sharing with other chips, and the APIC sits behind the multi-core architecture too, managing cache, execution scheduling, and synchronisation with all the other chipsets in the computer.

    Looking into the PCI/APIC architecture already gives a good clue about how those issues of cache, memory layout, and asynchronous execution can become very complex at the hardware level, and a large part of it is even out of the hands of the CPU itself. There is a deep connection between the MMU, the CPU cores, the caches, and the PCI/APIC.

    These past years I’ve been attempting to develop a runtime that enables a certain number of things on top of C: a sort of runtime with an ABI, used to build C programs instead of the C standard library, that adds garbage collection, lockless shared structures using atomics, and double dereferencing in many places in order to allow cache optimisation and other kinds of memory optimisation, while removing locks as much as possible.

    I already have working examples of lockless shared lists, for things like message queues or HTTP requests, where one thread adds requests and all the threads compete to execute them in parallel without a lock. I think my system can avoid using locks in many simple cases.
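
    As a rough illustration of that pattern, here is a minimal sketch in Go (not the commenter’s actual C runtime) of a lock-free shared list in the Treiber-stack style: pushes and pops use only compare-and-swap, and workers compete for items without taking any lock.

      package main

      import (
          "fmt"
          "sync"
          "sync/atomic"
      )

      // node is one entry in the shared list; req is the work-item payload.
      type node struct {
          req  string
          next *node
      }

      // lockFreeList keeps a single atomic head pointer, updated only with
      // compare-and-swap, never with a mutex.
      type lockFreeList struct {
          head atomic.Pointer[node]
      }

      // push adds a request: link the new node in front of the current head and
      // CAS it in, retrying if another thread won the race.
      func (l *lockFreeList) push(req string) {
          n := &node{req: req}
          for {
              old := l.head.Load()
              n.next = old
              if l.head.CompareAndSwap(old, n) {
                  return
              }
          }
      }

      // pop removes one request, or reports false if the list is empty.
      // Competing workers simply retry until one of them wins the CAS.
      func (l *lockFreeList) pop() (string, bool) {
          for {
              old := l.head.Load()
              if old == nil {
                  return "", false
              }
              if l.head.CompareAndSwap(old, old.next) {
                  return old.req, true
              }
          }
      }

      func main() {
          var list lockFreeList
          for i := 0; i < 8; i++ {
              list.push(fmt.Sprintf("request-%d", i))
          }

          var wg sync.WaitGroup
          for w := 0; w < 4; w++ { // four workers compete for the same list
              wg.Add(1)
              go func(id int) {
                  defer wg.Done()
                  for {
                      req, ok := list.pop()
                      if !ok {
                          return
                      }
                      fmt.Printf("worker %d handled %s\n", id, req)
                  }
              }(w)
          }
          wg.Wait()
      }

    Note that Go’s garbage collector is what makes the naive pop safe against the ABA problem; a C runtime like the one described here would need hazard pointers or epoch-based reclamation to get the same guarantee.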

    For database-like software, for example, it could be possible that when manipulating a cached sorted list, all the objects in the list get relocated contiguously in list order to maximize cache efficiency when traversing it, and the same for a tree when the traversal order can be predicted.

    I have also thought about a system where each pointer carries a timestamp of its last access, and memory locations are then sorted taking the temporality of access into account.

    I can confirm that debugging a runtime that manipulates multithread-shared lists with garbage collection is clearly a hot mess, but I’m getting to the point where it starts to be fairly stable in my tests.

    But for a certain number of cases, especially when it comes to exploiting multiple cores for server architectures, I would say it’s maybe better to follow the PHP road and be able to execute many independent sequential requests in parallel, rather than trying to speed up each request on its own using parallel code, which increases the complexity of the code for not-that-great results. Keeping the request code simple and sequential, but making it easy to execute many requests at the same time, is I think pretty much what the Go language is trying to do.

    For all the cases where some relatively simple code has to be run millions of times with different parameters, as in an HTTP/SMTP server, Go might be an efficient way to do this.
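
    A minimal sketch of that model in Go, assuming nothing beyond the standard library (the handler and port are made up): net/http already runs each incoming request in its own goroutine, so the per-request code stays simple and sequential while the server as a whole spreads across the available cores.

      package main

      import (
          "fmt"
          "log"
          "net/http"
      )

      // handler does simple, purely sequential per-request work; the concurrency
      // comes from the server running many of these at once, one goroutine each.
      func handler(w http.ResponseWriter, r *http.Request) {
          fmt.Fprintf(w, "hello from %s\n", r.URL.Path)
      }

      func main() {
          http.HandleFunc("/", handler)
          log.Fatal(http.ListenAndServe(":8080", nil))
      }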

    And I would say a lot of CPU demand nowadays comes from running huge server infrastructures like Facebook or Twitter, which are millions of independent executions of relatively simple code, rather than from running very intensive, long sequential code.

    So already having a language able to do this can be a very good thing, but it’s limited to cases like server software, where the same code needs to run in millions of independent instances executing in parallel on their own parameters and data.

    If what you need is a lot of CPU power to process sequential data, it seems you are out of luck in the current state of things.

    1. Maybe a language that would fit this definition of ‘low level’ better is AML, the language that is part of the ACPI specification, because it’s closer to the modern PC architecture: it supports threading, memory-mapped register structures, multiple cores, interrupt handling, power management, and so on.

      I think this is the kind of language that should be used to replace C for ‘low level’ tasks; I’m pretty sure even drivers could be written in it. And it has built-in support for describing hardware access through the PCI bus, asynchronous execution, etc.

  21. I think part of the problem you are pointing out is also due to a conflation of two meanings in the term ‘low level’, because in the context of computer science it really means two things at the same time: that the language has a low level of abstraction (mostly uses self-evident axioms), and that the language is ‘close to the hardware’.

    The assumption that the hardware operates at a low abstraction level is, I think, what creates this confusion about what should be considered a low-level language. Does it mean a language with a low level of abstraction, or does it mean a language that is close to the hardware?

    Originally the silicon was supposed to be fairly simple, mostly implementing simple arithmetic and memory operations, with the more complex logic implemented in software; but in a modern PC architecture it’s clear the hardware already contains several levels of abstraction, and it would take a high-level language to actually reach the level of abstraction the hardware operates at.

    One thing I often find quite frustrating is that asynchronous execution, for example, is supposed to be a feature of high-level languages, like event listeners in AS3 or JavaScript, or the Android SDK with its worker threads. Because this sort of thing is not present in so-called ‘low level’ languages such as C, it’s assumed to require some level of abstraction over the hardware, but that’s not really the case.

    The hardware, even very old hardware, has always had support for DMA/IRQ, and mechanisms to share memory between different chipsets, with the IRQ synchronising access; and at every level an interrupt mechanism is very similar to a thread.

    It’s just that the kernel mostly flattens the DMA/IRQ mechanism into a synchronous thing, and then high-level languages redo the asynchronous execution on top of it with a software layer.

    It’s the kind of thing that makes me wonder how it would turn out if a language like Go were interrupt-driven, since it is supposed to be used to answer network requests; obviously in those cases the main driver behind execution scheduling is network activity. If the language could have access to the interrupt controller, it would be easy to load-balance the network traffic across the different cores purely at the hardware level, with a very low-level language (in the sense of close to the hardware, requiring only a very small runtime/software layer).

    For server software that consists essentially of reading data from a hard drive and sending it to a network controller, like a vanilla HTTP/FTP server, having a way to program the DMA directly would probably avoid some amount of copying and bypass the cache entirely, because the data is only being moved between two chipsets other than the main CPU and should never hit the cache at all; possibly it could even be used directly ‘as is’, without any copy, with the CPU having little to do other than scheduling the DMA transfer.

    But one of the problems when it comes to memory and cache is that it’s very dependent on the hardware. Even most algorithms that use threads to gain performance from parallel execution will only see a gain with one or two threads per core; having 180 threads on a machine that has only 4 or 8 cores will not be faster than something with only 8 or 16 threads, and will most likely be slower. And cache-line size, pipelining, the memory model, etc. are highly dependent on the CPU architecture.

    That is in large part what I intended to do with my runtime: basically rewrite all the memory management from scratch to have better control over alignment, multithreading, and garbage collection, with heaps exclusive to each thread, double dereferencing for real-time relocation, lockless access, and constructs built on atomic operations.

    Having a language able to produce efficient machine code (in the ‘close to the hardware’ sense) would require something with a lot of runtime detection, like ASL with all the hardware definitions made available to the language, including cache-line size, memory model, and page size for each core, and eventually access to memory-mapped registers, DMA buffer addresses, and the interrupt controller, in order to program them efficiently.

    Something that can abstract over all the complexity of modern CPU and computer architecture, which probably requires several layers of abstraction over simple axioms like arithmetic and memory operations, hence making it a high-level language, even if it just maps the hardware.

    It could resemble a JIT, where the logic is expressed in a language with several layers of abstraction but is then compiled at runtime to quite simple machine code that takes into account the features of the hardware the program is being run on.

    It’s the kind of paradox where you would need a high-level language to express concepts that are very simple and straightforward at the hardware level and could be compiled to rather simple machine code. In other words, the source code could become more complex than the final compiled machine code, so that the source can take into account all the possible optimisations the hardware can do, even if that is invisible in the compiled machine code.

    Deep down it’s probably what happens behind the scenes with GLSL/OpenCL: the compiler optimises the binary machine code for the specifics of the hardware, the number of cores/units, the cache lines, and the float/int formats the card supports. It can only work this way because the program is compiled with tools specific to the hardware it will run on, and the source-code definition is flexible enough to be efficient on different hardware, with different numbers of cores/units, different formats, etc. The types used in GLSL/OpenCL are more abstractions than low-level binary representations of the data.

    To avoid speculative execution, the compiler would need to know which circumstances can trigger it and how to compile the code to avoid them if necessary. That could require expressing the program using abstractions that represent the CPU’s inner architecture, pipeline, etc., in order to produce the right machine code, even if that information will not really appear in the compiled machine code.

    For general programming, I guess we are stuck with the less-than-ideal state of things where we have to use languages with a low level of abstraction to program hardware that would require a higher level of abstraction to be programmed efficiently, and high-level languages that are completely oblivious to the hardware, since their abstractions concern application logic more than hardware logic.

    Other than languages like GLSL/OpenCL, which have compilers dedicated to producing efficient machine code for their target hardware, it will most likely remain like this for the foreseeable future.

  22. > because some operations bottleneck on on SICK algorithms
    Double “on”?

    The big observation I have: Big-O notation (which computer scientists love), when used to describe algorithmic efficiency, generally assumes sequential processing. It is a measure of the number of operations, which is generally assumed to be a bound on execution time; this is not untrue, but it is easily misleading when computations occur in parallel. It is perhaps the most obvious comparison metric for algorithms solving a particular class of problems, but because it completely ignores any potential parallelism, it strongly encourages selection of SICK algorithms over others that arguably could solve the same problem with better performance on modern hardware (and those “less efficient” algorithms may even be easier to understand and implement).

    I’m not going to argue that it is always possible to eliminate SICK algorithms; there are certainly cases where such algorithms are impossible or infeasible to eliminate, and I’d say that cryptography is almost certainly one of them, particularly with current encryption schemes. I’m also not surprised that reposurgeon is limited by serial performance, because version-control history is usually a mostly-linear DAG, and any situation where the data itself is sequential will not parallelize easily, if at all.

    However, I think that a number of the other algorithms listed as examples (Dijkstra’s algorithm and depth-first search in particular) have alternatives that could make better use of available parallelism at the hardware level. This will depend, of course, on the actual requirements; in some cases the alternatives may be overkill, and on constrained hardware they may be too expensive.

    As an example, if you need shortest paths between more than one pair of nodes, I would explore the Floyd-Warshall algorithm in preference to Dijkstra; from a brief examination of the algorithm, I suspect that on modern hardware (with low per-thread overhead) the time complexity could be something much closer to Θ(V) than the purely serial Θ(V^3), and is likely to be better than worst-case Dijkstra. I’d have to spend time finding a good average-case analysis of Dijkstra to see exactly where the break-even point is.
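
    As a rough sketch of that direction (not a tuned implementation), here is Floyd-Warshall in Go with the row updates for each intermediate vertex k farmed out to goroutines. The graph is made up, and non-negative edge weights are assumed so that row k does not change during round k and the rows can be updated independently.

      package main

      import (
          "fmt"
          "sync"
      )

      const inf = 1 << 30 // "no edge" marker, large enough to never be a real distance

      // floydWarshallParallel updates dist in place to all-pairs shortest paths.
      // For each k the V row updates are independent, so with enough hardware
      // parallelism each round costs roughly O(V) wall-clock instead of O(V^2).
      func floydWarshallParallel(dist [][]int) {
          v := len(dist)
          for k := 0; k < v; k++ {
              var wg sync.WaitGroup
              for i := 0; i < v; i++ {
                  wg.Add(1)
                  go func(i int) { // one goroutine per row; a worker pool would be kinder
                      defer wg.Done()
                      dik := dist[i][k]
                      if dik >= inf {
                          return
                      }
                      for j := 0; j < v; j++ {
                          if dkj := dist[k][j]; dkj < inf {
                              if d := dik + dkj; d < dist[i][j] {
                                  dist[i][j] = d
                              }
                          }
                      }
                  }(i)
              }
              wg.Wait() // all rows must finish before k advances
          }
      }

      func main() {
          // Small made-up weighted digraph in adjacency-matrix form.
          dist := [][]int{
              {0, 3, inf, 7},
              {8, 0, 2, inf},
              {5, inf, 0, 1},
              {2, inf, inf, 0},
          }
          floydWarshallParallel(dist)
          fmt.Println(dist)
      }

    Spawning one goroutine per row is deliberately naive; in practice you would batch rows across a pool sized near the core count, for exactly the reason raised earlier in this thread about thread counts far beyond the hardware.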

