The big break in computer languages

My last post (The long goodbye to C) elicited a comment from a C++ expert I was friends with long ago, recommending C++ as the language to replace C. Which ain’t gonna happen; if that were a viable future, Go and Rust would never have been conceived.

But my readers deserve more than a bald assertion. So here, for the record, is the story of why I don’t touch C++ any more. This is a launch point for a disquisition on the economics of computer-language design, why some truly unfortunate choices got made and baked into our infrastructure, and how we’re probably going to fix them.

Along the way I will draw aside the veil from a rather basic mistake that people trying to see into the future of programming languages (including me) have been making since the 1980s. Only very recently do we have the field evidence to notice where we went wrong.

I think I first picked up C++ because I needed GNU eqn to be able to output MathML, and eqn was written in C++. That project succeeded. Then I was a senior dev on Battle For Wesnoth for a number of years in the 2000s and got comfortable with the language.

Then came the day we discovered that a person we incautiously gave commit privileges to had fucked up the game’s AI core. It became apparent that I was the only dev on the team not too frightened of that code to go in. And I fixed it all right – took me two weeks of struggle. After which I swore a mighty oath never to go near C++ again.

My problem with the language, starkly revealed by that adventure, is that it piles complexity on complexity upon chrome upon gingerbread in an attempt to address problems that cannot actually be solved because the foundational abstractions are leaky. It’s all very well to say “well, don’t do that” about things like bare pointers, and for small-scale single-developer projects (like my eqn upgrade) it is realistic to expect the discipline can be enforced.

Not so on projects with larger scale or multiple devs at varying skill levels (the case I normally deal with). With probability asymptotically approaching one over time and increasing LOC, someone is inadvertently going to poke through one of the leaks. At which point you have a bug which, because of over-layers of gnarly complexity such as STL, is much more difficult to characterize and fix than the equivalent defect in C. My Battle For Wesnoth experience rubbed my nose in this problem pretty hard.

What works for a Steve Heller (my old friend and C++ advocate) doesn’t scale up when I’m dealing with multiple non-Steve-Hellers and might end up having to clean up their mess. So I just don’t go there any more. Not worth the aggravation. C is flawed, but it does have one immensely valuable property that C++ didn’t keep – if you can mentally model the hardware it’s running on, you can easily see all the way down. If C++ had actually eliminated C’s flaws (that is, been type-safe and memory-safe) giving away that transparency might be a trade worth making. As it is, nope.

One way we can tell that C++ is not sufficient is to imagine an alternate world in which it is. In that world, older C projects would routinely up-migrate to C++. Major OS kernels would be written in C++, and existing kernel implementations like Linux would be upgrading to it. In the real world, this ain’t happening. Not only has C++ failed to present enough of a value proposition to keep language designers uninterested in imagining languages like D, Go, and Rust, it has failed to displace its own ancestor. There’s no path forward from C++ without breaching its core assumptions; thus, the abstraction leaks won’t go away.

Since I’ve mentioned D, I suppose this is also the point at which I should explain why I don’t see it as a serious contender to replace C. Yes, it was spun up eight years before Go and nine years before Rust – props to Walter Bright for having the vision. But in 2001 the example of Perl and Python had already been set – the window when a proprietary language could compete seriously with open source was already closing. The wrestling match between the official D library/runtime and Tango hurt it, too. It has never recovered from those mistakes.

So now there’s Go (I’d say “…and Rust”, but for reasons I’ve discussed before I think it will be years before Rust is fully competitive). It is type-safe and memory-safe (well, almost; you can partway escape using interfaces, but it’s not normal to have to go to the unsafe places). One of my regulars, Mark Atwood, has correctly pointed out that Go is a language made of grumpy-old-man rage, specifically rage by one of the designers of C (Ken Thompson) at the bloated mess that C++ became.
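
About that “partway escape” parenthetical – here’s a minimal sketch of what the escape hatch looks like in practice (hypothetical code, not from any real project). An empty interface lets a value slip past static typing, but the slip is caught at runtime instead of corrupting memory the way a bad C cast can:

```go
package main

import "fmt"

func main() {
    var v interface{} = "not an int" // static typing deliberately bypassed

    // The comma-ok form of a type assertion hands back a failure flag
    // instead of garbage bits.
    n, ok := v.(int)
    fmt.Println(n, ok) // prints: 0 false

    // The bare form would panic cleanly ("interface conversion"),
    // not scribble on the heap:
    // n = v.(int)
}
```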

I can relate to Ken’s grumpiness; I’ve been muttering for decades that C++ attacked the wrong problem. There were two directions a successor language to C might have gone. One was to do what C++ did – accept C’s leaky abstractions, bare pointers and all, for backward compatibility, then try to build a state-of-the-art language on top of them. The other would have been to attack C’s problems at their root – fix the leaky abstractions. That would break backward compatibility, but it would foreclose the class of problems that dominate C/C++ defects.

The first serious attempt at the second path was Java in 1995. It wasn’t a bad try, but the choice to build it over a j-code interpreter made it unsuitable for systems programming. That left a huge hole in the options for systems programming that wouldn’t be properly addressed for another 15 years, until Rust and Go. In particular, it’s why software like my GPSD and NTPsec projects is still predominantly written in C in 2017 despite C’s manifest problems.

This is in many ways a bad situation. It was hard to really see this because of the lack of viable alternatives, but C/C++ has not scaled well. Most of us take for granted the escalating rate of defects and security compromises in infrastructure software without really thinking about how much of that is due to really fundamental language problems like buffer-overrun vulnerabilities.

So, why did it take so long to address that? It was 37 years from C (1972) to Go (2009); Rust launched only a year later. I think the underlying reasons are economic.

Ever since the very earliest computer languages it’s been understood that every language design embodies an assertion about the relative value of programmer time vs. machine resources. At one end of that spectrum you have languages like assembler and (later) C that are designed to extract maximum performance at the cost of also pessimizing developer time and costs; at the other, languages like Lisp and (later) Python that try to automate away as much housekeeping detail as possible, at the cost of pessimizing machine performance.

In broadest terms, the most important discriminator between the ends of this spectrum is the presence or absence of automatic memory management. This corresponds exactly to the empirical observation that memory-management bugs are by far the most common class of defects in machine-centric languages that require programmers to manage that resource by hand.

A language becomes economically viable where and when its relative-value assertion matches the actual cost drivers of some particular area of software development. Language designers respond to the conditions around them by inventing languages that are a better fit for present or near-future conditions than the languages they have available to use.

Over time, there’s been a gradual shift from languages that require manual memory management to languages with automatic memory management and garbage collection (GC). This shift corresponds to the Moore’s Law effect of decreasing hardware costs making programmer time relatively more expensive. But there are at least two other relevant dimensions.

One is distance from the bare metal. Inefficiency low in the software stack (kernels and service code) ripples multiplicatively up the stack. Thus, we see machine-centric languages down low and programmer-centric languages higher up, most often in user-facing software that only has to respond at human speed (time scale 0.1 sec).

Another is project scale. Every language also has an expected rate of induced defects per thousand lines of code due to programmers tripping over leaks and flaws in its abstractions. This rate runs higher in machine-centric languages, much lower in programmer-centric ones with GC. As project scale goes up, therefore, languages with GC become more and more important as a strategy against unacceptable defect rates.

When we view language deployments along these three dimensions, the observed pattern today – C down below, an increasing gallimaufry of languages with GC above – almost makes sense. Almost. But there is something else going on. C is stickier than it ought to be, and used way further up the stack than actually makes sense.

Why do I say this? Consider the classic Unix command-line utilities. These are generally pretty small programs that would run acceptably fast implemented in a scripting language with a full POSIX binding. Re-coded that way they would be vastly easier to debug, maintain and extend.
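
To make the scale point concrete, here is a hypothetical sketch of cat(1)’s core – in Go rather than a scripting language, and minus the real utility’s flags and edge-case behavior (which is exactly the part that makes translation expensive):

```go
// A bare-bones cat(1): copy the named files (or stdin) to stdout.
package main

import (
    "io"
    "os"
)

func main() {
    if len(os.Args) == 1 {
        io.Copy(os.Stdout, os.Stdin)
        return
    }
    for _, name := range os.Args[1:] {
        f, err := os.Open(name)
        if err != nil {
            os.Exit(1) // the real cat prints a diagnostic and presses on
        }
        io.Copy(os.Stdout, f)
        f.Close()
    }
}
```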

Why are these still in C (or, in unusual exceptions like eqn, in C++)? Transition costs. It’s difficult to translate even small, simple programs between languages and verify that you have faithfully preserved all non-error behaviors. More generally, any area of applications or systems programming can stay stuck to a language well after the tradeoff that language embodies is actually obsolete.

Here’s where I get to the big mistake I and other prognosticators made. We thought falling machine-resource costs – increasing the relative cost of programmer-hours – would be enough by themselves to displace C (and non-GC languages generally). In this we were not entirely or even mostly wrong – the rise of scripting languages, Java, and things like Node.js since the early 1990s was pretty obviously driven that way.

Not so the new wave of contending systems-programming languages, though. Rust and Go are both explicitly responses to increasing project scale. Where scripting languages got started as an effective way to write small programs and gradually scaled up, Rust and Go were positioned from the start as ways to reduce defect rates in really large projects. Like, Google’s search service and Facebook’s real-time-chat multiplexer.

I think this is the answer to the “why not sooner” question. Rust and Go aren’t actually late at all, they’re relatively prompt responses to a cost driver that was underweighted until recently.

OK, so much for theory. What predictions does this one generate? What does it tell us about what comes after C?

Here’s the big one. The largest trend driving development towards GC languages hasn’t reversed, and there’s no reason to expect it to. Therefore: eventually we will have GC techniques with low enough latency overhead to be usable in kernels and low-level firmware, and those will ship in language implementations. Those are the languages that will truly end C’s long reign.

There are broad hints in the working papers from the Go development group that they’re headed in this direction – references to academic work on concurrent garbage collectors that never have stop-the-world pauses. If Go itself doesn’t pick up this option, other language designers will. But I think the Go team will – the business case for Google to push them there is obvious (can you say “Android development”?).
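
(Current Go already exposes the pause numbers this all hinges on. A minimal sketch – illustrative, not a benchmark:)

```go
package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Churn the heap so the collector has work to do.
    for i := 0; i < 1000000; i++ {
        _ = make([]byte, 1024)
    }

    var s runtime.MemStats
    runtime.ReadMemStats(&s)
    // PauseNs is a circular buffer of recent stop-the-world pause times;
    // driving these to zero is the concurrent-GC goal described above.
    last := s.PauseNs[(s.NumGC+255)%256]
    fmt.Printf("GC cycles: %d, last pause: %d ns\n", s.NumGC, last)
}
```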

Well before we get to GC that good, I’m putting my bet on Go to replace C anywhere that the GC it has now is affordable – which means not just applications but most systems work outside of kernels and embedded. The reason is simple: there is no path out of C’s defect rates with lower transition costs.

I’ve been experimenting with moving C code to Go over the last week, and I’m noticing two things. One is that it’s easy to do – C’s idioms map over pretty well. The other is that the resulting code is much simpler. One would expect that, with GC in the language and maps as a first-class data type, but I’m seeing larger reductions in code volume than initially expected – about 2:1, similar to what I see when moving C code to Python.
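
Here’s the flavor of that shrinkage, as a made-up fragment (not lifted from any real conversion): the kind of counting loop that needs a hand-rolled hash table and its malloc/free bookkeeping in C collapses to a few lines once maps are first-class.

```go
package main

import (
    "fmt"
    "strings"
)

// wordFreq does in four lines what takes a hash-table library,
// allocation, and cleanup code in C.
func wordFreq(text string) map[string]int {
    counts := make(map[string]int)
    for _, w := range strings.Fields(text) {
        counts[w]++
    }
    return counts
}

func main() {
    fmt.Println(wordFreq("to be or not to be")["to"]) // prints: 2
}
```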

Sorry, Rustaceans – you’ve got a plausible future in kernels and deep firmware, but too many strikes against you to beat Go over most of C’s range. No GC, plus Rust is a harder transition from C because of the borrow checker, plus the standardized part of the API is still seriously incomplete (where’s my select(2), again?).

The only consolation you get, if it is one, is that the C++ fans are screwed worse than you are. At least Rust has a real prospect of dramatically lowering downstream defect rates relative to C anywhere it’s not crowded out by Go; C++ doesn’t have that.

233 comments

  1. An orthogonal problem is the type system: in most object-derived systems, there is a complex type system with at least single inheritance. This leads to an error a former customer made: we /completely/ modelled a tournament in the form of a type hierarchy.

    Net result? When you wanted to change it, you had to change everything. When you wanted to add something new, you had to apply it to everything. We re-invented spaghetti code, only this time it was spaghetti data structures.

    Instead of abstracting and simplifying, we made it more complex. Bummer!

    1. >Instead of abstracting and simplifying, we made it more complex. Bummer!

      Yeah, this is why Rust and Go don’t have class inheritance. Good call by both design teams.

      1. Absolutely, inheritance in large projects tends to cause so many problems and makes code difficult to understand and follow! OOP with composition and interfaces is all you need.
        Except for the lack of Sum types and Generics :D

      2. At that point I am not even sure what the point of OOP is. SQL tables as a single “type” are useful for a tremendous range of purposes; while I have never tried systems programming, if I did try I would probably use any “list of thingies with named properties” idiom that comes my way, be that a hash table or a struct.

        OOP was pretty much *invented* for inheritance, at least the way I was taught at school, these awesome chains of concepts where a BMW inherits from Car and Car from Vehicle, i.e. basically what David described was taught as good design at my school… but if it is not, then why even bother? I take an SQL table or the language equivalent thereof, hashtable, struct, whatever, name it Car, some of the fields will be Make and Model, and call it a day.

        Essentially I have to find the sweet spot in the conceptual category-subcategory tree, which in this case is Car. Inheritance was meant to be able to move up and down on this tree, but the tree is not cast in stone: the Chevrolet company can acquire Daewoo and next week the Daewoo Matiz is called the Chevrolet Matiz. Then I am sure as hell not having any object class called BMW: that will be data, easily changed, not part of the data structure!

        Encapsulation is a better idea but unless I have massive, database-like data structures (which in real life I always do but system programmers maybe not), how am I going to automatically test any function that works not only with its own parameters but pretty much everything else it can find inside the same object? I mean great, objects cut down global variable hell to a protected variable minihell that is far easier to eyeball but is it good enough for automated testing? I think not.

        I am afraid to write things like this, because only a narrow subset of my profession involves writing code and as such I am not a very experienced programmer, so I should not really argue with major CS concepts. Still… for example Steve Yegge had precisely this beef with OOP: you are writing software, yet OOP really seems to want to make you build something like unchangeable, fixed, cast-in-stone hardware.

        1. OOP was hugely hyped, especially in the corporate world by Java marketers, who extolled the virtues of how OOP and Java would solve all their business problems.

          As it turns out, POP (protocol-oriented programming) is the better design, and so all modern languages are using it. POP’s critical feature is generics, so it’s baffling as to why Go does not have generics.

          Basically, rather than separating structures into class hierarchies, you assign shared traits to structures in a flat hierarchy. You can then pull out trait-based generics to execute some rather fantastical solutions that would otherwise require an incredible degree of copying and pasting (a la Go).

          This then allows you to interchangeably use a wide variety of types as inputs and fields into these generic functions and structures, in a manner that’s very efficient due to adhering to data-oriented design practices.

          It’s incredibly useful when designing entity-component system architectures, where components are individual pieces of data that are stored in a map elsewhere; entities consist of multiple components (but rather than owning their components directly, they hold IDs to their components), and are able to communicate with other entities; and systems, which are the traits that are implemented on each entity that is within the world map. It enables some incredible flexibility and massively parallel solutions, from UIs to game engines.

          Entities can have completely different types, but the programmer does not need to be aware of that, because they all implement the same traits, and so they can interact with each other via their trait impls. And in structuring your software architecture in this way, you ensure that specific components are only ever mutably borrowed when it is needed, and thus you can borrow many components and apply systems to them in parallel.
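
          (A rough Go-flavored sketch of the shape described above – the commenter is talking about Rust traits, and all names here are hypothetical. Components are plain data stored in maps keyed by entity ID, and a “system” is just a pass over those maps:)

          ```go
          package main

          import "fmt"

          type EntityID int

          // Components are plain data stored outside the entities...
          type Position struct{ X, Y float64 }
          type Velocity struct{ DX, DY float64 }

          // ...in per-component maps; an entity is really just an ID.
          type World struct {
              Positions  map[EntityID]*Position
              Velocities map[EntityID]*Velocity
          }

          // A "system": one pass over every entity holding the needed components.
          func (w *World) Move(dt float64) {
              for id, v := range w.Velocities {
                  if p, ok := w.Positions[id]; ok {
                      p.X += v.DX * dt
                      p.Y += v.DY * dt
                  }
              }
          }

          func main() {
              w := &World{
                  Positions:  map[EntityID]*Position{1: {0, 0}},
                  Velocities: map[EntityID]*Velocity{1: {1, 2}},
              }
              w.Move(0.5)
              fmt.Println(*w.Positions[1]) // prints: {0.5 1}
          }
          ```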

          1. > POP’s critical feature is generics, so it’s baffling as to why Go does not have generics.

            translation: I don’t know anything but will take a word anyway

    2. Ah! Trying to directly mimic your prototype in the class structure. So tempting: it feels like you are encapsulating Reality, and saving lines of code all at the same time. Infinite wins For Great Justice!

      Right up until you slam into one of the many brick walls that litter the path (as I did earlier today).

    3. > An orthogonal problem is the type system: in most object-derived systems, there is a complex type system with at least single inheritance.

      Other languages like Python also provide multiple inheritance.

      Either way, C++ moved past class hierarchies two decades ago. The standard algorithms and containers are built around regular types instead of class hierarchies. See “Inheritance Is The Base Class of Evil”:

      https://channel9.msdn.com/Events/GoingNative/2013/Inheritance-Is-The-Base-Class-of-Evil

    4. > Instead of abstracting and simplifying, we made it more complex.

      Yep. Plus, not reusable. You can’t just snip out a useful bit of code for reuse because everything is dependent on something else. Effectively, every project is written in its own unique language.

  2. @Eric, nice article, still you haven’t said a thing about Nim lang, which is both high level and extensible like Lisp, and has a GC suitable for embedded systems and kernels.

    1. >still you haven’t said a thing about Nim lang

      If you show me a Nim deployment that is more than a toy I might get interested.

    2. We use Nim in a data science deployment. We like it, because it shares a lot of Go’s nice features but has much better C interoperability.

      I prefer go, but Nim is pretty neat.

      1. >I prefer go, but Nim is pretty neat.

        Huh. And now it turns out that a guy who tried writing a Unix kernel in Rust bailed out to Nim.

        This surprises me. I had bought into the pro-Rust argument that Rust, once it matured, would be a good match for kernel development on theoretical grounds related to the tradeoff between the benefits of type safety versus the additional friction costs of Rust. Now someone actually working at that coal face says “Nope.”

        More detail here.

        I suppose it’s possible that sticking to Rust would have been a better choice and that the guy is just incompetent, but his discussion of the issues seems pretty thoughtful.

  3. C nonplussed is less nonplussed than C++.
    There may be a wound, but the C++ band-aid over it obscures whether it is superficial or infected with gangrene.

    One problem is that there are no non-manual mappers. One ought to be able to feed in the entire set of POSIX commands with GNU extensions, like grep or find, and have them pop out in Python, even if not terribly efficient (see below), but that doesn’t happen. Everything is recoding, even when not trying to duplicate undocumented behavior.

    But the performance is NOT trivial. Python will not be as efficient, so a “find . -… -exec grep … {}” can be interminably slow. Note we CAN do a Python COMPILER after the interpretive version passes all tests. But we don’t do that either.

    Go looks nice, but I think it is chicken-and-egg. Only when a good portion is moved to Go and doesn’t merely duplicate C (at equal average LoC, efficiency, etc.) will it be adopted. I can’t do Linux kernel stuff in Go.

    This is something like the failing and flailing transitions to electric cars. The nearest “supercharger” is about 100 miles away – half a battery charge for a Tesla. But gasoline and diesel are easily available. There are worse locations even more impractical for electric cars near where I live.

    Malcolm Gladwell describes “The Tipping Point”. It has not occurred yet with C. Any bare metal programming is easier in that – Go and Rust aren’t alternatives to my knowledge, only assembler. Perhaps they can be but I won’t hold my breath until the Arduino IDE – the ultimate bare-metal IDE using C to create non-OS but very functional things – changes into Go or Rust.

    Fortran isn’t even dead yet, nor Cobol, yet neither is at all versatile, and at best extensions are added so legacy defecations can be updated. At least I don’t have to use the keypunch machine.

    But this is the universal problem. NO ONE will do a complete and verified auto-port so any arbitrary Fortran or Cobol program can be translated – perfectly (even with legacy libraries) – into any other language. Y2K is 17 years old, but it was a serious problem. So is 2038 32-bit time in secs since 1970. No matter how much easier it might be to address in non-legacy languages, it won’t happen.

    Another bit is the unix/posix call set – open/close/read/write/ioctl – reinvented badly many times. Never improved.

    1. “I won’t hold my breath until the Arduino IDE – the ultimate bare-metal IDE using C to create non-OS but very functional things – changes into Go or Rust.”

      Good heavens, no, at least not Rust. The target audience for Arduino is exactly the audience who Rust would send screaming off into the night, never to return.

      1. Rust has an unofficial AVR target suitable for use with Arduino, and libraries to support that board.

        The target audience for Arduino — young hardware hackers — is one of the most likely to appreciate the benefits of Rust, and they are tentatively embracing it.

        1. The target audience for Arduino is people who are not experienced in low-level programming, perhaps not experienced at programming at all. Dropping those folks in the deep end with a language like Rust is more likely to scare them off than to induce them to learn.

          1. Not actually. I have read many success stories from newcomers to programming. Even some who had tried to pick up programming several times in the past with C++ and C. Rust was able to take the cake because of its superior degree of documentation; a better, explicit syntax; an informative compiler that points out the mistakes, including the borrow checker, which helpfully points out memory unsafety; and a very vibrant and friendly community that’s always standing by to help newcomers get started and to demonstrate idiomatic Rust.

            The problem with ESR, on the other hand, is that he never attempted to reach for any of these resources when he tried out Rust. I never saw him make a post on Reddit, the Rust users forum, or visit any IRC/Mattermost channels. He simply wrote a misinformed post about Rust because he was doing something he didn’t understand, and wasn’t aware that what he thought was missing in the standard library was actually there.

            Even I, myself, come from a background of never having programmed anything before Rust. And yet Rust was the perfect entry into programming. I can now write quality C, C++, etc. because the rules enforced by Rust are the best practices in those languages. And I can now do everything from writing kernels and system shells to full stack web development and desktop GUI applications — all with Rust. All the rules in Rust are intuitive and instinctual for me today.

            1. “All the rules in Rust are intuitive and instinctual for me today.”

              Three years in. How instinctive were they the first time you tried it? How many times did you get frustrated and want to throw that damned computer out the window?

              1. Honestly, I never got frustrated. I have a general philosophy that if you struggle with something, it’s probably because you’re going about it the wrong way, and that you should instead take a step back and review.

                In addition, if you are having difficulty changing your perspective to figure out what you’re doing wrong, you’re free to reach out to the greater online community, where armies of other developers ahead of you are eager to answer your questions.

                As it turns out, Rust’s borrow checker is like a Chinese finger trap — the more you resist it, the more you will struggle. If you instead go with the flow and internalize the rules, the struggles disappear, and the solutions become apparent. Everything suddenly makes sense when you simply accept the rules, rather than trying to fight the rules.

                I initially struggled to wrap my mind around all the new concepts during the first week, but by the end of the second week, all of the concepts were well ingrained in my mind: what move semantics are and how they work, the borrowing and ownership model, sum types and pattern matching, traits and generics, mutexes and atomics, iterators and map/fold/filter/etc.

                And that’s talking about the state of documentation that was really poor when I initially picked up Rust. Rust of today has significantly enhanced documentation that covers every area, and does so better than any other language I’ve ever seen. If I had that to reference when I started, then I’m sure that I could have mastered it within a week.

                After learning Rust, I found that I could easily write C and C++ as well, because they were more or less ancient history in terms of systems language concepts. The rules enforced by the Rust compiler are best practices in C/C++. It’s just annoying how much boilerplate you need in those languages to achieve simple tasks that Rust’s standard library already encompasses, and how the core language is so critically lacking that you have to attempt to emulate sum types by hand.

              2. Honestly, after 2 years of Rust, I often get frustrated at Go not providing the same safety and convenience.

                I may be spending hours trying to make the borrow checker happy with my data usage, but I regularly spend days trying to debug segmentation faults in Go…

                My point is that “instinctive” depends heavily on what you are used to.
                Go might be “instinctive” when you come from C, and Rust might be too different from common languages to be instinctive at all, but once you get used to it, you never want to turn back.

                  1. >I may be spending hours trying to make the borrow checker happy with my data usage, but I regularly spend days trying to debug segmentation faults in Go…

                  Odd. How does that even happen without bare pointers in the language?

                  I’ve never seen one myself.

                  1. Lacking generics, many libraries accept interface{} to simulate them, then type-assert or just try to access the data counting on the fact that you will pass the correct type.

                    Sometimes they take a reference but don’t check if it’s nil.

                    Many just panic, because it’s simpler than trying to return the correct errors, counting on the fact that you will use goroutines and your main will keep running, so you have to handle the panic yourself, which may be easy if you are using http from the std lib, a little less if you’re writing your own process pool.

                    Maybe it’s not technically a segfault, but the effect is the same.
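
                    (The failure modes described here are easy to reproduce in miniature – hypothetical code below. Neither crash corrupts memory, but either one kills the process if nothing recovers the panic, which is why they read like segfaults:)

                    ```go
                    package main

                    // A library that takes interface{} in lieu of generics:
                    func takesAnything(v interface{}) int {
                        return v.(int) // panics ("interface conversion") on the wrong type
                    }

                    type Config struct{ Retries int }

                    // A library that takes a reference but never checks it for nil:
                    func retries(c *Config) int {
                        return c.Retries // panics ("nil pointer dereference") when c is nil
                    }

                    func main() {
                        _ = takesAnything("oops") // dies here
                        _ = retries(nil)          // never reached
                    }
                    ```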

            2. “All the rules in Rust are intuitive”

              I call bullshit on anyone who says anything other than the nipple is intuitive (and some babies actually have to be taught how to use one of those).

              What people really mean when they say “intuitive” is “I didn’t have to learn anything new to do this”. That’s a very good thing, but it isn’t really “intuitive”.

              1. Perhaps you should look up the meaning of the word, rather than assume that everyone else is wrong.

    2. I was highly disappointed when I found out that Google’s new microkernel, Magenta/Zircon, was written in C++ instead of Go.

      1. What Google found, from developing and dogfooding Go, was that Go made a better Python replacement than it did a C++ or C replacement.

        Go is not a systems programming language.

        Oh, and fun fact: crosvm, Google’s virtualization layer for ChromeOS, is written in Rust.

        1. Well, more specifically my disappointment was “Another C based operating system? When will we ever learn.” I kind of wrote off the whole project at that point. I didn’t care that they specifically didn’t use Go aside from that they had it on hand. I’d love to see them make a Zircon replacement in Rust.

      2. Microkernels like Zircon are *exactly* the place where C/C++ will likely remain a reasonable choice, if not the best choice, for years to come. The primary requirements are performance and complete access to the bare metal. A true microkernel has a small code base (or it isn’t a *micro* kernel!), so it isn’t a “programming in the large” situation. A small team working on a small code base *can* maintain the discipline to use C++ effectively and avoid its pitfalls.

        On the other hand, the various services built on top of Zircon are free to use other languages. Many are in C++ now, but they don’t have to be. The FAT filesystem service is written in Go.

        1. >Microkernels like Zircon are *exactly* the place where C/C++ will likely remain a reasonable choice, if not the best choice, for years to come […] On the other hand, the various services built on top of Zircon are free to use other languages. Many are in C++ now, but they don’t have to be. The FAT filesystem service is written in Go.

          This is what I think the relatively near-term future will look like, yes. Go pushing down the stack towards kernels, but not displacing C there.

    3. > Another bit is the unix/posix call set – open/close/read/write/ioctl – reinvented badly many times. Never improved.

      Bwahaha. Where is the support for overlapped (asynchronous) I/O in the base POSIX call set? Answer: There is none. Sure, there’s an AIO extension to POSIX that no one uses, that is completely inadequate when compared to a kernel such as Windows that has support for async I/O with sophisticated control features like I/O completion ports designed in, and that is implemented under Linux by spinning off user-space worker threads. Since completion-based AIO is a first-class citizen under Windows, you can set up overlapped I/O operations — to network, devices, or disk — to notify you upon their completion and then busy your CPU cores with other tasks, rather than the POSIX model of spinning in select loops and interleaving “are we there yet? are we there yet?” polling with other processing.

      So yes, the POSIX model has been improved on. You know that old Dilbert cartoon where the Unix guy says “Here’s a quarter, get yourself a real operating system”? Dave Cutler — lead designer of VMS and Windows NT — does that to Unix guys.

      1. There have been entire papers written on that.

        It’s not that “UNIXy OSes” are inferior to Windows, it’s that the standards organizations are derelict in their duty to provide a portable API in the style that everyone actually wants to use.

        Be it Linux, FreeBSD, OSX, or what have you, there ARE heavily-used equivalents to the Windows APIs you mention in POSIXy OSes… they’re just all different.

        (I say UNIXy and POSIXy because it’s intentional that Linux aims to be “certifiable but not officially certified” due to its rapid release cycle.)

  4. Do you see anything replacing C in the small embedded systems (bare metal, 1MB flash, 256 KB RAM) space? I don’t. Possibly Rust for new development, but I just don’t see anything displacing C for a very long time.

    And there are lots and lots of controllers out there that aren’t beefy enough to run a Linux, even if you could get real-time performance out of it…

    1. >Do you see anything replacing C in the small embedded systems (bare metal, 1MB flash, 256 KB RAM) space? I don’t. Possibly Rust for new development, but I just don’t see anything displacing C for a very long time.

      No argument. That’s pretty much the niche I foresee C holding longest, analogous to the persistence of assembler on small systems 30 years ago.

      1. I find myself thinking that said small embedded systems are, in a way, echoes of the minicomputers that C was originally made to run on. I say echoes, because while I’m pretty sure a PDP-11 had more raw compute power and I/O throughput due to its architecture, the memory numbers seem similar. While I read that a PDP-11 could be configured with up to 4 MiB of core and multiple disks, I doubt a majority of them were delivered or later configured to be fully maxed out. And when I look up the PDP-11, I read that a great many of them were employed in the same job that today’s embedded systems are: as real-time automated control systems. Being a whippersnapper who wasn’t born until at least a decade later, I may well be overgeneralizing, but I don’t think I’m completely wrong either.

        So, when considering that, it makes sense that the aforementioned niche is where C is likely to hold out the longest. It’s a similar environment to the one it was originally adapted to.

        1. >I find myself thinking that said small embedded systems are, in a way, echoes of the minicomputers that C was originally made to run on.

          Oh hell yeah. Dead obvious to those of us who remember the old days. Your conclusion “the aforementioned niche is where C is likely to hold out the longest” is also obviously correct.

        2. > I’m pretty sure a PDP-11 had more raw compute power and I/O throughput due to its architecture, the memory numbers seem similar.

          While today’s proficient embedded programmer would be right at home with the PDP-11, and while this statement holds true for some embedded systems, it’s certainly not true for all.

          Moore’s law has done some interesting things for us. Package pins are costly, so for a lot of small embedded systems, it may make more sense to have all the memory on-chip. Once you have all the memory on-chip, you don’t want too much of it, or your cost goes up again because of the die size. Performance costs are somewhat orthogonal to memory costs: performance increases by adding more transistors and by reducing feature size. Both of these are expensive, but not always as expensive as adding memory.

          One cool thing about on-chip memory is that since you’re not constrained by pins for an external bus, you can have a really wide bus to memory. Another cool thing is that you can have interleaved buses if you want, simply by splitting memories up in odd ways. Interleaving buses allows for simultaneous access from a CPU to one word with DMA to an adjacent word.

          So there are a lot of niche devices that fit in this C niche, in fact that are small enough to not even want to run an OS on, never mind an interpreter or garbage collector — that are nonetheless performant enough to, for example, saturate a few full-duplex gigabit ethernet links while doing complex DSP calculations. In other words, a $5.00 chip might very well exceed a PDP-11 by orders of magnitude in memory bandwidth, CPU power, and I/O bandwidth.

    2. http://micropython.org/

      That at least fits the memory requirements you’ve laid out.

      Personally, I’d love to write Oberon-2 code for micro controllers. Much cleaner than C++, but just as fast. Unfortunately that language never really caught on outside of ethz.ch.

      1. Which is a shame because Wirth essentially showed it to be a capable systems programming language by developing the Oberon operating system.

        It’s an excellent and small garbage-collected language that could have supplanted efforts like Java. I suspect the only things that actively prevented it from doing so were a lack of promulgation, and the fact that it wasn’t a C-family language.

      2. The interpreter will fit, but how much functionality also fits? With C, you can fit an application plus complete zigbee and bluetooth stacks in a 512k part.

        I’m curious about Rust for new development, but C won’t get displaced in this space for a long time. There’s some value to a very limited subset of C++ to pick up things like namespaces and compiler enforced encapsulation, but that doesn’t fundamentally change things.

    3. FORTH? It seems to be a good fit for small embedded, and it is nonetheless “high-level” in a sense, in that it trades off machine time for ease-of-development time. But I’m a bit skeptical about Moore’s law being relevant to small embedded systems these days – it seems to have stalled early there, so something closer to the metal a la Rust will also find plenty of use.

      1. I agree about Moore’s law on small embedded systems, or more narrowly, on their CPUs. I sometimes write code for these things – in C – because a customer uses them for their very low cost. The processors are a little faster than in the 1980s, and they’ve added a few instructions, but basically, a PIC18 is still a tiny little machine with a horrible machine model, just like its 1980s progenitor. A 68HC05 is still a 6805 which is just a bit more than a 6800, from the 1970s.

        However, Moore’s law does appear – the greatly increased chip density leads to very cheap SoCs – a dollar for a machine with RAM, Flash, EEPROM, and a whole bucket of built-in peripherals and peripheral controllers.

        The good news is that you can indeed use C on these things, rather than assembly (which is true torture on a PIC). And, the C optimizes pretty well.

        1. Not all sub-$1 microcontrollers are PICs, thankfully. Some are 8051s (meh)…and some are ARM Cortex systems, on which C does very well, thank you. Microchip even has a PIC32 series that’s MIPS-based.

          Check out this article that surveys 21 different microcontrollers, all under $1 each.

    4. The embedded Rust community has been able to get Rust cross-compiled for targets like that. Pretty much the only thing you need is compiler support, and a language that doesn’t require a runtime. All systems languages get compiled down to the same machine code in the end (especially when they all use the same compiler backend).

  5. Typos:

    …languages like Lisp abd (later) Python…

    …due to programmers trepping over leaks and flaws in its abstractions…

    (At least, I assume this is a typo, but the E and I keys are nowhere near each other… Does your dialect smush those vowels together, or is trep a jargonic verb I’m not familiar with?)

    …or, in uunusual exceptions like eqn,…

    I’ve never come close enough to the bare metal to have anything more substantial to add!

    1. > the E and I keys are nowhere near each other

      They are, in Colemak. Home row: arstd hneio.

  6. @esr: The first serious attempt at the second path was Java in 1995. It wasn’t a bad try, but the choice to build it over a j-code interpreter made it unsuitable for systems programming.

    I agree Java was a poor choice for systems programming, but I don’t think it was ever intended to be a systems programming language. The goal of Java was “Write once, run anywhere”. Java code compiled to bytecode targeting a virtual CPU, and the code would actually be executed by the JRE. If your hardware could run a full JRE, the JRE handled the abstraction away from the underlying hardware and your code could run. The goal was cross-platform, because the bytecode was the same regardless of what system it was compiled on. (I have IBM’s open source Eclipse IDE here. The same binary runs on both Windows and Linux.) For applications programming, that was a major win. (And unless I’m completely out of touch, the same comments apply to Python.)

    > More generally, any area of applications or systems programming can stay stuck to a language well after the tradeoff that language embodies is actually obsolete.

    Which is why there are probably billions of lines of COBOL still in production. It’s just too expensive to replace, regardless of how attractive the idea is.

    > But I think the Go team will – the business case for Google to push them there is obvious (can you say “Android development”?).

    Maybe. Android has a Linux kernel, but most stuff running on Android is written in Java and executed by the Dalvik VM. The really disruptive change might be if Google either rewrote the Linux kernel in Go, or wrote a completely new kernel intended to look and act like Linux in Go. The question is whether Linux’s days are numbered in consequence.

    >Dennis

    1. We have an existence proof of a kernel in Rust. Has anyone written one exclusively in Go?

      1. >We have an existence proof of a kernel in Rust. Has anyone written one exclusively in Go?

        No, that would be a silly thing to try until the next major advance in GC technology. If then.

          1. The major advances in GC already exist; they are just only available in proprietary software: the Azul C4 Garbage Collector (Continuously Concurrent Compacting Collector) for their variant of the JVM, Zing. You pay the price of the GC read-barrier, but then you enjoy the benefits of a massively scalable concurrent GC with no pause.

          Like a Lisp machine, on steroids. (And yes, the Lisp Machines had the entire OS in Lisp, and there was a variant with a guaranteed real-time collector.)

        2. @esr Why is it silly? Honestly curious given that Niklaus Wirth developed the Oberon operating system back in the 80’s using a garbage collected descendant of the Modula-2 programming language (also named Oberon).

              1. I’m still curious as to how much the overhead is an issue. Wirth showed that one could develop an OS with a garbage-collected language. Is this an issue with particular kinds of operating systems (real-time for example, I can see this being an issue)? For a general-purpose system, however, how much of an issue is it?

                1. OSes that use garbage-collected languages to build their kernels and user space are often designed with serious tuning of the garbage-collection cycles. Problem is, the more load you put onto these kernels, the more GC cycles are required, and thus they seriously buckle under stress.

                  Would you like a desktop OS with high-latency audio, video, and input responses? I’d think not.

              2. You evidently didn’t read the whole article. By tuning the Go GC he got the performance he desired. He also states that “… this type of benchmarking is definitely an edge case for garbage collection.”
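
                (For reference, the main knob such tuning turns – a minimal sketch; the same setting is also available externally as the GOGC environment variable:)

                ```go
                package main

                import (
                    "fmt"
                    "runtime/debug"
                )

                func main() {
                    // The default (100) starts a collection roughly each time the
                    // heap doubles; raising it trades memory for fewer GC cycles.
                    old := debug.SetGCPercent(400)
                    fmt.Println("previous GOGC setting:", old)
                }
                ```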

                1. Such tuning would not be required at all with a proper language that does not require a runtime garbage collector, a la Rust.

                  I really don’t understand the obsession with GC languages. Even with Go, you end up writing far more boilerplate code and overly-convoluted solutions in comparison to GC-free Rust. Why pay for GC when you don’t even need it?

                  1. > Such tuning would not be required…

                    It’s an “edge case”.

                    > I really don’t understand the obsession with GC languages. … in comparison to GC-free Rust.

                    Rust. Rust. Rust. Rust. Rust. Rust. Rust. Rust. Rust. Rust.

                    You’re an odd one to speak of obsessions.

                    > Why pay for GC when you don’t even need it?

                    Why pay for manual memory management when you don’t even need it? See how easy it is to write unprovable assertions. Even I can do it.

                    Rust appears to have a lot to offer. It behooves all of us to get to know it.

                    The biggest impediment to Rust’s adoption is the people promoting it.

                    1. Amen. I’m about to quit following this post because of the monomaniacal Rust fanboyism. I’m not learning a damned thing about the language except that it incites Scientology-level cultism in its adherents.

                    2. > I’m not learning a damned thing about the language except that it incites Scientology-level cultism in its adherents.

                      Not all of them. I’ve had rational conversations with core Rust people on their fora. I cited some in my second Rust post.

                      The funny part is that Michael Aaron Murphy doubtless believes “I am an effective Rust advocate” when the reality is “I make people run screaming from the thought of having anything to do with Rust.” With friends like him the language doesn’t need enemies.

                    3. > It’s an “edge case”.

                      I spent two years experimenting with Go, and I can tell you that tuning the GC is not an edge case. It’s very common.

                      > Why pay for manual memory management when you don’t even need it? See how it is to write un-proveable assertions. Even I can do it.

                      You aren’t paying for manual memory management. Rust’s compiler and language do all of that for you. You’re trying to argue against an absolute. What a shame. Either you pay hardware costs to implement a solution, or you create a simpler solution that doesn’t need to pay those costs. It’s obvious which of the two is the better option!

                      > The biggest impediment to Rust’s adoption is the people promoting it.

                      Purely false. Rust has a healthy adoption rate. It arrived at precisely the right time, taking advantage of all the concepts and theories that had been developed by the time it started, and has been adopted at precisely the correct rate, enabling the Crates ecosystem to catch up to the needs of the developers adopting it. Rust’s community is growing exponentially, regardless of how much you snarl your nose at it. It doesn’t matter what you or I say. Any publicity is good publicity!

                      > Rust appears to have a lot to offer. It behooves all of us to get to know it.

                      It does nothing of the sort. It is simply the correct tool for the biggest problem in the software industry. Either you choose to use it of your own volition, or you fall behind into obscurity, and a new generation of software developers replaces you.

                    4. Michael Aaron Murphy:

                      Help is available. There are many new medications that show great promise. Support groups are nearby for you in your time of need. You don’t have to suffer alone.

                    5. > You’re an odd one to speak of obsessions.

                      In all fairness, the advantages of Rust’s approach to memory allocation and deallocation predate Rust itself, with antecedents in C++ and even Objective-C. Rust merely builds on and enhances these things.

                      But there is an inherent cost to runtime garbage collection that simply is not paid when your language determines object lifetime at compile time. Tracing GCs are, in a word, obsolete: 1960s technology does not fare well in the face of 2010s problems of scalability and performance.

                      Rust earned its position as the prime candidate to replace C as the default systems language in two ways: by not adopting a GC and not sucking as bad as C++. Three ways, actually, if you count being hipster-compliant (which Ada is not).

  7. Rust has tools such as corrode that make transitioning from C relatively painless.

    Regarding your question about select(), things are moving lately:

    https://github.com/crossbeam-rs/rfcs/pull/22

    But I agree that it will take another round or two of refinement (so about 2 years) to make the language painless for GPS or NTP purposes.

    1. Although it should be noted that it’s oftentimes faster to rewrite something from scratch than to use tools like corrode and then manually rewrite the converted code into something idiomatic.

    1. I saw that when you posted it on G+. Mostly agreed with it, except that I thought the “training wheels” crack about Python was unjustified and I miss #ifdef more than you – my stuff wants to have code conditionalization for stripped-down builds.

      You were right on about one thing, especially. Go is static typing done right – the compiler error messages are genuinely helpful, to an extent that after C and C++ is rather startling.

      1. Ada and Rust also provide helpful compiler error messages.

        If Go is “static typing done right”, why is the number one complaint among Go users about the weakness of Go’s type system?

          1. I can make a good case that Go is the local optimum (or the “done right”) of that type of manifest typing.

            The problem is that the next local optimum that we as a community know about is a fairly significant step up on the complexity curve. A lot of the nice type features really need other nice type features to make them work, which then need support from other nice features, which need support from other features… this is not an infinite regress and you end up somewhere where you can do things that you really, really wouldn’t want to do in Go (Servo would be a nightmare in Go), but I tend to agree that Go is going to have a niche for a long, long time.

          1. >The problem is that the next local optima that we as a community know about is a fairly significant step up on the complexity curve.

            Quite, and see “low transition costs out of C”. Ken Thompson is the single most brilliant systems engineer in the history of computing; where he drove this language wasn’t towards an optimum in theoretical design space but an optimum in terms of the economics of actual existing computing.

            You know what? This doesn’t want to be a comment. It needs to be another blog post.

            1. > Quite, and see “low transition costs out of C”. Ken Thompson is the single most brilliant systems engineer in the history of computing; where he drove this language wasn’t towards an optimum in theoretical design space but an optimum in terms of the economics of actual existing computing.

              …Circa 1970. It’s reasonable to assert that the kinds of sophisticated type checking and static verification required in languages like Rust and Haskell would have been too costly on the hardware Thompson and Ritchie had to hand (and anyway, the type theory hadn’t even been developed yet). Besides which, they weren’t building software for flight avionics or radiation dosing machines, they were building more or less a sandbox for them and their fellow hackers to mess around in. At the time they designed C’s type system, they weren’t expecting their language to bear the full weight of the internet, including all its malicious actors; they could reasonably expect the average C program to be small, short-lived, and only used by persons within the same computing facility as the author. They didn’t even have to deal with today’s new normal: people carrying always-on internet machines in their pockets, Russian hackers swinging major elections, the Internet of Things. All of which swing the requisite standards of reliability, performance, and security far towards the “flight avionics and radiation dosing machines” end of the scale.

              Unix was a reasonable system design in its time, far less so today. And there’s no need to provide a comfortable transition from what C hackers know because by 2017 standards, what C hackers know is broken. And most of today’s developers have spent their larval stages working in C++, Java, or C# anyway.

          2. > A lot of the nice type features really need other nice type features to make them work, which then need support from other nice features, which need support from other features…

            Exactly, which is why once you’re exposed to Haskell you tend not to take type systems weaker than Haskell’s seriously. You simply cannot do monads very well without Hindley-Milner, higher-kinded types, and type classes.

            The difference between you and me is I see the added type complexity as essential complexity whereas you see it as accidental complexity. It’s complexity inherent in the system you’d have to deal with one way or another. Whatever you don’t pay for at compile time accrues interest at run time — and the APR is a bitch. So the more invariants that can be asserted by the type system (like Rust encoding object lifetimes directly within its types) at compile time before the program even runs, the less you’ll have to sweat about at run time. And, ultimately, the more money you’ll save in the long run.

  8. What about Ada and its recent offspring SPARK? They seem to me to address your issues with C++, technically speaking; they also have (very) large deployments, and are mature and well supported. However, they did not and do not seem to replace C, because of the vast mass of systems C programmers and code. There will continue to be a lot of job openings in C, making it worth learning, and maintaining the size of the body of C programmers. One will keep being more likely to easily find systems programmers who know C than ones mastering any other language. This should keep C going even if technically better options appear (they already exist, in fact).

    1. One thing Eric’s been looking for that Ada has and Rust lacks is standard POSIX bindings. And when I say “standard”, I mean IEEE Standard 1003.5b-1996 – Ada bindings are part of POSIX. An open source library that implements this standard, called Florist, is readily available for the GNAT compiler and there’s even an Ubuntu package for it.

  9. Insightful post.

    After a long break from programming it was seeing what they had done with UB that ruined C for me.

    However, don’t you think that there is unlikely to be a single successor to C? That was a product of very different times and the size of the market was much smaller then.

    I agree with you about the merits of Go for many purposes. But one gives up modelling power, and there are times when you need it.

    When was the last time you checked out D? You sound like you are doing nothing more than repeating talking points. Languages develop at different rates and D is much more ambitious… Unsurprisingly, then, it wasn’t ready till quite recently for broader adoption – libraries, tooling, documentation.

    It’s taken off quite sharply since 2014. Have you seen the stats?

    http://erdani.com/d/downloads.daily.png

    And there’s a growing need for it because when your logs hit 30 GB a day Python is no longer quite fast enough.

    Here are some commercial users:
    https://dlang.org/orgs-using-d.html

    I’m one of them (at a 3.6bn hedge fund).

  10. @esr:
    > Therefore: eventually we will have GC techniques with low enough latency overhead to be usable in kernels and low-level firmware, and those will ship in language implementations. Those are the languages that will truly end C’s long reign.

    I don’t know. As I haven’t managed to write anything more than toy code at any level yet, I’ll defer to anyone that has shipped production code at that level of the stack, but the OS/firmware level is what is most interesting to me, and if I ever do manage to get off the ground, something in that regime is likely to be the kind of stuff I’ll be writing. And my gut feeling is this: people working at that level are going to be reluctant to use garbage collection that they did not write themselves. So low-latency garbage collection needs to be developed, and then it needs to be distilled into something that people whose expertise isn’t necessarily language development can implement. Because at the kernel level, while it isn’t very far to the bottom, you absolutely *have* to be able to see all the way down, and a GC implementation you didn’t write is a potential impediment to that.

  11. One point that might be refined from your machine-centric/programmer-centric distinction is the asymmetry in adaptability, not just the asymmetric cost trends. Modifying the machine to better accommodate a programmer-centric language doesn’t often pan out well; you wind up with odd artifacts like Lisp machines. On the other hand, programmers self-modify; they adapt to such things as pointer semantics and registers as part of learning low-level languages.

    I find Forth seems to live at a knee in the machine-efficiency/programmer-friendliness curve. Its RPN stack is fairly easy to wrap one’s head around, and it’s close enough to the metal for use in low-level code like bootloaders. But that knee is pretty close to the machine-efficient end of the spectrum because of the asymmetry between how well programmers can mentally model a machine executing their code and how poorly computers emulate a human mind.

  12. C++ will keep taking over for C until we get verification that scales. I’m betting on a language like Whiley (whiley.org) or Dafny/Boogie (Microsoft) in the long term.

    For higher level server-oriented programming, most likely something based on actors such as Pony (ponylang.org), but with verification.

    Such languages are in the works, so maybe a language like Creol (http://www.sciencedirect.com/science/article/pii/S0304397506004804) which appears to provide verification for high level concurrency mechanisms could be a starting point for real change.

    Go has a nice runtime, but needs semantic improvements and a significant cleanup of how error handling and abstractions are done. That won’t happen with the current team. So it is a dead end… unfortunately. D is in the same category: the necessary changes won’t happen with the current team.

    Rust doesn’t really solve C++’s issues, except maybe the learning curve. Lifetime management is not a big problem for proficient C++ programmers (i.e. C++11/17). C++98 is essentially a different language from C++17/20 in terms of idiomatic source code, so new idiomatic code bases would be incomparable to old code bases in terms of maintenance. (Assuming you have proficient people to do code review.)

    1. > Lifetime management is not a big problem for proficient C++ programmers

      Try telling that to the Firefox developers who’ve posted a review of their Project Quantum Stylo integration in the new release of Firefox today. They replaced 160,000 lines of C++ with 85,000 lines of Rust. And memory safety was the single biggest reason the project succeeded where all previous attempts in C++ failed. These are incredibly experienced veteran C++ developers.

      https://blog.rust-lang.org/2017/11/14/Fearless-Concurrency-In-Firefox-Quantum.html

      1. Incredibly experienced might mean that they are stuck in a C++98 mentality. Anyway, no team is better than the process it is managed by… And Mozilla are hardly neutral when it comes to Rust.

        That a rewrite leads to better code, well, hardly surprising. Is it?

        Now, simple substructural subtyping like Rust provides can prevent you from making some mistakes, but it might also ban valid code, which introduces inefficiencies because you are forced to choose a particular structure.

        I don’t mind it, but it doesn’t help much with the hard-to-find bugs I run into. Only more advanced behavioural typing and program verification are sufficient for preventing the typically expensive low-level bugs.

        1. > Incredibly experienced might mean that they are stuck in a C++98 mentality. Anyway, no team is better than the process it is managed by…

          Actually, Mozilla are well known for having some of the best C++ programmers in the field. They are among the first to adopt newer standards of C++. They are also quite versed in the latest research in computer science, especially given that they have around 30 people on the core Rust team with PhDs in computer science.

          These are both highly educated recent grads who are up to date with all the latest research, and highly experienced veterans of C++ who have been developing the most complex software in the world — web browsers. They can’t afford not to be up to date with the latest strategies and techniques.

          For example, Servo is actually constructed using an entity-component-system architecture, rather than an OOP architecture. Servo’s been built with the latest techniques that modern game developers are using to build their AAA games. If these were people trapped in C++98 days, they’d have no knowledge of such concepts, nor would they be able to create something as advanced as Rust.
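
          For readers unfamiliar with the pattern, here is a toy sketch of the entity-component idea (invented for illustration; it has nothing to do with Servo’s actual internals): entities are just indices, components live in parallel arrays, and a “system” is a plain loop over the components it cares about.

          struct World {
              positions: Vec<Option<(f32, f32)>>,  // component: position
              velocities: Vec<Option<(f32, f32)>>, // component: velocity
          }

          impl World {
              fn spawn(&mut self, pos: (f32, f32), vel: Option<(f32, f32)>) -> usize {
                  self.positions.push(Some(pos));
                  self.velocities.push(vel);
                  self.positions.len() - 1 // the entity is just an index
              }

              // A "system": advances every entity that has both components.
              fn integrate(&mut self, dt: f32) {
                  for i in 0..self.positions.len() {
                      if let (Some(p), Some(v)) = (self.positions[i].as_mut(), self.velocities[i]) {
                          p.0 += v.0 * dt;
                          p.1 += v.1 * dt;
                      }
                  }
              }
          }

          fn main() {
              let mut world = World { positions: Vec::new(), velocities: Vec::new() };
              let e = world.spawn((0.0, 0.0), Some((1.0, 2.0)));
              world.integrate(0.5);
              println!("{:?}", world.positions[e]); // Some((0.5, 1.0))
          }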

          > And Mozilla are hardly neutral when it comes to Rust.

          No organization would support a product that isn’t giving them a benefit. You have to also realize that they are among the best-suited to represent Rust. Not only did they kickstart, orchestrate, and develop the language (with the help of three other companies), but they have managed to create very complex production-grade solutions with it, and it’s shipping today in one of the most widely used applications on the planet.

          > Now, simple substructural subtyping like Rust provides can prevent you from making some mistakes

          Luckily, Rust does more than simple substructural subtyping.

          > but it might also ban valid code, which introduces inefficiencies because you are forced to choose a particular structure.

          Nothing is banned in Rust (except undefined behavior) — you are not forced to choose a particular structure. You are heavily encouraged to choose the correct structure, on the other hand. Have any examples to provide?

          > Only more advanced behavioural typing and program verification are sufficient for preventing the typically expensive low-level bugs.

          Luckily, we have very well-educated experts on these topics who have gone about implementing formal verification techniques within Rust. Ticki comes to mind: he opened discussions about implementing a Hoare logic in Rust within the MIR.

          1. C++/Rust/browsers are mostly atheoretical, so I’m not sure how PhDs would be relevant. Also not sure why people keep claiming that browsers are the most complicated… The complexity in browsers arises from being big and implementing big standards while having to work with existing web sites. But that domain isn’t particularly hard in terms of theory…

            Type systems are filters. They typically filter out both valid and invalid programs. Of course, you can get around that by writing more code; you only need a Turing machine to do that. But it isn’t convenient… C++ is currently more convenient than Rust, as Rust is still quite immature with a limited ecosystem.

            You probably meant to say that they are going to add asserts with quantifiers to MIR. Hoare logic is just a set of deduction rules.

            1. > C++/Rust/browsers are mostly atheoretical, so I’m not sure how PhDs would be relevant

              You don’t understand why the well-educated would be interested in using their knowledge to develop new solutions and solve real problems that we are facing today? Or that Rust & Servo were research projects by Mozilla that have been ongoing for five years? The heck.

              You should seriously read some Rust RFCs[1]. Case in point: the generic associated types RFC (a step toward HKTs)[2]. Each file there is basically a paper describing the specification of that feature and its implementation. You’ll quickly realize why we have PhDs on board helping to shape the future of Rust and its language design.

              > Type systems are filters. They typically filter out both valid and invalid programs. Of course, you can get around that by writing more code; you only need a Turing machine to do that.

              I don’t think you understand what you think you understand regarding Rust. Rust’s type system isn’t acting as a filter to screen out ‘valid programs’. The only thing that comes to mind that you could be referencing is Non-Lexical Lifetimes (NLL), whose RFC is almost fully implemented.

              Yet the lack of fully implemented NLL only serves to catch newcomers to the language who have yet to internalize the rules, and has no bearing on the rest of us, who never write software that falls into that trap (because we understand how lifetimes work, and how to properly use scopes to ensure that those lifetimes don’t conflict).

              > But it isn’t convenient… C++ is currently more convenient than Rust

              I think you’ll find many here and elsewhere who disagree with that. Otherwise, there would be no point in using Rust, and Mozilla would not have succeeded in their Project Quantum efforts to integrate components of Servo within Firefox. There’s a serious contradiction between what you’re stating here, and what people and companies are actually doing today with Rust.

              From Patrick Walton: https://twitter.com/pcwalton/status/929065687632330753

              > Rust is still quite immature with a limited ecosystem

              Not true. Have you seen how many crates we have today? And the quality of the top crates, and of the crates blessed by the community? We have a lot of top-notch solutions for everything under the sun. I’ve had major success developing all kinds of different software with Rust because of how powerful a lot of these solutions are – and C++ has no parallel to them.

              Can you automatically deserialize raw text of any given format into a native hierarchy of data structures and, vice versa, serialize native data structures into a string, without having to write any code? That’s what serde does. The following is how you can automatically GET JSON from a web server and deserialize it into a native data structure.

              let data: Data = reqwest::get(URL)?.json()?;

              Any type that has a #[derive(Deserialize)] attribute above its structure definition can be used in the above. That’s just one of many examples of how powerful Rust is at achieving complex tasks, simply.
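
              A slightly fuller sketch of the same pattern (the struct fields, URL, and fetch function are invented for illustration, and this assumes serde’s derive feature plus reqwest’s 2017-era blocking API):

              use serde::Deserialize;

              #[derive(Deserialize)]
              struct Data {
                  name: String, // hypothetical field
                  stars: u32,   // hypothetical field
              }

              fn fetch() -> Result<Data, Box<dyn std::error::Error>> {
                  // get() returns a Result<Response>; json() drives serde's Deserialize impl.
                  let data: Data = reqwest::get("https://example.com/data.json")?.json()?;
                  Ok(data)
              }

              fn main() {
                  match fetch() {
                      Ok(d) => println!("{} has {} stars", d.name, d.stars),
                      Err(e) => eprintln!("request failed: {}", e),
                  }
              }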

              [1] https://github.com/rust-lang/rfcs/tree/master/text
              [2] https://github.com/rust-lang/rfcs/blob/master/text/1598-generic_associated_types.md

              1. And here we go with the educational elitism. You come off like a Brie-eating bicoastal elite, CS subtype, who is bound, damned, and determined to tell us all how to think and work, just like the Brie-eating bicoastal elites are bound, damned, and determined to tell us all how to live.

                Here’s a free clue: A PhD does not guarantee a damned thing except that the person who’s entitled to call himself one has put up with the leftist bullshit of academia for longer, with greater intensity, than those who merely have a bachelor’s, and has played the academic politics game – which inevitably involves hewing closely to the current SJW line of crap – better than the average student.

                I don’t care about academic credentials. I care about getting work done. Period.

                1. “Brie-eating”

                  Tastes differ, but real brie (from raw milk) is a good cheese. I do enjoy it regularly. But I am neither a member of the elite nor a CS programming language designer.

                  “A PhD does not guarantee a damned thing except …. than those who merely have a bachelor’s”

                  Do I sense some envy?

                  Anyhow, you got it totally wrong. A PhD is proof you can perform independent research and write it down, that is, you are a qualified scientist (or the equivalent in other academic disciplines). Nothing more, nothing less.

                  If you want to incorporate the latest progress in research, it often helps to hire relevant PhDs, as they generally have been exposed to the latest advances. Because that is their job.

                  If you try to create cutting edge technology, you are a fool if you discriminate against PhDs in particular, and people with different political ideas in general.

                  1. To be clear: I like Brie. I use it as an example of what the bicoastal elites do as opposed to the folks in America’s heartland, who rarely encounter it.

                    And no, you don’t sense envy. I’ve been out earning my way in the world, instead of sponging off others in academia and getting a good thorough Marxist indoctrination.

                    Computing doesn’t need esoteric scientists who know more and more about less and less until they know everything about nothing. It needs people who care about getting the work done, and have been there and done that.

                    1. I’ve been out earning my way in the world, instead of sponging off others in academia

                      Research is productive work. All technology is based on scientific research, so your livelihood is depended on past research. Scientific research is built on PhD students. No PhDs, no science and no technological progress.

                      and getting a good thorough Marxist indoctrination.

                      Science and technology are agnostic towards politics and religion. If you are not, you would indeed not be suited to doing science.

                      Moreover, you are so far to the right of the political spectrum that almost all of humanity is far out to the left of you. You cannot fault them for that.

                      Computing doesn’t need esoteric scientists who know more and more about less and less until they know everything about nothing.

                      You mean people like Turing and Shannon? Or Knuth? Or the guys at Bell Labs and PARC?

                      These people also had PhDs. And they lived in the coastal regions of the US.

                      It needs people who care about getting the work done,

                      But these people you talk about use the legacy of those “esoteric scientists” all the way back to Shannon, Turing, Schrödinger, Maxwell, Hertz, Faraday, Ampère, and Volta.

                      Rejecting PhDs is rejecting future progress.

                  2. You missed *entirely* what he was getting at.

                    And were you an American you would identify more with the “coastal elites” (they aren’t always on the coast, and they’re only elite in their own minds).

                    What Jay is complaining about is the Rustifarian’s attitude.

                      “And were you an American you would identify more with the “coastal elites””

                      Indeed. I have been in the US several times and I can feel quite at home in the “coastal” areas. I particularly liked Portland and Boston. San Francisco is nice too. I must admit that my experience of the inland US has been limited.

                      “What Jay is complaining about is the Rustifarian’s attitude.”

                      Could be. But he is attacking PhDs and science to do so. Therefore, I think his beef is with science more than with Rust.

                2. Congratulations. You made no effort to refute any of my counter-arguments. You merely went into an off-topic tirade about people with PhDs. I definitely see some serious anger, denial, and envy in there.

              2. A PhD is usually a very narrow research project and typically has nothing to do with C++; you would usually be better off using a more powerful high-level language for PhD work. As far as programming goes, a well-rounded bachelor’s in either comp sci or software engineering covers most of what you need; from there on it is all about your own exploration and desire to expand your horizons. A PhD is just one way to do that, in a very narrow way.

                All type systems are filters. The formal definition of a language is that it is a set of strings (programs). A type system reduces that to a subset.

                1. You’re going to need more than a bachelor’s degree to be capable of leading bleeding edge language research, and employing it successfully in practice at scale…

                  A bachelor’s degree only guarantees that you can write software using a language that’s already been constructed for you, by the people who have PhDs!

                  1. > A bachelor’s degree only guarantees that you can
                    > write software using a language

                    No, no it doesn’t.

                  2. Very few books on advanced topics assume much more than a bachelor’s; if they did, they couldn’t be used in higher-level courses. You can usually also find whitepapers and surveys that help. So, for an attentive person, a bachelor’s opens the doors you need opened, if you have the interest.

    2. “Modern C++” is nowhere near eliminating pernicious memory safety issues, such as use after free bugs. In some ways, it’s making things worse. Here’s an example: https://github.com/isocpp/CppCoreGuidelines/issues/1038

      References are semantically equivalent to bare pointers, modern C++ uses them pervasively, and they create lots of opportunities for use-after-free whether or not you use smart pointers.

      Beyond memory safety, Rust solves lots of other problems that C++ doesn’t, like preventing data races, proper type bounds for generic types (concepts will only solve half the problem), easy management of library dependencies, powerful metaprogramming facilities like “custom derive”, …

  13. Sounds like somebody is not up to speed with modern C++. The issues you raise have been addressed.

    1. The core issue ESR raised was C++’s backwards compatibility. How can you possibly claim that’s been addressed?

    2. Until C++ is reformed into a brand new language with zero backwards compatibility, none of its issues have been addressed. All of the C++ developments have merely been putting lipstick on a pig; by that I mean that each new major feature that should be the default behavior requires a major degree of boilerplate to use, making the language incredibly verbose, while still providing zero guarantees about any of the code. How do you ensure that every use of code in your own code base, and in all dependent code bases, follows best practices? You can’t.

  14. You’re talking about the past as if it were still the future, and you’re wrong about it. C++ is a massively successful language and isn’t going away anytime in the next 30 years.

    – Every major browser is written in (mostly) C++
    – Many major language implementations are written in C++ (LLVM/Clang, GCC, V8, and many or most JVM implementations, including Android’s Dalvik and ART, which are installed on billions of devices)
    – All of the big internet/software companies (Apple, Google, Facebook, Amazon, Microsoft) have hundreds of millions of lines of C++ in their backends and consumer products.
    – Every big AAA game / game-engine includes C++ as a significant, if not majority component.

    In addition, it is trending up. The number of C++ repos on GitHub grew faster than the GitHub average between 2012 and 2016: http://githut.info/

    Lastly, on GC: it will never be cheap enough to be used in areas like games and kernels. It isn’t about the cost, it’s about the ability to reason locally and attribute the cost to the code that incurred it. If you have a latency problem in C++, it’s either the fault of the OS (true for all languages) or the fault of the code that isn’t running fast enough. If you have a latency problem caused by GC pauses, it’s the fault of the entire system, and fixing it becomes a boil-the-ocean problem.

    1. @Mark: You’re talking about the past as if it were still the future, and you’re wrong about it. C++ is a massively successful language and isn’t going away anytime in the next 30 years.

      I agree, but I’ve been watching the progression with interest. I think of the problem in linguistics of deciding when a dialect of an existing language has diverged enough to be a whole new language. The joke tends to be “A language is a dialect with an army and navy.”

      A chunk of the issues I’ve seen with C++ are precisely that sort of thing. Programmers were treating it as a dialect of standard C, and not properly comprehending the differences. They’d have been better served to think of it as a whole new language.

      Another problem is that all compilers are not created equal. There’s a grimly amusing Mozilla Developers document on writing portable C++ code, and what you must do for your code to be portable. A lot of it reduces to “Just because it works in Microsoft Visual C++, don’t assume it will work elsewhere!” Mozilla was trying to be very portable in the early days, and the problem child was the C++ compiler shipped with HP-UX, which choked on constructs everything else would compile. (These days, they support Windows, Linux, and OS X; if you run something else, getting the code to build and run is your problem.)

      >Dennis

    2. Every major browser is written in (mostly) C++

      Yes. And they all stink.

      Memory leaks, excessive CPU consumption, an endless stream of security exploits, unexplained crashes… the usual complaints.

      Having been on the receiving end of these applications for quite a long time, my cynical conclusion is that writing bug-free C++ code at scale, on deadlines, and with sizeable teams of skillset-diverse coders is testing the limits of what humans can do.

      1. Aye. “AAA game studios do it” is a good way to find yourself sailing off a cliff with all the other lemmings.

    3. How much of that is inertia though? I’d wager a lot, if not most of it.

      Every major browser has a code base older than any of the up-and-coming systems languages. All the major language implementations listed were started before any of the upcomers were mature. All the major internet/software companies’ product lines and codebases are older than any of the upcomers. And new games continue to be written in C++ because, once again, the tooling for making games is older than the upcomers. It makes sense; when you start a project, you reach for the best tools you can get, and for a while, C++ was that tool.

      But the times, they are a-changin’. C++ may well have been good enough for larger systems, compared to C, but that doesn’t mean it was, or is, without its flaws. As I understand it, Rust was born because Mozilla knew they needed to move Firefox to a multi-thread/process model to make things faster, but also knew that attempting that in C++ was a recipe for Firefox having as many bugs as a AAA game does when first released. And now the newest versions of Firefox, running on the newer multi-thread/process design, are incorporating components written in Rust.

      I also have my doubts about GC becoming good enough for (some kinds of) games and OS kernels. But how things have been, and are now, does not mean they will forever be that way. I may well be wrong about GC.

    4. > Every major browser is written in (mostly) C++

      Times are changing. Firefox just replaced 160,000 lines of C++ with 85,000 lines of Rust[1] — and that’s just the Stylo component. That’s not counting the upcoming WebRender and Pathfinder components that are about to land in subsequent Firefox releases.

      > All of the big internet/software companies (Apple, Google, Facebook, Amazon, Microsoft)

      All of these (minus Apple) have been posting jobs looking for Rust software developers, so times are changing here too.

      > Every big AAA game / game-engine includes C++ as a significant, if not majority component.

      Maybe so today, but DICE and many other AAA and indie studios are highly interested in, or are already using, Rust. DICE uses Rust extensively internally. I know that Chucklefish are creating a new game, written entirely in Rust, that will be available on consoles.

      [1] https://blog.rust-lang.org/2017/11/14/Fearless-Concurrency-In-Firefox-Quantum.html

      1. > Times are changing. Firefox just replaced 160,000 lines of C++ with 85,000 lines of Rust[1] — and that’s just the Stylo component.

        I rewrote a system with a base of 20,000 lines of PHP, plus another 80,000 lines of PHP in the framework, into a system that now uses 3,000 lines of code in total and does a better job than the original.

        The magic language was called R… no, just plain PHP. So by your logic PHP is going to replace PHP, because I reduced the code by 33 times and reduced the bugs and issues to almost zero. Good to know…

        Those Rust numbers mean nothing. Any project will build up crud over the years, and when you rewrite the entire code base with the new knowledge of what it needs to look like, you will reduce your code by massive amounts. That is just basic knowledge that most programmers have.

        It’s not an argument for how good Rust is… So please stick to facts that are not line counts on a project rewrite. I know of few projects that, when rewritten, end up with the same number of lines or more.

    5. To all the prior replies: Great points, all. I certainly wouldn’t claim that C++ is the pinnacle of language design or the evolutionary endgame of programming. Hopefully rust and others will replace C++ in areas where they are better suited. My only point was that ESR seems to be laying out an argument for why C++ can’t succeed, and my reply was intended to point out that it already has, in spades.

      1. >ESR seems to be laying out an argument for why C++ can’t succeed,

        Not exactly. I’m claiming it will not replace C and lower post-C defect rates, because it has C’s leaky-abstraction problem baked in.

    6. I love C++ and am highly productive in it, and I particularly love C++11, because it supports garbage collection.

      But because of backwards compatibility, all abstractions leak, and sometimes making them not leak is subtle and tricky, and worse, much worse, sometimes the very clever, nonobvious, and subtle things you do to prevent leakage fail on C++98, and work on C++11.

      Having a program that crashes horribly because g++ defaults to C++98, and C++11 is an option, is really bad. “Gee, it works when I run it on my machine”

      (Always put the following in CMakeLists.txt:
      cmake_minimum_required(VERSION 3.5)
      set(CMAKE_CXX_STANDARD 11)          # compile everything as C++11
      set(CMAKE_CXX_STANDARD_REQUIRED ON) # fail instead of silently falling back to C++98

      Or else mystery bugs ensue between one man’s compile and another man’s compile, because abstractions leak differently between C++98 and C++11!)

      If someone else’s abstractions leak badly and surprisingly, then you have bugs that are hard to track down.

      I am unfamiliar with Rust, but according to several commenters, Rust enforces C++ best practices – and is therefore backwards incompatible. It is C++ with backwards compatibility thrown overboard.

      And, as I have discovered, C++11 is not perfectly backwards compatible with C++98.

  15. Transition costs. It’s difficult to translate even small, simple programs between languages and verify that you have faithfully preserved all non-error behaviors.

    I think the value of debugged code has always been severely underestimated. Hence the periodic insanity of trying to restart projects from scratch, which only rarely succeeds.

  16. It seems to me that you’re giving short shrift to the value users place on very fast response times. The idea that 0.1 second response times ought to be fast enough for anyone has a Gatesian smell about it. Delays accumulate. Delays get magnified on old or limited systems, or when a system is under heavy load. There was a comment in the previous thread noting that Python can’t always be counted on to respond at human speeds.

    Even when response times to the human-facing part of a system are reliably 100ms, there is still value in making the response even faster. 100ms shouldn’t be thought of as “ideal” but rather as “minimally acceptable.”

    1. That’s an acceptable response time for the reload button in a web browser, but not acceptable in a reflex-based videogame. Maybe barely acceptable for head-tracking VR, but for the eye tracking that is finally coming (attack helicopters had it decades ago!), surely not.

  17. This video gives a good run-down of how C++’s recent efforts to retrofit modern features complicate rather than simplify the language: https://youtu.be/VSlBhAOLtFA?t=1411 . C++’s time is drawing to a close; it’s just a matter of which language replaces it.

    However I’m not sure I share your pessimism about Rust’s future, for one simple reason: Rust is _extremely_ similar to Apple’s Swift, both in syntax and semantics. Rust has even inspired features in Swift, such as its error-model, and Apple has hired Rust compiler & library developers to work on its Swift team.

    Consequently Rust benefits by association from Apple’s promotion efforts. For a Swift developer Rust is a lot easier to learn than Go, and Rust has support for features absent from Swift, such as compile-time memory management and race detection.

    I agree Rust needs another five years to be a solid platform; however, I think its foundations and trajectory are such that in five years it will surpass Go. Python was released in 1991, after all, and trailed Perl for a decade; yet now, two decades on, it has largely displaced it.

  18. I think you make a lot of interesting points and generally characterize the current situation pretty well. Of course I do disagree with your last paragraph, as I do write rather a lot of Rust :P Personally garbage collection conceptually bothers me enough that I still prefer the borrow checker, even if it’s slightly harder to use. Why should I burn CPU cycles on memory management when I have the choice to just write code that doesn’t need GC?

  19. > Rustaceans

    Given the…passion of some of them, shouldn’t it be “Rustifarians”?

      1. How many times have we seen this movie?

        New platform/framework/protocol emerges, young puppies get all excited about teh new hotness, and run around yipping until they get the attention they crave.

        Look at what I can do, daddy!

        What’s the quote about two types of languages – the ones people bitch about, and the ones nobody uses?

        Personally, I don’t give a shit about the endless language pissing contest…they’re just tools in the toolbox.

        1. >Personally, I don’t give a shit about the endless language pissing contest…they’re just tools in the toolbox.

          Sadly. I think I have to give a shit. C isn’t good enough at scale, and C++ isn’t either for the same reasons. We’ve all been dancing around the problems for decades now but as LOC goes up we are entering a regime where they can no longer be evaded.

          We’ve dealt with some pretty obnoxious Rust puppies, but I give them credit for being obnoxious about a real problem. This is not the classic My Favorite Toy Language scenario where it’s just “Look at what I can do, daddy!”. If they’d been five years sooner and hadn’t had Ken fscking Thompson competing, I’d be sucking up the horrifying C to Rust transition costs now. As it is, I’m deeply grateful that I won’t need to.

          1. > If they’d been five years sooner and hadn’t had Ken fscking Thompson competing, I’d be sucking up the horrifying C to Rust transition costs now.

            I really don’t understand why Go puppies like yourself would have a view like this. I mean, you’ve yet to come up with a rational argument in favor of Go over Rust — yet you have no problem talking down Rust and championing Go religiously.

            I picked up and experimented with Go for two years before I converted to Rust when it achieved 1.0 status. If that’s the best Ken Thompson can do, then he’s clearly out of touch with best practices and the last 40 years of progress in PLT. Well, here’s my rationale, again, for where Go has failed to the point that it’s basically a joke.

            Go has made so many fatal mistakes that the language is simply irredeemable. It’s not a replacement for C, and it’s not even a better replacement than Rust for rewriting Python scripts. Go solutions continually require more LOC and boilerplate to do the same job as a Rust solution, and this only serves to increase the incidence of logic errors.

            Go should have had a trait system backed by generics, rather than its bizarre interface system with runtime pseudo-generics via reflection. This would have opened the door to major zero-cost abstractions and code flexibility, and would have eliminated a significant degree of boilerplate. As it stands, you can’t even implement a generic writer or iterator — both of which are critical in all software.
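
            For contrast, a minimal sketch of the kind of generic, trait-bounded code meant here (the log_line function is invented for illustration): one logging function, statically dispatched for any writer, with no boxing or reflection involved.

            use std::io::{self, Write};

            fn log_line<W: Write>(out: &mut W, msg: &str) -> io::Result<()> {
                // Monomorphized per concrete W; the trait bound is checked at compile time.
                writeln!(out, "[log] {}", msg)
            }

            fn main() {
                let mut sink = Vec::new(); // Vec<u8> implements Write
                log_line(&mut sink, "hello").unwrap();
                log_line(&mut io::stdout(), "world").unwrap();
            }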

            It also should have focused on making it safe to write multi-threaded applications, but instead it’s only equipped for the same jobs as you’d use Elixir and Erlang for (and honestly, Elixir may even be better than Go!). Channels are nice and all, but they aren’t revolutionary; most modern languages feature channels, in addition to other mechanisms. Yet Go basically chooses to use a hammer for every task, whereas Rust allows you to choose the correct tool for each scenario.

            Rust provides channels too, but it also took advantage of generics to create the Arc / Mutex / RwLock wrapper types, and it uses the borrowing-and-ownership model plus the Send/Sync traits to ensure thread safety; easily installed third-party crates provide access to a wide range of locking and lock-free data structures. Maybe you want an atomic hashmap, or a lock-free stack.
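
            A minimal sketch of that Arc/Mutex pattern (an invented example using only the standard library): thread::spawn’s Send + Sync bounds mean unsynchronized sharing simply does not compile, so the locking below is enforced rather than advisory.

            use std::sync::{Arc, Mutex};
            use std::thread;

            fn main() {
                let counter = Arc::new(Mutex::new(0u64));
                let handles: Vec<_> = (0..4).map(|_| {
                    let counter = Arc::clone(&counter);
                    thread::spawn(move || {
                        for _ in 0..1000 {
                            *counter.lock().unwrap() += 1; // guarded access, checked by the types
                        }
                    })
                }).collect();
                for h in handles {
                    h.join().unwrap();
                }
                assert_eq!(*counter.lock().unwrap(), 4000);
            }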

            Then there’s the issue of the double return types for error handling. This is beyond incredible stupidity. It is simply bizarre. Not only do you have to allocate the Ok type, but you also have to allocate the Err type too! In what world does that make sense!?

            Decades ago a better solution was created, often referred to as sum types, backed by pattern matching. Rather than allocating for two types, you only allocate for one (typically an 8-bit tag plus the size of the largest variant). This lets you handle errors cleanly using methods, and Rust’s early-return operator `?`, empowered by the From/Into traits, was simply genius for eliminating the boilerplate of error handling entirely.
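
            A small sketch of what that looks like (the ConfigError type, read_port function, and config format are invented for illustration): one sum type for errors, From impls so the ? operator converts automatically, and no second return slot anywhere.

            use std::{fs, io, num::ParseIntError};

            #[derive(Debug)]
            enum ConfigError {
                Io(io::Error),
                Parse(ParseIntError),
            }

            // The From impls are what let `?` convert error types on the way out.
            impl From<io::Error> for ConfigError {
                fn from(e: io::Error) -> Self { ConfigError::Io(e) }
            }
            impl From<ParseIntError> for ConfigError {
                fn from(e: ParseIntError) -> Self { ConfigError::Parse(e) }
            }

            fn read_port(path: &str) -> Result<u16, ConfigError> {
                let text = fs::read_to_string(path)?; // io::Error -> ConfigError via From
                let port = text.trim().parse()?;      // ParseIntError -> ConfigError via From
                Ok(port)
            }

            fn main() {
                match read_port("port.conf") {
                    Ok(p) => println!("port {}", p),
                    Err(e) => eprintln!("bad config: {:?}", e),
                }
            }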

            Then there’s the anemic Go standard library. You’ve touted it as being more complete than Rust’s, but I just don’t see how that’s possible. I’ve memorized both Go’s and Rust’s standard libraries, so I have a good inkling that you’re simply full of it with that remark. Go’s standard library is a complete catastrophe in a number of different ways.

            Firstly, it’s incredibly anemic. It doesn’t ship with very many useful primitives, and those it does provide are not equipped for a large number of use cases. Try comparing a primitive type in Go, with all of its methods, to the same type in Rust and its methods. Sorry to say, but Rust’s standard library is very comprehensive.

            It covers significantly more use cases, and therefore enables Rust developers to get away with not having to reinvent the wheel and bloat the complexity of their code. It’s one reason why Rust implementations always use fewer lines of code than Go implementations.

            Then there’s the near-total lack of the higher-level data structures that are commonly used across all kinds of software. An obvious sign that Go is half-baked and incomplete? I think so.

            Then there’s the issue that, for some reason, Go includes some libraries in the standard library that don’t belong there! These implementations are quite awful, and have led many in the Go community to recommend avoiding them in favor of superior third-party alternatives. Another sign that Go is half-baked and incomplete? Definitely!

            Go faces the same issue as Python, only right out of the box: a standard library that ships some libraries that should never have been included in the first place, but that Go will have to continue to support for the rest of its life. Third parties offer better libraries than what the standard library provides, so a decent Go programmer will avoid using anything in the standard library.

            And Go’s designers are already talking about a Go 2, so good luck porting all of your Go 1 software to Go 2! Go was not carefully designed or geared for public use, whereas Rust was. That’s why you have Go 2 now in development, whilst Rust’s designers will state that they have zero reason to consider a Rust 2, because there are no critical issues that can’t already be addressed in a forward-compatible manner.

            Then there’s the whole packaging issue. Go does not have an official package manager, nor does it handle semantic versioning. The best you get is directly embedding git URLs in your source code. This is inexcusable.

  20. Tangentially related:

    We all know Greenspun’s 10th rule. In that vein, I’m looking forward to the ‘return of the lisp machine’ – presumably, all we need is an open-source, high performance, concurrent GC with guaranteed low enough latency, and then you could write the entire stack from kernel to userland in a suitable lisp.

    I’ve been playing with a theory that I call “Greenspun’s 10th dual”: any sufficiently complicated static type system is an ad-hoc, inelegant, more difficult to use version of something from the ML family. So while I don’t agree, I think I understand why Go doesn’t have generics: they’re trying to avoid adding cruft to make sure Go’s type system doesn’t turn into something resembling C++ templates or Java 8.

    As garbage collectors get better and better, I think there’s an opportunity for a high-performance language that no one (as far as I know) is seriously looking at: a systems-level ML language. Something with good concurrency primitives (probably CSP style), a straightforward module system, eager evaluation by default and enough of a ‘safety hatch’ to do the functionally-impure things that are necessary for low level programming.
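
    For flavor, CSP-style message passing as it already exists in Rust’s standard library; presumably the hypothetical systems ML would offer a primitive along these lines (an invented example, nothing more):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel();
        let producer = thread::spawn(move || {
            for i in 0..5 {
                tx.send(i * i).unwrap(); // ownership of the value moves through the channel
            }
        });
        // Iteration ends when the sender is dropped.
        for sq in rx {
            println!("{}", sq);
        }
        producer.join().unwrap();
    }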

    Haskell’s lazy evaluation rules this out right away and I don’t think OCaml can be adapted either – the entire standard library is not thread safe (I think the reference implementation has a global interpreter lock). I think a problem is that a lot of academic work focuses on making 100% functionally pure programming languages; I don’t think 100% functional languages work too well in practice, but “85% purity” is pretty great.

    (I know, I know. If I were serious about this idea, I should write some code instead of just talking about it.)

    1. Garbage collectors are getting «better» at having shorter pauses, but in terms of efficiency you would need a major change at the CPU level… It might happen over time, just as CPU designs adapted to existing C code for decades. An oddity today is that C doesn’t match the hardware as well as it once did. So something new ought to come along at some point… but a more drastic change than Rust or Go.

      Academic work focuses on publishing papers, so such languages tend to die from a lack of funding… but:

      ML -> DependentML -> ATS https://en.wikipedia.org/wiki/ATS_(programming_language)

      You also have examples of refinement from Haskell to C:
      http://www.cse.unsw.edu.au/~kleing/papers/sosp09.pdf

      Today you have to be an expert in formal methods to get it right; the tricky part is getting this into a form where it can be done by non-experts in reasonable time.

      1. The Mill CPU will have some features to support the implementation of GCs. I expect they will give a talk describing them sometime in the next two years or so.

        1. Sounds interesting! Hiding the cost of memory barriers is nontrivial, but probably doable if you choose a less intuitive memory model and architecture than x86. Might make low level programming harder, but that sounds like the right tradeoff when we look ahead…

    2. a systems-level ML language

      I really wish Mythryl hadn’t died (literally – its author passed away)

      This seems relevant:

      A wider goal is to foster the development of a complete mostly-functional software ecology to eventually replace the current C-based open source software ecology, which is starting to smell distinctly “past pull date”. — Mythryx

      …the things I would do if the project were resurrected and I had a mentor (for a couple of decades). Alas, it’s not even 64-bit-capable.

  21. While Swift is not ready to be used as a systems language now, that is one of its stated goals, so I would be interested in any thoughts both on where it is now and on where it might go in the future in this respect.

  22. Nice post.

    > eventually we will have GC techniques with low enough latency overhead to be usable in kernels and low-level firmware, and those will ship in language implementations.

    This hypothetical GC has been coming “real soon” since the Lisp machines. I know people who have been saying this for 20 years, and the only thing that has changed is that nowadays they can’t hold a poker face when I tell them “that’s exactly what you told me in 1995 – what do you mean by _real soon_? In another 30 years? I’ll probably be dead by then” :D

    > but too many strikes against you to beat Go over most of C’s range. No GC,

    Wait what? If one could use a GC why would one be using C or C++ in the first place?

    But from your article it seems that we cannot use a GC, or at least not one that introduces a cost.

    So IIUC, your assumption that Go is the right choice hangs entirely on the hypothesis that the 40-year-old promise of a zero-cost GC will come true “eventually”.

    IMO your logic is then flawed. Go might be the right choice (or not) once its garbage collector becomes zero-cost. Until then, it is not even a language to consider for applications that cannot have a GC.

    If right now I had to spearhead a new, boringly coded, higher-ish-performance, large-scale application that can tolerate a GC, would I choose Go? I don’t know. There are way too many boring languages with good GCs to choose from: Java, Kotlin, C#, Go, …

  23. So far, one thing that was never questioned is the von Neumann architecture itself. It has already been demonstrated that there can exist chips which execute high-level instructions without a performance penalty, contrary to the C-machines we have today. For an example of inspiring ideas on the topic, see:
    http://www.loper-os.org/?p=1361
    as well as other posts at the same blog.

    1. I suppose we could arrive at inspiring ideas by questioning the roundness or solidity of the Earth as well, but they won’t get anywhere. Those “inspiring ideas” sound like quite a bit of wanking without reference to real systems doing actual work. Show me a working general dataflow CPU architecture that doesn’t get mired in the sheer size and complexity of its own dependency graph, and I’ll start paying attention.

      In the meantime, “C machines” won because they started trouncing Lisp machines in performance even while Lisp machines were commercially available. And CPU architectures and compiler technology have advanced to the point where Lisp code on a conventional architecture runs as fast as or faster than it would on a Lisp machine, even if you had a multi-gigahertz Lisp machine.

      Another thing about “C machines” is that their instruction sets can encode assumptions and implement control structures that cannot be encoded or implemented in plain C. So “C machine” is a misnomer. You may as well call today’s computers “JavaScript machines”, since it seems that language will in due course dominate all others in terms of what gets run.

      Nevertheless. Von Neumann is a Chesterton’s fence of computing: you will have to show us a good reason (as in real hardware doing actual work offering clear advantages) to remove it — and even then the economics may not favor displacing Von Neumann in all cases.

      When discussing tools to get actual work done, mental masturbation about what could or might be is just hastening the heat death of the universe.

  24. @Gregory Gelfond Is this an issue with particular kinds of operating systems (real-time for example, I can see this being an issue)?

    I think there’s a fair bit of misunderstanding of “real-time”. This is generally parsed as “instantaneous response”, when the apparently correct definition is “guaranteed to respond within X period of time”, where X doesn’t have to be “instantaneous”. (It does still have to be a lot faster than the acceptable response time for a human operator.)

    So whether the overhead of GC rules out a language for real-time use may depend on the hardware it runs on and the number you need X to be. I don’t think there’s a one-size-fits-all answer to the question, though you can probably argue that you would like to avoid needing to do GC.

    >Dennis

    1. In DSP-land, “realtime” means only that you ensure that your output keeps pace with your input – even if there’s a buffer of seconds in between.

    2. Well, I don’t know what the technical definition of “realtime” is. What I care about is that, when I see a scale reading cross a specific threshold, I am able to close the gate or valve to stop product flow in a short and repeatable time period. It needs to be significantly shorter than the fraction of a second an air-powered gate takes to close. The idea is that we need to be able to set an anticipation value – how much before the product’s desired quantity do we need to start stopping the flow – and when that value is set right, have the delivered product consistently be within tolerance of the requested amount.

      A PC running Windows can’t do this. The response time is just too unpredictable. But we don’t have to be exact down to the microsecond, either.

  25. The D programming language, with betterC, is now an excellent step up from C. It’s 100% open source and Boost-licensed. It only needs the C runtime library. Program files can mix and match C and D code. It’s easy to translate C code to betterC. And then you’ve got modules, nested functions, compile-time function execution, purity, templates, array bounds checking, scoped pointers, and RAII (next release).

  26. (A bit long – please bear with me)

    A computer language (plus its runtime) is supposed to help translate our thoughts on an algorithm or process into something a computer can actually accomplish via its instruction set. C comes close to mimicking an LSI-11 instruction set, so it was great for low-level programming on DEC machines and later, with adequate compiler support, not so bad with other architectures.

    As esr points out, it breaks at scale in two main areas: memory management (because it has none) and abstraction (because what it has is too low-level). The discipline needed to make it work in large projects requires either very good programmers who place cooperation above egotism, or people management that makes project members value their continued paychecks over egotism. Otherwise the egos win and one ends up with an inconsistent, convoluted mess.

    C++ tried to address these areas but did it in such a way that it became hard to reason about program behavior. It’s possible to write systems in C++ that would be much more difficult in C, but they tend to have even more obscure bugs that are even harder to find. The existence of large programs doesn’t mean the language was good; it just shows that with enough time, energy, and money Frankenstein can be debugged into existence – warts, bolts, jumper cables, and all. Look at the comments on the previous post: there’s still new production Fortran code being written in the weather-forecasting biz, and all it takes is a fat Government contract administered by a sleek, low-overhead outfit like Raytheon. That doesn’t make Fortran the future.

    Kernels still do pretty much what they did in the 70s and 80s so C is still an ok answer there. Same for many near-kernel system services and apps. Go might be incrementally better, but is it worth the effort to replace existing debugged code? Duh.

    But what apps do and the environment they run in have changed quite a bit, at least on servers. Everything is bigger (needs automated memory management), extensible (needs a type system that prevents interface mismatch), and concurrent; single-threaded apps on multi-processor 2U boxes (native or in VMs/containers) can’t scale to use the available resources, and services that don’t run across multiple boxes don’t scale much further, in addition to missing modern availability requirements. C and C++ fail spectacularly here – add locking and distributed RPC faults to poor error handling and you have a recipe for buggy software and long-term job security. Java hasn’t really moved the needle very far.

    I don’t know if Go does enough to really help here, but I don’t think so. It’s an incremental improvement at best in an area that looks just fine to all but the cognoscenti. esr and you guys reading this are cognoscenti; the NCGs (new college grads) assigned to maintain large code bases as immigration projects are hoi polloi.

    Rust and Scala seem to offer help for objects and local concurrency, as do functional languages like the ML family (including OCaml) and Forth (the assembly language of functional programming), but they aren’t quite ready for prime time, and I don’t think functional languages have passed the test of long-term maintenance at scale yet. Erlang is a stab at distribution and has certainly met some scale and maintenance tests, but hasn’t been picked up by the community at large.

    Some may claim that the programming language isn’t the place to look for help, but I disagree. If it can prevent errors in the first place (memory management, type systems), help me use available resources (concurrency), and deal with expected failure (distribution), then I want that in the flow of the program (my description of what should happen), not in some box bolted onto the side. And it has to be efficient, because I’ve only a few cycles to waste and no I/O or memory.

    So that’s where I’d look for action in the programming language field – not to improve C, an imperfect solution to yesterday’s problems; I want something that helps with apps that are big, distributed, concurrent, and efficient because those are the more important problems people are solving today and in the future.

  27. >The wrestling match between the official D library/runtime and Tango hurt it, too. It has never recovered from those mistakes.

    Do you actually know what you’re talking about? The split was way too many years ago and has been recovered from since. In fact nobody is using Tango anymore, unless you’re stuck with D1, which barely anyone is.

  28. Two points that are worth nothing, because I have no standing…

    1. A language that requires precision about the entities you are handling and the methods you are calling – otherwise the build fails – seems incongruent with the whole concept of garbage collection. JavaScript, OK. Different assumption.

    2. If you’re looking at the future engineering of high-performance applications, do you create a new, perfect language, or do you go with the evolutionary approach – multiple implementations across multiple platforms and a cross-industry committee that now seems on top of progress? Who wouldn’t want to be part of a new idealism? Yet Google just open-sourced millions of lines of C++ library code. Brave New World? Or standing on the shoulders of giants?

    I like C++, but I wouldn’t go into battle for it. But I don’t see any actual need for anything else in the area from the metal up to Python or JavaScript. Go and similar languages overlap the junction, but without obviously justifying GC.

    Paul

    1. Anything that involves maintaining a mutating graph justifies a GC from a maintenance POV. Without a GC you have to be careful, which introduces design overhead that doesn’t really pay off. What C++ needs is some kind of local GC.

      Go is a language with flaws, but having a tuned low latency GC is a big selling point that offsets its shortcomings.

      1. I think it can be reasonably straightforward within the language – depending on the details, of course, and especially if your structure has a lifetime and you can afford to wait until the end before cleaning up everything it has ever used. And on whether those bits need actual destruction rather than simply returning resources to the OS.

        And I see C++ has a garbage collector interface, but doesn’t supply an implementation as standard. Didn’t know that…. I guess this is because the people who would routinely use it would simply dig themselves into deeper holes than they do already, but there is a practical use case, especially in a typically crufty code base that is already leaking.

        Paul.

        1. Yes, you can write your own reference type and keep tabs on them and use that for pointer tracing, with some overhead.

          But a language solution would be more efficient if, at compile time, it could statically reduce the memory regions it has to scan to find pointers into a specific heap. So it needs type-system support, maybe complicated pointer analysis to cover the general case, and some runtime support. But it would involve some possibly draconian constraints on where pointers to a specific heap can be stored… Reasonable for C++, but not for a high-level language.

      2. C++ HAS some kind of local GC – local variables on the stack. If you can’t do most of your allocations on the stack, you have design issues.

  29. Research is productive work.

    Bahaha… you obviously haven’t seen most computer-science research. Sturgeon’s Law applies without question, and it’s not hard to identify 80% of the “crap” portion. A large part of my reason for dropping CS grad school was the “infosec” professor who spent two weeks going through SET (Secure Electronic Transaction), insisting that this was how you actually bought things online. Similar detachment from reality was rampant.

    On the other hand, open-source collaborations, groups like Sun’s Java team, and (as much as it pains me to say it) Microsoft Research have been making major strides in solving real problems relating to both computing and programming.

  30. Oh. Oh, dear.

    Software engineering is an engineering discipline. Just because people are allowed to attempt to build a doghouse without education doesn’t mean that we let the untrained loose with I-beams to build skyscrapers. Complaining that it’s possible to screw up with C++ is like complaining that it’s possible to screw up with dynamite. Lots of power, but nobody uses picks any more to carve through mountains when building freeways, either. Another example: it’s like writing off all firearms owners because of the sample shown on TV.

    C++ provides a whole host of options that C simply doesn’t offer. It allows the compiler to handle all sorts of validations automatically which aren’t really possible with naked C. For example, the STL iterator model makes safe data access easy across containers that have all been independently tested, while still providing bare-metal performance.

    Another is the ability to use virtual classes to provide mocks and fakes for writing unit tests. It’s very hard to write methods/modules/libraries in C that are supposed to operate against external I/O (whether network RPCs or, worse, databases) in a way that can be easily substituted for validation. But subclassing an interface allows you to use e.g. an Oracle database in production and a lightweight in-memory database in unit tests, just by changing the database object you pass in. Error injection is also made much easier, as exemplified by GoogleMock.

    My 10+ year career has mostly involved codebases with 1000+ people working on them. C++ makes using, and more importantly re-using and testing, such codebases much easier. It’s in raw C code that I find the worst hacks: attempts to get some of the features C++ provides, without a compiler to actually do the work reliably and safely for you. I find that it’s in the larger codebases that better interface design/description/clarity starts to outweigh the initial cost of understanding the additional machinery C++ introduces. For small projects it’s probably irrelevant (until you need something that an STL container can easily provide, at which point you should just use that).

    I suspect that many of the C projects haven’t converted to C++ purely out of bigotry. The C++98 standard (from 19 years ago) required a fair bit more verbosity of scoping to be used which some might consider ugly, though this has been substantially reduced. The Linux kernel went out of its way to prevent any C++ from being used.

    1. That’s all good, but C++ as it stands makes incremental learning difficult. In metaprogramming there are something like ten different ways to do the same thing, of which one or two are commendable. Slashdot is nearly useless because most of the advice there is outdated, wrong, or unidiomatic. So that leaves us with cppreference.com as the only reliable source… and the insanely long list of C++ Core Guidelines… C++ is the better upgrade path from C, but it sure is still in need of a makeover.

    2. Complaining that it’s possible to screw up with C++ is like complaining that it’s possible to screw up with dynamite.

      Rust is an existence proof that a language that can get down to C++ levels of nitty-gritty can also be approached safely by n00bs. “If you don’t know how to use the language properly, you deserve to have your totin’ chip taken away” doesn’t hold water as an argument. Requiring conscientious use not to blow up spectacularly in your face is a language-design fail.

      1. Of course, it could then be argued that the absence of conscientiousness is evidence of a poor engineering mindset ;P

        Bad code monkey blame toolz

        1. “Bad code monkey blame toolz”

          I’m actually kind of tired of this statement. Sure, it’s true that a good craftsman can do anything with any tool, and sometimes a craftsman doesn’t get to choose the tools available. However, a good craftsman also understands that you use the right tool for the job, and that if you have to use a lesser tool, it’s going to make the task that much more difficult.

          And some tools are no longer used, because not only are they inherently dangerous, but there are better tools available now: they can be safer to use, or perhaps allow you to do more — or both. A modern carpenter would be silly not to use power tools nowadays — both because most of them make the work a lot easier, and some of them (despite the dangers they present, being powered) are ultimately safer to use as well.

          And thus it is with computer languages. After all, how many of us elect to use INTERCAL or Br**nfudge for production code? These tools are useful, but only in showing just how mind-bendingly painful “Turing complete” can be.

          And it’s perfectly fine to make the case that languages like C++, Perl, and PHP have their place, which place is a niche in a museum, with shiny brass plaques explaining their historical importance.

    3. I suspect that many of the C projects haven’t converted to C++ purely out of bigotry. The C++98 standard (from 19 years ago) required a fair bit more verbosity of scoping to be used which some might consider ugly, though this has been substantially reduced. The Linux kernel went out of its way to prevent any C++ from being used.

      The Linux kernel started with C++, but by 1993 Linus gave up on seriously trying to use it. In 1998 ksymoops stopped being a C++ program.

      There was once a time when every few thousand lines of C++ code routinely tripped over a crash-the-compiler bug in GCC (or worse: it might have produced a binary, which then ran into all the runtime library’s problems). I remember struggling through that myself in the mid-to-late 1990s. It was possible to write C++ programs that worked, but you wouldn’t want to bet money on them working across multiple compiler versions or runtime environments.

      C++ has finally caught up (at least to the point where the runtime or compiler aren’t the first things that crash any more), but in the Linux kernel there is a different situation now: having two languages (and two, or at least N+1, toolchains) for kernel development is a serious maintenance challenge, one that is arguably even worse than using C as the only language.

  31. > Sorry, Rustaceans – you’ve got a plausible future in kernels and deep firmware, but too many strikes against you to beat Go over most of C’s range. No GC …. Where’s my select(2) again?

    “No GC”? Of course there is. Deallocation is decided at compile time for 95% of the objects or more, because the type system tracks ownership and lifetime of objects. Most data structures tend not to be complicated, and are in fact tree-shaped, not graphs. For the (relatively) few cases (such as compilers) where DAGs are required, the shared nodes can be put under the aegis of a ref-counted node; the count management is _managed by the compiler_. In all cases, the compiler is aware of memory and where it needs to be deallocated.
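    To make that concrete, here is a minimal sketch (my own, invented for this comment, not from any real codebase) of a shared DAG node under Rc; every increment and decrement of the count is code the compiler emits for you:

    ```rust
    use std::rc::Rc;

    // Two parents share one child. Rc::clone bumps a refcount, and the
    // compiler emits every matching decrement at end of scope; the shared
    // node is freed exactly once, when its last owner goes away.
    struct Node {
        label: &'static str,
        children: Vec<Rc<Node>>,
    }

    fn main() {
        let shared = Rc::new(Node { label: "shared", children: vec![] });
        let left = Node { label: "left", children: vec![Rc::clone(&shared)] };
        let right = Node { label: "right", children: vec![Rc::clone(&shared)] };
        assert_eq!(Rc::strong_count(&shared), 3);
        drop(left);
        drop(right);
        assert_eq!(Rc::strong_count(&shared), 1);
        println!("{} still alive", shared.label);
    } // `shared` is deallocated here; no tracing GC involved
    ```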

    Have you seen the Rayon library? It uses the fact that the compiler is aware of lifetimes to work with pointers to stack-allocated objects in other threads. “Fearless concurrency” is the new watchword.
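    For instance (a sketch of mine, assuming the rayon crate as a dependency), two closures running on different threads can borrow the same stack-owned vector, and the compiler proves the borrows cannot outlive it:

    ```rust
    use rayon::join;

    fn main() {
        // `data` lives in this stack frame; both closures merely borrow it.
        // No copies, no Arc, no GC: the lifetime analysis does the work.
        let data: Vec<u64> = (1..=1_000_000).collect();
        let (left, right) = join(
            || data[..500_000].iter().sum::<u64>(),
            || data[500_000..].iter().sum::<u64>(),
        );
        assert_eq!(left + right, data.iter().sum::<u64>());
    }
    ```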

    select(2). Why? epoll is so much better. Have you seen mio, a Rustified libev?
    https://github.com/carllerche/mio

    ----
    Go is primitive but fun, and its libraries and tooling are excellent. However, its shared-memory concurrency behaviour is no better than Java’s: miss a mutex lock and wonder what went wrong.

      1. Sounds like you are pretty far behind on the capabilities of reference counting if you think it can’t be used for cyclic data structures. Rust uses both lifetime annotations and the generic Rc/RefCell/Arc wrapper types to solve this problem. See The Book[1] (sketched after the links below). There’s also the Learning Rust With Entirely Too Many Linked Lists[2] tutorial series.

        With an advanced understanding of lifetimes, you can construct graph data structures that contain fields with differing lifetime annotations, to signal to the compiler that the lifetime of each reference is different from the lifetime of the structure as a whole, and other references.

        Yet you can also easily avoid writing these kinds of data structures if you take a step back and look at techniques like ECS.

        [1] https://doc.rust-lang.org/book/second-edition/ch15-06-reference-cycles.html
        [2] http://cglab.ca/~abeinges/blah/too-many-lists/book/README.html
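        For the record, the parent/child pattern from the chapter in [1] boils down to roughly this (paraphrased from memory, not copied):

        ```rust
        use std::cell::RefCell;
        use std::rc::{Rc, Weak};

        // Children are owned via Rc; the back-edge to the parent is a
        // non-owning Weak, so the parent/child cycle cannot leak.
        struct Node {
            value: i32,
            parent: RefCell<Weak<Node>>,
            children: RefCell<Vec<Rc<Node>>>,
        }

        fn main() {
            let leaf = Rc::new(Node {
                value: 3,
                parent: RefCell::new(Weak::new()),
                children: RefCell::new(vec![]),
            });
            let branch = Rc::new(Node {
                value: 5,
                parent: RefCell::new(Weak::new()),
                children: RefCell::new(vec![Rc::clone(&leaf)]),
            });
            *leaf.parent.borrow_mut() = Rc::downgrade(&branch);

            // The Weak upgrades to Some(..) while the parent is alive...
            assert_eq!(leaf.parent.borrow().upgrade().map(|p| p.value), Some(5));
            drop(branch);
            // ...and to None afterwards; nothing leaks.
            assert!(leaf.parent.borrow().upgrade().is_none());
        }
        ```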

        1. Sigh. No. There are (academic) solutions for catching cycles with refcounting, but they have been benchmarked as slower than plain refcounting plus a tracing GC. And Rust doesn’t provide them either…

    1. [Deallocation] is decided at compile time for 95% of the objects or more, because the type system tracks ownership and lifetime of objects. Most data structures tend not to be complicated, and are in fact tree-shaped, not graphs.

      I don’t know you, and you don’t know me. Yet you erroneously think you can speak for me and the programs I write. This is bullshit.

    2. >Most data structures tend not to be complicated, and are in fact tree-shaped, not graphs. For the (relatively) few cases (such as compilers) where DAGs are required, the shared nodes can be put under the aegis of a ref-counted node; the count management is _managed by the compiler_.

      I missed this silly claim earlier.

      My code manipulates DAGs routinely – reposurgeon is only the best known example. I’m actually drawn to that kind of work by my math background; I like thinking about graph-structure transformations.

      So what you’re telling me is that the Rust ownership model is inadequate for one of the kinds of programming I’m most interested in. I’m guessing this is not the information you intended to convey.

      1. I’ve heard warnings about trying to write code with graphs in Rust before, and a search turned up this. I’ve only scanned it thus far, but if I understand what I’ve read correctly, Rust’s ownership model isn’t easily compatible with graphs, because graphs have properties that conflict with it. I’ve not read all the way through yet, so I don’t understand the suggested remedy.

        I don’t think that I’m into graph programming natively, but graphs struck me as the best way to model part of a project idea I’d had — routing data between sources and sinks in a real-time audio system. I was kinda excited about Rust, and was considering using it when I attempted to tackle said project, as I learn better when I have a thing I want to make. But if Rust isn’t suitable for dealing with graphs, well, damnit. Back to attempting to build everything myself from scratch with nothing but C’s raw pointers, and all the footguns that entails.

        1. I am also into a bit of audio programming. Dataflow graphs are very common; just look at Apple’s AudioUnit system or good old CSound’s internals. While you don’t necessarily need a cyclic graph, you usually want back-pointers even in acyclic graphs, for convenience or for performance. And if you do something feedback-like with wave-guides, cycles will arise…

          The Rust people in this thread have already suggested a component model as a solution. Component-based modelling was a business-oriented decoupling/reuse buzzword 15 years ago. So what they essentially want you to do is replace your pointers with integers… and some indirections… Rust people: yes, we know that we can use array indices instead of pointers, but that just means we have bypassed the type system and switched from a well-typed referential type to an essentially untyped one. Which you can of course remedy by wrapping the array indices/integer identities in yet another nominal type… but geez…

          A language is supposed to support the implementation of a natural model for a program, not force an unnatural structure with arbitrary indirections onto your model just to satisfy a type system…

          1. Here’s an example of a cyclic graph with pointers and _semantic_ cycles.

            https://gist.github.com/sriram-srinivasan/05e781ef5f015bf6758222eebdf35824

            Here’s an example of a cyclic graph with direct pointers, and ref counting. Note that one doesn’t have to worry about collecting the memory, the point that started this thread.

            https://gist.github.com/sriram-srinivasan/517fe37c607099f6ae0c5d1cedde3556
            The graph is readonly for the most part, but allows key-hole surgery. Again, there’s no issue of forgetting to decrement the ref count, or of incrementing it: the compiler will handle the former, and remind you about the latter.

            (Also, see my reply to esr elsewhere).

            1. The graph is readonly for the most part, but allows key-hole surgery.

              Talking about semantics…

              “readonly for the most part” is either meaningless, or needs to be parsed so finely that thinking about it would give me a headache, and having to program to that model — well, I’m sure I’ve done worse, but why would I subject myself to that if I didn’t have to?

              1. I mean that by default it is readonly, and the values can be safely accessed without further ado as if it were a direct pointer.

                But if you wish to mutate some internal value, you have to write some extra code to borrow it for writing purposes (requesting a write capability), and then make the change. If you consider mutation plus aliasing to be a dangerous combination (as I do), this keeps a tight rein on things. You’ll be able to make a change only if the pointer is not shared with other consumers.
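                One place this shows up in stock Rust (a minimal sketch of mine; the gists above use their own wrapper types) is Rc::get_mut, which hands out the write capability only while the pointer is unshared:

                ```rust
                use std::rc::Rc;

                fn main() {
                    let mut value = Rc::new(vec![1, 2, 3]);

                    // Sole owner: the mutable borrow is granted.
                    if let Some(v) = Rc::get_mut(&mut value) {
                        v.push(4);
                    }

                    let alias = Rc::clone(&value);
                    // Shared with another consumer: the write capability is refused.
                    assert!(Rc::get_mut(&mut value).is_none());
                    drop(alias);

                    // Unshared again, so mutation is allowed once more.
                    assert!(Rc::get_mut(&mut value).is_some());
                }
                ```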

          2. > The Rust people in this thread has already suggested a component model as a solution. Component based modelling was a business-oriented decoupling/reuse buzzword 15 years ago. So what they essentially want you to do is to replace your pointers with integers… and some indirections…

            I would have hoped you’d realize that pointers are also integers… and that pointer dereferencing is therefore also… an indirection!

            Now imagine that you have a cyclic data structure where you need to dereference 10 pointers to get to the data you need, whereas an ECS model with components needs only one level of indirection!

            > A language is supposed to support the implementation of a natural model for a program, not force an unnatural structure with arbitrary indirections onto your model just to satisfy a type system…

            Well, guess what? Graph data structures are entirely unnatural for hardware to work with, and they are incredibly inefficient. They are littered with arbitrary hierarchies of indirection.

            There’s a good reason why basically all AAA games today are being created with ECS models instead of OOP models with graph data structures. Graph data structures are incredibly dangerous, and highly inflexible; it’s why most complex software written in C and C++ has remained single-threaded. Newer software and games written with ECS models, on the other hand, lead to incredibly flexible architectures that can be made massively parallel.
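            A toy sketch of the idea (all names invented for this comment): entities are plain indices, components live in flat arrays, and an update is a single linear pass:

            ```rust
            // Entities are indices into dense component arrays, so a
            // traversal touches contiguous memory instead of chasing pointers.
            #[derive(Default)]
            struct World {
                positions: Vec<f32>,
                velocities: Vec<f32>,
            }

            impl World {
                fn spawn(&mut self, pos: f32, vel: f32) -> usize {
                    self.positions.push(pos);
                    self.velocities.push(vel);
                    self.positions.len() - 1 // the entity id
                }

                // One cache-friendly pass; no entity points at any other
                // entity, so the loop is trivially parallelizable.
                fn step(&mut self, dt: f32) {
                    for (p, v) in self.positions.iter_mut().zip(&self.velocities) {
                        *p += v * dt;
                    }
                }
            }

            fn main() {
                let mut world = World::default();
                let player = world.spawn(0.0, 1.0);
                world.step(0.016);
                assert!(world.positions[player] > 0.0);
            }
            ```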

            1. Games aren’t really the right place to look for wisdom. 1. They don’t have the life expectancy of real-world applications. 2. They can change the design spec to fit the hardware limits. 3. Failure is inconsequential.

              90% of the development of a real world application happens AFTER deployment. So you better be prepared for change.

              No, direct pointers are not indirections as such, and more importantly they are usually statically typed, which is very useful in a graph with many heterogeneous nodes, like the high-level audio data-flow graphs we are talking about here. These are big nodes that each carry the equivalent of an oscillator or filter, and each node has a different size, so goodbye to your array of nodes. You need indirections if you use an array.

              1. You’d be surprised. Most games these days are backed by the same few game engines, which have had to adopt techniques like ECS because they have to support thousands of games and game developers, and keep pace with hardware improvements and graphics-API changes. That’s not an easy feat.

        2. Rust’s ownership model is certainly compatible with graphs; you just have to model them in such a way as to avoid pointer cycles. For example, have a vector of nodes and a vector of edges that reference the nodes by index. This complicates and slows down inserting or removing a node or edge, but those operations are usually less frequent than traversing the graph anyway. You can get some simplicity back by using a hashmap of nodes, each with a unique name, and having the edges reference the nodes by name rather than by index.
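          A bare-bones sketch of that representation (hypothetical, just to show the shape):

          ```rust
          // Edges refer to nodes by index, so the structure contains no
          // pointer cycles for the borrow checker to object to.
          struct Graph {
              nodes: Vec<String>,
              edges: Vec<(usize, usize)>, // (from, to) as indices into `nodes`
          }

          impl Graph {
              fn add_node(&mut self, label: &str) -> usize {
                  self.nodes.push(label.to_string());
                  self.nodes.len() - 1
              }

              fn add_edge(&mut self, from: usize, to: usize) {
                  self.edges.push((from, to));
              }

              fn neighbors(&self, n: usize) -> impl Iterator<Item = usize> + '_ {
                  self.edges.iter().filter(move |(f, _)| *f == n).map(|&(_, t)| t)
              }
          }

          fn main() {
              let mut g = Graph { nodes: vec![], edges: vec![] };
              let a = g.add_node("a");
              let b = g.add_node("b");
              g.add_edge(a, b);
              g.add_edge(b, a); // a cycle, with no unsafe and no Rc in sight
              assert_eq!(g.neighbors(a).collect::<Vec<_>>(), vec![b]);
          }
          ```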

          1. I assume you are talking about
            https://en.wikipedia.org/wiki/Adjacency_list

            That’s all fine for a very limited set of use cases where you deal with very uniform mathematical graphs and a fixed requirement spec. It is no good when you want to support the implementation of a model that matches the hardware.

            The main challenge is to come up with a model that matches the hardware and the cache lines; if you also have to match the programming language, you have one ball too many to juggle. Substructural type systems and recursive structures are all good for high-level coding, but they predominantly force you down the lane of a structure that suggests a functional implementation strategy, not iterative imperative programming.

          2. >Rust’s ownership model is certainly compatible with graphs — you just have to model them in such a way to avoid pointer cycles

            What Ola Fosheim Grøstad said. You’re telling me I have to use an overcomplicated, unnatural data representation that is going to raise my expected downstream defect rate, and like it, because Rust is orgasms and rainbows. Nope.

            1. Unnatural to whom? On modern CPUs, linked data structures are unnatural and you will suffer a huge slowdown by using them. It is a pearl of C++ wisdom to always prefer vectors to linked lists, because the latter force more accesses to main memory. The benefits to be had by keeping everything in cache as much as possible far exceed even the losses from having to copy every element from n+1 to the end of the list to insert an element at index n, for realistic list sizes.

              If you’re going to be working at that level, you will have to choose your data representations carefully anyway — and the obvious choice may not be the right one. Game programmers learned this the hard way in the 90s.

              But realistically, you would be using a crate that provides graphs as an abstract type and not have to worry about the implementation at the data-representation level at all.

              1. If you’re going to be working at that level,

                A big assumption. The programs that esr is talking about don’t necessarily need to work at that level.

              2. Game programmers in the 90s were predominantly young and self-educated… Eventually game programmers rediscovered some basic strategies from array processing and HPC… Good for them, but their ignorance doesn’t mean that the rest of us were ignorant too… Your “advice” is wasted without a context.

      2. Of course you can do DAGs. The Rust compiler needs them, so we need look no further for examples.

        My point with the “95%” is twofold.
        Look at the large number of programs that work with simple values, hash tables, and arrays (pretty much all enterprise apps, web apps, data analyses, etc.)… that’s what I mean by tree-shaped. It is not to say that 95% of _your_ apps are like that; mine aren’t either.

        Where we do need DAGs or graphs with cycles, there are a large number of possible graph representations that play with memory differently. Here is one example (in Rust) of a graph that has cycles. Nodes don’t point to each other; edges contain pointers to nodes, and the cycles are semantic. All one needs to do is convince the compiler that the nodes will not be deallocated before the edges (that’s what the lifetime annotation ’a is for). The Rust compiler emits code to automatically deallocate all objects at the end of scope. Here the objects are on the stack, but with a little change they could be heap-allocated as well. No matter.

        https://gist.github.com/sriram-srinivasan/05e781ef5f015bf6758222eebdf35824
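        In miniature, the shape is the following (a reconstruction for this comment, not the gist itself):

        ```rust
        struct Node {
            name: &'static str,
        }

        // An edge borrows its endpoints; 'a tells the compiler only that
        // the nodes outlive the edges.
        struct Edge<'a> {
            from: &'a Node,
            to: &'a Node,
        }

        fn main() {
            // The nodes are created first, so they are dropped last...
            let a = Node { name: "a" };
            let b = Node { name: "b" };

            // ...which lets the edges form a semantic cycle out of plain borrows.
            let edges = [Edge { from: &a, to: &b }, Edge { from: &b, to: &a }];

            for e in &edges {
                println!("{} -> {}", e.from.name, e.to.name);
            }
        } // edges deallocated, then b, then a: all decided at compile time
        ```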

        If we wish to represent a graph with mutability and direct pointers, the way one is used to in C++/Java, we must suffer an additional syntactic burden: instead of Node you’d pass around Rc&lt;Node&gt;, which wraps a node with a refcount. Note that the compiler still emits the code for automatically decrementing the refcount.

        https://gist.github.com/sriram-srinivasan/517fe37c607099f6ae0c5d1cedde3556

        The point is that there are many “container types” (such as Rc) that give you different levels of static and runtime guarantees. In all cases, the compiler takes care of deallocation. It is true that Rc _can_ create cycles, but there are standard patterns to avoid that, and a number of other Rust-specific mechanisms as well (Weak references, for example). It doesn’t seem to be a worry in practice.

        ----

        That said, one does have to expend mental energy on lifetimes, which is not something one ordinarily does in Go or Java. Or so it seems. The underlying problem, that of aliasing combined with mutability, is a fundamental one, particularly with concurrency, and especially so for library writers: who owns the objects returned by a library function, or by an iterator? You have to do the same lifetime and mutability thinking in ordinary GC’d languages as well, except that you get no compiler support, so you resort to conservative approaches like cloning objects (“just in case”).

        1. I think we have different ideas about what low-level programming is. Java isn’t it. Go isn’t it either, but Go is useful for writing smaller services that benefit from a GC and don’t need in-memory caching. Maybe there is a 10-20% penalty in time and up to 100% overhead in space from using Go over C, but that’s OK if you do caching in another process. C++ is not great for writing full applications, but it is currently the best alternative for the computational engine of an application. Rust currently seems best suited for things where I’d gladly take a 30-40% penalty in time and a 100% penalty in space to get a GC and high-level semantics instead.

          1. I agree with you, believe it or not. If Nim had its concurrency story straight, I’d prefer it to C/C++. For all IO-related stuff, I’d rather deal with Go or Python.

            I think Rust’s core competence is in building concurrency-heavy applications that require fine-grained parallelism and complicated data structures — operating systems, browsers, databases and possibly games. That’s low-level _and_ error-prone.

            1. For databases distribution seems to be more important than local performance.

              For OS full verification and refinement seems like the only viable future.

              For high level concurrency and protocols, again verification is around the corner.

              So maybe Rust has something going for it in low-level concurrency, but there no overhead is acceptable and lock-free strategies are the ideal, so I am not sure how Rust will fare there either in the long run. We’ll see. Rust is an interesting language, but one of many interesting languages.

              1. Distribution is important no doubt, but performance is still non-negotiable.

                Verification depends on all kinds of assumptions; the wider the assumptions, the worse the post-hoc verification gets. I have the scars from working with Coq and separation logic, where there are no aliasing guarantees in the code.

                Given a constraint from the Rust compiler on separating aliasing and mutation, the job of a verifier is considerably simplified on that count. So I’d much prefer a verifier, or a process like IronFleet, but on Rust code. For some systems, of course. Otherwise, it is all a bit too much :)

                1. Responsiveness is apparently more important than absolute performance when people are willing to write distributed databases in Erlang… It is basically a trade-off between correctness, development cost, maintenance cost, and running cost. You need a very large deployment to justify optimizing for the hardware, and the hardware changes every 2-4 years or so, so “optimal performance” isn’t a one-time investment, it is an expensive long-term commitment. Are you willing to implement your indexes on GPUs or FPGAs? Because that is what you have to do to get performance out of contemporary hardware. So performance is very much negotiable. It always has been, and always will be.

                  For operating systems I think refinement is the most obvious verification solution. You can have specialists on both the high level, the verification work (proving correspondence between the high level spec and the low level implementation) and the low level concrete code. Actually, OS development is one of the few areas of software development where it makes sense to let experts in formal methods run the process.

                  1. Let me expand a bit on this. I grew up with the C64. The 6510 is a very simple and predictable CPU. In the late 80s we thought we had seen the limits of this machine and what could be achieved with it in machine language. It turned out we were wrong. People have continued to explore how to get the most out of it for decades since, and some of the current demonstrations blow what was done in the 80s out of the water. So much for performance not being negotiable… We are nowhere near the limits of current hardware; getting there is waaay too expensive and time-consuming.

                    1. In every era, it is maximum performance that has won wide acceptance. It may not have endured, but slow stuff doesn’t get out of the gate.

                      There are exactly two databases written in/for Erlang: mnesia and Riak. They are negligibly used in production. Riak wrapped up months ago.

                      For low-level stuff, you want tight control over memory layout and usage; that’s where the performance gains are.

  32. Sriram Srinivasan: No, not really. Performance should always be viewed in relation to a baseline, but acceptance is a psychological dimension related to customer expectations. For many end users, performance is evaluated in terms of PERCEIVED latency, not in terms of throughput. And for many businesses, firing up a few more instances is a negligible cost compared to paying someone to optimize the software… So easy scaling and latency are the more important dimensions, IMO… Absolute performance, not so much.

    1. I agree that there is a level of “good enough” performance, at which point attention goes to other aspects … maintainability, hirability, scalability, energy usage etc. When I say performance, I mean it as the 99.xx percentile user experience.

      Of course, people turn to adding more instances when there’s more work than compute capacity. But given a choice of two products/approaches where one runs faster and hence requires fewer servers, people go for that. This is the exact reason people are moving from Python/Ruby to Go, not because Go is more fun to program in, and why Dropbox moved a lot of performance-critical work from Go to Rust. Performance is the reason why although cockroachdb has most of its upper level code in Go, its storage engine is RocksDB (C++).

      Perhaps you have read the “Scalability! But at what COST?” paper, on why “scalability” is a false metric for large-scale graph analyses. There are published analyses run on hundreds of cores that the paper matches with a sequential algorithm on a single core. The latter is faster _and simpler_, because it avoids coordination. The message of the paper, as we both agree, is quickness of the user-perceived result.

      https://www.usenix.org/conference/hotos15/workshop-program/presentation/mcsherry

      (The implementations from this and other papers of Frank McSherry’s (in Rust) are on his GitHub repo.)

      I wouldn’t use Rust for most of my work (networked servers, applications), but where it is a mixture of high performance and concurrency, it would be my choice.

      1. Yes, one doesn’t pick the slower alternative if the two otherwise have similar qualities. But I’ll take the one with slow writes as long as it has nippy and solid queues (eventual consistency). Anyway, most regular databases now fit in RAM, so I think programmer culture is lagging behind hardware in its preferences and in how it thinks about storage…

        I think Go isn’t so much replacing Python as being used in addition to it, taking over in areas where performance has become more important. Python is a more productive environment than Go, but dynamic typing is an issue… But then you have gravity: Python is the only language in which I always find an existing solution/library to a problem I encounter…

  33. Michael Aaron Murphy, while I can admire and often respect ardent passion, you’re starting to grate on me. Your posts here were reminding me of xkcd #386 even a couple of days ago. If Rust truly is the ne plus ultra of modern systems programming languages, wouldn’t your time be more enjoyably and better spent using it to get shit done, rather than preaching at the unenlightened heathen barbarians?

    1. >wouldn’t your time be more enjoyably and better spent using it to get shit done, rather than preaching at the unenlightened heathen barbarians?

      You don’t understand the personality type you’re dealing with. Santayana: “Fanaticism consists of redoubling your efforts when you have forgotten your aim.”

      1. I think it is mostly youngsters… They are trying to teach others stuff that they have recently discovered, but that we have internalized and take for granted… (As if anyone interested in low-level programming needs schooling in prefetching, keeping caches warm, etc.)

    2. Who says one cannot have time both to comment here and to write a lot of software? I have made major strides with Rust in a number of different areas: systems-level software, including a next-generation system shell with performance superior to Dash’s[1]; a variety of GTK desktop applications[2]; a GTK Rust tutorial series that’s currently a WIP[3]; distributed-computing clients and servers[4][5]; full-stack web development; a handful of random crates; and contributions to several projects (such as parallel command execution in fd[6]). All open source, and largely within the span of the last year. What have you been doing with your time?

      [1] https://github.com/redox-os/ion/
      [2] https://github.com/mmstick?utf8=%E2%9C%93&tab=repositories&q=GTK&type=public&language=
      [3] https://mmstick.github.io/gtkrs-tutorials/
      [4] https://github.com/mmstick/concurr
      [5] https://github.com/mmstick/parallel
      [6] https://github.com/sharkdp/fd/

  34. I wonder if one reason for C’s lingering use might be the distant possibility of bootstrapping. Most higher-level languages have some sort of dependency on C, so it could be imagined that if your tool didn’t need anything more advanced, it might be ported early on, so that later tools can rely on its existence. That might also explain a significant chunk of build-script cruft: too much dreaming of unlikely just-in-cases to trim code paths that will never be needed any more.

    1. I can tell you that this is not the reason for C’s lingering use. C is not required to bootstrap an OS. Rust on Redox, for example, has zero dependency on C. Only itself. You’ll find C on Linux systems simply because Linux is written in C. Whereas C on Redox has to go through Rust, so the tables are turned.

      1. I was thinking less about creating a new OS and more about porting Linux to a new platform. Unless you were going to cross-compile everything, keeping most of the classic Unix tools in C would make it that much easier to have a basic working system to build everything else on. I don’t think many people do that sort of thing anymore, if ever, but I think that the mere possibility that it might happen is one of the factors making it less likely for those classic tools to upgrade to a non-C language.

        Increasing processing power and gradually improving tools make it a mostly irrelevant point now, outside of the rare occasion that someone chooses the inconvenience deliberately. But I think “this is a basic Unix tool, I want to keep dependencies (and especially dependency loops) low” will keep the default implementations on C for more decades to come.

        1. > I was thinking less about creating a new OS and more about porting Linux to a new platform.

          I’m not sure I understand the point. Rather than porting an existing monolithic kernel, why not do as Redox did and create a next-gen microkernel to compete against Linux?

          > Unless you were going to cross-compile everything, keeping most of the classic Unix tools in C would make it that much easier to have a basic working system to build everything else on.

          We already have all of the ‘classic UNIX tools’ written in Rust from scratch, so there’s no need for C there. You can use them on Linux and Redox[1].

          > I don’t think many people do that sort of thing anymore, if ever, but I think that the mere possibility that it might happen is one of the factors making it less likely for those classic tools to upgrade to a non-C language.

          You’d be surprised.

          > Increasing processing power and gradually improving tools make it a mostly irrelevant point now

          Processors actually haven’t been getting much faster for some time now. It used to be that processors were getting major performance improvements every two years. Now they are basically stagnating with the same IPC, and increasing core counts instead. C is ill-equipped for writing highly parallel software architectures, and so that’s where Rust comes in. What was impossible to scale to multiple cores by human hands in C & C++ is now being done successfully with Rust in practice today[2].

          > But I think “this is a basic Unix tool, I want to keep dependencies (and especially dependency loops) low” will keep the default implementations on C for more decades to come.

          See [1]. There’s no reason for C to continue its existence from this point on. It’s up to you whether or not you want to stick with a dying language, though.

          [1] https://github.com/uutils/coreutils
          [2] https://blog.rust-lang.org/2017/11/14/Fearless-Concurrency-In-Firefox-Quantum.html

  35. All general-purpose tracing GCs force you to make tradeoffs between latency, throughput, and memory overhead. Go has focused on minimizing latency at the expense of the rest; in particular, with the default value of GOGC, your heap memory usage will be about double the size of your live objects — i.e. about double the memory that a C or Rust version would use. There is no reason to believe any breakthrough is looming which allows these tradeoffs to be avoided; we haven’t seen one in the 40+ years of GC research so far.

    But GC isn’t just a performance issue. One of the great things about C is that you can write a library that people can use from other languages via FFI. Go and other GC’d languages are not suitable for this, because if the other language can’t use the same GC, at best someone has to write complex rooting logic at the boundaries, and more likely they just can’t use your library. On the other hand, Rust can be used for these libraries; for example, see Federico Mena Quintero’s work on incrementally improving librsvg using Rust.
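    For example, a C-callable boundary in Rust needs no rooting logic at all. A minimal sketch (illustrative only, not taken from librsvg):

    ```rust
    use std::ffi::CString;
    use std::os::raw::c_char;

    // Exposed with a plain C ABI: any language with a C FFI can call this.
    #[no_mangle]
    pub extern "C" fn greeting_new() -> *mut c_char {
        CString::new("hello from Rust").unwrap().into_raw()
    }

    // Ownership is handed back explicitly; Rust frees the allocation
    // deterministically, with no collector to coordinate with.
    #[no_mangle]
    pub extern "C" fn greeting_free(s: *mut c_char) {
        if !s.is_null() {
            unsafe { drop(CString::from_raw(s)) };
        }
    }
    ```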

    It’s also worth pointing out that Rust offers important safety benefits that Go doesn’t — data-race freedom, prevention of “single-threaded races” like iterator invalidation bugs, catching integer overflows in debug builds, stronger static typechecking with generics, etc.

    Go certainly has a large niche. If you’re writing an application and you don’t need top-of-the-line performance or safety, it’s a fine choice. Rust’s niche is also quite large though, and it’s getting larger as the language and libraries evolve.

    1. >Go has focused on minimizing latency at the expense of the rest; in particular, with the default value of GOGC, your heap memory usage will be about double the size of your live objects — i.e. about double the memory that a C or Rust version would use.

      I think this is an excellent trade. It uses more of a resource that is cheap and still getting cheaper to minimize use of a resource that has stopped getting cheaper.

      >Go and other GC languages are not suitable for this because if the other language can’t use the same GC, at best someone has to write complex rooting logic at the boundaries, but more likely they just can’t use your library

      That is true, and a real problem. It remains to be seen whether it weighs heavily enough against the advantages of GC and low inward transition costs to scupper Go-like approaches.

      1. Whether it’s a good trade-off depends on the project. For projects where you don’t have a huge number of users and you don’t face competition from alternative implementations which avoid GC overhead — or if memory pressure is insignificant enough to be a non-issue — it’s often a good trade-off. But for commodity software like a Web browser, saying “we use double the memory of the competition — but our development was quicker and RAM is cheap!” doesn’t go well.

        > It remains to be seen whether it weighs heavily enough against the advantages of GC and low inward transition costs to scupper Go-like approaches.

        AFAICT it weighs heavily enough for now that no-one is even trying to introduce GC languages in libraries to be used from other languages.

        1. >but our development was quicker

          That wouldn’t be the pitch. Lower defect rate due to no manual memory management errors would be the pitch.

          1. Unfortunately it’s hard to get users to care about that. And if you’re competing against Rust, it won’t be true.

            1. >Unfortunately it’s hard to get users to care about that. And if you’re competing against Rust, it won’t be true.

              But normal users don’t give a shit about memory usage either, not on 2017 hardware.

              1. Well, they care about memory leaks, because those will eventually crash the app. But memory leaks are the problem garbage collection is designed to solve.

              2. Except a large number of users do care about memory usage in a world that’s now littered with bloated desktop applications consuming gobs of memory, and where not even 8GB of RAM is enough for basic desktop usage.

                > not on 2017 hardware

                Most people aren’t buying desktops with 16-64 GB of RAM. A lot of us are purchasing simple, energy-efficient laptops with 4GB of RAM. I develop on a budget laptop even though I have a much more powerful desktop, and memory usage is quite important to keeping my dev environment running.

              3. I have 32 GB of RAM on this machine that I built a month or two ago.

                I care about memory usage because Mozilla’s memory use grows continuously, to the point that I have to restart it every few hours if I have a lot of tabs open.

                I know how to fix that. All they have to do is to use a variable-length-string-capable virtual memory library like the one I have written in (of course) C++.

                Then all of their data could be paged to and from backing store as necessary, without their having to manage it manually.

  36. “What works for a Steve Heller (my old friend and C++ advocate) doesn’t scale up when I’m dealing with multiple non-Steve-Hellers and might end up having to clean up their mess. ”

    Application programmers often don’t understand how to create reusable abstractions.

    Fortunately in C++ that isn’t necessary, because library designers like myself can (and should) do the heavy lifting.

    Let me design and implement the classes needed by the application programmers, and they won’t have to worry about misusing memory and other resources.

    Of course this is not possible in C because it is impossible to create abstractions that take care of these issues.

    (Note that I’m referring to applications, not system programming, which is a different beast that I haven’t spent much time doing.)

    1. >(Note that I’m referring to applications, not system programming, which is a different beast that I haven’t spent much time doing.)

      OK, Steve, trust me on this, you already thought like a pretty damn good systems programmer in 1980. :-)

      1. Thanks, I appreciate that!

        On a somewhat related note, I have a C++ library that I have been working on since before we met that allows access to an enormous amount of data, whether fixed- or variable-size.

        If you are accessing fixed-size data sequentially or nearly sequentially, it can read or write at 1 GB/second on an NVMe SSD via “get/put” functions. If you are doing random access, or aren’t concerned about maximum speed in the sequential case, you can just say “x[i] = y;” and the library takes care of the rest.

        (BTW, earlier versions of this library have been written in several other languages, C being the previous one.)

        1. >Thanks, I appreciate that!

          Actually, in 1980 you already had most of the mental habits of a good systems programmer and I didn’t yet – you just weren’t very conscious or articulate about them. Over the next decade I had a couple of “Oh, so that’s what Steve meant!” moments as I was growing into the role.

          The list of people who taught me important things (whether they intended to or not) when I was a programming n00b is not a long one. You’re on it.

  37. Like a good number of C developers who may have used C++ in the past, and haven’t kept up with the latest developments, you are underestimating the benefits of C++.

    One of the key benefits as a systems programming language is the performance: the performance of well-written C++ code is hard to match in other languages, and it can even edge out C in many cases, due to the additional information available to the compiler for optimizations, including guaranteed compile-time constant-expression evaluation. constexpr functions provide a lot of scope for evaluating things at compile time that would otherwise have required either a separate configuration step in the build or runtime evaluation.

    Secondly, C++ has many ways to express zero-overhead abstractions. This eliminates a lot of errors, because the code can force compilation errors on incorrect uses, and libraries can take care of things like managing memory and non-memory resources such as sockets and file handles. You don’t need a general-purpose GC when you have deterministic destruction and guaranteed resource cleanup. Many bugs that are commonplace in C code just can’t happen in well-written C++, and the source code is shorter and clearer to boot.

    There is an increasingly powerful set of tools for analyzing C++, and migrating code to use more modern (and better) replacements for older error-prone constructs. e.g. clang-tidy

    The C++ Core Guidelines (http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines) also provide a lot of guidance on how to write good C++, and the guideline support library (https://github.com/Microsoft/GSL) provides a set of useful library facilities to make this easier. Compilers and code analysis tools are adding automated checks to confirm adherence to the Core Guidelines.

    C++ is an evolving language. When new languages come with useful facilities, we look at how those facilities can be incorporated into C++ in a clean fashion.

    The upgrade cost from C is almost non-existent: most C can be recompiled as C++ with only minor changes (e.g. adding casts to malloc calls). Then you can use the tool support to migrate your code to better C++, e.g. replacing malloc and free calls with containers such as std::vector or std::string, or with std::unique_ptr or std::shared_ptr for single objects.

    1. There is a fairly long learning curve for programmers to get up to speed on C++, as it is a large and complicated language that has more than one way to do many things.

      But if you do learn how to use it properly, it is unmatched in versatility and performance.

      1. >But if you do learn how to use it properly, it is unmatched in versatility and performance.

        Yeah, the problem is still that “if”. In the hands of people less competent and dedicated than you, C++ becomes a defect horror-show – I learned this the hard way by having to clean up one of the resulting messes. If that weren’t so, Google wouldn’t have had to fund Go development.

        I’m glad you showed up to represent, though. Where are you living these days?

  38. “At which point you have a bug which, because of over-layers of gnarly complexity such as STL, is much more difficult to characterize and fix than the equivalent defect in C.”

    Finally, someone admits it.

  39. Hmm, good point. But you are missing one thing: Go and Rust were made specifically to replace C, not C++. They are very similar to C, with of course no manual memory management (there is a GC) and excellent built-in networking libraries. Bravo. But, but, but: they can’t possibly replace C++, because they are not meant to.

    If you look at Go and Rust performance-wise, they are even slower than C# and Java, let alone C or C++. So if you have a security-critical program, or a legacy program written in C, those are the best candidates to be switched over to Go or Rust.

    How can you dream of getting 60+ FPS using Go or Rust in a game like Assassin’s Creed Origins on finite hardware like a PS4/X1? That is impossible in anything except C/C++.

    So again, for real-time applications and games, C++ is not going anywhere. And Linus will probably not allow the Linux kernel to be written in Go, neither will MSFT think of picking up the poorly performing Go to write their Windows kernel, nor will the JVM ever be written in anything other than C.

    But yes, we can still keep dreaming of a better and more secure language.

    Also, in language charts all over the world, C/C++ have consistently been on top and improving (second only to Java), while Go once seemed to be picking up but is falling again.

    Some links :
    https://jabroo.blogspot.in/2012/08/c-plus-plus-applications-list.html
    http://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=go&lang2=gcc
    http://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=go&lang2=gpp
    https://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=csharpcore&lang2=go

  40. I find it interesting that your critique of C++ matches up precisely with my own experiences … with Python.

    Python is great for short programs of a few hundred lines, all written and maintained by one person. Where it runs into problems is with larger programs with multiple contributors. While easy to write, Python code is hard to read and understand in isolation, because everything tends to depend on everything else, including which versions of which packages are installed. Debugging is a particular nightmare, as it’s easy to get objects of unexpected types in unexpected places, which run through code apparently OK while actually causing obscure problems that show up in some completely unrelated part of the system.

  41. Every time someone criticizes Rust, all its zealots, devs, and minions close their code editors and start writing wall-of-text rebuttals.

    Please stop writing about how shitty Rust is, otherwise they will never have time to improve it!

  42. What about the Ada programming language: why doesn’t it get used more?
    Is the main reason over-verbosity?
    If so, why doesn’t someone create a language with the same semantics as Ada, but much less verbose?
