Language engineering for great justice

Whole-systems engineering, when you get good at it, goes beyond being entirely or even mostly about technical optimizations. Every artifact we make is situated in a context of human action that widens out to the economics of its use, the sociology of its users, and the entirety of what Austrian economists call “praxeology”, the science of purposeful human behavior in its widest scope.

This isn’t just abstract theory for me. When I wrote my papers on open-source development, they were exactly praxeology – they weren’t about any specific software technology or objective but about the context of human action within which technology is worked. An increase in praxeological understanding of technology can reframe it, leading to tremendous increases in human productivity and satisfaction, not so much because of changes in our tools but because of changes in the way we grasp them.

In this, the third of my unplanned series of posts about the twilight of C and the huge changes coming as we actually begin to see forward into a new era of systems programming, I’m going to try to cash that general insight out into some more specific and generative ideas about the design of computer languages, why they succeed, and why they fail.

In my last post I noted that every computer language is an embodiment of a relative-value claim, an assertion about the optimal tradeoff between spending machine resources and spending programmer time, all of this in a context where the cost of computing power steadily falls over time while programmer-time costs remain relatively stable or may even rise. I also highlighted the additional role of transition costs in pinning old tradeoff assertions into place. I described what language designers do as seeking a new optimum for present and near-future conditions.

Now I’m going to focus on that last concept. A language designer has lots of possible moves in language-design space from where the state of the art is now. What kind of type system? GC or manual allocation? What mix of imperative, functional, or OO approaches? But in praxeological terms his choice is, I think, usually much simpler: attack a near problem or a far problem?

“Near” and “far” are measured along the curves of falling hardware costs, rising software complexity, and increasing transition costs from existing languages. A near problem is one the designer can see right in front of him; a far problem is a set of conditions that can be seen coming but won’t necessarily arrive for some time. A near solution can be deployed immediately, to great practical effect, but may age badly as conditions change. A far solution is a bold bet that may smother under the weight of its own overhead before its future arrives, or never be adopted at all because moving to it is too expensive.

Back at the dawn of computing, FORTRAN was a near-problem design, LISP a far-problem one. Assemblers are near solutions. So is roff markup, illustrating that the categories apply to non-general-purpose languages as well. Later in the game: PHP and Javascript. Far solutions? Oberon. Ocaml. ML. XML-Docbook. Academic languages tend to be far because the incentive structure around them rewards originality and intellectual boldness (note that this is a praxeological cause, not a technical one!). The failure mode of academic languages is predictable: high inward transition costs, nobody goes there, failure to achieve community critical mass sufficient for mainstream adoption, isolation, and stagnation. (That’s a potted history of LISP in one sentence, and I say that as an old LISP-head with a deep love for the language…)

The failure modes of near designs are uglier. The best outcome to hope for is a graceful death and transition to a newer design. If they hang on (most likely to happen when transition costs out are high), features often get piled on them to keep them relevant, increasing complexity until they become teetering piles of cruft. Yes, C++, I’m looking at you. You too, Javascript. And (alas) Perl, though Larry Wall’s good taste mitigated the problem for many years – but that same good taste eventually moved him to blow up the whole thing for Perl 6.

This way of thinking about language design encourages reframing the designer’s task in terms of two objectives. (1) Picking a sweet spot on the near-far axis away from you into the projected future; and (2) Minimizing inward transition costs from one or more existing languages so you co-opt their userbases. And now let’s talk about how C took over the world.

There is no more breathtaking example than C of nailing the near-far sweet spot in the entire history of computing. All I need to do to prove this is point at its extreme longevity as a practical, mainstream language that successfully saw off many competitors for its roles over much of its range. That timespan is now about 35 years (counting from when it swamped its early competitors) and is not yet with certainty ended.

OK, you can attribute some of C’s persistence to inertia if you want, but what are you really adding to the explanation if you use the word “inertia”? What it means is exactly that nobody made an offer that actually covered the transition costs out of the language!

Conversely, an underappreciated strength of the language was the low inward transition costs. C is an almost uniquely protean tool that, even at the beginning of its long reign, could readily accommodate programming habits acquired from languages as diverse as FORTRAN, Pascal, assemblers and LISP. I noticed back in the 1980s that I could often spot a new C programmer’s last language by his coding style, which was just the flip side of saying that C was damn good at gathering all those tribes unto itself.

C++ also benefited from having low transition costs in. Later, most new languages at least partly copied C syntax in order to minimize them. Notice what this does to the context of future language designs: it raises the value of being as C-like as possible in order to minimize inward transition costs from anywhere.

Another way to minimize inward transition costs is to simply be ridiculously easy to learn, even to people with no prior programming experience. This, however, is remarkably hard to pull off. I judge that only one language – Python – has made the major leagues by relying on this quality. I mention it only in passing because it’s not a strategy I expect to see a systems language execute successfully, though I’d be delighted to be wrong about that.

So here we are in late 2017, and…the next part is going to sound to some easily-annoyed people like Go advocacy, but it isn’t. Go, itself, could turn out to fail in several easily imaginable ways. It’s troubling that the Go team is so impervious to some changes their user community is near-unanimously and rightly (I think) insisting it needs. Worst-case GC latency, or the throughput sacrifices made to lower it, could still turn out to drastically narrow the language’s application range.

That said, there is a grand strategy expressed in the Go design that I think is right. To understand it, we need to review what the near problem for a C replacement is. As I noted in the prequels, it is rising defect rates as systems projects scale up – and specifically memory-management bugs because that category so dominates crash bugs and security exploits.
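To make the defect class concrete, here’s a minimal C sketch (invented for illustration, not taken from any real codebase). It compiles without a peep and may even appear to work, which is exactly why this category dominates crash bugs and exploits:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *name = malloc(32);
        if (name == NULL)
            return 1;
        strcpy(name, "alice");
        free(name);
        /* ...much later, in some distant module... */
        printf("%s\n", name);   /* use-after-free: undefined behavior */
        return 0;
    }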

We’ve now identified two really powerful imperatives for a C replacement: (1) solve the memory-management problem, and (2) minimize inward-transition costs from C. And the history – the praxeological context – of programming languages tells us that if a C successor candidate doesn’t address the transition-cost problem effectively enough, it almost doesn’t matter how good a job it does on anything else. Conversely, a C successor that does address transition costs well buys itself a lot of slack for not being perfect in other ways.

This is what Go does. It’s not a theoretical jewel; it has annoying limitations; GC latency presently limits how far down the stack it can be pushed. But what it is doing is replicating the Unix/C infective strategy of being easy-entry and good enough to propagate faster than alternatives that, if it didn’t exist, would look like better far bets.

Of course, the proboscid in the room when I say that is Rust. Which is, in fact, positioning itself as the better far bet. I’ve explained in previous installments why I don’t think it’s really ready to compete yet. The TIOBE and PYPL indices agree; it’s never made the TIOBE top 20 and on both indices does quite poorly against Go.

Where Rust will be in five years is a different question, of course. My advice to the Rust community, if they care, is to pay some serious attention to the transition-cost problem. My personal experience says the C-to-Rust energy barrier is nasty. Code-lifting tools like Corrode won’t solve it if all they do is map C to unsafe Rust, and if there were an easy way to automate ownership/lifetime annotations they wouldn’t be needed at all – the compiler would just do that for you. I don’t know what a solution would look like, here, but I think they’d better find one.

I will finally note that Ken Thompson has a history of designs that look like minimal solutions to near problems but turn out to have an amazing quality of openness to the future, the capability to be improved. Unix is like this, of course. It makes me very cautious about supposing that any of the obvious annoyances in Go that look like future-blockers to me (like, say, the lack of generics) actually are. Because for that to be true, I’d have to be smarter than Ken, which is not an easy thing to believe.

199 comments

  1. … the capability to be improved.

    And, of course, the will of the language designer to use that capability correctly.

    I remember when I transitioned to C. I was using Modula-2, which at one point was a much better language for my needs. But then a couple of things happened: (1) C got ANSI-fied with things like type declarations in the headers; and (2) the C compilers got better at diagnostic warnings. Did you really mean to write “if c=1” ???
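    To make that concrete, a minimal sketch (assuming gcc or clang with -Wall):

        #include <stdio.h>

        int main(void) {
            int c = 0;

            if (c = 1)      /* -Wall warns: suggest parentheses around assignment */
                printf("always taken; c has been overwritten with 1\n");

            if (1 == c)     /* the "Yoda" form: mistyping = here is a hard error */
                printf("equality test, typo-proofed\n");

            return 0;
        }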

    The necessity to reduce inward transition costs means that a successful language is likely to be a creole. Just as English is good enough for commercial transactions, science papers, air traffic control, etc., C quickly became good enough to kill contenders like Modula-2.

    But creoles don’t look designed; they aren’t always aesthetically pleasing. Hence, many language designers’ egos will be driving them to act like the French Academy, in dictating stuff, removing stuff, and occasionally, adding other stuff that nobody will use, simply in the name of orthogonality.

    Perl will probably never recover from its fiascoes. Python is mostly healing from the 2-to-3 rift, but that wasn’t possible until 3.3, when enough stuff from 2 had been added back in to reduce inward transition costs for actual practicing Python programmers.

Finally, there are a couple of truths that are so obvious to you that you didn’t even bother to mention them: (1) a new language without a solid opensource compiler is a non-starter; and (2) that opensource compiler damn well better at least appear to be portable (e.g. well enough to attract the labor that will actually eventually make it portable).

    1. (That last point, of course, can be fully addressed with solid, working LLVM and GCC frontends.)

>Finally, there are a couple of truths that are so obvious to you that you didn’t even bother to mention them: (1) a new language without a solid opensource compiler is a non-starter; and (2) that opensource compiler damn well better at least appear to be portable (e.g. well enough to attract the labor that will actually eventually make it portable).

I’ve actually been meaning for years to write a post about the trend towards single-implementation languages. That is, there’s just one open-source engine, written in C, that you port around everywhere, guaranteeing that everything except maybe the effects of platform-specific API quirks is identical.

      Of course this in turn is only possible because of how much we can now take for granted about the C infrastructure used to build these.

    3. if ( 1 = c )…

There are some reactionary coding habits you might not like, but learn them until you just don’t notice you are defensively coding.

      1. I never do that. I mean, I understand the sentiment, but I’m not actually checking to see if the universe has redefined the value of “1” to be the same as my variable “c”, so I just don’t do it.

        In any case, my point wasn’t that C got good enough for me. It was that it got good enough to kill Modula-2 — good enough in a global sense, for a lot of different programmers with different use-cases, or as esr says, “tribes.”

        1. You don’t use Yoda conditionals?! Burn the heretic! His blasphemy will be the doom of us all!

          1. Yeah, that could be a huge problem if that code were run in superuser mode on a computer powerful enough to change the universe.

            1. Could make getting around a lot easier, but it would make it hard to see anything.

      2. I learned a long time ago to defensively parenthesize, not just in conditionals, but in the kind of pointer referencing chains that happen deep in the bowels of embedded code.

        It would have been really nice if C had a sane operator precedence.
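        A minimal sketch of the kind of trap I mean (invented example):

            #include <stdio.h>

            int main(void) {
                unsigned flags = 0x04;

                /* == binds tighter than &, so this is flags & (0x04 == 0),
                   i.e. flags & 0 – always false. */
                if (flags & 0x04 == 0)
                    printf("never printed\n");

                /* Defensive parentheses say what was meant: */
                if ((flags & 0x04) == 0)
                    printf("bit clear\n");
                else
                    printf("bit set\n");

                return 0;
            }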

      3. “1 = c”

        A very long time ago, I was programming in pop-11 (https://en.wikipedia.org/wiki/POP-11) which was at the time used to implement Prolog. It was quite a handy language that could do some remarkably useful things.

I stopped using it after a very, very long search for a bug that kept baffling me. It turned out that I had accidentally redefined the ==-operator in my program (or the =-operator, I am not sure) and there was no warning. Except, of course, that the rest of the program did not work anymore. Mind you, that happened due to a very simple typo.

        After that incident, I considered my time too valuable to waste on this type of stuff.

One thing I miss from Modula-2 and Oberon that I have yet to see in a more modern programming language is an analogue to the module system. Namespaces and Packages in the C++/Java sense don’t have the same semantics, and I’m not knowledgeable enough with Go to tell if it does or not.

        1. Not really. Package management in Go is abysmal. You don’t have a project configuration file to list dependencies, their versions, and their features to optionally enable, in addition to conditionally pulling dependencies according to conditional compilation rules. You also don’t have a project lock file that provides hashes, commits, and specific versions to ensure reproducible builds by any downstream user compiling the project. Nor do packages in Go adhere to semantic versioning rules. This is another area where Rust majorly exceeds Go’s design.

          1. You are right that package /management/ in go is terrible, but the package system / namespacing itself is very good.

  2. One thing I don’t see is any split between the language itself, and the libraries which accompany it.

This is what C (and Python) gets right. C itself is very simple. It is like having letters instead of pictograms – Latin vs. Chinese writing. And the libraries are reasonably done and flexible.

    I’ll add Python, since there is now CircuitPython for embedded devices.

    Is GC part of the language or the library? It really matters.

I’ve used C extensively for embedded systems. It works as a universal assembler so I can turn a LED on by doing *(uchar *)(0x4006) |= 3. I can also use libraries with GC or even access massive databases.

You have things intrinsic to the language like the above so a device with 2k of flash and 256 bytes of ram can be coded efficiently and use no libraries (unless the processor doesn’t do multiplication or division).

A corollary of Moore’s law is that simpler processors become cheaper. Many are under $1 now, but they are very limited in flash/ram/eeprom/etc. And what if you had to use a $2 processor to run Go where you could use a $0.25 processor and C? Think “dollar store”.

    What if there was a library that could be added to C to replace malloc/free with GC? There is NO reason it couldn’t be done.

    I should also add that there is the language itself, and layers of libraries. The language might not have simple math routines, but the inner ring library will. An outer ring library might handle HTML, with the network stack one ring in. That is what needs to be gotten right.

    1. >What if there was a library that could be added to C to replace malloc/free with GC? There is NO reason it couldn’t be done.

Alas, schemes like that interact badly with having bare pointers in the language. It has been tried – look up the Boehm GC – but has practical limitations. One is that GC has to be invoked explicitly; that means you either have to maintain enough extra state about your heap to know when you need to do it or pay for a lot of GC calls you don’t really need.
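      For the curious, a minimal sketch of what the Boehm approach looks like, assuming libgc is installed (link with -lgc):

        #include <gc.h>
        #include <stdio.h>
        #include <string.h>

        int main(void) {
            GC_INIT();
            for (int i = 0; i < 1000000; i++) {
                /* GC_MALLOC instead of malloc; no free() anywhere */
                char *p = GC_MALLOC(64);
                strcpy(p, "collected eventually");
                if (i % 250000 == 0)
                    printf("%d: %s\n", i, p);
            }
            GC_gcollect();   /* force a collection explicitly */
            return 0;
        }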

      1. Another is that it’s frickin’ impossible to implement something like Boehm as a precise GC because without type annotations at runtime, there’s no way of knowing whether any arbitrary machine word is a pointer, integer, float, etc.

        But the compiler knows which words are pointers! All the more reason for a C successor to handle object lifetimes at compile time à la C++ or Rust.

      2. I have found the talloc library to be very useful in easing the heap memory management issues in C.

Unless I need the speed or am working in a very resource constrained environment I won’t touch C. I find that I use Python when speed is the least of my worries. Ocaml gets me close to C speed without the headaches. I like many of the libraries Jane Street has made public for Ocaml.

When I started programming in the seventies with the home computers of the time (TRS-80 and Apple II), 4K was typical memory and the really fortunate had 16K or 32K. BASIC or assembly was all we had, and having C – which is assembly language on steroids – was considered a huge improvement in the eighties.

A corollary of Moore’s law is that simpler processors become cheaper. Many are under $1 now, but they are very limited in flash/ram/eeprom/etc. And what if you had to use a $2 processor to run Go where you could use a $0.25 processor and C? Think “dollar store”.

Yes. But even complex devices with pretty generous resources are also getting to a ridiculously cheap price. Bear in mind that with the Raspberry Pi Zero (and other similar boards) you are getting enormous amounts of memory and a 32 bit CPU for ~$5 retail. And yes, you can run Go (and even node.js javascript) on a Pi Zero.

I suspect a Pi Zero optimized for a particular app – with built-in flash instead of a MicroSD, and the extra USB ports, HDMI, etc. that it wouldn’t need removed – would easily be available in bulk at $1 a board. Compared to a resource-limited $0.25 board, the $1 board will be far easier to actually develop for, which means time to market will be quicker and development costs will be lower. E.g. if it costs $1 million to develop on the $0.25 board and $100,000 to do so on the $1 board, you’ll need to ship around a million devices to start seeing a greater profit from the cheaper hardware. At this point the reasons to choose a resource-limited $0.25 processor over a rather more powerful $1 one probably come down to things other than price – power consumption being the most obvious.

  3. @esr: I mostly agree with your analysis, but will bring up a few points.

I agree entirely about transition costs. Those are the reason so many millions of lines of Fortran and COBOL are still in production (along with the languages being good fits for the problems they were designed to address.) C may go away, but the legacy code base won’t. New stuff that might have been written in C might be written in something else, but the same transition costs mean the old stuff already in C will still be in production. It works, and would simply cost too much to replace. The same holds true for C++ (though I think of that as an entirely new language with roots in C.)

    The question of “easy to learn” is a difficult one. What makes a language easy to learn? I suspect there are actually two different answers to that, depending upon whether you are talking about someone for whom Python will be a first language, or whether they will learn it after programming in something else.

    For the latter folks, I think ease of learning will depend upon what they used before. Someone coming to Python from C should find enough similarities that they can build upon what they already know, and simply learn how Python is different. The same is true for any other language. How hard it will be to learn will depend upon what you already know and can build upon. Someone with a background coding in imperative languages will be far more challenged by learning a functional language like Haskell than by learning yet another imperative language.

    I sympathize with your comments on C++, JavaScript, and Perl.

But I will note that JavaScript author Brendan Eich commented “If it hadn’t been JavaScript, you would have gotten something much worse later.” I think he’s exactly right, and part of the issues I’ve seen with JavaScript revolve around turning it into an (ECMA) standard rather before it was mature enough to be standardized. The development has been fascinating to watch. The weakness of JavaScript is that it’s a “batteries not included” language, and you need to use libraries to do anything. The process of developing libraries has been a sort of Darwinian evolution, as some gain traction and get adopted, and others fall by the wayside, so things like node.js and jquery are hugely popular. There may well be better solutions for the problems those solve, but they failed to gain mind share. Meanwhile, JavaScript is everywhere and not going away. The transition costs are again too high, even if there was a contender in the wings. (And I’m grimly amused that we now have languages that compile to JavaScript, the way the original cfront C++ compiler generated standard C.)

I think Perl is a victim of insufficient time spent defining the problem to solve. Its syntax shows roots in all sorts of things, and its mantra is “There’s more than one way to do it!” That’s arguably a bug rather than a feature. There’s a joke compilation of language comparisons based on how you shoot yourself in the foot. For Perl, the answer is “You shoot yourself in the foot, but nobody else can figure out how you did it. Six weeks later, neither can you!”

    >Dennis

    1. People look at me funny when I tell them I like Lisp. One of my standard responses to this is to tell people that Eich almost embedded Scheme in the browser! It was the higher-ups and marketroids at Netscape that nixed this and doomed us all to two decades and counting of JavaScript. A few Scheme goodies still found their way into the language, like first-class functions which Java, at the time, lacked.

IMO Perl is a domain-specific language suffering from Dunning-Kruger.

As a glue/tool language – reshaping data, moving it from one program to another, etc. – it’s really, really awesome – and I say that as someone who DOESN’T like it and generally refers to it as a write-once language.

You can get stuff done **FAST**. A little discipline and some commenting and you can figure out how you did it. It has saved my butt (or my customers’ butts) several times.

      When your primary task is to take the output of several other calls (for example examining the contents of a SVN repo where stuff is stored with \r\n and a linux filesystem where the \r goes away, and SVN updated the tags so you have some borderline kinky things with diff to sort it all out) well, Python *can* do that, but it’s rather more overhead.

      What it is not, IMNSHO, is a general purpose application language. Yeah, it can do that. And you can take the princess to the ball in a 1975 Ford F150 farm truck, after you wash out the cab with betadine and put a new seatcover on. But it’s not the best choice. Unless you’re in Kansas. But we’re not in Kansas any more.

      1. “you can take the princess to the ball in a 1975 Ford F150 farm truck”

        Well, you can try. You’ll get most of the way there, and the transmission will disintegrate into a zillion pieces which, together, don’t actually transmit power from the engine to the wheels. It’s a Ford, after all.

        /me walks off muttering darkly about fucking Ford products…

        1. /me walks off muttering darkly about fucking Ford products…

          If you keep feeding me these straight lines I’m going to get in trouble for sexual harassment or something.

      2. “You can get stuff done **FAST**. A little discipline and some commenting and you can figure out how you did it.”

        Indeed. Most of the time my problems were solved with Perl programs that had regular expressions on every other line. =~ with magical bindings saves soo much work. Every time I have had to use regex in Python, I have yearned for Perl.

        But indeed, when having to do other work, Perl becomes unwieldy very fast. “OO” and pointer work are all but incomprehensible.

One morning about 20 years ago I wrote my first Perl program. It was real production code, processing a table of interlibrary loan statistics between different academic libraries in Sweden. The output determined how much each library would be compensated for providing the ILL service. I wrote it under the explicit promise that I wouldn’t have to maintain the code. At any rate, Perl is a very expressive language and including all the lookup I had to do in the O’Reilly Perl books, it took me a little over half a day to finish it. It ended up being something like 500 lines of code.

        It is still in production, and has been maintained by others.

    3. What I find frustrating with JavaScript libraries is that they are used to redefine the language syntax – sort of like FORTH. The result is that programs look as if they were written in a significantly variant dialect of the main language. I got some good mileage from FORTH back in the ’70s, and the resemblance is eerie.

      C doesn’t do this, which is a weakness and a strength.

    4. > And I’m grimly amused that we now have languages that compile to JavaScript, the way the original cfront C++ compiler generated standard C.

      There’s really nothing to be surprised by there, IMO. Zillions of dollars and programmer-hours have been spent on making JavaScript fast. It’s not going anywhere for the same reason the Intel and ARM architectures aren’t going anywhere.

  4. My experience with Go and Rust is pretty much the opposite.

Maybe I tried them at different times, or maybe my tendency to learn by reading other people’s code first and then make variations of solutions I know quite well (multimedia in my case, so not much networking needs :)) got me in a different situation.

There is plenty of really well-written Rust (since the compiler tends to prevent you from writing shoddy code), while the Go code I read was pretty gory; plus the toolchain is much nicer to use for Rust.

The Rust core team seems even too nice in replying to requests and feedback, so hopefully they will deliver the missing bits you needed.

The Rust core team seems even too nice in replying to requests and feedback, so hopefully they will deliver the missing bits you needed.

      I believe that’s part of esr’s point about language maturity. When the answer to “How do I do this?” changes from “Oooohh, right, we didn’t think about that.” to “Like this…” then the language is mature. Of course, if “Like this…” is followed up with reams of insanely cryptic code, the language, though mature, might very well still be unusable.

In my experience with Rust the answer to “how do I do this” is most of the time “we have it in nightly, help us out getting it right for stable by using it” and “there is $crate that does that for you, needs nightly because of $reasons”.

In Go, what I saw from the sidelines seems to be a “lol, no” way too often.

I wonder if somebody has tried Kotlin already; it seems another interesting language striving to replace a big incumbent or two.

        1. > I’ve yet to come across a scenario that cannot be solved succinctly in Rust.

          Really. So what’s the one-line operator for concatenating two arbitrary strings, as found in every other modern language?

          1. > Really. So what’s the one-line operator for concatenating two arbitrary strings, as found in every other modern language?

Are you serious? Strings have never been an issue with Rust. The ease of managing strings is one of the best features in Rust.

1) old_string + &other_string;
2) format!("{}{}", old_string, other_string);
3) [old_string.as_str(), other_string.as_str()].concat();
4) old_string += &other_string;

            Take your pick.

    2. I also have a feeling of being able to grasp Rust pretty well whereas I struggle more with Go.

Also, the tendency for current languages to bake in a package manager and statically link everything does kind of annoy me… what’s supposed to happen when crates.io or Go’s packaging goes kaput? Or you just want to do some offline development?

With C and Linux distribution packages those are both non-issues. You could just have the entire Debian archive on a couple of Blu-ray disks and access all of it without any internet connection. Rust at least seems a bit suited to offline work, but Go seems nearly a non-starter.

Cargo does not rely upon Crates.io anymore. You may, in fact, host your own crate repository and pull from that instead. Tools exist to get you up and running with your own crate repository, and they are continuing to be developed to make them more user-friendly.

        You also don’t have to rely upon it for building software offline. It’s possible to copy crate archives from one machine to another, and build from that, for example.

  5. As a math major still learning programming on my own time, this discussion of how new programming languages get adopted (and which ones seem to be on the up and up) is quite helpful. Your endorsement of Go as an eventual C replacement has led me to start practicing that language (I’ve primarily practiced programming in C and Java up till now) – and it is proving to have a lot less time overhead in debugging.

I was wondering if you have any opinion regarding the rather new Julia programming language? Its niche is scientific programming, and it is trying to supplant Python in that field by (1) being only a few times slower than C, (2) having pythonesque syntax, and (3) being able to call code from C, Python, and FORTRAN with minimal overhead.

    It would appear to be trying to ‘cheat’ and use both near and far perspectives at once. It appeases the near perspective by allowing scientific programmers to use their existing code (almost exclusively in one of those three languages) and by stealing the vast scientific libraries around those three languages. It also has some interesting far perspective possibilities, such as becoming compatible with other languages or becoming very close to C in speed by improving its JIT compiler.

    Your post would suggest that the main pitfalls Julia must avoid, as an academic language being developed at MIT, are low userbase and difficult transition costs. The transition costs seem to me mitigated by its pythonesque syntax and ability to steal other languages’ libraries. The userbase problem would appear much more serious: it doesn’t have a big corporate sponsor making wide use of it and, as far as I can tell, travels primarily by academic word of mouth. And a low userbase makes developing the language and its libraries harder.

    As far as I can tell, it seems very promising and has a very novel near-term strategy, but its ability to get the kind of support it needs to mature is questionable. What would your take be?

    1. >As far as I can tell, it seems very promising and has a very novel near-term strategy, but its ability to get the kind of support it needs to mature is questionable.

      Read Giving Up on Julia before you commit a lot of effort. I have no direct experience of the language, but I’d want to be sure those problems are being addressed before getting involved.

      1. I notice that “Giving Up on Julia” was written about two years ago when Julia was on version 0.4. I haven’t used the language, (though I may give it a try if I can find the time) but Julia is currently on 0.6 (with 0.7 on the horizon) so the author was using a very early version and those issues might have been addressed in subsequent releases. Some of the comments are worth reading as well.

        I’m neither friend nor foe of Julia, but I can’t help noting that when the article was written has some implications.

        1. >I notice that “Giving Up on Julia” was written about two years ago

          In the index on his home page the article entry is dated 13 May 2016.

          1. Some of the comments are two years old, which would put them back to November of 2015 or so. (I wish people would do a better job of dating stuff.)

  6. C++11 supports safe code. Trouble is that the natural and easy way to write C++ is to write C – bare naked pointers everywhere. Writing safe C++11 is hard.

    Rust is C++11 with guard rails to help you use safe C++11 everywhere, and all the dangerous idioms disabled by default.

I think Rust will win. A lot of Rust code is appearing. It needs a good RustWxWidgets system and development environment.

    1. Hmm.

      Which makes me wonder if Go will displace C while Rust displaces C++, and they’re not particularly in competition.

      1. ISTM that in the medium/long run, Go has a good chance at displacing Java and C# in quite a few domains, at least for new/greenfield projects. (This is not a very outlandish claim if you know your PL history, either – Go is essentially a remake of two languages associated with Plan9, Alef and Limbo, which were in fact quite Java-like.)

I have more trouble making sense of the claim that Go will replace C; on the contrary, I think that other languages, like Haskell and Erlang, will also be competing quite vigorously for Go’s actual, core niche. The closest competitor to real, actual C (as opposed to C++) will likely be Rust with the #![no_std] attribute on and unsafe{} blocks liberally scattered throughout.

I wouldn’t say liberally. Even with #![no_std], you have access to the alloc crate to get all your collection types, and you have the Iterator trait, along with Option and Result. Most tasks can be accomplished without unsafe.

      2. How do you figure that Go will displace C? Go is not a replacement for C, or C++, but a replacement for Python, and even that’s a bit iffy because Rust is more concise and readable; and therefore more maintainable than Go.

C is used where performance matters a lot, and where you need to export a C ABI. Rust is being used for the same purposes, as it has a zero-cost C FFI and C-like performance. Hence, Rust is often compared directly to C when it is being benchmarked; and it is being used in areas where C used to have exclusive access. Go does not have any such capabilities. It is costly to export and import through the C FFI, and performance is always an order of magnitude below Rust’s.

    2. More like C++17 and beyond, because C++11 (and even C++17) has no parallel to sum types, pattern matching, tuples, traits, trait generics, iterator adapters, etc.

      > Needs a good RustWxWidgets system and development environment.

GTK app development in Rust is absolutely stellar. I’m currently writing a comprehensive GTK Rust tutorial, which is in its early stages. I’ve only worked on it for one day[1], but this is what I have published onto GitHub Pages so far (generated via the official mdBook utility that’s written in Rust, and which hosts many of Rust’s markdown-based books). I may have the second chapter published later today.

      [1] https://mmstick.github.io/gtkrs-tutorials/

        1. Show me how C++ can do this:

          match function(input) {
              Ok(Action::This(x, y, z)) if y == m => { … }
              Ok(Action::This(x, y, z)) => { … }
              Ok(Action::That(a)) => { … }
              Ok(Action::OrThis(b)) => { … }
              Err(why) => eprintln!("{}", why),
          }

          Or this:

          let cap = file.metadata().map_or(0, |x| x.len());

          Because if it can’t it doesn’t have sum types and pattern matching.

          And what about this?

          custom_structure.iter()
              .map(function)
              .filter(function)
              .zip(other_structure.iter().map(function))
              .for_each(|(x, y)| {
                  …
              });

          If it can’t do that, then you don’t have iterator traits and iterator adaptors.

          So incorrect? More like you have no idea what Rust is capable of.

          1. So the words you use don’t mean what everybody else understands them to mean.

            You wrote,

            “More like C++17 and beyond, because C++11 (and even C++17) has no parallel to sum types, pattern matching, tuples, traits, trait generics, iterator adapters, etc.”

            And then you provide examples of Rust that don’t compile and don’t support your initial claim.

            For example, this C++ fragment compiles,

            vector<unsigned> lst;
            generate_n(back_inserter(lst), 26, mt19937(99));
            sort(begin(lst), end(lst));

            Shows an iterator adaptor.

            This C++ doesn’t compile,

            list<unsigned> lst;
            generate_n(back_inserter(lst), 26, mt19937(99));
            sort(begin(lst), end(lst));

            Because sort requires random access iterators, so showing iterator traits.

            Have a play

            https://godbolt.org/g/mH4J5G

            You can do Rust and Go too.

            1. My terms are well-defined. You are the only person in the world who has issue with them. Any rudimentary search of those terms will educate you on those topics. That you have let your programming knowledge slip is your own fault.

              > And then you provide examples of Rust that don’t compile and don’t support your initial claim.

              All of the examples I provided compile when you actually build an application with them. Please don’t go full stupid, because I don’t want to have to reply to an idiot.

              > vector<unsigned> lst;
              > generate_n(back_inserter(lst), 26, mt19937(99));
              > sort(begin(lst), end(lst));

              This is not a demonstration of an iterator, and this syntax is incredibly awkward. Your second code example is not an example of traits. It’s a great example of a lack of traits and trait-based generics.

              pub struct Structure<T> { … }

              impl<T> Iterator for Structure<T> {
                  type Item = T;
                  fn next(&mut self) -> Option<T> { … }
              }

The above is an example of how to implement the Iterator trait on a custom data structure. Once implemented, it then has access to the complete list of iterator adapters[1], which can further be expanded with this crate[2].

              fn test_function<T, I: Iterator<Item = T>>(iter: I) { … }

And this is an example of trait-based generics with the Iterator type. Any iterator type whose Iterator implementation has Item specified as T can be used as an input, interchangeably.

              [1] https://doc.rust-lang.org/stable/std/iter/trait.Iterator.html
              [2] https://docs.rs/itertools/0.7.2/itertools/trait.Itertools.html

              1. I had thought you might actually be serious about software, and you had previously claimed C++ knowledge. Apparently I was incorrect on both fronts.

                Never mind.

                You might want to pay attention to the Rust CoC of which you appear to be in breach.

I see one problem, which your point on pointers and GC shows.

    C and its standard libraries are written in C. Except for some occasional special processor instruction like a locked RMW or memory fence that requires inline assembler, you can write anything in C.

    What is Python written in, including many modules?
    What are other languages written in?

    Can you write Go’s garbage collector in Go itself?

I see one other unexpected danger from the social side. Look at what happened to Django and Node.js – and there is one of those “Codes of Conduct” for Rust with the usual “if anyone makes you feel uncomfortable, report it and they will be expelled”. Go comes from Google. Since they’ve abandoned meritocracy, they are likely to be SJW-converged if it isn’t already in progress.

    Do I really want languages being developed and maintained by SJWs?

    1. We may not have much of a choice in the future. It would seem that the kinds of academic elites being held up as shining examples of who we should bow down to as the Right People to design languages are all going to be SJWs, or cowed by them.

      Still…I suspect you could write Go’s garbage collector in Go. Whether you’d want to for performance reasons is another question entirely.

      1. Maybe I’m missing something, but some time ago the effort was made to rewrite Go in Go. AFAICT that was done as of Go 1.5 released in 08/15 and now is at Go 1.9. Is the GC an exception?

You are correct, though I haven’t found the GC in the Go code – but then, I haven’t looked hard enough.

Rust also looks like it is built with Rust.

          1. The Go compiler is written in Go, as others point out, but GC is presented as a run time layer. So, from within a Go program, no, you can’t replace the GC, in much the same way as within a C program, you can’t replace the way the dynamic linking works at startup time, because it’s the foundation your code rests on.

            (Yes, once a C program is loaded and running you may do something other than dynamic linking to load more code, if one is foolhardy enough. I’m specifically referring to how you can’t replace in C the stuff that occurs before your “main” is even called, in the C program itself. There’s things you can do with environment variables and a lot of exciting flags you can pass the compiler, but as far as I know there’s no C-language-level support for most of those.)
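            For instance, the foolhardy route – a minimal sketch of loading code by hand with dlopen, assuming a glibc system where libm is a shared object (build with -ldl); none of this touches the pre-main linking machinery:

                #include <dlfcn.h>
                #include <stdio.h>

                int main(void) {
                    void *h = dlopen("libm.so.6", RTLD_NOW);
                    if (!h) {
                        fprintf(stderr, "%s\n", dlerror());
                        return 1;
                    }

                    /* look up sqrt() at runtime instead of link time */
                    double (*my_sqrt)(double) =
                        (double (*)(double))dlsym(h, "sqrt");
                    if (my_sqrt)
                        printf("sqrt(2) = %f\n", my_sqrt(2.0));

                    dlclose(h);
                    return 0;
                }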

      2. Oh, don’t get your TRON suit in a bunch. Node has already been forked because despite adopting a CoC, leadership in the Node community wasn’t enforcing it to some people’s tastes. Last I heard the fork isn’t doing too well.

        “Being cowed by SJWs” now looks a lot more like adding “don’t be a dick” to README.md, checking it in, and saying “there, you happy? Can we get back to hacking now, please?” than it does some nightmare scenario of allowing the Junior Anti-Sex League free rein to point fingers at those who should be shunned from the community.

        1. > does some nightmare scenario of allowing the Junior Anti-Sex League free rein to point fingers at those who should be shunned from the community.

          Which is more or less *exactly* what they did. Or tried to do. From what I understand, the sequence went something like this:

          1) SJWs tried to get a long-term contributor (an Evil White Male, of course) ejected from the project for CrimeThink (i.e., linking to an anti-COC blog post).
          2) In retaliation, someone else filed a CoC complaint against one of those self-same SJWs, citing behavior far more egregious (e.g., “Kill all men”).
          3) The powers that be appear to have just trash-binned both complaints, hoping that it will all go away. That’s probably the best outcome for them, from a legal standpoint, but it more or less reduces the CoC to a useless encumbrance. I mean, if “Kill all (members of a protected class)” doesn’t get you sanctioned, what does?

          1. > if “Kill all (members of a protected class)” doesn’t get you sanctioned, what does?

            Ah, but “men” does not constitute a protected class, which is why SJWs can write such things.

              1. That’s not what a lawyer told me when I was terminated from a job. A woman, black, Hispanic, gay, or even someone of a less-common religion could have a claim to make, but not a White Heterosexual Anglophone Christian Male.

                1. Your lawyer told you wrong.

                  If you were terminated *because* you were white, heterosexual, anglophone, Christian, or male, the relevant laws absolutely apply to you.

This could be another problem. The designers of C, C++, Java, and Perl were each one person, or at least one person with a vision, and many are/were not SJWs.
        That is why SJW convergence is such a threat: the leaders and visionaries, and the main workers, are usually the ones targeted first and expelled just to prove the unwisely adopted CoC is enforced.

The main problem with a code of conduct is enforcing it while making sure nobody plays with it as a weapon to backstab whoever they dislike.

As long as the enforcers aren’t friendly with or colluding with the accusers, the CoC-as-weapon gambit backfires and leads to a purge of those poisonous people.

      Having a more verbose version of “behave nicely with everybody” around as guide shouldn’t cause major issues in getting great code written.

      1. > nobody plays with it as a weapon to backstab whoever they dislike.

        Are you kidding? That’s exactly why these CoCs are created. Exactly. It is their sole purpose.

        The part about making everybody play nice is just window-dressing for the rubes, to persuade them to hand over power to the power-hungry.

I see one other unexpected danger from the social side. Look at what happened to Django and Node.js – and there is one of those “Codes of Conduct” for Rust with the usual “if anyone makes you feel uncomfortable, report it and they will be expelled”. Go comes from Google. Since they’ve abandoned meritocracy, they are likely to be SJW-converged if it isn’t already in progress.

      Happily, the Go code of conduct has backed away from some of its least defensible parts. You can see its entire history and, in particular, where following it is merely encouraged while you’re doing Go-ecosystem things, and is only enforced in Official Go Spaces (official github projects, Go-itself code reviews, etc.).

    4. “Can you write Go’s garbage collector in Go itself?”
      – um, it is actually written in Go.

For Redox OS, we are actually writing our C standard library in Rust, with much success. All one has to do is enable `#![no_std]` on a project, and voilà, you are restricted to using only core language features, and can therefore write a standard library from scratch with much more finesse than what you could do with C in similar situations.

  8. I’m curious what your take on Swift is. I know you’re no fan of Apple, but Swift is open source and being ported to other platforms, and the transition costs from Objective-C don’t appear to be very high. (I don’t know Swift and have no idea how suitable it might be for systems tasks.)

A lot of analysis of Go vs. Rust has centered on writing memory-safe code using different approaches (GC vs. special syntax). And there are some good arguments that, as processing power increases across the board and GC performance gets better, we may reach a point where GC is an acceptable mechanism even for most systems programming – implying that Rust would be relegated to things like kernels and firmware, while everything else can use Go.

But, paradoxically enough, I don’t think lack of GC is a big deal at all about Rust. Rust shouldn’t be seen as the “GC-free” language. GC is a red herring. Really. If that’s all that Rust’s memory model offered, there would not be a lot of reasons to choose the language.

    The real advantage of Rust’s memory model is the guaranteed lack of data races in parallel algorithms. And this means it’s possible to write highly parallelized applications without the insane overhead in complexity required to write sound and safe parallel algorithms. Rust’s latest blog post, https://blog.rust-lang.org/2017/11/14/Fearless-Concurrency-In-Firefox-Quantum.html , is a showcase of using Rust to deliver a parallel CSS renderer which doubled the overall performance of Firefox, with later improvements in the works, such as GPU rendering, which, of course, requires massive parallelization.

    Garbage collection doesn’t help with preventing data races whatsoever. Since data races are arguably the most complex issue one needs to deal with when writing parallelized code, Rust offers an inherent advantage which no other languages, garbage collected or not, offer. And for a language that didn’t enforce such strict guarantees from the get-go (e.g. Go), it might not be possible to introduce later without backwards compatibility problems.
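    To illustrate with a minimal sketch (C with POSIX threads, since the point is language-agnostic; compile with -pthread): every byte below stays perfectly live, so a garbage collector would find nothing to object to, yet the result is still garbage.

        #include <pthread.h>
        #include <stdio.h>

        static long counter = 0;    /* shared, no synchronization */

        static void *bump(void *arg) {
            (void)arg;
            for (int i = 0; i < 1000000; i++)
                counter++;          /* read-modify-write: not atomic */
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            pthread_create(&a, NULL, bump, NULL);
            pthread_create(&b, NULL, bump, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            printf("%ld\n", counter);   /* usually well under 2000000 */
            return 0;
        }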

    This implies the sphere of Rust is far more than kernels and microcode. Anything that requires performant parallelized code may find Rust the language of choice – from video game engines, to web servers, to distributed compute clusters, to military avionics (Rust’s overall emphasis on safety might make it compete with Ada when technology becomes so resource-hungry that parallelism becomes required).

I’m not trying to be an “evangelist” and I’m fully aware of Rust’s high barrier of entry and high transition cost. These are all very valid points and definitely hinder Rust’s ability to compete for the average developer, who values ease-of-use and productivity very highly. Rust has some “ergonomics initiatives”, but I don’t think they go far enough. I hope, for Rust’s future, that it will be able to bridge the gap at least somewhat in that sense.

    But in conclusion, as the overall selling proposition, I think Rust should de-emphasize “no garbage collection” (which isn’t a game changer) to “parallel programming without data races” (which, I think, is.)

    1. >The real advantage of Rust’s memory model is the guaranteed lack of data races in parallel algorithms

      An interesting and (I think) quite defensible position.

Go’s answer to this is thread-safe goroutines and channels – you can’t have races if your goroutines don’t modify global state in any way other than channel operations. There’s a raciness detector that checks for this.

While it’s probably not as general as Rust’s provable freedom from data races, Go’s CSP model does have one other thing going for it – channel code is much prettier and much easier to reason about than a conventional mutex/mailbox approach.

      1. > Go’s CSP model does have one other thing going for it – channel code is much prettier and much easier to reason about than a convential mutex/mailbox approach.

I agree, for most use cases. Rust is working on introducing coroutine-based generators and async/await syntax, though, and I wonder if channels can be built as a library on top of that.

Worth noting that, true to Rust’s zero-overhead principle, they use “stackless coroutines” (reduce to a state machine, run everything on the host’s stack) rather than the “stackful coroutines” that Go uses (green threading, each with its own stack). This emphasizes micro-optimization, but may hinder scalability due to stronger coupling, and it is more difficult to debug (e.g. no independent stack traces per coroutine).
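        Roughly what “reduce to a state machine” means, as a C sketch (the old protothreads trick; invented example, not Rust’s actual lowering): the coroutine’s locals are hoisted into a struct and a switch resumes at the last yield point, so no per-coroutine stack is needed.

            #include <stdio.h>

            struct gen {
                int state;   /* resume point */
                int i;       /* "local variable", hoisted into the struct */
            };

            /* Yields 0..limit-1, then -1 when exhausted. */
            static int gen_next(struct gen *g, int limit) {
                switch (g->state) {
                case 0:
                    for (g->i = 0; g->i < limit; g->i++) {
                        g->state = 1;
                        return g->i;    /* "yield" */
                case 1:;                /* resume inside the loop */
                    }
                    g->state = 2;
                }
                return -1;
            }

            int main(void) {
                struct gen g = {0};
                for (int v = gen_next(&g, 3); v != -1; v = gen_next(&g, 3))
                    printf("%d\n", v);
                return 0;
            }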

Go’s answer to this is thread-safe goroutines and channels – you can’t have races if your goroutines don’t modify global state in any way other than channel operations.

This is true – when it is possible or desirable. There is a reason they have the sync package. I have encountered a few too many occasions where shared data structures are much faster and less verbose; contrast writing a memcached server centered around a goroutine that encapsulates a private hashmap with one built on a shared hash map implemented using lock-free structures. The performance difference is about an order of magnitude. This muddies the architectural waters.

        Second, communicating with channels is safe only if you are communicating values. If you send pointers or array slices on channels, all bets are off. This is what Rust clamps down with its “Send” typeclass.

For my money, Go is convenient and very easy to make progress in, and their tooling is excellent. But safety and raw compute performance aren’t really its hallmarks. I’d much prefer to use Nim over Go, but that isn’t going to happen.

      3. “There’s a raciness detector”

        What, it rejects your code if you reference Debbie Does Dallas in a comment?

        1. “Debbie Does Dallas”

          For young readers, this reference is not like “Irma does Houston”. Not at all.

        2. I ACK the joke, but to be specific, what the language calls a race detector is probably better called a raciness detector because while it can positively identify race conditions and never emits false positives, it can and does have false negatives. Useful tool, but running your program with the race detector and seeing no reports doesn’t mean that you’re clean, so it’s not really a “race condition detector”.

      4. The Pony language is able to achieve data-race freedom using the Actor model as a programming model (similar to Go) but with the underlying memory semantics similar to Rust so that data isn’t copied between heaps (just pointer swaps).

I also find Pony interesting because it’s garbage collected, but its GC is similar to Erlang’s, which means no global stop-the-world GC (and low tail latencies).

        It’s obviously less mature than either Rust or Go, and there’s still a question as to whether it would ever be able to cross the ease-of-use threshold to becoming anything more than a benighted niche language, but it seems pretty interesting to me as an Erlanger. I find the message-passing concurrency much easier to reason about, especially for complex, event-driven systems.

  10. @Jeff Read: People look at me funny when I tell them I like Lisp. One of my standard responses to this is to tell people that Eich almost embedded Scheme in the browser! It was the higher-ups and marketroids at Netscape that nixed this and doomed us all to two decades and counting of JavaScript.

In this case, I think the higher-ups and marketroids were right. Embedding a dialect of Lisp as the script language would have presented a non-trivial challenge for the folks assumed to actually write scripts in it. The resemblance in JavaScript to imperative languages like C was a feature.

    My ire is reserved for whoever at Netscape decided that instead of being called LiveScript as Eich intended, it should be called JavaScript to capitalize on the popularity of the then new Java language. I’ve quite lost track of how many times I’ve had to explain that Java and JavaScript are completely different languages whose only similarity is Java in the name. In some cases, I’ve been surprised at the folks who didn’t know that.

    But I wouldn’t say we were doomed to two decades of JavaScript. It’s been implemented in all manner of places. The biggest example up till Firefox 57 was Firefox itself, with the look-and-feel provided by XUL, CSS, and widgets, and JavaScript actually performing the action when you clicked on something. The browser was simply another instance of something the Gecko engine rendered. (It would have been possible to implement a complete desktop in XUL, CSS, widgets and JavaScript. I’m sorry no one tried to.)

    I don’t think JavaScript has more warts than any other popular language, and I’ve seen all manner of interesting stuff done in it.

    >Dennis

    1. I liked this talk. Definitely targeting a different audience, but I think he makes some sound points.

      For better or worse, it seems that the bulk of the development jobs out there are focused around what Rich describes as “Information Processing”. As someone who is working on information processing systems, but has worked on other types of systems, the infosys stuff can definitely feel like drudgery.

      The primary source of this feeling of drudgery is, of course, the “Two for Tuesdays” – the deluge of seemingly arbitrary requirements inherent to software so thoroughly enmeshed in chaotic human social and economic systems.

      (Compilers and other systems software are also deeply embedded in social and economic systems, but those human systems are arguably far less chaotic – a by-product of the people who inhabit them.)

      The antidote to the problem of chaotic human systems is to develop a deep enough understanding of them that you can bring order to those systems first. Once you bring order to the human systems, developing a software system to work in the social and economic context becomes much more tractable.

      But even then, I agree with Rich that the primary languages we use to design information processing systems (Java, C#) really aren’t well suited to building those kinds of systems.

  11. Two trivial typos:

    “language-design apace” -> “language-design space”

    “likem, say, the lack of generics” -> “like, say, the lack of generics”

    1. >How are the languages you mentioned ‘failed’?

      C++: downstream defect rates are unacceptable at large scale, and it’s going to get worse rather than better.

      Lisp: inward transition costs are too high. (I say this as a person who loves the language and can think in it.) It’ll probably never go truly mainstream – if it were ever going to, that would have happened before Python. I hate this, but reality is what it is.

      1. Thank you. Now I need to understand what exactly do you mean by ‘mainstream’? Is there going to be, ultimately, one language, or a couple of languages in which all code is written (and into which, eventually, all software in existence is translated)? By humans? Or are we evolving towards a world with many more languages that are each specific to particular problems _and_ to particular human cultures (for example: beginners, mathematicians, embedded developers on small controllers, etc.)? Some of these languages are better done, some less so, but once we are stuck with millions of lines of code in any of them, code that is in common use, it seems still easier to fix and adapt the existing code than to translate everything into a new language.

        Is it possible to make a ‘unification’ of a field in a specific language, not to speak of a grand final unification of all software? I think I heard of a few historic attempts: PL/I in the seventies, Ada in the eighties and nineties, Common Lisp for lisps in the eighties and nineties; all succeeded to some extent, some minor dialects were indeed obliterated, and we have largish repositories in each of these languages now, but it was at most a temporary and localised success, wasn’t it? Still, each is kind of mainstream and perhaps currently irreplaceable in its local little world, although perhaps few would call them generally mainstream and many younger programmers may have never heard of them.

        Do you anticipate some general unification, sometime in the future? Or are we going to have more and more streams?

        1. >Thank you. Now I need to understand what exactly do you mean by ‘mainstream’?

          You are a project lead or product manager. You propose “We will implement in X” to people who write checks big enough to fund multi-year development. If you’re not betting your job that nobody’s reaction will be “What the fuck was he thinking?”, X is mainstream.

          1. I would say Clojure is the best shot for Lisp becoming mainstream. I say this as someone who hates Clojure. I’m allerjic to it and some of its language-design choices make me crinje. Nevertheless, I think it’s trying to solve a problem I don’t have: dragging Java programmers kicking and screaming the rest of the way toward Lisp.

            As for myself, coming from Lisp, I see that there are several good Scheme implementations (and at least one of Common Lisp) on the JVM, with excellent foreign-function interfaces to Java libraries, and would be much more inclined to use one of those.

            1. Curious if you’ve looked at HyLang. I know nothing about it, except it uses an AST manipulator that I help develop…

            2. I’d also mention Julia as a modern Lisp.

              It doesn’t have the parentheses, but it does have syntax tree macros which are used extensively to implement language features, it’s highly dynamic, and it has multimethods. It’s fairly close to Common Lisp or Dylan in spirit.

              Also, it beats Go in most speed benchmarks that I’ve seen.

          2. I think this is a sound definition, one we could rely on to make a list of mainstream vs non-mainstream languages.

            A class of languages that fits your definition is those already in broad use inside the product manager’s organisation. There are already many programmers, tools, and experienced people versed in that language there, and for this single reason one proposing that language would not be considered nuts.

            This suggests that the ‘Matthew effect’ should apply: languages that are mainstream (again, in any sufficiently large organisation, not necessarily worldwide) tend to become even more mainstream; others tend to be rejected.

            Now, the list. C and C++, java, python obviously qualify. So do cobol, fortran, pl/i and rexx (for IBM), ada (for embedded systems), lisp (for autocad, ibm and other developers).

            The free software equivalent of the big check could be that sufficient people use software in a language for volunteers to contribute (to invest) in maintaining a largish body of such software.

            Here: https://sources.debian.net/stats/ we can see the Matthew effect in action: the size of the code in each language follows a more or less Pareto distribution. C and C++ hold about 2/3 of the latest debian repository, about half a billion lines of code each. The others are spread between 100 million and 2 million lines. There are almost no significant entries under 2 million; they are either very small (a few thousand lines) or very specialised languages (sed, vhdl etc) or both. The Matthew effect is likely to create such an apparent cut between haves and have-nots.

            So, for free software, one probably could say: it is mainstream if it has more than 1 million lines in the debian distribution.

            Once mainstream, it remains mainstream (we can also see that in the history of languages in debian). There would be sufficient people who would say one is not nuts to start a new project in such a language.

            1. > that is the size of the code in each language follows a more or less Pareto distribution.

              Yes, this is exactly what I would have expected.

  12. BTW, I’m nothing more than an amateur programmer, but the last three articles on the replacement of C have been very stimulating. Thanks.

  13. I’m new to programming and your articles are helping me a lot to understand what programming actually means. Keep posting articles like these; they will help a lot of beginners.

  14. Theoretically you could design for the far problem and then offer a subset for the near problem. Why can’t a LISP implementation just give people a bunch of standard macros that create a simple syntax that any “Mort” (https://blogs.msdn.microsoft.com/ericwhite/2006/05/11/who-are-mort-elvis-and-einstein/) can use pretty much as a replacement for Visual Basic, and then those who are ready, when they are ready, can explore the underlying, actually correctly designed structure further? A “far” language with training wheels. Yes, I know, the training wheels will be de facto standardized as a “business app language” by all the Morts, but that sort of thing will happen anyway, so it had better happen on top of a good framework. So once a large company realizes they have a big ball of mud written by all the Morts and had better hire programmers to rewrite it, they don’t have to scrap the whole thing in one go; they can just rewrite it piecemeal out of the training-wheel subset into the larger framework. You know. Transition costs. Not only must they be low, they must also be smooth. Pilot projects etc.

    1. I’ve known Morts, I’ve worked with Morts. You can bundle up common programming idioms (a while loop, for instance, which Scheme doesn’t come with out of the box) into a library and ship it (in scheme’s case, even easier with R7RS), but you cannot bundle good style into a library and expect Morts to exhibit good style. Some of them have trouble with the notion that you can bundle up common, repeated functionality into a named procedure or function and call that, so they keep writing the same boilerplate code over and over…

      Using Lisp alone won’t turn a bad coder into a good coder, though long-term exposure to Lisp and its idioms and culture might. If your audience is bad coders, perhaps they really are better off with VBA.

      That said, millennial Morts are far more likely to have grown up with JavaScript and so will be better equipped to, for example, internalize that (lambda () ...) is just funny talk for function() { ... }. Plus functional programming is all the rage in JavaScript nowadays.

      1. For at least some of us “Morts” it’s not that we don’t understand named procedures and functions, but that we’re not nearly as enthusiastic about them as the Elvii and Einsteins. From our point of view they’re generally premature optimizations that obscure the code and attract more bugs than they swat. Especially when the “common” code needs to deal with a maze of twisty little cases, all slightly different.

        1. >Especially when the “common” code needs to deal with a maze of twisty little cases, all slightly different.

          That’s just when you most should bundle it into a separate function, so that when the damn thing breaks, whoever is called on to fix it doesn’t have to find and fix the same obscure bug in fifteen different places!

          1. Yes, this is the standard Elvis/Einstein argument, and it has a point when the fifteen different places that call the function are all clone-identical. But from the Mort’s POV, fixing the same obscure bug in fifteen different places is possibly more tedious but certainly less aggravating than dealing with the #$%@ bugs in the extra code needed so that the function can sort out the little differences wanted by each of the fifteen places that call it.
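            To illustrate (a hypothetical Go sketch; every name in it is made up):

                package main

                import "fmt"

                // reportOpts accumulates one flag per caller's quirk; this is
                // the maze the "common" function grows to serve fifteen callers.
                type reportOpts struct {
                    header    bool
                    totals    bool
                    euroDates bool
                }

                func printReport(rows []string, o reportOpts) {
                    if o.header {
                        fmt.Println("REPORT")
                    }
                    for _, r := range rows {
                        fmt.Println(r) // imagine a date-reformat branch for o.euroDates here
                    }
                    if o.totals {
                        fmt.Printf("%d rows\n", len(rows))
                    }
                }

                func main() {
                    printReport([]string{"a", "b"}, reportOpts{header: true, totals: true})
                }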

            You Elvii and Einsteins are quicker to optimize away from tedium (boilerplate, manularity), in ways us Morts find premature.

          2. That approach relies on your bugs having common causes, and it also helps if you can test all the code afterwards.

            The “ten functions that do almost the same thing” case comes up a lot when each test run costs non-trivial amounts of money, or when some of the test environments are unavailable to developers.

            Yes, this is awful software development process, but not everyone can afford good software.

    2. By the way, Lisp itself is perfectly usable by Morts who are compelled to get work done with it by circumstance. For example, Autolisp — long the extension language of AutoCAD — has a community that’s still pretty strong despite being really rough as far as Lisps go. Autodesk has been trying to get it deprecated for years, in favor of VBA and ActiveX, but there’s so much stuff available for Autolisp that they have to keep supporting it.

      Another example is that there are reports of secretaries learning and using Emacs Lisp under Multics Emacs in the 1970s to extend the editor to get their work done. So there is nothing inherent to Lisp itself that prevents occasional programmers from learning it and being competent in it. What’s changed between the 1970s and 1980s and now is the fact that Windows and Macintosh have infantilized the user base to the point where you don’t get any traction if you’re anywhere outside the comfort zone of a typical Windows/Mac user. It’s getting hard, for instance, for professional developers to learn and use Emacs; they would rather just use whatever IDE has been fitted to their language of choice.

      1. They haven’t “infantilized” the user base, they’ve expanded it to include people who wouldn’t have been computer users in 1970.

        When your 1970s Multics Emacs secretary was hacking mail merges (I presume) in LISP, the vast majority of secretaries were using IBM Selectrics, where the most technically complicated things they were expected to do were setting the tabs or knowing when to replace the ball.

        You had C and D level folks who could barely type who would either write stuff out long hand, or dictate it.

        As late as 1982 shorthand was still taught in high schools, and being a secretary/typist was considered a reasonably good job for a woman.

        Now we have Manglers/Directors/C* folks typing their own memos into Word or Outlook, and “secretaries” hardly exist.

        Not everyone is capable of the sort of mental modeling required to write even simple scripts. As you indicated in the comment about “morts”, some of them don’t get loops or procedures/functions.

        The thing about emacs as a tool for professional developers is that unless you’re a Unix dev, the interface is (at least initially) unnatural.

        Heck, I’m a ~25-year vi/Vim guy and I can barely function in GVIM under Windows or MacOS. I just pop open a Cygwin/iTerm2 window and use it there. It requires too much of a headspace jump to move back and forth. (Oddly enough, moving in and out of the shell is no problem *unless* I’m trying to cut and paste. And then the difference between “highlight right-click” and c-x/c-v drives me madder.)

        So yeah, it’s not surprising that folks who have spent their entire *lives* on Windows, or Windows/Mac as a desktop might prefer Eclipse or whatever to EMACS.

        1. @William O. B’Livion –

          > Heck, I’m a ~25 year vi/Vim guy and I can barely function in GVIM under windows or MacOS.

          I, too, “worship at the altar of Bill Joy”, and I have no problems with GViM under Windows. The secret is to open the tool and ignore the fscking menus and tool bar! Just treat it as ViM inside of a nicely resizable “xterm”-ish window, and use nothing but your usual keystroke commands.

          Two hints that might help improve the interface to the environment – use :browse r (or w, e, etc.) to get a standard Windoze file dialog box for I/O (if you want it). And the “named” buffer * ends up being the Clipboard for cut’n’paste. (E.g., “*d3w cuts the next 3 words to the clipboard.)
          (Interestingly enough, this also works under Linux for the browsers’ cut buffer.)

        2. > As you indicated in the comment about “morts”, some of them don’t get loops or procedures/functions.

          I’ve found this true of my sysadmin co-workers, who would not hesitate to write a batch/shell script of repeated nearly-identical commands that I’d write as a loop of one kind or another. They’d rather maintain the long list of concrete commands that they can see do what they expect than try to do anything with a loop.

  15. Eric, there is another axis that attracts people to a language and helps it stick for a while — being fun to program in. Ruby and Python have capitalised on this the most, and now Go to some extent. What’s your take on this?

    On a lark, I measured what I call the “shift ratio” of various languages — the fraction of characters in a program for which I have to press the shift key (for a given keyboard, US in my case). The more shifts, the more painful it is to type. Nim, Python and Go come in around the mid-thirties, and Perl/Rust/C++ in the mid-forties. I personally seem to have a lot more fun with languages that have a low shift ratio!
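    A rough Go sketch of the measurement, assuming a US keyboard (the symbol set and the decision to skip whitespace are my own choices):

        package main

        import (
            "fmt"
            "os"
            "strings"
            "unicode"
        )

        // characters that need shift on a US layout, besides uppercase letters
        const shiftSymbols = "~!@#$%^&*()_+{}|:\"<>?"

        func main() {
            src, err := os.ReadFile(os.Args[1]) // pass a source file as the argument
            if err != nil {
                panic(err)
            }
            var shifted, total int
            for _, r := range string(src) {
                if unicode.IsSpace(r) {
                    continue
                }
                total++
                if unicode.IsUpper(r) || strings.ContainsRune(shiftSymbols, r) {
                    shifted++
                }
            }
            fmt.Printf("shift ratio: %.1f%%\n", 100*float64(shifted)/float64(total))
        }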

    1. That sounds like typing a list of short first names – Lee, Ash, Don, Rob, Ann. All begin with a shift yet that list is quickly typed in. I really hope programmers spend more time thinking than trying to break the speed record of the old-timey touch-typist competitions.

    2. >What’s your take on this?

      Interesting. I think I get what you’re driving at – sort of a combination of expressiveness and low process friction. An expressive language that’s hard to write isn’t fun; an easy language that can’t do much isn’t fun either.

      Yes, Python makes the fun-meter jump higher than anything else I know. For limited domains Emacs Lisp is almost as fun, but in general Lisp’s fun-ness is limited by weak libraries. Perl is very fun provided you are writing less than 20 lines of it.

      Yeah, Go is…hm. Not usually as fun as Python. Much more fun than C. Go becomes huge fun on a problem CSP is a good fit for because the channel/goroutine combination is so expressive.
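      For a flavor of what that looks like, a minimal sketch – a tiny fan-out/fan-in pipeline where the concurrency plumbing is nearly the whole program:

          package main

          import (
              "fmt"
              "sync"
          )

          func main() {
              jobs, results := make(chan int), make(chan int)
              var wg sync.WaitGroup

              for w := 0; w < 3; w++ { // fan out: three squaring workers
                  wg.Add(1)
                  go func() {
                      defer wg.Done()
                      for n := range jobs {
                          results <- n * n
                      }
                  }()
              }
              go func() { wg.Wait(); close(results) }() // fan in: close when all workers finish

              go func() {
                  for n := 1; n <= 5; n++ {
                      jobs <- n
                  }
                  close(jobs)
              }()

              for r := range results {
                  fmt.Println(r)
              }
          }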

      >I personally seem to have a lot more fun with languages that have a low shift ratio!

      Yup. I don’t think it’s lower typing effort that drives this, but rather lower parsing effort.

      1. >Yup. I don’t think it’s lower typing effort that drives this, but rather lower parsing effort.

        I interpret this as “parsing effort on the part of a human” — a stream with relatively more words per line perhaps makes the language more readable. It isn’t the typing effort.

  16. > Whole-systems engineering, when you get good at it, goes beyond being entirely or even mostly about technical optimizations. Every artifact we make is situated in a context of human action that widens out to the economics of its use, the sociology of its users, and the entirety of what Austrian economists call “praxeology”, the science of purposeful human behavior in its widest scope.

    So, when are you planning on writing the Summa Praxeologica? :)

    In all seriousness, I would be interested to see a more consolidated view of your ideas on systems engineering (though I have found CatB useful in this regard) or at least an Appendix N of the sources you found most influential.

  17. It is interesting to me that, on the same day this post aired talking about the possibility that we are nigh approaching a replacement for C (the bedrock of our computing infrastructure), Uncle Bob was prognosticating that we are on the cusp of a plateau in programming language evolution (http://blog.cleancoder.com/uncle-bob/2017/11/18/OnThePlateau.html).

    His argument is essentially that:
    1) Hardware processing power has evolved enough that the dynamic and expressive languages we have are cost-effective for most applications
    2) We are fast approaching the physical limits of processor core density, and at those limits a complete re-envisioning of our concurrency models is not needed (i.e. most people do not, and will not, have to program against 1024 cores).

    His conclusion is that software engineering will stabilize, and the focus will shift toward more professional practice using a relatively small set of “good-enough” languages.

    On the one hand, this seems appealing to me, because even though I am fairly young and I enjoy learning new things, it also means that my knowledge will have a longer half-life.

    At the same time, it seems to me that there are obvious improvements in language design to help us better manage complexity, and I wouldn’t want us to stop just yet.

    1. In my world C faded away in 94 or so; C++ ruled from then on. I never transitioned to C# or Java in any serious way just because the stuff I was doing didn’t particularly fit.

      I think a change in perspective can be illuminating. A modern CPU core spends about 50% of its cycles waiting for the outside world. The win from making it more efficient reduces accordingly. It doesn’t matter much how much effort you put into JIT if the algorithm being expressed in the programmer friendly language chases accesses all over the working set. The same obviously applies to harder core languages.
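      A quick sketch of that effect in Go (the sizes are arbitrary; the arithmetic in both loops is identical, only the access pattern differs):

          package main

          import (
              "fmt"
              "math/rand"
              "time"
          )

          func main() {
              const n = 1 << 22 // ~32 MB of int64s, comfortably bigger than cache
              data := make([]int64, n)
              next := rand.Perm(n) // a random walk over the working set
              for i := range data {
                  data[i] = int64(i)
              }

              start := time.Now()
              var sum int64
              for i := 0; i < n; i++ {
                  sum += data[i] // sequential: the prefetcher hides memory latency
              }
              fmt.Println(sum, "sequential:", time.Since(start))

              start = time.Now()
              sum = 0
              for i, j := 0, 0; i < n; i++ {
                  sum += data[j] // each access depends on the last: the core stalls on misses
                  j = next[j]
              }
              fmt.Println(sum, "pointer-chasing:", time.Since(start))
          }

      The second loop typically runs several times slower on commodity hardware, no matter how clever the compiler is.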

      I think Rust and Go are simply side-shows. Anything built on LLVM or the GNU back end is constrained in the limit by that back end. The next big thing will be AI compilers that take high-level descriptions, interactively clarify the ambiguity, and then generate optimum low-level code.

      And in the limit the performance matters, every wasted cycle is wasted heat, and that has a quantifiable cost. In the future PHP, assuming nobody with taste is able to kill it beforehand, simply won’t be affordable.

      Paul

      1. I think that Rust is different from the rest for the time being, until a different programming paradigm arrives on the scene. It is not just a performance play. It is performance + certainty. By taking control over aliasing, it attacks several problems at the same time: memory management without runtime overhead, uncertainty over dangling pointers, and confident mutation of shared data structures in a concurrent setting without low-level data races. Now, one can legitimately argue that these are not problems in most application or framework code, just as the non-typed people say about type systems (“that’s not where the real problems are”).

        >assuming nobody with taste is able to kill it beforehand
        haha

    2. From Uncle Bob
      “If Moore’s law was the driver of our language evolution, what will drive it now?”

      You already see two movements:
      1) Special purpose hardware can speed up work by orders of magnitude. See GPUs and Google’s TPU

      2) More efficient software, e.g., optimizing compilers to drive special purpose hardware and their associated languages (tensorflow?)

      It could even be that C will have a comeback to extract maximum performance from hardware.

  18. This crossed my radar this morning. You might be interested to learn that Google is using Go in an embedded project that aims to replace x86 firmware. Seems to me that’s an area that would previously be considered the province of C and assembly.

    1. Better Go than Rust.

      You don’t want Rust in your hardware, you want it to Go.

  19. I keep looking for Ruby to make an appearance in these discussions since it usually gets mentioned in the same breath as Python and Go, but then nothing. Crickets…

    How does the community think Ruby fits into this discussion?

    1. >I keep looking for Ruby to make an appearance in these discussions since it usually gets mentioned in the same breath as Python and Go, but then nothing. Crickets…

      >How does the community think Ruby fits into this discussion?

      I suspect that the people who actually use Ruby and the people who comment at this blog are disjoint sets (though I could be wrong). Hence no one has enough knowledge to comment. From my limited knowledge of Ruby I’d say it’s become a niche version of Python and that it can be treated as such.

      1. >From my limited knowledge of Ruby I’d say it’s become a niche version of Python and that it can be treated as such.

        Pretty much. Same use cases, similar features, similar performance.

        1. And yet I, personally, find writing Ruby code to be “fun” (as discussed above) and Python to be…well…not.

          I suspect, but do not want to formally justify at this point, that (again, for me) lispyness and “fun to use” are quite highly correlated. Ruby is way more lispy than Python.

          From what little I’ve seen of it, Rust would be about as non-fun for me as it is possible for a language to be.

          1. >I suspect, but do not want to formally justify at this point, that (again, for me) lispyness and “fun to use” are quite highly correlated. Ruby is way more lispy than Python.

            Maybe not. Have you seen Peter Norvig’s Python for Lisp Programmers? Python is pretty Lispy if you ignore the surface syntax.

            1. On HackerNews, Norvig also commented “Peter Norvig here. I came to Python not because I thought it was a better/acceptable/pragmatic Lisp, but because it was better pseudocode.” [[https://news.ycombinator.com/item?id=1803815]]

              Moreover, the preset recursion limit of 1000 that Python defaults to is significantly anti-Lisp. Yes, you can change the limit, but the fact that recursion is set to break at all runs against the Lisp grain. Broken recursion is far from surface syntax – that’s core.

              Guido is on record as saying his inspiration was ABC–a Pascal kind of language that he worked on before Python; whereas, Matz is on record as saying he was going for a Lisp that had the object system of Smalltalk and the scripting power of Perl. [[http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/179642]]

            2. Had not seen that… will take a look.

              I’m not comfortable calling something “lispy” unless it implements anonymous functions way better than Python’s implementation, though.

              1. >I’m not comfortable calling something “lispy” unless it implements anonymous functions way better than Python’s implementation, though.

                A fair point.

        2. I primarily develop in Ruby. In the commercial web dev world, it (together with Rails) is the language of “get shit done”. Mostly by gluing together shit done by people before you.

          Ruby’s not a high performance language, it doesn’t have nearly as much math and science stuff as Python, many developers scoff at it, yada yada yada. But it’s very expressive, readable, has a huge amount of productivity-oriented libraries and tooling; just about any function you’d expect on a website you can install off-the-shelf. Rails is heavyweight, sure, but it’s a “batteries-included” framework, and for rapid prototyping or “test the waters” development it’s great. It also built a community of 3rd-party tools and services to ramp up sites fast, such as build and production deploy services.

          Once traffic or data volume grows, unfortunately, a slow interpreter, an almost non-existent concurrency story, and weak support for async/stream IO drive web devs to more modern languages like Node or Go.

          But for “we don’t even have a single customer, we need to get this out there pronto on a shoestring budget, but it also needs to last for a few years afterwards as we grow without becoming a dumpster fire of technical debt”, I find Ruby/Rails second to none. The second clause, notably, excludes PHP.

          1. >But for “we don’t even have a single customer, we need to get this out there pronto on a shoestring budget,

            For this part you’re right.

            >but it also needs to last for a few years afterwards as we grow without becoming a dumpster fire of technical debt”, I find Ruby/Rails second to none.

            This part? I haven’t seen a Fails app yet that didn’t rapidly converge on “dumpster fire of technical debt”.

            1. Where I’m working now, “technical debt” is code for “it’s not SQL development, we don’t want to do it”.

              Oh, and “Sorry–I was multitasking” is code for “I don’t think what you’re saying is important, so I was doing something else”.

            2. How much of that is selection bias? As a very easy language to get something started in, it also attracts less-experienced people who otherwise might not be developing at all.

              Ruby’s extreme flexibility and composability makes it actually pretty easy to keep modularized, well-factored, and well-tested. Unlike PHP, the language and frameworks do provide good patterns to do so. Of course, the developers still need to use them (and the same people who complain about Rails being “too opinionated” are the ones who make a total mess when Rails doesn’t stop them).

  20. One interesting thing that no one has mentioned is the language used by malware. Malware is a significant sector of software development (of IT in general in fact since there is a whole infrastructure around it).

    I mention this mostly because how the good guys go about figuring out the malware (e.g. see http://blog.ret2.io/2017/11/16/dangers-of-the-decompiler/ ) is heavily dependent on the language and compiler used. Right now a lot of malware is either in a scripting language (javascript, powershell…) or C/C++ with occasional jumps into actual assembler to obtain specific results. It will be very telling to find successful malware written in a more modern language like Rust or Go – I suspect Go will be used first because it seems to be more popular at present – and it would be utterly fascinating to find malware that used something more esoteric.

  21. @Doctor Locketopus: There’s really nothing to be surprised by there, IMO.

    I never said I was surprised. I’m not. I said I was amused.

    Yes, there has been enormous effort invested in making JavaScript fast. After the interpreter was tuned as well as anyone could, the next step was JIT compilation to native machine code. I watched Mozilla get bitten by that in Firefox at one point. They were doing JIT compilation, and yes, that code was faster. But overall performance still had issues. The blocker was that there were cases where it was faster overall to just interpret the code, instead of adding the overhead of compilation, but the JS engine they were using wasn’t smart enough about which things should just be interpreted instead of compiled. They created a newer JS engine and things got better.

    I’m seeing wrappers like TypeScript and CoffeeScript that compile to standard JS, and exist to add things like static typing to the language. (MS created TypeScript, and uses it in their Visual Studio Code IDE, where TypeScript is a supported language.)

    I hardly expect JavaScript to go away. It’s in use in too many places, and embedded in too many things, and has a very large base of developers who write in it. It’s a mainstream language.

    >Dennis

    1. The question I would have is would any language targeted at that particular audience be *better*, or would it suck just as bad in mostly the same ways?

  22. I personally expect JavaScript (or some JS descendant, such as asm.js or WebAssembly) to be the back end for essentially *everything* within 20 years.

    Yeah, it has its design issues, but ubiquity can cover a multitude of sins.

    Remember the x86 segment registers, and the other baboon butt ugly misfeatures of the early Intel chips? They took over the world nonetheless. Even Apple gave in eventually.

  23. @William O. B’Livion: would any language targeted at that particular audience be *better*

    Who do you consider “that particular audience” to be?
    >Dennis

    1. People who want to build anything from interactive/dynamic webpages to browser based applications.

      1. About what I thought. The problem is, that’s a subset of the total JS market, and JS is appearing in all manner of places that are neither.

        I’ve been skimming the ECMAScript 2015 specs, and one of the goals for that revision was to better position JS to be the target for compilation from other languages, with the JS being what eventually gets translated to native machine code.

        What you might use instead of JS given the variety of things it is used for now is a fascinating can of worms I’ll pass on opening. (But I’d expect whatever it was to have warts of its own.)
        >Dennis

        1. When you take a reasonably good tool like a crescent wrench and use it for a hammer, or use a screwdriver as a prybar then you get what you deserve.

  24. @Doctor Locktopus: Remember the x86 segment registers, and the other baboon butt ugly misfeatures of the early Intel chips? They took over the world nonetheless. Even Apple gave in eventually.

    My memory goes back to the Elder Days. A chap I knew online in the days when “online” usually meant “calling a PC-based BBS with a dial-up modem” posted about the issue. He wrote device drivers, and was trying to explain Intel segmented architecture to folks coming from a DEC LSI-11 environment. He described the look of horrified wonder when they understood what he was saying and tried to wrap their minds around why Intel went that way.

    The ultimate drivers in decisions like that weren’t technical, they were financial. The 8088 used in the original IBM PC was “design to cost”, and got the nod in part because IBM already used the 8086 in things like the Displaywriter dedicated word processor, and a tool chain and understanding of the architecture existed. The MC680X0 architecture might have been better in a technical sense, but would likely have cost a lot more.

    When we got the 80386 where a segment was 4GB, many barriers fell. There are still outliers with niche markets, but these days the choice of architecture tends to be Intel or ARM.

    I wouldn’t be surprised if you were right about some variant of JS becoming the back end for everything down the road. It has design issues, but what doesn’t? Being “batteries not included” likely helps. It has the underlying primitives, and various things now are syntactic-sugar wrappers intended to compile to a JS subset that avoids the worst of the potential problems.

    So we may reach a state where $LANGUAGE compiles to JS, and that gets compiled to machine code.

    >Dennis

    1. >My memory goes back to the Elder Days. A chap I knew online in the days when “online” usually meant “calling a PC-based BBS with a dial-up modem” posted about the issue. He wrote device drivers, and was trying to explain Intel segmented architecture to folks coming from a DEC LSI-11 environment. He described the look of horrified wonder when they understood what he was saying and tried to wrap their minds around why Intel went that way.

      I don’t remember the Elder Days, and frankly, I don’t get the visceral horror that 8086 segmentation seems to inspire in most people (even though I didn’t know much of anything about instruction sets or MMUs before I started using Linux, which would tend to predict that I would be well indoctrinated in the way of Unix, and would shudder at the thought of anything that wasn’t a flat address space).

      Any system is going to need to resort to ugly hacks if programs have to deal with more data than will fit in the directly available address space, and 8086 segmentation was relatively elegant for that purpose: witness the various 6502/8080/z80 based home micros with arbitrary bank switching schemes implemented in the motherboard chipset, or the EMS kludge developed for the PC once a 20-bit segmented address space ceased to be enough. And it was even helpful for systems with 64k or less of RAM: You needed to relink AppleDOS if you upgraded the memory on your Apple II or you wouldn’t be able to take advantage of the extra RAM, and a copy of AppleDOS built for a bigger machine wouldn’t work at all on a smaller one. With the PC architecture, on the other hand, DOS just sat at the low end of memory and loaded programs into the lowest free segment. DOS didn’t need to be relinked for different memory sizes, and programs that used a tiny or small memory model didn’t need to know their load address or be relocated if the size of DOS changed or the user had an extra TSR loaded.

      And really, the PDP-11 MMU wasn’t all that different from the 8086/286. The big difference from the x86 in general was that the PDP-11 had 8 segment registers mapping the address space at once, rather than one per access (sure, they were called “Page Address Registers”, but it was a segmented, not a paged, memory model). The big difference from the 8086 specifically was that the PDP-11 had memory protection features, which the 286 also had, though implemented differently. But ignoring the features controlled by the PDRs, the view of the address space that a kernel programmer had on the PDP-11 was not all that different from that seen in x86 real mode (you plug a 16-bit segment number into a PAR/segment register, it gets shifted by 6 bits / 4 bits, and the result gets added to all memory accesses to the (section of the) 16-bit address space corresponding to the PAR/segment register in question). I’ll note that the “small model” corresponds directly to PDP-11 “Separate I and D space”. Now, your friend was likely speaking to application programmers, who would likely be working in user-mode and neatly insulated from the details of the PDP-11 MMU, but I know that there were PDP-11 operating systems that provided overlay mechanisms for application programmers who needed more memory. Specifically, while poking around the web to refresh my memory on the details of the PDP-11 MMU, I ran across this description of a Modula-2 implementation that seems to have done incestuous things with RT-11 in order to implement something that would correspond to the Intel medium/large/huge models for Modula application programmers.

      1. I’m with you.

        But then again, having implemented my own banking schemes (including developing external hardware and writing my own linkers) for Z80s and later for 16 bit DSPs, (a) I understood the problem domain implicitly; and (b) it was a breath of fresh air to have all the banking hardware wrapped up in the CPU, and have multiple competing toolchains directly support the standardized banking system.

        There’s no question that flat is easier to work with, and Intel/AMD eventually got to flat, and managed to do that with a massive amount of backward-compatibility.

      2. > I don’t get the visceral horror that 8086 segmentation seems to inspire in most people

        For me it just seemed repellent on a very deep and fundamental level that more than one segmented address could map to the same physical address. That may not be such a chore with a modern debugger, or if you have a compiler to handle the bookkeeping, but when you had to calculate those bastards by hand while writing asm it was no fun, believe me.
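        For the record, the arithmetic itself is trivial – which is part of what made the by-hand bookkeeping so maddening (a minimal Go sketch):

            package main

            import "fmt"

            // real-mode 8086: physical address = segment*16 + offset
            func physical(segment, offset uint16) uint32 {
                return uint32(segment)<<4 + uint32(offset)
            }

            func main() {
                // three different segment:offset pairs, one physical byte (0x12345)
                fmt.Printf("%05X\n", physical(0x1234, 0x0005))
                fmt.Printf("%05X\n", physical(0x1230, 0x0045))
                fmt.Printf("%05X\n", physical(0x1000, 0x2345))
            }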

        1. But that actually had some utility — you didn’t need to waste bytes simply because they were at the end of a segment.

    1. >If praxeology is about using purely logical deduction to predict what people will do, how does that square with the Surprise Exam paradox

      It is always possible for one’s premises to be wrong.

      1. Eric,

        Arguably, the premises in the Unexpected Hanging Paradox are all true; it’s actually the deductive machinery that’s at fault. That’s what makes it an interesting paradox.

        Given that fact, how do praxeologists like you know if you can even *use* deduction given your premises? Do you actually have a formal mathematical model? Because the paradox above suggests that if you don’t have a foundation in formal logic, then your conclusions might actually be wrong. And let’s say you don’t have such a model, do you have any ways of experimentally proving that your theories work better than competing theories?

        1. >Given that fact, how do praxeologists like you know if you can even *use* deduction given your premises?

          The same way anyone else does in any domain at all. By observing the predictive consequences of doing so.

          1. You mean data that proves your theory right and other theories wrong. Where’s that data and the surrounding argument?

            1. >You mean data that proves your theory right and other theories wrong. Where’s that data and the surrounding argument?

              If you get more specific about what theory you mean, I might be able to answer.

              1. I’ve heard about praxeology in the context of Austrian economics. Where’s the proof of Austrian economics?

                1. There is no proof of Austrian economics, because Austrian economics is anti-empirical by design.

                  In fact “praxeology” is Austrian-school speak for “theories about human behavior which cannot be tested by experiment, so don’t bother trying”. Because it would kind of undermine the axioms of free-market economics to admit the empirical evidence that humans routinely engage in anti-rational behavior.

                2. >I’ve heard about praxeology in the context of Austrian economics

                  Austrians use the term, but it’s not exclusive to them and predates them. If you think of it as the application of economic principles like supply-demand equilibrium to exchanges that aren’t monetized but have scarcity constraints, you’ll be on the right track.

                  Some Austrians think of praxeological rules as a priori, like mathematics. This, of course, is a mistake (it’s a mistake about mathematics, too). You confirm them the same way you confirm any other claim about human behavior, by checking to see where and when they predict it correctly.

  25. > When we got the 80386 where a segment was 4GB, many barriers fell.

    Yes. I certainly don’t miss having to deal with choosing a “memory model” at compilation time.

    > So we may reach a state where $LANGUAGE compiles to JS, and that gets compiled to machine code.

    That’s what my gut tells me. If you have a fast JS*-to-machine code compiler/interpreter/JITer (which, to a first approximation, everyone does, with vast amounts of paid resources going to making those still better) the temptation for the designer of a new language to just punt and target JS* code for compiler output is going to be very high. Most of the non-fun, messy bits of bringing up a new language have to do with getting the back-end running on different machines. Avoiding that altogether is a powerful incentive.

    (and yeah, you have other things developing along that line, such as LLVM, but I don’t think any of them are getting the amount of sustained, paid effort that’s being poured into optimizing JS* stuff)

    * Where “JS” stands for “some JS-derived technology stack”
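    To make the “just punt and target JS” idea concrete, here is a toy back end reduced to string emission (a Go sketch; the AST types are invented for the example):

        package main

        import "fmt"

        // Expr is a toy AST node that knows how to emit JavaScript source.
        type Expr interface{ emitJS() string }

        type Num struct{ val int }
        type Add struct{ left, right Expr }

        func (n Num) emitJS() string { return fmt.Sprintf("%d", n.val) }
        func (a Add) emitJS() string {
            return "(" + a.left.emitJS() + " + " + a.right.emitJS() + ")"
        }

        func main() {
            // "compile" 1 + (2 + 3): the hard optimization work is left
            // to the JS engine that eventually runs the output
            ast := Add{Num{1}, Add{Num{2}, Num{3}}}
            fmt.Println(ast.emitJS()) // (1 + (2 + 3))
        }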

  26. @Doctor Locktopus: Yes. I certainly don’t miss having to deal with choosing a “memory model” at compilation time.

    I remember at least six, and may be blanking on a few. I doubt you miss the hoops you had to jump through when code and/or data needed to reside outside the 64K segment, either…

    > So we may reach a state where $LANGUAGE compiles to JS, and that gets compiled to machine code.

    >That’s what my gut tells me. If you have a fast JS*-to-machine code compiler/interpreter/JITer (which, to a first approximation, everyone does, with vast amounts of paid resources going to making those still better) the temptation for the designer of a new language to just punt and target JS* code for compiler output is going to be very high.

    Again, I suspect the drivers will be financial, not technical. If you can just compile to JS instead of outputting machine code, solutions exist to turn JS into machine code. Your costs and time-to-market drop substantially.

    >Most of the non-fun, messy bits of bringing up a new language have to do with getting the back-end running on different machines. Avoiding that altogether is a powerful incentive.

    *cough* GCC *cough*

    The good part about GCC was separating the front end that parsed the source from the back end that turned it into machine code. The bad part was successfully creating and maintaining the back end for $TARGET. How many folks working on GCC knew enough about the GCC code and the internals of the Intel architecture to upgrade the GCC back end to emit optimized machine code for newer generations of Intel chips that added new instructions?

    And despite the popularity of having an open source compiler, there are still valid reasons for folks developing for Windows on Intel architecture to use Microsoft or Intel compilers, because those back ends get more attention and may generate better code. Engineers get paid to pay that attention.

    For that matter, there appear to be plenty of tool chains in the embedded space targeting things like micro-controllers that use a compiler supplied by the vendor of the part. There might not be a GCC back end for those targets. The tool chain may be free, but not open source. If you’re a developer writing code for those devices, you don’t care. You use the tool that works.

    >(and yeah, you have other things developing along that line, such as LLVM, but I don’t think any of them are getting the amount of sustained, paid effort that’s being poured into optimizing JS* stuff)

    Agreed, with the keyword being paid. Who pays engineers to hack on LLVM?

    Ultimately, I’m struck by a feeling of “What’s old is new again.”

    I still have my first Unix machine – an AT&T 3B1. It came with the cc C compiler. cc compiled to assembly, which was assembled by as and linked by ld into a running binary. You could interrupt the process at the assembler stage, and go in and hand-optimize the assembly code before assembling and linking. C compilers that compiled directly to machine code were the next step.

    Now we’re seeing something like that, but with JS instead of Assembler as the intermediate language, and I suspect there’s a fair bit of work being done on tools to optimize the JS before turning that into native code.

    I think your gut feeling about where this will end up is spot on.

    >Dennis

    1. >The good part about GCC was separating the front end that parsed the source from the back end that turned it into machine code.

      The ‘portable C compiler’ from 1978ish up till the full gcc takeover in the early 90s, with the arrival of Linux and the various 386 BSDs, was logically structured into a front and back end, followed by an assembler. The portable F77 compiler from Bell Labs apparently could drive the second passes of either pcc or the original cc.

      >And despite the popularity of having an open source compiler, there are still valid reasons for folks developing for Windows on Intel architecture to use Microsoft or Intel compilers, because those back ends get more attention and may generate better code. Engineers get paid to pay that attention.

      Interestingly, Microsoft supports using a clang front end with an MS back end. Which implies some quite interesting stuff about the meeting point.

      >Agreed, with the keyword being paid. Who pays engineers to hack on LLVM?

      Google, for example. And there are some really interesting side projects from clang, which go some way towards voiding the need for a replacement for ‘C’, IMO, especially when you already have an investment.

      There’s probably a whole other article for Eric here, but ISTM that ‘Open Source’ is moving away from the noble-savage group of hackers against the suits, and into massive corporations putting huge resources into software that is now ‘Open Source’. A good example might be the editor in Matt Godbolt’s ‘Compiler Explorer’ (which is an awesome resource, for D, Rust, Go, ispc, Haskell, Swift and Pascal as well as the important one….); anyway, the editor is Monaco from Microsoft. Open source JavaScript. From Microsoft.

      Paul

      1. >‘Open Source’ is moving away from the noble-savage group of hackers against the suits, and into massive corporations putting huge resources into software that is now ‘Open Source’.

        I think a rigorous study would show that, in one sense, corporations have always borne the brunt of the cost of open source development. Sure, a lot of development has been done, and is still done, on what is the employees’ “own” time, but by creative, productive, engaged employees who were taking work home anyway. Figuring out how to structure the work so that they were helping others besides their direct employer (and, simultaneously and not coincidentally, increasing their own value to, not only their current employer, but also potential future employers) is certainly something that more and more employees are doing, and that doesn’t need to be hidden from the suits in nearly as many environs as before.

        One big story, of course, is the learning curve of the corporations that “coopetition” could be a win, even if it involved collaboration on (and giving away) things where the corporation has core in-house world-class competency.

        It may be that the GPL helped here, by convincing employees that at least some of their efforts could help the rest of the world (and be usable by them in future endeavors away from their employers), and by showing employers that employee effort spent on collaborative code could, in some cases, be significantly multiplied by a wider community.

        But the lesson is ingrained now. Whether the GPL helped or hurt, it was, at best, a set of training wheels about a collaboration mindset, and a lot of (most?) projects don’t need these any more, and they can actively hinder some development and investment. The problem with these training wheels is they are very difficult to take off, by design. And, of course, the safety conscious crowd thinks everybody should have them, and it’s unethical to ride a bike without them. Oh, you can’t whizz around the corners with them on? You shouldn’t be doing that anyway; it’s very dangerous.

        1. I would guess that the majority of stuff I use nowadays (other than a few very basic tools like compilers, etc.) is either MIT or Apache licensed, rather than GPL. Some of it is dual-licensed.

  27. >Agreed, with the keyword being paid. Who pays engineers to hack on LLVM?

    Apple. Only the most powerful and successful technology company in the world.

    1. Hey, a helicopter from IBM just landed out back, and there’s a guy in a navy blue Spectra suit who would like a…word with you.

      Popular is not powerful. Popular just means a large market cap as long as you can *stay* popular.

      Apple sells cellphones and lapdogs. IBM sells hardware AND software AND services. And it no longer cares about “popular”.

      I’ve been in places like *coughcough*, and while the little hipster bois like their sticker-covered macbooks[1] when they’re sitting on the couch in the office “hacking” something or other, you won’t find an Apple-branded product in the building. No, not even iPads, lapdogs or cellphones. Because you won’t find *any* cellphones in the building. Or iPads. And damn few laptops. You will find IBM. Likely an IBM *employee* too.

      Because services go on after the money from the hardware sale has been spent.

      Right now the two most powerful technical companies are Amazon and Google. Mostly because Google is reading about half the mail on the internet. I’d put IBM right behind them, simply because they power the sorts of banks that won’t talk to the likes of me and you, and they have all sorts of contracts with the government.

      This isn’t to knock Apple in the technology department, they’re *good* at UI. Or they were. Now they’re sort of ok at it. Better than most, but that’s like being the fast runner at the Special Olympics. But “powerful”? No. not in terms of real power.

      That said, IBM has been interested in/working on LLVM since 2013.

      [1] If AAPL hadn’t gone FULL ON STUPID in the SJW department I’d be buying a macbook soon. Hell, I’m *still* lusting after one. They’re REALLY nice lapdogs. But it’ll be a long, long time before AAPL sees a dime from me.

      1. > Apple sells cellphones and lapdogs. IBM sells hardware AND software AND services. And it no longer cares about “popular”.

        And Apple has a market cap approaching one trillion dollars (with a “t”), while IBM has a market cap of a measly $140 billion.

  28. @anon2: I’ve heard about praxeology in the context of Austrian economics. Where’s the proof of Austrian economics?

    Ever read any work by Austrian economists? (I suspect not, or you wouldn’t be asking the question. Economists of any strain take pains to prove their notions.)

    Ludwig von Mises “Human Action” is a good place to start: https://mises.org/library/human-action-0

    >Dennis

  29. @Jeff Read: Because it would kind of undermine the axioms of free-market economics to admit the empirical evidence that humans routinely engage in anti-rational behavior.

    Er, why? Markets are composed of actors making economic decisions. Whether those decisions are “rational” is irrelevant. The market still exists.

    “Free market” normally means “actors are free to make economic decisions and carry out transactions, without a central authority trying to control them.” It may be a mistake, but the actor is free to make it.

    I’m not aware of anything about free markets that assumes or requires economic decisions to be rational.

    >Dennis

    1. “I’m not aware of anything about free markets that assumes or requires economic decisions to be rational.”

      s/economic decisions/individual economic decisions/

      Individual actors may be irrational, but the market as a whole will behave rationally.

      1. >Individual actors may be irrational, but the market as a whole will behave rationally.

        Not quite true. The market will reward rationality and select for rationality, but that’s not a guarantee that any subset of investors will behave rationally at any given time. At best it tells us to expect the length and severity of irrational excursions to be sharply bounded.

        See also “tulip mania”.

  30. @esr: >Speaking of which, do you have a public opinion on Bitcoin?

    I do not. Haven’t done the research.

    I’ve been skimming in the background. Bitcoin is the tip of the iceberg called blockchain, and that looks like it will have all manner of uses.

    Bitcoin is a cryptocurrency, like Litecoin or Ethereum. It’s misunderstood, because of a mistaken notion that it allows anonymity. In fact, one of the things that makes Bitcoin of interest is that it establishes guaranteed transactions between known actors. If you want to pay for something anonymously, you need to launder your transaction through a third party.

    Bitcoin is getting interest because it can eliminate the need for intermediaries like banks to conduct transactions.

    And a point that seems to escape most commentators is that Bitcoin, by nature, is a finite resource. Bitcoin can be produced by mining, but the rate of increase is dead slow.

    Currency has historically derived value from scarcity. The currency itself might have been paper, but the paper represented an amount of an underlying precious material, like gold or silver, and you could get the specified amount of the underlying material in exchange for your paper. When currency shifted to fiat currency, not tied to a scarce and finite resource, a variety of undesirable side effects came along for the ride. Bitcoin restores scarcity as a component of value.

    The question with Bitcoin is whether it will gain enough mindshare. How do you accumulate it? How do you pay for things with it? Will other actors accept it as payment? If not, can you convert Bitcoin to a currency they will accept? If you do, what will the exchange rate be?

    The answers to all of those questions are becoming known, and Bitcoin is already a viable currency in a lot of places. The next question is whether it will become popular enough to displace traditional currencies entirely. Right now, you probably can’t use just Bitcoin to buy all the goods and services you need. Down the road, you may be able to.

    >Dennis

    1. From the social consequences point of view, people who criticize Bitcoin bring up things like money laundering or funding crime, but I don’t think that’s the biggest issue (in fact, not an issue at all, someone with money will always find a way to pay for what they want).

      The real issue, the elephant in the room, is that Bitcoin is deflationary. There is an asymptotically capped amount of it, and as the human economy grows, the value of a given quantity of Bitcoin increases relative to the total size of goods and services that can interact with it.
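      Concretely, the cap falls out of the published issuance schedule – an initial subsidy of 50 BTC per block, halved every 210,000 blocks, with amounts tracked in integer satoshis (a back-of-the-envelope Go sketch):

          package main

          import "fmt"

          func main() {
              const satoshisPerBTC = 100_000_000
              subsidy := int64(50 * satoshisPerBTC)
              var total int64
              for subsidy > 0 {
                  total += 210_000 * subsidy
                  subsidy /= 2 // the halving; integer division eventually reaches zero
              }
              fmt.Printf("total supply: %.8f BTC\n", float64(total)/satoshisPerBTC)
              // prints just under 21,000,000 – issuance tapers off asymptotically
          }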

      I’m not talking about tactical issues like transaction costs or the inconvenience of dealing with increasingly small fractions. I’m talking about the fundamental problem of deflationary currencies – negative spending stimulus, aka hoarding. If Bitcoin (or a similar deflationary currency) becomes the currency of choice, people will hoard Bitcoin, knowing that its value will keep increasing. This will send the economy into a deflationary spiral, reducing spending for both the poor and the rich (the rich will hoard, and the poor will have no Bitcoin or opportunity to gain it since it’s being hoarded). The secondary goal of inflationary currencies (beyond spending stimulus) is a de-facto income redistribution mechanism, which (as long as kept to reasonable levels) is much less resisted than direct taxation.

      Previous deflationary spirals during the Gold Standard era in the USA (late 19th century) sent economies into long recessions, and eventually were the primary reason why the Gold Standard was replaced with the Federal Reserve in the USA, and with similar fiat currencies elsewhere. There is plenty of criticism of fiat currency, some for good reason, but most economists (even of stances like the Austrian school) do not advocate a return to the Gold Standard, conceding that this approach is unworkable in an industrialized society, and that the consequences of a modern inflationary economy becoming backed by deflationary currency would be a disaster, leading to massive economic crises and social unrest.

      Do advocates of using Bitcoin as a national currency have an answer to, or even an acknowledgement of, the above concern? Or do they reject it as a concept, and actually claim that a return to an essentially gold-standard economy would be good for society?

      1. @Eugene
        “Bitcoin is deflationary”

        That is the way it was designed. Obviously it was designed by people who could not see past the current problem (inflation) to the long-term issue: the amount of money must match the production of the economy, and a growing economy must have a growing money supply.

        On the other hand, Bitcoin is a data structure and an algorithm. Both can be changed to fit the needs of the community. Hence all the forks currently going on.

        In the end, I think the money aspect of Bitcoin will be unimportant. It is the blockchain, a distributed ledger, that is already changing the financial world.
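
        For the curious, the core idea is small enough to sketch. Below is a toy hash-chained ledger in Rust; std’s DefaultHasher stands in for Bitcoin’s SHA-256 (it is not cryptographic, this is illustration only). The point is that each block commits to its predecessor, so rewriting history invalidates every later block:

          // Toy hash-chained ledger: each block commits to its predecessor,
          // so altering any past block breaks every later link.
          use std::collections::hash_map::DefaultHasher;
          use std::hash::{Hash, Hasher};

          struct Block {
              prev_hash: u64, // hash of the previous block
              data: String,   // stand-in for a batch of transactions
          }

          fn hash_block(b: &Block) -> u64 {
              let mut h = DefaultHasher::new();
              b.prev_hash.hash(&mut h);
              b.data.hash(&mut h);
              h.finish()
          }

          fn main() {
              let genesis = Block { prev_hash: 0, data: "genesis".into() };
              let next = Block {
                  prev_hash: hash_block(&genesis),
                  data: "Alice pays Bob".into(),
              };
              // Editing genesis.data would change hash_block(&genesis),
              // invalidating the link stored in next.prev_hash.
              println!("chain tip: {:x}", hash_block(&next));
          }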

        And the Proof of Work will change or go away, replaced by something more efficient. Having petaflops of hardware generating heat just to validate Bitcoin transactions is too expensive. The current energy consumption is about 294 kWh per transaction, roughly what a US household uses in a week. Worldwide, Bitcoin mining consumes enough electricity to power 2.7 million US homes. And it will only grow.

        https://digiconomist.net/bitcoin-energy-consumption
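
        A quick back-of-the-envelope check of that comparison, sketched in Rust: the 294 kWh/transaction figure is Digiconomist’s; the roughly 10,766 kWh/year average US household consumption is my own assumption, based on EIA figures:

          // Sanity check: transactions measured in household-weeks of electricity.
          fn main() {
              let kwh_per_tx = 294.0;                // Digiconomist estimate above
              let household_kwh_per_year = 10_766.0; // assumed US average (EIA)
              let household_kwh_per_week = household_kwh_per_year / 52.0; // ~207 kWh

              // Prints ~1.4, so "about a week" is the right order of magnitude.
              println!("one transaction ~= {:.1} household-weeks",
                       kwh_per_tx / household_kwh_per_week);
          }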

      2. “The secondary goal of inflationary currencies (beyond spending stimulus) is a de-facto income redistribution mechanism”

        Yes, from the poor to the rich (or more specifically from everyone else, mostly poorer, to already rich financial institutions).

        “which (as long as kept to reasonable levels) is much less resisted than direct taxation”

        Yes, that’s why the rich financial institutions prefer it.

    2. In fact, currency does not derive value from scarcity, so the hard limit on the number of Bitcoins has nothing to do with their viability as a currency. A commodity becomes a currency if, and only if, people use it as a medium of exchange, meaning that they accept it because they think other people will accept it later.

      The advantage of gold or silver standards over fiat money is that gold and silver are valuable for reasons other than their use as currency; that original market is an anchor, restricting the currency’s price and stabilizing the market system. Bitcoins lack this advantage; there’s nothing you can do with a Bitcoin except trade it to someone else, just like a US dollar bill. Thus Bitcoin prices are inherently volatile; they have no anchor in reality.

      I regard any cryptocurrency in which the tokens don’t represent a legal claim on property as a collective delusion, not a financial tool to depend on. Certainly one could build a currency system on the basis of blockchain technology, but a store of value would be needed as well, and Bitcoins don’t have anything backing them up.

      1. “I regard any cryptocurrency in which the tokens don’t represent a legal claim on property”

        The point of a cryptocurrency is that the possession of the tokens is protected by claims stronger than legal claims.

        1. That whooshing sound was the point going over your head.

          The problem isn’t proving who owns the tokens; it’s that the tokens themselves have no stable value, because they have no use and aren’t symbols of things that have a use. It’s a bit like the old joke about people stranded on a deserted island who made themselves rich by trading hats with each other.

            1. Or little pieces of paper with green ink on them.

              And no, I’m not a gold bug either.

  31. IMHO, the biggest challenge for Bitcoin et al. is that there are no trusted institutions for transactions or parking. Seemingly every day my news feed has a story about another Bitcoin exchange going dark, losing a bunch of coins, stealing a bunch of coins, etc. So it will have no wide credibility until banks (or something like a bank) are the ones handling it.

  32. @Michael
    “IMHO, the biggest challenge for Bitcoin, et al is that there are no trusted institutions for transactions or parking.”

    That is currently what is occupying the minds of the central banks. It is only a matter of time before one or more national banks start to regulate cryptocurrencies to achieve just that. Someone will work something out just by trying to become the new Bitcoin hub. My bet is on Russia and/or China.

  33. @Winter
    “That is currently what is occupying the minds of the central banks. It is only a matter of time before one or more national banks start to regulate cryptocurrencies”

    I frequently hear that the banks want in on the action. That would be a positive development for credibility, though I suspect a lot of the current Bitcoin advocates would not agree. I have doubts it could be done under onerous U.S. banking laws without considerable changes.

    My bet is on Russia and/or China.

    Not sure either of those would do much for comfort & credibility. But maybe the “me too” factor would spur others to action.

  34. “Not sure either of those would do much for comfort & credibility.”

    But that won’t stop them from trying. And we know the Swiss will step in too by that time.

  35. Worth noting that while Rust is a far-problem design if your goal is “successor to C”, the Rust developers were not trying to develop a successor to C. That’s a solved problem: C++. By the early 90s, inside Bell Labs, C++ was called C and C was called “old C”.

    The Rust developers are trying to come up with a worthy successor to C++. In that they have by and large succeeded, but if you are trying to make the leap from C to Rust, you are in for some trouble. In fact, when attempting to learn Rust, prior C knowledge can actively work against you.
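
    A tiny illustration (mine, not from any tutorial) of how C instincts misfire: in C, holding a pointer into a buffer while appending to it compiles without complaint, and may quietly dangle after a realloc. The equivalent Rust is rejected outright:

      // Deliberately does NOT compile: the borrow checker forbids
      // mutating `log` while `first` still borrows into it (error E0502).
      fn main() {
          let mut log = vec![1, 2, 3];
          let first = &log[0]; // shared borrow of the vector's contents
          log.push(4);         // ERROR: cannot borrow `log` as mutable
          println!("{}", first);
      }

    The C programmer’s instinct says this is obviously fine; the borrow checker’s point is that it is only fine until a reallocation moves the buffer out from under the pointer.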

    But that’s not as serious a concern as you might think, thanks to the dynamic Thomas Kuhn described: in ten to twenty years, the developers who are entrenched in C-land will die or retire anyway, and be replaced with newer, younger developers who only know how to code safely.

  36. What about the Ada programming language? Why doesn’t it get used more?
    Is the main reason its over-verbosity?
    If so, why doesn’t someone create a language with the same semantics as Ada, but much less verbose?
