Reposurgeon’s Excellent Journey and the Waning of Python

Time to make it public and official. The entire reposurgeon suite (not just repocutter and repomapper, which have already been ported) is changing implementation languages from Python to Go. Reposurgeon itself is about 50% translated, with pretty good unit-test coverage. Three of my collaborators on the project (Daniel Brooks, Eric Sunshine, and Edward Cree) have stepped up to help with code and reviews.

I’m posting about this because the pressures driving this move are by no means unique to the reposurgeon suite. Python, my favorite working language for twenty years, can no longer cut it at the scale at which I now need to operate – it can’t handle large enough working sets, and it’s crippled in a world of multi-CPU computers. I’m certain I’m not alone in seeing these problems; if I were, Google, which used to invest heavily in Python (they had Guido on staff there for a while), wouldn’t have funded Go.

Some of Python’s issues can be fixed. Some may be unfixable. I love Guido and the gang and I am vastly grateful for all the use and pleasure I have gotten out of Python, but, guys, this is a wake-up call. I don’t think you have a lot of time to get it together before Python gets left behind.

I’ll first describe the specific context of this port, then I’ll delve into the larger issues about Python, how it seems to be falling behind, and what can be done to remedy the situation.

The proximate cause of the move is that reposurgeon hit a performance wall on the GCC Subversion repository. 259K commits, bigger than anything else reposurgeon has seen by almost an order of magnitude; Emacs, the runner-up, was somewhere a bit north of 33K commits when I converted it.

The sheer size of the GCC repository brings the Python reposurgeon implementation to its knees. Test conversions take more than nine hours each, which is insupportable when you’re trying to troubleshoot possible bugs in what reposurgeon is doing with the metadata. I say “possible” because we’re in a zone where defining correct behavior is rather murky; it can be difficult to distinguish the effects of defects in reposurgeon from those of malformations in the metadata, especially around the scar tissue from CVS-to-SVN conversion and near particularly perverse sequences of branch copy operations.

I was seeing OOM crashes, too – on a machine with 64GB of RAM. Alex, I’ll take “How do you know you have a serious memory-pressure problem?” for $400, please. I was able to head these off by not running a browser during my tests, but that still told me the working set is so large that cache misses are a serious performance problem even on a PC design specifically optimized for low memory-access latency.

I had tried everything else. The semi-custom architecture of the Great Beast, designed for this job load, wasn’t enough. Nor were accelerated Python implementations like Cython (passable) or PyPy (pretty good). Julien Rivaud and I did a rather thorough job, back around 2013, of hunting down and squashing O(n^2) operations; that wasn’t good enough either. Evidence was mounting that Python is just too slow and fat for work on really large datasets made of actual objects.

That “actual objects” qualifier is important because there’s a substantial scientific-Python community working with very large numeric data sets. They can do this because their Python code is mostly a soft layer over C extensions that crunch streams of numbers at machine speed. When, on the other hand, you do reposurgeon-like things (lots of graph theory and text-bashing) you eventually come nose to nose with the fact that every object in Python has a pretty high fixed minimum overhead.

Try running this program:

from __future__ import print_function

import sys
print(sys.version)

# Build one instance of each basic built-in type, then report the
# fixed per-object overhead that sys.getsizeof() sees for each.
d = {
    "int": 0,
    "float": 0.0,
    "dict": dict(),
    "set": set(),
    "tuple": tuple(),
    "list": list(),
    "str": "",
    "unicode": u"",
    "object": object(),
}
for k, v in sorted(d.items()):
    print(k, sys.getsizeof(v))

Here’s what I get when I run it under the latest greatest Python 3 on my system:

3.6.6 (default, Sep 12 2018, 18:26:19) 
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
dict 240
float 24
int 24
list 64
object 16
set 224
str 49
tuple 48
unicode 49

There’s a price to be paid for all that dynamicity and duck-typing that the scientific-Python people have evaded by burying their hot loops in C extensions, and the 49-byte per-string overhead is just the beginning of it. The object() size in that table is actually misleadingly low; an object instance is a dictionary with its own hash table, not a nice tight C-like struct with fields at fixed offsets. Field lookup costs some serious time.

Those sizes may not look like a big deal, and they aren’t – not in glue scripts. But if you’re instantiating 359K objects containing actual data the overhead starts to pile up fast.
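
For contrast, and looking ahead to where this post is going, here is a minimal Go sketch of the same kind of measurement. The commitStub type is hypothetical, not one of reposurgeon’s actual data structures; the point is that a Go struct is a tight, C-like block of fields at fixed offsets:

package main

import (
    "fmt"
    "unsafe"
)

// commitStub is a hypothetical stand-in for a commit record.
type commitStub struct {
    mark    int64  // numeric identity
    comment string // log message
}

func main() {
    var c commitStub
    // 24 bytes on amd64: an 8-byte integer plus a 16-byte string header.
    fmt.Println(unsafe.Sizeof(c))
    // Field access compiles to a fixed offset; no hash lookup at run time.
}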

Alas, I can’t emulate the scientific-Python strategy. If you try to push complex graph-theory computations into C your life will become a defect-riddled hell, for reasons I’ve previously described as greenspunity. This is not something you want to do, ever, in a language without automatic memory management.

Trying to break the GCC conversion problem into manageable smaller pieces won’t work either. This is a suggestion I’m used to hearing from smart people when I explain the problem. To understand why this won’t work, think of a Subversion repository as an annotated graph in which the nodes are (mainly) things like commit representations and the main link type is “is a parent of”. A git repository is a graph like that too, but with different annotations tied to a different model of revisioning.

The job of reposurgeon is to mutate a Subversion-style graph into a git-style graph in a way that preserves parent relationships, node metadata, and some other relations I won’t go into just now. The reason you can’t partition the problem is that the ancestor relationships in these graphs have terrible locality. Revisions can have parents arbitrarily far back in the history, arbitrarily close to the zero point. There aren’t any natural cut points where you can partition the problem. This is why the Great Beast has to deal with huge datasets in memory all at once.
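
To make the shape of the problem concrete, here is a hedged sketch in Go of such a graph; the types and the walker are illustrative, not reposurgeon’s real code. What matters is that parent links can reach arbitrarily far back toward the root, so there is no revision number you can cut at without severing some chain:

package main

import "fmt"

// Commit is a node in the history graph (illustrative only).
type Commit struct {
    ID       string
    Metadata map[string]string // committer, date, log message, ...
    Parents  []*Commit         // may reach arbitrarily far back in history
}

// ancestors collects everything reachable through parent links.
// Any partition of the history risks cutting one of these chains.
func ancestors(c *Commit) []*Commit {
    seen := map[*Commit]bool{}
    var out []*Commit
    var walk func(*Commit)
    walk = func(n *Commit) {
        for _, p := range n.Parents {
            if !seen[p] {
                seen[p] = true
                out = append(out, p)
                walk(p)
            }
        }
    }
    walk(c)
    return out
}

func main() {
    root := &Commit{ID: "r1"}
    tip := &Commit{ID: "r259000", Parents: []*Commit{root}} // a long-range link
    fmt.Println(len(ancestors(tip)))                         // 1
}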

My problem points at a larger Python issue: while there probably isn’t much work on large datasets using data structures quite as complex and poorly localized as reposurgeon’s, it’s probably less of an outlier in the direction of high overhead than scientific computation is in the direction of low. Or, to put it in a time-focused way, as data volumes scale up the kinds of headaches we’ll have will probably look more like reposurgeon’s than like a huge matrix-inversion or simulated-annealing problem. Python is poorly equipped to compete at this scale.

That’s a general problem in Python’s future. There are others, which I’ll get to. Before that, I want to note that settling on a new implementation language was not a quick or easy process. After the last siege of serious algorithmic tuning in 2013 I experimented with Common Lisp, but that effort ran aground because it was missing enough crucial features to make the gap from Python look impractical to bridge. A few years later I looked even more briefly at OCaml; same problem, actually even worse.

I didn’t make a really serious effort to move sooner than 2018 because, until the GCC repository, I was always able to come up with some new tweak of reposurgeon or the toolchain underneath it that would make it just fast enough to cope with the current problem. But the problems kept getting larger and nastier (I’ve noted the adverse selection problem here). The GCC repo was the breaking point.

While this was going on, pre-GCC, I was also growing somewhat discontented with Python for other reasons. The most notable one at the time was the Python team’s failure to solve the notorious GIL (Global Interpreter Lock) problem. The GIL problem effectively blocks any use of concurrency on programs that aren’t interrupted by I/O waits. What it meant, functionally, was that I couldn’t use multithreading in Python to speed up operations like comment-text searches; those never hit the disk or network. Annoying…here I am with a 16-core hot-rod and reposurgeon can only use one (1) of those processors.
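
Here, for illustration, is the kind of parallel comment-text search that the GIL rules out. This is a hedged sketch in Go, not reposurgeon’s actual code, but it shows how cheaply that sort of job fans out over every core you have:

package main

import (
    "fmt"
    "runtime"
    "strings"
    "sync"
)

// parallelSearch returns the indices of comments containing pattern,
// splitting the scan across all available CPUs.
func parallelSearch(comments []string, pattern string) []int {
    nworkers := runtime.NumCPU() // all 16 cores, not one (1)
    chunk := (len(comments) + nworkers - 1) / nworkers
    var mu sync.Mutex
    var wg sync.WaitGroup
    var hits []int
    for w := 0; w < nworkers; w++ {
        lo, hi := w*chunk, (w+1)*chunk
        if hi > len(comments) {
            hi = len(comments)
        }
        if lo >= hi {
            break
        }
        wg.Add(1)
        go func(lo, hi int) {
            defer wg.Done()
            for i := lo; i < hi; i++ {
                if strings.Contains(comments[i], pattern) {
                    mu.Lock()
                    hits = append(hits, i)
                    mu.Unlock()
                }
            }
        }(lo, hi)
    }
    wg.Wait()
    return hits
}

func main() {
    fmt.Println(parallelSearch([]string{"fix the GIL", "port to Go"}, "Go"))
}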

It turns out the GIL problem isn’t limited to non-I/O-bound workloads like mine, either, and it’s worse than most Python developers know. There’s a rather terrifying talk by David Beazley showing that the GIL introduces a huge amount of contention overhead when you try to thread across multiple processors – so much so that you can actually speed up your multi-threaded programs by disabling all but one of your processors!

This of course isn’t just a reposurgeon problem. Who’s going to deploy Python for anything serious if it means that 15/16ths of your machine becomes nothing more than a space heater? And yet the Python devs have shown no sign of making a commitment to fix this. They seem to put a higher priority on not breaking their C extension API. This…is not a forward-looking choice.

Another issue is the Python 2 to 3 transition. Having done my bit to make it as smooth as possible by co-authoring Practical Python porting for systems programmers with reposurgeon collaborator Peter Donis, I think I have the standing to say that the language transition was fairly badly botched. A major symptom of the botchery is that the Python devs unnecessarily broke syntactic compatibility with 2.x in 3.0 and didn’t restore it until 3.2. That gap should never have opened at all, and the elaborateness of the kluges Peter and I had to develop to write polyglot Python even after 3.2 is an indictment as well.

It is even open to question whether Python 3 is a better language than Python 2. I could certainly point out a significant number of functional improvements, but they are all overshadowed by the – in my opinion – extremely ill-advised decision to turn strings into Unicode code-point sequences rather than byte sequences.

I felt like this was a bad idea when 3.0 shipped; my spider-sense said “wrong, wrong, wrong” at the time. It then caused no end of complications and backward-incompatibilities which Peter Donis and I later had to paper over. But lacking any demonstration of how to do better I didn’t criticize in public.

Now I know what “Do better” looks like. Strings are still bytes. A few well-defined parts of your toolchain construe them as UTF-8 – notably, the compiler and your local equivalent of printf(3). In your programs, you choose whether you want to treat string payloads as uninterpreted bytes (implicitly ASCII in the low half) or as Unicode code points encoded in UTF-8 by using either the “strings” or “unicode” libraries. If you want any other character encoding, you use codecs that run to and from UTF-8.

This is how Go does it. It works, it’s dead simple, it confines encoding dependencies to the narrowest possible bounds – and by doing so it demonstrates that Python 3 code-point sequences were a really, really bad idea.
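
A small program makes the model concrete (my illustration, not taken from the Go documentation):

package main

import (
    "fmt"
    "unicode/utf8"
)

func main() {
    s := "naïve"                           // Go source is UTF-8, so this literal is 6 bytes
    fmt.Println(len(s))                    // 6: len counts uninterpreted bytes
    fmt.Println(utf8.RuneCountInString(s)) // 5: code points, only when you ask
    for i, r := range s {                  // ranging decodes UTF-8 rune by rune
        fmt.Printf("%d:%c ", i, r)
    }
    fmt.Println() // 0:n 1:a 2:ï 4:v 5:e (note the gap: byte offsets, not rune indexes)
}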

The final entry in our trio of tribulations is the dumpster fire that is Python library paths. This has actually been a continuing problem since GPSD and has bitten NTPsec pretty hard – it’s a running sore on our issue tracker, so bad that we’re seriously considering moving our entire suite of Python client tools to Go just to get shut of it.

The problem is that where on your system you need to put a Python library module so that a Python main program (or other library) can see it and load it varies in only semi-predictable ways. By version, yes, but there’s also an obscure distinction between site-packages, dist-packages, and what for want of any better term I’ll call root-level modules (no subdirectory under the version directory) that different distributions and even different application packages seem to interpret in different and incompatible ways. The root of the problem seems to be that good practice is under-specified by the Python dev team.

This is particular hell on project packagers. You don’t know what version of Python your users will be running, and you don’t know what the contents of their sys.path (the library load-path variable) will be. You can’t know where your install production should put things so the Python pieces of your code will be able to see each other. About all you can do is shotgun multiple copies of your library to different plausible locations and hope one of them intersects with your user’s load path. And I shall draw a kindly veil over the even greater complications if you’re shipping C extension modules…

Paralysis around the GIL, the Python 3 strings botch, the library-path dumpster fire – these are signs of a language that is aging, grubby, and overgrown. It pains me to say this, because I was a happy Python fan and advocate for a long time. But the process of learning Go has shed a harsh light on these deficiencies.

I’ve already noted that Go’s Unicode handling implicitly throws a lot of shade. So does its brute-force practice of building a single self-contained binary from source every time. Library paths? What are those?

But the real reason that reposurgeon is moving to Go – rather than some other language I might reasonably think I could extract high performance from – is not either of these demonstrations. Go did not get this design win by being right about Unicode or build protocols.

Go got this win because (a) comparative benchmarks on non-I/O-limited code predict a speedup of around 40x, which is good enough and competitive with Rust or C++, and (b) the semantic gap between Python and Go seemed surprisingly narrow, pushing the expected translation time below what I could reasonably expect from any other language on my radar.

Yes, static typing vs. Python’s dynamic typing seems like it ought to be a big deal. But there are several features that converge these languages enough to almost swamp that difference. One is garbage collection; the second is the presence of maps/dictionaries; and the third is strong similarities in low-level syntax.
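
As a toy illustration of how narrow the gap is (the example is mine, hedged accordingly), here is a frequency counter that is a near-transliteration of the obvious Python:

package main

import "fmt"

func main() {
    words := []string{"graph", "commit", "graph"}
    counts := make(map[string]int) // plays the role of a Python dict
    for _, w := range words {
        counts[w]++ // a missing key reads as zero, like dict.get(w, 0)
    }
    fmt.Println(counts) // map[commit:1 graph:2]
    // And no free() or delete anywhere: the garbage collector reclaims
    // the map, just as CPython's reference counting would.
}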

In fact, the similarities are so strong that I was able to write a mechanical Python-to-Go translator’s assistant – pytogo – that produces what its second user described as a “good first draft” of a Go translation. I described this work in more detail in Rule-swarm attacks can outdo deep reasoning.

I wrote pytogo at roughly the 22% mark (just short of 4800 lines translated out of 14000) and am now up to 50% of 16000. The length of the Go plus commented-out untranslated Python has been creeping up because Go is less dense – all those explicit close brackets add up. I am now reasonably confident of success, though there is lots of translation left to do and one remaining serious technical challenge that I may discuss in a future post.

For now, though, I want to return to the question of what Python can do to right its ship. For this project the Python devs have certainly lost me; I can’t afford to wait on them getting their act together before finishing the GCC conversion. The question is what they can do to stanch more defections to Go, a particular threat because the translation gap is so narrow.

Python is never going to beat Go on performance. The fumbling of the 2/3 transition is water over the dam at this point, and I don’t think it’s realistically possible to reverse the Python 3 strings mistake.

But that GIL problem? That’s got to get solved. Soon. In a world where a single-core machine is a vanishing oddity outside of low-power firmware deployments, the GIL is a millstone around Python’s neck. If it isn’t fixed, I fear the Python language will slide into shabby-genteel retirement the way Perl has, largely relegated to its original role of writing smallish glue scripts.

Smothering that dumpster fire would be a good thing, too. A tighter, more normative specification about library paths and which things go where might do a lot.

Of course there’s also a positioning issue. Having lost the performance-chasers to Go, Python needs to decide what constituency it wants to serve and can hold onto. That problem I can’t solve; I can only point out which technical problems are both seriously embarrassing and fixable. That’s what I’ve tried to do.

As I said at the beginning of this rant, I don’t think there’s a big window of time in which to act, either. I judge the Python devs do not have a year left to do something convincing about the GIL before Go completely eats their lunch, and I’m not sure they have even six months. They’d best get cracking.

CORRECTION: I wrote a tool to measure conversion completeness just after I posted this. Not 37%, 50%! Merge requests from my collaborators accounted for 13%.

167 thoughts on “Reposurgeon’s Excellent Journey and the Waning of Python”

  1. You need to run your spell checker over this post, because it’s too important to leave this garbled.

  2. I love Guido and the gang and I am vastly grateful for all the use and pleasure I have gotten out of Python, but, guys, this is a wake-up call. I don’t think you have a lot of time to get it together before Python gets left behind.

    It saddens me to have to agree with this sentiment. I actually think the original Python 3 release was the first writing on the wall regarding the overall management of Python as a language by its development team. But back then it was still possible to handwave away limitations like the GIL. Now it’s not.

    • As a language, especially a learning language, I suspect Python will remain useful for many years where Perl stumbled. Perl was a (useful) mess of kludges from day one; Python remains elegant and productive for a very large set of projects, and can still teach good practice.

      Where Perl failed at code-size scale, Python is failing at resource scale. That’s a failure mode that doesn’t as strongly impact the usefulness of the language.

      • Indeed. Python is the BASIC of the 21st century, with clear representation of modern programming paradigms and constructs. That makes it ideal for learners, casuals, and developers for whom performance or scale is not an utmost concern.

        But in reality, aside from the most low-level/performance critical stuff, everything’s going to be written in JavaScript. Forget the long-dead dream of the Lisp Machines; our computers are turning into JavaScript Machines.

        • Python is the BASIC of the 21st century, with clear representation of modern programming paradigms and constructs.

          Except for functional programming. Which is an ever more important part of your toolkit in the multicore age. My spider-sense said “wrong, wrong, wrong” when I learned that Guido desired to and was planning on removing the few functional bits in the language in version 3.

          Forget the long-dead dream of the Lisp Machines; our computers are turning into JavaScript Machines.

          Never! JavaScript is Not Even Wrong, although I’d pick it over Python as a BASIC of the 21st, without any real knowledge of Python.

          • Never! JavaScript is Not Even Wrong, although I’d pick it over Python as a BASIC of the 21st, without any real knowledge of Python.

            Hell, if we were going with that classification, I’d be tempted to say Lua is the modern BASIC rather than JavaScript. They’re remarkably similar underneath the syntax, and Lua to a large degree requires actually understanding the way the metatables work, if you want to do anything even remotely interesting.

            JavaScript is mostly a lot of syntactic sugar designed to (attempt to) hide how hairy it actually can be.

          • Never! JavaScript is Not Even Wrong, although I’d pick it over Python as a BASIC of the 21st, without any real knowledge of Python.

            You may not have a choice. Industry currently appears to favor JavaScript as the language of the future for new development. This is because JavaScript enables a single language to be deployed on both the front and back ends. Which was actually also true of languages like Java, but the ubiquity of browser-based applications and the deprecation of applets mean that only JavaScript can truly fulfill this role today.

            And this has huge implications for industry, since it means that back end development need not require special hiring of back-end domain experts anymore; just pull a few devs off your UI team, teach them Node, and have them write your back end.

            • just pull a few devs off your UI team, teach them Node, and have them write your back end.

              Then, three to six months later, pull your hair out when you realize Node is actually shite, and attempt to solve scaling problems by throwing more and more hardware at the problem only to realize that it’s not something you can fix that way. Finally, hire a single Go dev to re-implement your back-end in a few weeks and write off 75% of your capex.

              I sure like the idea of Node, but people treat it like a humvee when it’s more like a vw buggy.

              • The computing world is filled with Cessnas that some poor fools have scaled up to the size of a jetliner. Best to stay away.

              • Then, three to six months later, pull your hair out when you realize Node is actually shite, and attempt to solve scaling problems by throwing more and more hardware at the problem only to realize that it’s not something you can fix that way. Finally, hire a single Go dev to re-implement your back-end in a few weeks and write off 75% of your capex.

                Yes, of course — but industry doesn’t want to hear that.

            • And this has huge implications for industry, since it means that back end development need not require special hiring of back-end domain experts anymore; just pull a few devs off your UI team, teach them Node, and have them write your back end.

              No, oh God no.

              I’ve spent some time on an ongoing effort to port a non-trivial JavaScript (specifically, Electron-based) application to FreeBSD.

              ‘Back end’ code written by developers without experience of systems or applications development is hugely problematic.

              One example that springs to mind (from an earlier porting experience, not the Electron-based one): a testing framework, designed to be run from the command-line, that is ignorant of $PATH.

              When I raised an issue w/ the developers suggesting their tool could, you know, look in $PATH for binaries I was knocked back. No, I was told, just add the likely locations on a FreeBSD system to their “big ol’ array of pathnames” (which included things like hardcoded drive letters for MS Windows).

              On the Electron porting project, I’ve spent a lot of effort dealing with JavaScript code that blithely assumes that the only two operating systems in the world are Linux and OSX.

              I’ve been recommending ESR’s book The Art of UNIX Programming to folks I know making the transition, so they can start to understand the philosophy behind ‘back end’ development, and how different it can be to the browser based ecosystem.

              • When I raised an issue w/ the developers suggesting their tool could, you know, look in $PATH for binaries I was knocked back. No, I was told, just add the likely locations on a FreeBSD system to their “big ol’ array of pathnames” (which included things like hardcoded drive letters for MS Windows).

                You know how you solve that? Write a library that looks up and splits $PATH, publish it to NPM, and say “Here, npm install this to get your big ol’ array of pathnames. It’s the Douglas Crockford preferred approach.”

                We’re talking about a community that needs a library for fucking left_pad and is_odd.

            • just pull a few devs off your UI team, teach them Node, and have them write your back end.

              A lot of front-end guys would require a lot more re-training than just pointing them at Node. Languages are not that hard to learn, compared to design patterns and best practices, which are completely different between the browser and the backend of web sites. In fact, one of the reasons why PHP developers have a reputation for being crap is that most of them used to start as front-end devs, and just little by little, mixed in some PHP into their HTML code.

              And yeah, I get that front-end web coding uses MVC frameworks and such now, but the needs of maintainable server-side web development are much bigger than a language change.

            • I fully agree, Jeff. Let’s look at the big picture: business is chock full of tiny boring requests like please can you add a phone no. 2 field? Which means at a minimum you hire one junior backend developer, one junior frontend developer, or a senior full stack developer who will be bored and quit, and one analyst who can reject stupid requests so you don’t end up with 5 different features solving the same business problem differently and all slightly wrong.

              This is too expensive a team for a small corporation. They often want a one-person team.

              So what business always wanted is tight coupling, with the crudest example being pulling a database table onto a form with the mouse and getting a UI list: ASP.NET Web Forms, Delphi, FoxPro.

              But even though that is hair-raisingly bad, we still need one person teams and fairly tight back-front end coupling.

              Europe understands this better, because we have more small corporations and more one-person teams, in fact often one person supporting 5-6 corporations’ custom development. Hence the French wakanda.io (previous to and unrelated to the movie) as the elegant example of JavaScript frontend, backend, and a development environment, so there is basically one tool to learn; any smart analyst can do it. A cute little example is https://picolisp.com/wiki/?home which is really one guy’s toolset for whipping up a lot of business apps in Lisp, with similar close coupling. Other examples are strongly coupled frameworks made for one app, again the typical case (over here) of one person writing a big app; such frameworks exist behind Odoo and Compiere. All this is about costs really.

              Sometimes this is hard to explain to Americans. 10 years ago I contacted the Django team and tried to explain I want to use what you call the autogenerated admin interface as the app itself, but I need more features. I don’t intend to hand-write a frontend. Why? Because I am not a startup who wants to reach 200K users with a beautifully crafted UI. I just want something handy for 4 service technicians to enter data into. So a modern FoxPro basically, and the Django admin is pretty neat for that but needs more features. Somehow the message did not come across.

              Anyway. Business always wants their FoxPro of $current_decade. Small business especially.

              It might not necessarily mean that everybody has to follow. The people who did not follow ASP.NET Web Forms, Delphi or FoxPro might also not follow this.

    • >Do you think you’ll start recommending something other than Python in How To Become a Hacker?

      Not yet. It’s still a terrific learning language. What has changed is that it’s now less competitive in production use.

      • I am actually in the process of writing a computer science curriculum for my daughter’s school for 6-8th graders, and was ruminating on whether to teach it with Go vs. Python. I largely landed on Python because:

        a) nothing I’m teaching is going to run up against the aforementioned limitations
        b) no need for them to deal with static typing or compilers at this stage
        c) they’re going to learn on Raspberry Pi’s, and Python has excellent GPIO stuff available for the Pi.

        It is a terrific learning language, and I still use it for tons of “glue” stuff. I do enjoy Go more, I just rarely actually use it.

        • >It is a terrific learning language, and I still use it for tons of “glue” stuff.

          I’d make the same call in your situation.

  3. How is it using roughly 1/5 MB per commit?

    I kinda doubt the GIL will ever get fixed, and even if it were you still wouldn’t be anywhere close to as fast as C++.

  4. Have you looked into Julia? It promises to have all the ease of use of Python, but the speed of C++.

  5. Eh, I’ve always had the sense that you only used Python when you didn’t particularly care about speed.

    • Define “speed”.

      Any project is going to be a mixture of 1) time to develop (including debugging and testing), 2) execution time in production, and 3) time to re-work or extend the code in the future (as things change, more use cases arise, etc.) How much time and resources are consumed in each of these three phases will vary enormously depending on your particular project. I’d say, if your job is not to distribute production code to large numbers of “civilians”, then it’s very likely that phases #1 and #3 are going to dominate the total amount of time spent, and hence “speed”.

      Take the case of scientific users: their job is not to write software, but to produce analysis on datasets. Writing code is merely a tool to do their job– the difference between a commuter, and a professional driver, if you will. They’ll spend a lot of time in phase #1, and then will have to (want to) take the base code and extend it to related data sets or analyses in the future (probably after the original author has graduated and left)– phase #3. For them, the time spent compiling code which turns out to have errors in it will dwarf the time “savings” during their production runs, most likely. And a language which is, if you will, “conversational”, like Python, is much easier to understand and maintain (for non-specialists). Python will handle lots of things that are confusing and difficult to do, for people who think their main job is not “coder”, but “biologist” (or whatever). You can even embed it into notebooks with honest-to-God English paragraphs talking you through the reasoning and displaying the output and graphs right there for you. It doesn’t matter if you’re running a Mac or a PC or Linux, either. And the Python devs have made sure that, under the hood, mostly the libraries and built-ins are really C programs with a wrapper, so the performance is better than it has any right to be, really.

      I do a lot of Python programming as exploratory work, to get my thought process 100% clear about what I want done and to surface corner cases, etc. It’s “good enough” for most of my use cases, as I spend a lot of time just working through, say, normalizing this particular pathological data set. The code doesn’t go anywhere (although the cleaned-up dataset might). Python is faster overall, even if the actual processing time is 40x slower than C or Go. If phase 2 turns out to be a problem, then I refactor the smallest possible part of the program into a compiled language after I’m happy with the functionality of the Python code (I prefer C, my boss prefers C++; although I’m seriously considering switching to Go, for all the reasons our host has already mentioned). The big exception is when I’m doing embedded work, where it’s easier to just go straight to C (in most cases).

      I’d also agree that Python is the best choice for the first “real” language for students to learn– they can learn almost all the important concepts, like control flow, data structures, I/O, recursion, etc. and save the residuals (explicit typing, memory management, direct reduction to assembly/machine language, etc.) for later. Most people won’t ever need to cover those, actually. Indeed, most won’t even need recursion, and also would be very well advised to leave things like IPC, cryptography, packaging, and so on to the experts.

      That said: I’ve always hated the Python 2/3 split, particularly because of the Unicode crap that I can’t avoid, and because I work in networking. The devs were **just about** to start fixing some of the painful, frustrating parts of the I/O interface– instead, they got diverted to the conversion to Python 3, and we ended up with some unsatisfying, partial fixes in 2.7 and then chaos in 3.x. Plus, they’re retrofitting in some Java-like syntax and mindset, and I hate Java with the fire of 10,000 suns.

  6. the – in my opinion – extremely ill-advised decision to turn strings into Unicode code-point sequences rather than byte sequences.

    Even worse than that was the decision to make the sys.std{in|out|err} streams Unicode text streams instead of byte streams. Even if this makes a kind of sense if your program is interactive, so the streams are connected to a TTY, it makes no sense at all if those streams are pipes. And of course, since the interpreter creates those streams before your code even runs, you have no way of telling the interpreter “Hey, knucklehead, this program has to work with pipes so please don’t do Unicode streams this time!” So you end up with kludges and hacks instead. (This was one of the issues we had to deal with in the Python 3 port of reposurgeon.)
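
    Go, for contrast, treats the standard streams as plain byte sinks, so nothing can decode them behind your back; a minimal illustration (mine, not from the Go documentation):

    package main

    import "os"

    func main() {
        // Write arbitrary bytes to stdout; no codec exists to throw here,
        // whether the stream is a TTY or a pipe.
        os.Stdout.Write([]byte{0xff, 0xfe, '\n'})
    }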

      • I’ve tried all three – Sublime extensively, as it was the standard editor for pairing on code at Lonely Planet. Neither is _bad_, but they offer only a very small subset of the functionality offered by Emacs.

        And neither is an option if you’re working on console-only systems, over poor connections, on resource constrained machines, etc. etc.

        • I’ve wanted to get behind Atom as it was at least modular and open source, but it’s a damned Electron app, which means it’s an absurd resource hog.

          I’d love for something more modern to replace Emacs as a good option for code editor on both console and GUI, but I don’t think there are too many that care about this set of use cases.

        • Yeah, but Eric isn’t, and most of us aren’t, so why Only Use Emacs?

          (I mean, I’ve done deployment to horribly constrained systems; dev on one, run on the other.)

        • And neither is an option if you’re working on console-only systems, over poor connections, on resource constrained machines, etc. etc.

          For that, most developers these days use vim. Vim won the vi/emacs editor wars of old; the new wars are between vim and the various IDEs and IDE-wannabe GUI editors.

          I still use Emacs myself, but I have to endure funny looks when my coworkers wonder why I’m using some ancient thing they last saw on old DEC iron (older developers), or some alien thing no one understands (younger developers). And collaboration becomes more difficult.

          • Vi “won” in the only way that matters – it’s guaranteed to be everywhere; so you can dedicate brainspace to knowing it in the sure and certain knowledge that you CAN use it everywhere.

      • The only C I’ve written since leaving school is some trivial Arduino code*, and that’s perfect, because C is the devil**.

        (* And boy, is the Arduino kit broken. Can’t even string-copy correctly as of last time I checked.

        ** I mean, for Everyday Stuff, not systems programming, etc.)

  7. Does anyone have an opinion on Perl 6? Seems interesting as a language, though the ecosystem seems still a toy. Does it have a future?

    • The ecosystem probably seems a bit toy because all of the existing Perl5 modules work with Perl6, so there isn’t such a big push to write specifically for Perl6 yet.

      Haven’t used Perl6 yet myself, but it looks very promising.

  8. I’m curious; do you think Go would be an equally good replacement for Java? Or are there other things that are poised to become Java successors that are better?

    • >I’m curious; do you think Go would be an equally good replacement for Java?

      I don’t know.

      I know why Go competes well with C; because it’s an easy step up, while avoiding C’s vulnerability to bare-pointer errors and the like.

      I know why Go competes well with Python; because it’s a (slightly less) easy step up and ridiculously faster.

      To know how Go might compete with Java, I’d need to know what Java fans think Java is especially good at.

      • Java is C++--. It’s a whole lot like C++ except with a few of the hardest-to-understand bits, like multiple inheritance, taken out. The other key difference is that all memory is handled by the memory manager – there is no heap that can be explicitly managed with operations like delete; it’s all garbage collected. Misuse of free is the single most common bug in large C++ programs; programmers either free memory too soon (leading to crashes and security leaks when programs access memory that has been reused for something else) or never free it at all, leading to ever-growing programs.

        The other thing that Java got right was to standardize the library right away rather than leaving it as a later exercise. I’m talking about the Standard Edition, not the Enterprise Edition mess that Oracle eventually turned over to the community to attempt to maintain.

        Java was also THE language in higher education for a number of years until Python started to take over. But that wasn’t because of any special advantages of the language, but because of the cross-platform environment. Students could develop Java on whatever kind of computer they had and still be confident that the TAs would be able to run their code, something that doesn’t work as well with languages like C and C++ where there is a different and not fully compatible implementation for each platform.

        Go shares the property of always being garbage collected and it otherwise covers all the capabilities of Java, so it would be a good replacement.

        • C++ developers haven’t used free() (or its equivalent, delete) since 2011.
          free/delete was deprecated before that, but good alternatives weren’t standardized yet. Today, the rule of thumb is: if you see free() in a C++ program, you see a bug.

          Using a C++ iterator after it has been invalidated, on the other hand, is still a non-trivial issue.

        • Agree with almost everything here.

          It’s true, but not the full picture, to say that Java has multiple inheritance taken out. While it’s not called multiple inheritance, Java classes can and do inherit methods and fields from multiple sources. i.e. It’s true that you can directly inherit from only one class, but you can implement as many interfaces as you wish.

          Java also doesn’t allow direct pointer manipulations like C/C++ does.

        • Another substantial difference is in the structure of the STL vs the Java Class Library. I dislike C++ as a language, but appreciate and wish some of the ideas of the STL would spread to other languages.

      • Java is really good when you have a big team of largely mediocre programmers who need to build really big systems. Here’s a conversation I had with a senior developer of more than a decade of experience a few years back:

        Client: Hey Expensive Consultant (Me), can you help out Senior Developer (SD)? He’s stuck. Again.
        Me: OK.
        SD: I have a problem. This 3rd Party Library I need to use won’t load.
        Me: Dude, WTF? It’s just a Java wrapper over native code. It’s a Windows DLL. A Windows DLL will not run on Debian.
        SD: I don’t understand. It’s Java. Java runs everywhere.
        Me: No! It’s Java calling native code. Native code does not “run everywhere”.
        SD: I don’t understand.

        But here’s the kicker. SD could still add value under appropriate supervision. Can’t readily think of any other language where this would be true.

        • They must be using a definition of “Senior” of which I am unfamiliar.

          Unless he’s just the oldest guy they have.

          • Salary bracket.

            The dude had another infuriating habit, as I recall. When he sent somebody an error log, “errors.log” would be an OpenOffice document containing text he copied out of a PuTTY window.

            • Count yourself lucky it wasn’t an excel spreadsheet with each logline in a different row

              Yes, I am serious
              No, I don’t want to talk about it, I want another ethanol induced blackout instead

      • Java’s main strength is enterprise development. Which, as Inkstain said, basically means enabling schlubs to be deployed in legion strength and grind out code while more or less not stepping on each other’s toes. It has several features to this end: It enforces a single programming paradigm — OOP — and there is a passel of schlub-accessible best practices around that which enforce modularization, abstraction, code reuse, and loose coupling. It has strong static typing which eliminates type-mismatch bugs at compile time. It has a GC and a well-defined memory model which basically eliminate entire classes of memory bugs for an acceptable performance cost in enterprise work. And JVMs have been built to run on everything from IBM mainframes to smart cards, meaning that a single language and runtime can be used to implement all tiers of a typical distributed application, and a single binary package can, with some caveats, run on any platform that has a JVM.

        There are other accidental characteristics of Java that also make it an ideal enterprise language: for one, it has vast library support; and for another, in the 90s it displaced C++ and Pascal as the most common introductory language for new computer science students, so there are huge numbers of programmers, here and abroad, that understand it — making hiring a few hundred of them a doddle for a manager. Plus it’s backed by Oracle, which no one ever got fired for buying.

        Go is going to have a hard time displacing Java in the spaces where Java thrives. Much better candidates for Java replacements include C# (which has done Java stuff better than Java since the early 2000s, and is now officially open source and cross-platform) and JavaScript.

        • It enforces a single programming paradigm — OOP — and there is a passel of schlub-accessible best practices around that which enforce modularization, abstraction, code reuse, and loose coupling.

          It has a wonderful side-benefit for IT departments because of this, too: forcing everything to be encapsulated in an Object the way Java does results in SLOC explosion, as endless reimplementations of existing code and the like pile up hundreds or thousands of extra lines, producing a wonderful ProblemFactory and giving managers an eternal excuse for larger headcount and development budgets.

          Java is bureaucracy in programming language form, with everything that entails. No surprise that it’s so well suited to enterprise needs.

          • Java is bureaucracy in programming language form, with everything that entails. No surprise that it’s so well suited to enterprise needs.

            That, sir, is a brilliant distillation of my thoughts on Java. Thank you. I will steal that for use in various conversations.

            • > That, sir, is a brilliant distillation of my thoughts on Java.

              Thanks, but to be fair it’s not my original thought. I’m reformulating a quote by Joe Marshall: “Whenever I write code in Java I feel like I’m filling out endless forms in triplicate.”

              Like you, this describes my personal experience and thinking re: Java so well that I’ve kept the idea in my back pocket for these occasions.

          • Java is bureaucracy in programming language form, with everything that entails. No surprise that it’s so well suited to enterprise needs.

            Actually, that was my feeling about Ada, rather than Java. Java was actually fun to write in. And it had that WORA promise going on, backed by a company that looked like it would be around for a while.

            And then some bad events occurred. Java made some promises that its GUI library couldn’t keep, partially because different hardware and architectures weren’t quite as interchangeable as envisioned. So the face of most applications looked kinda neat, but ran sluggishly. Applets didn’t catch on, partly due to good-enough solutions like Flash and the “I swear we’re not piggybacking on branding” JavaScript. C# arrived, sacrificing some pure abstraction for impure pragmatics. And then Sun went under.

            That might’ve been it for Java, I think, had Eclipse not come along. Eclipse made a lot of that boilerplate less boily. A lot of Java programming became point-and-click, enough of it to keep the legions going. It was even finding bugs for you.

            Also, the JDBC abstraction was just useful enough to drive a critical mass of databases. Turns out most people just needed a website in front of an RDB, and Java provided that well enough. It wasn’t applets that took over; it was servlets.

            So now, I’d say the biggest strength Java has now is its ecology. It has at least two great IDEs. It has Spring. It has Apache Commons. It has huge amounts of class libraries and frameworks. It has enough legacy buy-in now to make COBOL jealous, if COBOL had any feelings. It also now has a more agile release schedule; I’m not sure how that will play out.

            Compared to Go, it’s probably suffering in scaling. That ol’ heap continues to be a pesky problem. It might be better off in the multithreading department than Python; I’m not sure. (I’ve written several good threaded apps, and knew enough of the Swing event model to reliably turn out snappy UIs as long as the customer was willing to wait a bit longer – or they could go with the fast solution that runs slow for the rest of their lives.)

            Go’s biggest challenge, I think, would be replicating that ecology. I haven’t worked with Go yet, and I don’t know how robust Goclipse is, or if there’s any other Go plugin to the 800-pound IDE. Overall, though, Go’s funding is deep, so it’s probably a question of whether funding focuses on that.

        • > It has strong static typing which eliminates type-mismatch bugs at compile time.

          Java does have that reputation. And Java is somewhat strong with types, but IMO it has a lot of behind-the-scenes implicit conversions that make it less type-strong than its reputation suggests. Maybe learning Ada before Java biased me, but it bothers me how easily this code compiles and works with no complaints.


          public class TypeStrong {
            public static void main(String[] args) {
              int x = 4;
              double d = x * 2.0; // silent promotion to double
              // Now printf with a string format specifier:
              // Java silently wraps d in a Double object wrapper,
              // then silently calls Double.toString().
              System.out.printf("d = %s%n", d);
            }
          }
          // prints "d = 8.0"

          • I’m so used to “x * 2.0” promoting from int to double that that part doesn’t bother me. Especially since it’s not like x is becoming a double; it’s still an int.

            Amusingly (perhaps darkly so), Java resisted autoboxing (promoting primitives to objects) for a long time, and was seen as failing because of that. They finally broke down and introduced it, AIUI, as part of several syntactic sugar enhancements.

            System.out likewise had no printf method until Java 1.5. Even today, I virtually never see it.

            • You probably don’t see System.out.printf() used very much because most Java these days is server-side, and people are more inclined to use the logging APIs (java.util.logging, Log4J, SLF4J, etc.) rather than writing to System.out.

              String.format() is the equivalent of sprintf, and more useful in most Java work.

              • Yes, it is more useful, since you’re likely not writing to the console all that often. I have to agree.

                I was more focused on placing a floating point number into a %s with no complaint.

                I like my type strictness like the old, never-said Mr. Miyagi quote:

                “Here, type-strict, same thing. Either you type-strict do “yes” or type-strict do “no.” You type-strict do “guess so,” [makes squish gesture while auto-boxing] Just like grape. Understand?” — !(Mr. Miyagi)

        • I don’t care how “officially open source and cross-platform” people say it is; C# will always be Micro$oft ick to me.

          Though to be fair, Oracle seems to be learning some of Micro$oft’s tricks; with the new Java 11 JDK, if you download the “standard” JDK like you’ve always done, you can’t use it in production without paying $lots to Oracle. You have to make sure to use an OpenJDK build if you don’t want Oracle Licensing Enforcement coming down on you like harpies.

          • @Amy Bowersox: I don’t care how “officially open source and cross-platform” people say it is; C# will always be Micro$oft ick to me.

            And that attitude will hobble you as a developer.

            I do my best to draw a distinction between Microsoft as a developer of technology and Microsoft as the old Evil Empire with business practices folks disparaged.

            Microsoft can afford to hire and pay the best, and a lot of very good people work there, producing some potentially superior technology. Microsoft is also trying to shed the Evil Empire image, because the money these days is in cloud services, not Windows and Office, and they have to play nice with everyone to make money.

            Turning up your nose at a tool because of who made it, not whether it’s a good tool, simply deprives you of resources you might need.

            I submit that’s the wrong way to look at things, and may cost you down the road.

            Though to be fair, Oracle seems to be learning some of Micro$oft’s tricks;

            And if you develop in Java you know that and use the OpenJDK build. Oracle got Java when they bought Sun, and have wrestled with the question of how to monetize it. This is an attempted answer. I suspect it won’t work for them, because the alternative is so simple.

            (If you don’t know that, you haven’t been keeping up, and more fool you.)

            But meanwhile, in most areas of computing and software, assume it’s about the money for those doing it, because it generally is.

            >Dennis

            • And that attitude will hobble you as a developer.

              To be honest, I don’t blame her.

              It’s not some sort of Microsoft is evil thing, but a history of evil has made Microsoft tasteless. C# has some neat features, granted, but the .NET libraries reek of Win32’s design. Talk about bureaucracy in code form!

              • @Jeff Read: C# has some neat features, granted, but the .NET libraries reek of Win32’s design. Talk about bureaucracy in code form!

                Given that .NET originated on Win32, no surprise. I’m not sure how it could have avoided it.

                But the comment reminded me of a bit in Tracy Kidder’s “The Soul of a New Machine”, about mini-computer Data General’s race to build a 32 bit machine comparable to Digital Equipment Corporation’s new VAX system. Tom West, leader of the engineering team tasked to do it, purchases a VAX through suitable intermediaries, puts it in a warehouse, and proceeds to do a teardown to see how DEC handled the hardware. He decides the results reflect DEC’s corporate structure, with unnecessary complexity and message passing that could get in the way of doing the work.

                There’s probably fertile ground for exploration in how the corporate structure of the company making it affects architectural decisions in the product.

                >Dennis

            • Somehow, I don’t think Our Host would touch C# with a ten-foot pole, either, and I would guess that that reasoning is largely based in the fact that it is Micro$oft ick. Would that “hobble” him as a developer?

              Also, ever heard the aphorism “the leopard doesn’t change its spots”? I’m sure the EEE strategy is still in the back of the minds of many at Micro$oft.

              As for OpenJDK, the blog post I linked to was intended as a warning and a pointer to OpenJDK for those that haven’t yet gotten the message. It just played up the fact that, by subtly changing the license on the downloads that people have “always” gotten from them, Oracle is trying to pull a fast one. Easily evaded once you see it for what it is, but a fast one nonetheless.

    • There are plenty of JVM languages which are already good replacements for Java. Kotlin, Groovy, Clojure, JRuby, whatever tool fits the job.

    • Java is “our” generation’s COBOL. It will live another 40 years.

      The thing about Java is that it’s *everywhere* and there’s about a half a brazilian programmers–many of them in places like India and China where they will work for a few dollars an hour. Java also has tool chain support from IBM and other large vendors.

      And finally a Java “binary” runs on the JVM, not the processor, whereas Go is statically compiled for the target machine. This has large tool chain implications.

  9. “Test conversions take more than nine hours each,”

    Ah! A reminder about the reasoning leading to the parallel interest in improving UPS devices …

  10. I followed the link on “adverse selection” and was… utterly dumbfounded. An O(N^3) sort algorithm? That’s not a naive sort algorithm. A naive sort algorithm is O(N^2). Getting your sort to be as bad as O(N^3) without it being obvious that you were deliberately pessimizing would require fiendish cleverness.

    • I can easily imagine writing an O(N^3) sort if I’m not thinking of it as a sort at that moment and I’m reusing functions which were not optimized for big O either.

  11. > extremely ill-advised decision to turn strings into Unicode code-point sequences rather than byte sequences.

    Tcl made the same mistake years earlier, so Python has no excuse. Today the noun I associate with the word “Python” is “fatal string-codec exception” because every other Python program I use fades into the technological background noise where it belongs…until it ruins my day because someone put a funny character in a string and made print statements and sometimes even socket IO fail. Somebody didn’t get the memo about malicious inputs causing unexpected behavior…

    > Yes, static typing vs. Python’s dynamic seems like it ought to be a big deal.

    In my experience, people who believe they need dynamic typing really just want a terse way to write expressions with exactly the implicit conversions their mental model of the toolchain expects. Rarely do any real problem domain’s type requirements correspond exactly to the precise set of Byzantine rules that such-and-such language’s dynamic type system applies (consider types used for storing floating-point numbers versus types used to store amounts of currency or lengths of strings). So rarely does this alignment occur that we often want the compiler to detect unintended implicit conversions and flag them as warnings, then build with warnings-as-errors to stop them from passing code reviews…

    If it’s really necessary, you can build a dynamic type system at run time on top of a static one (that is, after all, what most dynamic type system implementations do). Going the other way is harder, because it requires the human mental discipline that you obviously don’t have, or you wouldn’t have needed to move from a dynamic type system to a static one in the first place.
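
    Here is a minimal sketch of that in Go, using interface values and a type switch; illustrative only, not a real interpreter:

    package main

    import "fmt"

    // Value is a dynamically-typed slot: any static type can live in it.
    type Value interface{}

    // add dispatches on the run-time types of its operands, the way a
    // dynamic language's '+' does.
    func add(a, b Value) Value {
        switch x := a.(type) {
        case int:
            if y, ok := b.(int); ok {
                return x + y
            }
        case string:
            if y, ok := b.(string); ok {
                return x + y
            }
        }
        panic(fmt.Sprintf("unsupported operand types: %T + %T", a, b))
    }

    func main() {
        fmt.Println(add(2, 3))         // 5
        fmt.Println(add("foo", "bar")) // foobar
    }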

  12. I’m really enjoying modern C++. But I can well understand why it’s not everybody’s cup of tea.

    I really don’t like Go. I have an aversion to garbage collection.

      • “Prioritizing low latency” generally means de-prioritizing high throughput. This is not very good for something like reposurgeon, which is all about batch-oriented workloads.

    • Yes, OTOH reposurgeon has to deal with huge graphs and there’s nothing to ensure that these graphs will consistently be free of cycles, so this is most likely one of the admittedly rare cases where GC is a solid choice.

      • >this is most likely one of the admittedly rare cases where GC is a solid choice.

        “admittedly rare”?

        Yes, like the “admittedly rare” cases where you care about lowering long-term defect rates, Or security. Or not wasting your developers’ time on resource shuffling that a machine can do better.

        GC is always a solid choice unless you’re doing hard realtime. How can anyone not grok this in 2018?

        • By “solid” I actually meant something more like “not even worth looking for alternatives”. Modern alternatives to GC have memory safety properties that are comparable to GC languages, and are _better_ for resource safety and ease of writing safe concurrent code. This is because they do not focus on “manual resource shuffling”, but on using generalizations of the RAII pattern to describe how resources will be used in a quasi-declarative way (where ‘resources’ includes but is not limited to memory)– thus enabling varieties of automatic resource management that have near-zero overhead, unlike fully-general GC.

          And this is important not just for RT, but most likely also for enabling high throughput on modern and upcoming hardware, which is generally constrained by a combination of low multi-core utilization, and highly-limited memory bandwidth _per core_ (within a given amount of CPU cycles). Again, this is not immediately relevant to reposurgeon, but it is for just about everything else.

        • > Yes, like the “admittedly rare” cases where you care about lowering long-term defect rates. Or security. Or not wasting your developers’ time on resource shuffling that a machine can do better.

          All of these can be had with RAII in C++ or Rust. And RAII gives you strictly deterministic object lifetimes — meaning zero GC pause, and zero memory overhead.

          • Memory (de)allocation can still take unexpected amounts of time; you can just sort of predict where in the code path these delays are possible. And they can’t happen in a parallel thread.

            • > And they can’t happen in a parallel thread.

              What makes you say that? By default, they probably won’t happen in a different thread. But assuming that your memory allocator is thread-safe, it’s certainly easy enough to do on-demand.
              If you know that you have a large object that you want to deallocate in a different thread, you could perform a move/swap to an object owned by a thread which does your memory deallocations.
              If you are willing to change your data type even a tad, you could wrap it in a mechanism which does this for you so you could do it with one/some/all instances, everywhere.

        • I don’t think anyone seriously debates any more whether automation in memory management is a good thing. AMM is table stakes in all but the most extreme environments, where considerations like security or developer time are irrelevant.

          AMM is on the checklist for safety certifications. If you’re not using some kind of AMM, even one you built yourself with C macros (and prove that you use religiously, and have working tools that catch you if you blaspheme), you have to explain why not during the safety audit, and your auditors have to agree with your explanation, or you won’t get a safety certificate. (Safety auditors would prefer you don’t allocate memory at run time at all, but if you must, they prefer it be managed by software).

          The debate is whether GC is a good AMM scheme for general systems-programming use. That is certainly not true in the general case, and possibly not even in the common case. reposurgeon is exceptional in many ways, and being a poster child for GC is possibly one of them.

  13. Just asking, does the Jython implementation of Python have the same issues as the CPython implementation? Have you tried running your tool with it?

    • Moderator: Never mind. I did not actually realize Jython was a dead project. You can delete both my posts.

  14. What issues did you run into with Ocaml? They’ve added some fairly “crucial” features quite recently, including meaningful support for concurrency. (Rust is also seeing a lot of improvement over time, but Rust is not really an appropriate choice for a program that has to deal with huge graphs, potentially including cycles – whereas that’s exactly where a GC-based language can be appropriate.)

    • >What issues did you run into with Ocaml?

      Just the big semantic gap. Idiomatic Ocaml doesn’t look or feel anything like Python, which is a big deal when you’re choosing partly to hold down the time cost of translation. I wasn’t expressing any negative about the language itself.

  15. In my world – which is not Eric’s – life consists of prototype & proof of concept in the numpy/scikit/anaconda toolset, deploy at scale on pyspark / h2o / tensorflow.

    Python here is still (or once again) in its original role as a quick & easy glue language, controlling other things that are doing the heavy lifting.

    UPDATE: oops, should have read a couple of paragraphs further to the point where Eric addresses this point.

  16. Python has its problem and is inadequate? Sure, whatever. The replacement is Go? Please, no.

    Not for any technical reason, mind you. But it’s a Google-specific language. All *good* languages with healthy development have always been unowned by a single entity. Go, in stark contrast, is *named* for Google, was entirely created within Google, and all development is done by Googlers. There is *no way* that any non-Googler can come along now, later and after the fact, and inject a non-Google culture into the already established Google culture of the Go project. Even in 100 years, when all the individual Go core developers have been replaced, the pattern would still be there. The origin of the project defines its culture. And the *current* state of the Go core developer demographic defines their allegiance.

    Therefore, I would not trust Go to develop into a reasonable language any further than I would trust C#, or Java in its Sun heyday. A single entity with entirely different goals cannot be allowed to have such a large influence on a language which has aspirations to industry- and community-wide adoption.

    • You’re probably right in a general sense, but I don’t think that reposurgeon has been made _dependent_ on Go in any real sense, or that this rewrite has anything to do with “industry- and community-wide adoption” of this particular language. Go just happens to be a pretty good fit for this job. I also expect that parts of reposurgeon could fairly easily be rewritten in some other language as needed (including, e.g. Rust), subject only to the mild hairiness of Go’s “cgo” FFI implementation.

    • C was developed entirely at AT&T, which at the time was even more an evil monopoly than Google is today.

      In fact I see more Bell Labs culture in Go than I do Google culture. (showAds(), sellPersonalInfo(), and acquiesceToChina() are not yet standard library calls…)

      Committees rarely design good languages; they can only codify and support good existing practice. That first implementation of a language has to come from somewhere; generally, it’s a single organization. Go is no different in this regard from C or C++ (AT&T) or JavaScript (Netscape).

      • @Teddy > Go, in stark contrast, is *named* for Google, was entirely created within Google, and all development is made by Googlers.

        That’s a really foolish viewpoint made from historical ignorance. See below.

        @Jeff Read > In fact I see more Bell Labs culture in Go than I do Google culture.

        And given the core language design team – Ken, Rob, Russ, et al – this should come as exactly zero surprise. It is Bell Labs culture!

        Go is not Google, it is CSP and Plan 9 made useful.

        • >Go is not Google, it is CSP and Plan 9 made useful.

          I agree. I think this is a reason I find Go fits my hand well, because I steeped myself in the artifacts of the old Bell Labs culture when I was a noob. In a non-silly sense it’s…a return to my roots.

  17. I must have had too much coffee. I misread the title as a homage to the Bill & Ted movies (albeit 30-odd years later) – “Reposurgeon’s Excellent Adventure and Python’s Bogus Journey”. Move along, nothing to see here…

    It does shed a bit of light onto a problem I’m chewing on, where a translation to Go might be more tractable than plain algo-fu in Python.

    • >I must have had too much coffee. I misread the title as a homage to the Bill & Ted movies (albeit 30-odd years later) – “Reposurgeon’s Excellent Adventure and Python’s Bogus Journey”.

      You were supposed to.

  18. I think PyPy’s working on the GIL problem. They tried out Software Transactional Memory; found out it was too much work; and have chosen, I believe, to reimplement everything in a multi-threaded compatible way.

    If the CPython devs were to break compatibility with the C API, they could continue to support it via emulation, no? I believe this is what the PyPy folks did.

    I think the Graal/Truffle project has found a way to maintain performance while doing such emulation. If you JIT the C code (or Fortran code) alongside the Python code, then you can do the emulation with less overhead. Writing a C interpreter and Fortran interpreter in RPython or Truffle might be a large undertaking, even though I believe the Truffle folks have partially done it.

  19. Regarding strings, I’ve always wondered why languages insist on only having one String type. Why not have AsciiString, Utf8String, and Utf16String and then have explicit conversion between types? I mean, we do that with numbers (float, double, int, long, BigDecimal) and we choose the right type for the right job.
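
    As it happens, Go comes close to this model: a string is an immutable byte sequence, and byte or code-point views are explicit conversions. A minimal illustration (standard language behavior, nothing specific to this thread):

      package main

      import "fmt"

      func main() {
          s := "héllo" // a Go string is an immutable byte sequence (UTF-8 here)

          b := []byte(s) // explicit conversion to raw bytes
          r := []rune(s) // explicit conversion to Unicode code points

          // Prints 6 6 5: "é" is two bytes but one rune, and you always
          // know which view you are paying for.
          fmt.Println(len(s), len(b), len(r))
      }

    The explicit conversions make the encoding and the cost visible at the call site, which is more or less the AsciiString/Utf8String proposal with different names.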

  20. @esr: My general sense is that I don’t see Python going away because of Go.

    You mentioned developing happily in Python for decades. If you hadn’t created Reposurgeon and watched Python fall over when attempting to deal with really huge repos, you might still be happily coding in Python.

    All languages have strengths and weaknesses, and specific problem domains they address. It’s why there are so many languages out there. I can see serious Python devs who are aware other languages exist reading your post and saying “Dude! Why were you using Python for that? It’s another language’s job!”

    I think of Java and Python as similar in approach. Both are open source and cross-platform, with the goal of being able to run your code on any platform that has a current language runtime. The problem, of course, is that the platform must have a current language runtime. (That means forget Android, and imposes lower limits on the hardware you have to have to be able to support the language.)

    Python tends to get classed as a scripting language, but I think it straddles the line between scripting language and general purpose language. And Python benefits from the fact that hardware gets steadily smaller, faster, and cheaper. You can write applications in Python that get sufficient performance that you don’t have to write in something like C++ that compiles to native code.

    I can see lots of places where Python will still be the language of choice. One area is the Linux distros that have shifted to Python as the configuration language. (As an old shell scripter, I have mixed feelings about that, but it can be and is being done.)

    I think Go is a good choice for a rewrite of Reposurgeon, but I think the underlying problem is that no “one size fits all” language exists. You reached for Python when you started writing Reposurgeon because you had been writing pretty much everything in it for years, but you found out courtesy of the GCC conversion that there were limits to Python that you hadn’t anticipated.

    So a more interesting question might be “What drives language choices, and what sort of questions should devs ask when presented with a problem before they choose the language to address the problem?”

    >Dennis

    • There are two points, however, that ESR made in the post that are big problems going forward for Python:

      1) the library path hell (which burns me a LOT whenever I try to make Python applications for more than my own personal use)
      2) its multi-threaded performance for CPU-bound tasks, in an increasingly parallel world (our CPUs are growing in parallel capacity much faster than in single-thread performance).

      Python won’t die completely any time soon, much like perl isn’t completely dead. It’ll just be relegated to being a good alternative for shell scripts and other small tasks, rather than being used for anything non-trivial.

      Which is a shame because the language is so approachable and friendly in most other ways.

      • @Aaron Traas: Python won’t die completely any time soon, much like perl isn’t completely dead. It’ll just be relegated to being a good alternative for shell scripts and other small tasks, rather than being used for anything non-trivial.

        I agree with ESR’s points, and yours. The fundamental issue is that Python is increasingly unsuited to be a general purpose language for writing applications.

        The problem is that I don’t think the core Python devs understand that, or the fact that many folks are trying to use it that way and finding out the hard way that they can’t.

        One of the most used applications here is Calibre, an open source “Swiss Army Knife for eBooks.” It’s cross-platform, available for Windows, Linux, and MacOS. The developer, Kovid Goyal, is also a maintainer of the v2 Python branch and has nothing good to say about Python v3. The last I looked, he had an experimental branch with Calibre being rewritten in C# to get around some of Python’s limitations. I’m watching that with fascination.

        >Dennis

    • “…If you hadn’t created Reposurgeon and watched Python fall over when attempting to deal with really huge repos, you might still be happily coding in Python.”

      I think this misses the larger point…

      Python was my language of choice for 15 years. But in the last 2 years I’ve largely abandoned it for Go. The tl;dr version is simply that Go does almost everything better than Python. Stated another way, if I pick Python over Go I gain very little and lose a lot. If I pick Go over Python I gain much but lose very little.

      I don’t wish to be a fanboi of Go. If there was something better I’d use it. But the simple fact is that the annoyances of Python steadily accreted over the years to where I was going to use something else.

      None of which is to say there aren’t things I miss from Python. But the big wins with Go tipped the scale pretty far.

      • @Michael: I think this misses the larger point…

        I don’t wish to be a fanboi of Go. If there was something better I’d use it. But the simple fact is that the annoyances of Python steadily accreted over the years to where I was going to use something else.

        I got the point, and largely agree. My basic question was “What drives language choices, and what sort of questions should devs ask when presented with a problem before they choose the language to address the problem?”

        Python was good enough for many years that it got thought of as a general purpose applications language rather than a scripting language, and got away with it because it mostly wasn’t used for stuff that would encounter its limitations the hard way.

        Now, Go seems to be getting the nod Python used to get as general purpose applications development language, and Python is back to being a scripting language.

        (And as an aside to an earlier poster who put Lua into the same category, I can’t agree. Lua is neat, but you can’t write stand-alone applications in it. It’s specifically intended to be embedded in a stand-alone application. You can write stand alone applications in Java and Python. The question is whether you should.)

        >Dennis

        • > And as an aside to an earlier poster who put Lua into the same category, I can’t agree. Lua is neat, but you can’t write stand-alone applications in it.

          Sure, and that isn’t what I was saying. The discussion was on modern BASIC analogues and Python’s fitness to that set; the same person also mentioned JavaScript’s taking over from LISP Machines due to JS engine proliferation, and someone else responded that JS fits the BASIC-analog classification better than Python. At which point I stated that, if that’s how we’re looking at it, Lua could be a better BASIC analog than JS for reasons of mechanical transparency.

          Otherwise, I say Python is indeed the better fit for that set.

        • “Python was good enough for many years that it got thought of as a general purpose applications language rather than a scripting language, and got away with it because it mostly wasn’t used for stuff that would encounter its limitations the hard way.”

          Agreed. Maybe the point I was trying to make is that I didn’t slam into a rock-hard performance wall like ESR did. My stuff never taxed Python that way. What I got was the death by a thousand cuts of trying to hack my way thru the brambles of the Python ecosystem and eventually it was one thorn too many.

          Maybe it’s simply that I tried to force Python into being, as you said, a general purpose applications language because the likely alternatives (Java, C++) were just too horrible to contemplate.

          • > What I got was the death by a thousand cuts of trying to hack my way thru the brambles of the Python ecosystem and eventually it was one thorn too many.

            Python’s solution to this problem appears to be virtualenv. For running back end apps on the major cloud providers (the ones I have direct experience with are AWS and Heroku), that seems to work fine: you pick a target Python runtime, run pip freeze in your local virtualenv to build requirements.txt, push it, and let the cloud provider’s build system do the rest.

            The extra overhead on the developer is having to work inside the virtualenv locally and have a separate one for every project. At first I found that somewhat bothersome, but I got used to it fairly quickly. The advantage is that it entirely decouples your particular Python app from the vagaries of your distro’s Python packaging vs. pip vs. whatever else; your app is in its own environment and doesn’t care what’s installed on the rest of the system or how gnarly it is to find it.

            • “Python’s solution to this problem appears to be virtualenv.”

              I’m well familiar with virtualenv. It certainly helps with certain problems, but also brings some of its own. At the very least it’s another layer of tooling/complexity that consumes the developer’s valuable time & attention. Maybe it has gotten better, but the whole thing was very brittle for years. I can’t count the number of releases of pip & setuptools that were just plain broken.

              I think programming languages ought to be rated by the number of additional tools & packages you have to download / install / learn / maintain / fight with in order to be productive. Go is up near the lead, Python is pretty far back, while Javascript is lying face down in the mud at the starting gate.

              • Maybe it has gotten better, but the whole thing was very brittle for years.

                I think it has gotten better recently, but you’re right that it was broken for years.

  21. I am shocked, shocked to find out that a dynamic scripting language is outperformed by a traditional compiled language.
    Opening point: Go is great. It’s like C with Python syntax.
    As to the death of Python, I think these statements bring up a question: what reasons was it chosen for when the project started? Are those reasons actually any less true for people starting projects today? If one cared about performance, they wouldn’t choose Python. If they care about convenience, Python remains easier than Go.
    Minimizing global locks is just a good idea, though.

    • >What reasons was it chosen for when the project started?

      Combination of GC with a rich type ontology. I knew I was going to need both. Besides, it had been my first tool to hand since 1998.

  22. This is kinda what I was trying to get at in the last discussion on the topic. Is Go literally a scripting language? Well, that’s a definitional question. Is it closer than someone raised in the 1990s-era distinctions might realize from the initial read down the feature set of Go? Is it close enough to significantly eat into the market share of things that definitely are scripting languages? Yes.

    Python is still faster to bash together a 100-line script in. But I find that I get to the crossover point in about a week, where even the prototyping process is faster in Go than in Python. I have a lot more refactoring power and confidence in Go and that starts multiplying over the course of a serious project. The resulting prototype is also closer to production quality because I’ve been better able to refactor as I went.

    • ” Is Go literally a scripting language?”

      I’d be wholly resistant to calling Go a scripting language. But that’s mostly because that term is generally used as a pejorative.

      On the other hand, I’m finding myself using Go for things that would have been done in bash in the past. In fact, my current project is replacing a largish bash script with a proper Go program. And I like it. Feels … powerful.

    • @Jeremy Bowers: This is kinda what I was trying to get at in the last discussion on the topic. Is Go literally a scripting language? Well, that’s a definitional question. Is it closer than someone raised in the 1990s-era distinctions might realize from the initial read down the feature set of Go? Is it close enough to significantly eat into the market share of things that definitely are scripting languages? Yes.

      My basic assumption is a scripting language is one that does not have to be compiled to native code to run.

      Early script languages were interpreters, like the Unix Bourne shell that was the antecedent of bash. In that sense, BASIC could be considered a scripting language. It took some time after BASIC became popular for BASIC compilers to appear, but you could still have it interpreted interactively, then compile when you had it working as desired for production deployment.

      Definitions are blurring. Java and Python both compile to bytecode binaries, actually executed by the language runtime. The advantage is that the binaries are cross-platform, and target a virtual CPU implemented by the runtime. The runtime abstracts away the underlying platform differences.

      JavaScript began as something interpreted by a JavaScript engine in the browser, but as usage expanded and JavaScript routines got larger and libraries were involved, we started to see JIT compilers that compiled JS to machine code on the user’s system to get performance.

      Now we are seeing JavaScript as the output from compilers for other languages like C. You may not actually compile that to machine code for the target, because the target may well have a JavaScript engine like Chrome’s V8 or Mozilla’s IonMonkey that does JIT compilation installed, so you just send the JS to the target and let the resident JS engine output the machine code for you.

      I don’t think of Go as a script language, since you are still compiling to native code for deployment, and you don’t have the convenience of an interpreter step to test your code (or just interpret because you don’t need the speed of compiled code for what you are doing.)

      As Go becomes more popular and more devs become fluent in it, it can displace actual script languages because it has the speed and power to handle stuff the script languages fall down on, but the scripting languages won’t go away. The stuff they are useful for will still exist and they will be the right tools for those jobs.

      The question, as usual, is just what the job is, and what tools are the right ones to apply. Not properly understanding the problem domain and choosing the wrong tools in consequence has been the death of unnumbered projects.

      >Dennis

      • My basic assumption is a scripting language is one that does not have to be compiled to native code to run.

        Ask ten developers what a scripting language is and you’ll get at least three distinct answers. So, it’s not as pessimal as it could be (ask ten developers what OO is and you’ll get about 15 answers), but it’s still a fairly fuzzy term for sure. So I’m not at all passionate about whether we apply the fuzzy label to Go. I’m passionate about more people realizing what Go actually is, so we break down the barriers between “scripting” and “system” language and get more languages playing in this space.

        As much as I like Go, I still definitely have some non-trivial quibbles with it and want to see more people playing in this space of static languages that are almost as convenient as scripting languages, but still have the benefits of static languages too, without being as expensive to work with as Rust. (Which also absolutely has its place in the language landscape and I really like it too. But it’s not playing in the same space as Go.) Like Nim and Crystal, and hopefully a dozen more in the future. Neither of those have enough draw to drag me away from Go today. But I’m definitely happy to be dragged away later by something better.

        Also Go is not done developing. I’m far from the “Go Is Useless Without Generics” camp, but there definitely are places I’ve both wished for the feature, and wished I could download libraries that use it. (One of Go’s biggest holes, IMHO, is the difficulty of just waltzing on to GitHub and grabbing a binary tree library, or an immutable $ANYTHING library, and having it work well, because without generics such libraries basically can’t exist.) If Go does get that, it will be that much more difficult to pry me away.

        (Especially as it has become clear that the Sufficiently Smart Compiler, or its modern incarnation as the Sufficiently Smart JIT, isn’t going to happen and the old scripting languages are going to plateau at 10x slower than C or so, with frequent spikes to worse performance than that, all the while chewing through your RAM to get that fast. And while it’s an accident of history and I don’t believe dynamic languages are fundamentally impossible to have good threading in, it is still the world we live in that all the good dynamic languages are basically impossible to get decent threading out of.)

        • Things I dislike about Go:

          1) Lack of generics. Go isn’t useless without generics, but their glaring lack just makes you go “oh, come on, there’s no fucking excuse for this”. A ton of CS has been done since C hit it big, there are many implementation strategies for deriving classes of types from other types, pick one and get on with it.

          2) Stupid package naming. In Java, domain-name-based package naming was advisory; I could name my package “come.on.fhqwhgads” if I so chose, as long as it didn’t clash with any libraries I was using. In Go, domain-name-based package naming is mandatory if you want your package to be go-get-able. Which means that you have to decide, up front, whether and where to publish your package online — and God help you if your favorite VCS-hosting service goes the way of SourceForge.

          3) Until Go 1.11 — no versioning. When you ‘go get’ something, it’s literally the latest thing out of GitHub or wherever.

          I just… get the feeling that the Go community doesn’t want to invest in repo infrastructure, and that’s why they’re like “oh, just use github — see, it’s even built into go get” — but they don’t allow the user to change the defaults for ‘go get’. The Go toolchain is opinionated, and if you disagree or find their solutions inadequate — tough.

          It’s kind of a shame because Go thrives in its niche — network applications. It’s not really a systems programming language (having a GC rules it out for that), but it very handily replaces C in one of C’s traditional uses: applications, utilities, and services running on a Unix substrate.

          • >A ton of CS has been done since C hit it big, there are many implementation strategies for deriving classes of types from other types, pick one and get on with it.

            I’m involved in the generics debate on go-nuts. Designing a generic-type system that meets Go standards of simplicity is a much harder problem than you think it is. Sure, it’s easy if you’re willing to encumber your language with lots of additional keywords and special semantics for representing interface contracts, disconnected from anything else in the language. The Go devs aren’t willing to do that and I applaud them for it.

            >In Go, domain-name-based package naming is mandatory if you want your package to be go-get-able

            Yes, this sucks if the residence node of one of your dependencies goes poof. Welcome to distributed systems on unreliable hardware. There is no good solution to this problem short of blessing one central package repository and investing enough in it that it never goes down. Oh, look, now you have a single point of failure! Similarly, the only way you avoid domain-name-based package naming is by having a single-point-of-failure name registry. The Go policy isn’t abdication or laziness, it’s a refusal to pretend that reality is not what it is.

            • Twenty years ago I met C++ templates, and it gave me something I’d been wanting for years — a way to write a standard data structure or algorithm ONCE and be done with it, instead of having to rewrite it for every new combination of types. Ever since then templates/generics have been a must-have feature for me in a statically-typed language. I don’t ever want to go back to the bad old days.

              So Go has always been dead on arrival for me.
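
              For anyone who hasn’t felt this pain, here is roughly what the pre-generics Go workaround looks like – a hypothetical container that falls back to interface{} and moves type checking to run time, which is why earlier commenters say useful generic libraries basically can’t exist yet:

                package main

                import "fmt"

                // Stack is a "generic" container, pre-generics style: it stores
                // interface{} values, so the compiler cannot check element types.
                type Stack struct {
                    items []interface{}
                }

                func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

                func (s *Stack) Pop() interface{} {
                    v := s.items[len(s.items)-1]
                    s.items = s.items[:len(s.items)-1]
                    return v
                }

                func main() {
                    var s Stack
                    s.Push(42)
                    s.Push("oops") // compiles fine; the type error is now the caller's problem

                    // Every Pop needs a run-time type assertion; get it wrong and you panic.
                    fmt.Println(s.Pop().(string))
                }

              Templates/generics let the compiler do that bookkeeping instead, which is the write-once property I’m talking about.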

  23. >extremely ill-advised decision to turn strings into Unicode code-point sequences rather than byte sequences

    I think the idea may have been to make it easy for business apps, for programmers who are not really that technical, so they can just take any input string from a web form, stuff it in a database, export it in XML, import it into another app, and have it still display right without really having to understand what is going on.

    For business, text is often a stinking mess you don’t really want to touch, just save and display without really doing anything with it. Before Unicode we had to deal with crap like someone calling from the Norwegian subsidiary because the uppercase Ø letters were not displaying properly, when you don’t even speak the language and have little idea what is going on.

    Unicode arrived and we gladly forgot about it all and basically started to treat text as black boxes we don’t look into, because we can’t – it can be Chinese or Thai or anything in a multinational corporation – so we just save and display and print it as entered, without touching it, and the interpretation is up to the users.

    They may have wanted to make it easy. They may have also botched that, I don’t know.

  24. FWIW, although it is surely a lot of fun to do the conversion, it seems to me to be a solution in search of a problem. You are having a problem with the conversion of the largest repository you can find? Surely that doesn’t demand a solution for the 99% that it works well for? To me the solution is to buy a little time on an AWS machine with an insane amount of memory and be done with it. Especially since it is a one-and-done program.
    I understand that the nature of quadratic growth doesn’t always make this a viable solution, but from the problems you describe it seems to be one here. Extra hardware is always cheaper than custom software.

    • > To me the solution is to buy a little time on an AWS machine with an insane amount of memory and be done with it.

      I hear this a lot.

      No, the cloud isn’t the answer. Cloud VMs have poor memory latency; that would make my test runs longer, not shorter.

    • > Especially since it is a one and done program.

      I think it’s worth being very clear that it is not a one-and-done program. It’s a tool that may require many runs to get just the right tuning for your use case, modulo the specific pathology to be wrestled. That tuning time should be maximally spent feeling out the edges of the problem, not waiting for processes to complete or die of resource starvation.

  25. > reposurgeon hit a performance wall on the GCC Subversion repository. 259K commits, bigger than anything else reposurgeon has seen by almost an order of magnitude; Emacs, the runner-up, was somewhere a bit north of 33K commits when I converted it.

    > The sheer size of the GCC repository brings the Python reposurgeon implementation to its knees. Test conversions take more than nine hours each

    These numbers boggle my mind. Why does reposurgeon have so much trouble with repos that are so small? Serious question.

    The Linux kernel is a “medium” git repo and it’s 3x larger. I work with svn and git repos that are 30x to 400x larger than Linux. I won’t count the 400x larger repo as it’s pretty unusable in git, but there’s still a two-order-of-magnitude gap between “the biggest known reposurgeon project, ever” and “repos that I routinely slice and dice as I move from project to project.”

    What is reposurgeon doing with all that time? What are the top three profiling hotspots? Which feature of reposurgeon contributes the most to the cost?

    I have to do really heavy editing (like, file-content-changes heavy) to get anything that isn’t SHA1 or zlib into the profiling top two. A full commit history dump on the Linux kernel takes 25 seconds; manipulating it according to the DVCS migration HOWTO and shoving it back into the git repo takes maybe 10 times that depending on the heaviness of the edits. Author and commit reference mapping requires a lookup table with fewer than a million entries – utterly trivial on modern hardware. A diffstat of the entire history is 3 million file records, it takes 14 minutes to run, and it involves less CPU work than SVN property and merge conversion. Still ~40x less than 9 hours…for a 3x larger repo.

    The gap is not because of the implementation language. I’m using shell commands and ad-hoc Perl scripts. Except for the git tools themselves, everything is slow and single-threaded, and somehow it all still runs rings around reposurgeon. One of us is missing something important.

    Perhaps there is some larger workflow issue? Do the 9 hours include svn-fast-export or cvs-fast-export runtime? Do you do a test conversion, say “whoops there’s a problem at commit 5432 out of 259000” and do the entire conversion again, instead of fixing it in-place in the output git repo? Is reposurgeon rewriting embedded RCS tags?

    • >These numbers boggle my mind. Why does reposurgeon have so much trouble with repos that are so small? Serious question.

      You think this is slow? Fact: even the Python version reads Subversion dumpstreams faster than the native Subversion importer does – and the latter is written in C.

      >What is reposurgeon doing with all that time?

      In the Python version the code that gets hammered seems to be object allocation and string-bashing. There were serious O(n**2) hotspots at one time, but Julien Rivaud and I smashed those out back in 2013.

      I’m going to be very interested to see where the usage spikes are in the Go version.

      • > You think this is slow?

        Yes, I think this is slow. git filter-branch feeds every commit to a shell instance for mangling, and doesn’t take that kind of time to rewrite the commit history even when the repo is much larger. If your conversion time is dominated by the performance of handling dumpstreams then I think I understand the 9-hour problem.

        I divide the conversion work into two phases: one just translates SVN literally into git, so I get 259000 commits on a single branch and a parallel branch containing a tree of metadata corresponding to each content commit. The second phase reads just the metadata and mangles it as required to rearrange the blobs into trees, branches and tags. I built it that way because the first phase that moves the blobs from SVN to git takes a few hundred to a few thousand hours on my repos (most of this is SHA1 and zlib and IO), but it never needs to be repeated once it’s done. The second phase (the one that needs some human input and seems to do the interesting things reposurgeon does) usually grinds through hundreds of thousands of commits per hour, though the “human input” thing usually means that it only gets to work in batches of fifty thousand commits or so.

        With my workflow, it’s never more than an hour per test run, and after a few repetitions of that hour that unit of work is done (assuming something horribly wrong isn’t discovered later), and I move on to the next one.

        When conversion is complete, I let git filter-branch remove grafts and git gc get rid of anything that was removed from history. I save the output of git show-ref as a checkpoint at every edit step (it’s also a handy log of the commands I used), and restore it if I do something wrong part way through.

        > In the Python version the code that gets hammered seems to be object allocation and string-bashing.

        Sure, those are the slow parts of Python, and therefore the slow parts of almost every Python program. What are the slow parts of reposurgeon? What’s allocating all the objects and bashing all the strings?

        • >What are the slow parts of reposurgeon? What’s allocating all the objects and bashing all the strings?

          Ask me again when the Go translation is done. One of the side effects will be better profiling tools. Python’s aren’t very good – or, at least, if there’s a way to get information as fine-grained as I need out of them I’ve never found it.
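
          (For the curious: the standard Go approach is the built-in runtime/pprof package plus the “go tool pprof” viewer. A minimal sketch of wiring a CPU profile into a batch program – hypothetical file and function names, not reposurgeon code – looks like this:

            package main

            import (
                "log"
                "os"
                "runtime/pprof"
            )

            func main() {
                // Record a CPU profile for the whole run; inspect it afterward
                // with: go tool pprof ./prog cpu.prof
                f, err := os.Create("cpu.prof")
                if err != nil {
                    log.Fatal(err)
                }
                if err := pprof.StartCPUProfile(f); err != nil {
                    log.Fatal(err)
                }
                defer pprof.StopCPUProfile()

                doExpensiveWork() // stand-in for the real workload
            }

            func doExpensiveWork() {
                // ... the code you actually want profiled ...
            }

          The resulting call-graph and per-function breakdowns are exactly the fine-grained view being asked for above.)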

          • > One of the side effcts will be better profiling tools. Python’s aren’t very good

            Fair enough. I badly miss perf on every platform that doesn’t have some equivalent.

          • > Python’s [profiling tools] aren’t very good – or, at least, if there’s a way to get information as fine-grained as I need out of them I’ve never found it.

            Stupid olde-skool trick? Build a private copy of the Python interpreter with all the debugging hooks in place (“-g” et al.); run it under gdb and set breakpoints at the places in the interpreter where you think the hotspots are.

            Obviously, this will be nearly an order of magnitude slower than “real” Python, so you should do this with your second-gnarliest repo conversion.

  26. Why can’t model airplanes use Diesel fuel like the Big Rigs?

    Python is a bytecode interpreter, not unlike Java. It CANNOT perform like a compiled language until/unless there is a native Python chip (which won’t exist – there are Java, Forth, and Pascal chips, even if obsolete). Saying a compiler is 40x more efficient than an interpreter is comparing a skateboard to a muscle car.

    That said, Python jumped the shark with 3.0. Forget all the other stuff: division is a choice between truncation and precision. When X/Y on ints is 1/3, either you know it will be truncated and the modulo might be needed, or, if you convert it to some kind of float, it won’t be exact, as in 0.333 (×3 = 0.999, not 1.0!). Python made a bad choice here.
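
    (For contrast, a minimal Go illustration – plain standard language behavior, offered only because Go is where this thread is headed: Go hangs the same choice on the operand types rather than on a 2-vs-3 language split, so the truncation is visible in the source.)

      package main

      import "fmt"

      func main() {
          fmt.Println(1 / 3)     // 0: integer operands give truncating division
          fmt.Println(1 % 3)     // 1: the lost remainder is recoverable with modulo
          fmt.Println(1.0 / 3.0) // 0.3333...: float operands give an inexact result
      }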

    Still, for a quick program, prototyping, or low load programming Python optimizes programmers’ brain time.

    I think Python needs to be forked. Maybe Quocatyl (think feather Boa) where one would be more geeky, C-like, hackers need to know, and the other will go toward what BASIC was.

  27. Go won’t eat their lunch. Go is still not mainstream. C#/dotNet is still more of a force.

    What are iOS or Android apps coded in? Go? Anywhere?

    Webserver backends?

    To extend what I said above, I agree Python is at a crossroads. Does it want to be Java, Javascript, BASIC, or does it want to be C or Go? Or will it fork?

    My sole objection to Go so far is that there should be the language, and there should be the standard library, like C and libc. Printf is NOT part of the C language.

    Go’s memory allocation with GC is part of the lang, not the lib. That is violating a boundary that should never be crossed.

    • Wherever you draw that line is going to be arbitrary. Unless you say the minimum to be Turing complete, but then the only thing C supported was function calls, subtraction, and the ternary operator. I do lean toward a minimal language set. I love C, but I’ve always thought they missed the boat with things like memcpy being a function. Moving data structures around should be part of the language, because there are so many assumptions that can be made there that a true memcpy function can’t make. And that is why it usually ends up a built-in in many versions of C.
      Most languages have some way to allocate memory that is not a function call. Should C have a stack_malloc() function that has to be called to dynamically allocate stack memory for local variables? Or a compile-time static_malloc() for statically allocated heap memory? The fact that dynamically allocated heap memory comes from a function call is more of the special case. And can anyone imagine statically allocated stack memory? Forth, maybe?

      • I believe C now allows variable length automatic arrays. And of course alloca, which may be a Unix thing rather than C.

  28. @Amy Bowersox > Somehow, I don’t think Our Host would touch C# with a ten-foot pole, either, and I would guess that that reasoning is largely based in the fact that it is Micro$oft ick. Would that “hobble” him as a developer?

    If he chooses not to use a tool because of who made it, yes, it would, but I think he may surprise you. Eric has always been a technologist first. I’ve known him since before he became famous and still had a $DAYJOB.

    The question is “What is the right tool for the job?” and the problem is determining just what the job is. Failure to properly understand the problem to be solved has doomed many projects.

    > Also, ever heard the aphorism “the leopard doesn’t change its spots”? I’m sure the EEE strategy is still in the back of the minds of many at Micro$oft.

    The leopard may not change its spots. People do. So do companies. That you don’t wish to believe that does not make it untrue.

    I’ve been using computers and watching this since Microsoft was the developer of a version of BASIC, and was asked by IBM to come up with an OS for the original IBM PC. Back then, IBM was the Evil Empire, and trade publications had stories about IBM sales reps threatening to have DP managers fired if they didn’t specify IBM. Microsoft was small potatoes indeed. It grew from there.

    MS is a publicly held company, and the key metric the financial markets care about is the price of the stock. MS had posted regular double-digit increases in revenues and profits, and got a stock price in the stratosphere. At one point, Bill Gates was the world’s richest man, based on the value of his MS stock. He stepped aside at an opportune moment – he went out a winner.

    It became COO Ken Ballmer’s new mission as CEO to support the price of the stock, which was a challenge as the market was changing and MS’s business model had to change too. What the financial markets reward is growth, and the question was where growth would come from. MS had been the poster child for a growth company, but was in the process of becoming a mature company. Mature companies throw off great gobs of cash, but don’t have stock prices in the stratosphere.

    The market for Windows was saturated – pretty much every machine that could run Windows did. The market for Office was similar. There was revenue in upgrades and replacements, but far fewer new sales. Where would growth come from?

    These days, the money is in cloud services, and MS must co-exist in a multi-architecture marketplace and play nice and cooperate with everyone, and it knows it.

    > I’m sure the EEE strategy is still in the back of the minds of many at Micro$oft.

    It may be, but if they want to keep their jobs they won’t put it into practice. MS CEO Satya Nadella is firmly behind the new approach, and he’s not going anywhere. MS’s numbers are good, thank you, so the board is unlikely to fire him.

    And as an example of how much MS has changed, see https://www.zdnet.com/article/microsoft-open-sources-its-entire-patent-portfolio/

    It wasn’t all that long ago that most folks would have considered that unthinkable.

    Like I said, it’s about the money, and the way to make it these days in the tech market is pretty much the opposite of the old days of account control and locking in the customer.

    > As for OpenJDK, the blog post I linked to was intended as a warning and a pointer to OpenJDK for those that haven’t yet gotten the message. It just played up the fact that, by subtly changing the license on the downloads that people have “always” gotten from them, Oracle is trying to pull a fast one. Easily evaded once you see it for what it is, but a fast one nonetheless.

    I concur, but it’s on the developer to keep up and be aware. I’d be a bit surprised if readers here who develop in Java for a living and deploy commercial installations weren’t aware of it.

    (And trying to pull fast ones has been part of the tech marketplace for as long as I’ve been paying attention, which is over 30 years. I find a lot of it hilarious.)

    My basic objection to your original post is that it comes across here as “Microsoft? Ewww! I’ll get cooties!” No, you won’t, and the sooner you shed that notion the better off you’ll be.

    >Dennis

    • >If he chooses not to use a tool because of who made it, yes, it would, but I think he may surprise you. Eric has always been a technologist first.

      I don’t trust closed-source tools, period; the downstream risks of becoming dependent on them are too high. This is me speaking as a technologist – it’s not a Microsoft/anti-Microsoft issue.

      I don’t know about C#. Is the toolchain open-source? If so, it is theoretically possible that I might use it. Though unlikely.

        • I know a guy who was a college professor. He trained his students using Silverlight and built an amazing architectural application dependent on MS databases and whatever constituted Silverlight’s “special sauce.” It could literally take a building apart into pipes and wires and bricks, kind of like what we saw on Max Headroom, but in 3D color.

          Then MS gave Silverlight the axe. Goodbye student’s prospects for work. Goodbye miraculous application.

          When it comes to proprietary programming languages, the best things in life are free.

      • > I don’t trust closed-source tools, period; the downstream risks of becoming dependent on them are too high

        You mean like becoming dependent on Python, or GCC?

      • As others stated, the .NET and C# toolchain is open source now, but you’re probably honestly better off using Java; C# was based on Java in the first place, and Java has much better support in the open source world, in particular for running on non-Windows OSes – which C# is capable of, though much of its ecosystem still depends on Windows.

    • Steve Ballmer’s name is Steve Ballmer.

      He’s not cool enough to be named Ken.

      (He’s not cool enough to be named Steve either, but this is where we are.)

  29. @esr: I don’t know about C#. Is the toolchain open-source? If so, it is theoretically possible that I might use it. Though unlikely.

    C# and F# and other things are part of the .NET framework, so the answer is probably yes. MS has open sourced .NET, and MS engineers are major contributors to the Mono project.

    This means it should be possible to write cross-platform code in C# that runs on Windows and Linux, because the code is actually executed by the .NET framework.

    You may not ever use it, but there isn’t a reason you shouldn’t beyond “wrong tool for the job”.

    I’m less fussy than you about whether to go closed or open source.

    The issue with closed source and proprietary is “Will the vendor still be around and maintaining and supporting it X years from now?” I don’t expect MS to suddenly wither and blow away.

    The issue with open source tends to be direction, and whether the project gains enough traction that someone other than the original devs gets involved and can pick up the reins if the founders leave. Most open source projects never gain traction and become abandoned.

    The things that are making you transition Reposurgeon to Go are examples of what I mean by direction. Guido has stepped back from actively leading Python development, and the folks holding the reins now don’t seem to realize the stakes of the game they are playing. Since their living probably isn’t dependent on it, what sort of club might get their attention?

    An advantage to commercial software that is sold for money is that the folks selling it have an incentive to pay attention to the customers, fix bugs in a timely manner, and implement features the customers want, because if they don’t, people stop buying their software and they go under.

    A fundamental problem with open source is the disconnect between the developers and the folks that use it (cough Mozilla cough…)

    And most things I can think of that are now open source are what happens when software becomes a commodity, and there’s no longer a way to make a decent profit selling it.

    OSes and development toolchains are now commodities, so…

    >Dennis

    • I was sitting here pondering a sci-fi reference to C#.

      Note: I use C# and IronPython a lot.

      C# and its relationship to .NET is like War of the Worlds. C# represents the weak-ass martians in the giant walkers, and .NET represents the walkers themselves.

      C# is a wonderful language. Its recent updates to allow basically dynamic-like typing with the ‘var’ keyword, along with the ‘dynamic’ keyword, make it a strongly-typed pseudo dynamic-typed C++-like language, with GC, on a VM.

      But it’s forever chained to the monolith that is .NET. If you shed the .NET framework, you shed all of the usefulness of C#. It’s still a fun language to use, but the utility of it is gone. C# requires .NET.

      This doesn’t bother me, since I’m strongly tied to the current business environment using exclusively windows computers, but it’s still an issue with C# as a language, which probably won’t ever get resolved.

      • Just like Java is chained to the monolith that is the JVM. With Roslyn/Mono there isn’t really a big difference other than C# being better than Java at the same job.

        I often write code that generates and emails reports as .xlsx files and I don’t have a bad conscience about it. It is a zipped XML file, an ECMA standard, well documented, and implemented everywhere, from most Android phones coming with Excel installed on them to LibreOffice being able to read and write it. What speaks against it? Well, maybe that it can be overkill. But a report of 28 (long) lines comes to an 11-kbyte file. Not really a hog to email. Unzipped, 68k, of which 29k is the actual data. The rest is mostly style. Yes, I like manager reports to have some style and not just be plain csv files. Why not? Any Linux server can parse it with openpyxl etc.

        And if it hadn’t had one? After unzipping, the data is in sheet.xml, with a really basic structure telling you that the value of the H15 cell is 1062. If it was a string, one more step is required: reading sharedStrings.xml. Parsing it is basic entry-level programming homework.
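
        As a rough sketch of how shallow that job is in Go (hypothetical file name, shared strings and error paths abbreviated), an .xlsx reader is little more than archive/zip plus encoding/xml:

          package main

          import (
              "archive/zip"
              "encoding/xml"
              "fmt"
              "log"
          )

          // Minimal slice of the sheet.xml schema: rows of cells, each with
          // a reference like "H15", a type ("s" = shared string), and a value.
          type sheetData struct {
              Rows []struct {
                  Cells []struct {
                      Ref  string `xml:"r,attr"`
                      Type string `xml:"t,attr"`
                      Val  string `xml:"v"`
                  } `xml:"c"`
              } `xml:"sheetData>row"`
          }

          func main() {
              z, err := zip.OpenReader("report.xlsx") // an .xlsx is just a zip archive
              if err != nil {
                  log.Fatal(err)
              }
              defer z.Close()

              for _, f := range z.File {
                  if f.Name != "xl/worksheets/sheet1.xml" {
                      continue
                  }
                  rc, err := f.Open()
                  if err != nil {
                      log.Fatal(err)
                  }
                  var s sheetData
                  if err := xml.NewDecoder(rc).Decode(&s); err != nil {
                      log.Fatal(err)
                  }
                  rc.Close()
                  for _, row := range s.Rows {
                      for _, c := range row.Cells {
                          // Cells with t="s" hold an index into sharedStrings.xml.
                          fmt.Println(c.Ref, c.Val)
                      }
                  }
              }
          }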

        No, I really don’t feel bad about this at all. Should I? It definitely does not look like vendor lock-in at all.

      • “C# is a wonderful language. … But it’s forever chained to the monolith that is .NET.”

        A pity, that.

        Having spent many days trying to fix broken installs of .NET on some rattle-bang Windows box, I came to despise the very smell of it.

        Microsoft can do really good work when they want to. The design of C# testifies to that, while .NET is the counterexample.

        • I think the root cause of that bad experience is Windows DLL hell rather than any fault of .NET itself. It’s risking your sanity to look under the covers of how Windows 10 or recent server versions manage this phenomenon, but from the perspective of the user, being able to install programs that have hard and conflicting shared-library requirements (and not just .NET or DirectX) and have them Just Work is pretty nice.

  30. > That “actual objects” qualifier is important because there’s a substantial scientific-Python community working with very large numeric data sets. They can do this because their Python code is mostly a soft layer over C extensions that crunch streams of numbers at machine speed. When, on the other hand, you do reposurgeon-like things (lots of graph theory and text-bashing) you eventually come nose to nose with the fact that every object in Python has a pretty high fixed minimum overhead.

    Have you tried accelerated graph libraries, like Graph-Tool library (with C++ backend), or SNAP.py (Python wrapper around Stanford Network Analysis Platform library)? Those may have the same advantages as using NumPy for vectorizable data.

    There is also Numba – a Python compiler, though it is geared mostly towards the same work NumPy and SciPy are best at (and is compatible with NumPy). Perhaps a reorganization of data, from the commonly used “array / list of structs / dicts” to the easier-to-vectorize “struct of arrays” (also known as Inside-Out Objects), would help.

    > The most notable one at the time was the Python team’s failure to solve the notorious GIL (Global Interpreter Lock) problem. The GIL problem effectively blocks any use of concurrency on programs that aren’t interrupted by I/O waits. What it meant, functionally, was that I couldn’t use multithreading in Python to speed up operations like comment-text searches; those never hit the disk or network.

    Don’t alternate Python implementations like PyPy and Cython avoid the GIL?

    • >Have you tried accelerated graph libraries, like Graph-Tool library (with C++ backend), or SNAP.py (Python wrapper around Stanford Network Analysis Platform library)?

      No, and it’s not going to happen now with the pure Go translation at 56%.

      Anyway, I would have taken a lot of persuading that overcoming the impedance mismatch at the Python-C++ boundary was easier than just translating the whole program into one other language. Seams like that are hell on your complexity budget and downstream defect rates.

      >Doesn’t alternate Python implementations like PyPy and Cython avoid GIL?

      No. If they did, you’d hear a lot of buzz about concurrent Python on these platforms; ain’t happening. The PyPy people are working on the problem, though, which is more than one can really say about the CPython crew.
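
      To make the stakes concrete, here is a minimal, hypothetical Go sketch of the comment-text-search pattern mentioned in the post: a CPU-bound fan-out that goroutines spread across all cores, and that CPython threads would serialize on the GIL. (Names are illustrative, not reposurgeon internals.)

        package main

        import (
            "fmt"
            "runtime"
            "strings"
            "sync"
        )

        // grepComments scans comment texts for a substring on all cores.
        // In CPython, threads doing this would serialize on the GIL.
        func grepComments(comments []string, needle string) []int {
            var (
                mu   sync.Mutex
                hits []int
                wg   sync.WaitGroup
            )
            workers := runtime.NumCPU()
            chunk := (len(comments) + workers - 1) / workers

            for w := 0; w < workers; w++ {
                lo := w * chunk
                if lo >= len(comments) {
                    break
                }
                hi := lo + chunk
                if hi > len(comments) {
                    hi = len(comments)
                }
                wg.Add(1)
                go func(lo, hi int) {
                    defer wg.Done()
                    for i := lo; i < hi; i++ {
                        if strings.Contains(comments[i], needle) {
                            mu.Lock()
                            hits = append(hits, i)
                            mu.Unlock()
                        }
                    }
                }(lo, hi)
            }
            wg.Wait()
            return hits
        }

        func main() {
            comments := []string{"fix typo", "merge branch", "fix GIL workaround"}
            fmt.Println(grepComments(comments, "fix")) // e.g. [0 2]; order may vary
        }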

  31. Any news on this?:

    http://archive.4plebs.org/pol/thread/187078170/#187093805
    > Anonymous ID:sqkjzTNz Thu 27 Sep 2018 00:23:36 No.187093805 ViewReport
    > Working with a group of 5 other devs thus far to rescind our code. Some of whom have been major contributers.
    >Was hard to find a decent attorney with tech based experience due to the fact that a lot of heavy weighters are going to be extremely upset. Drafting legal docs to make it as impactful as possible.
    >
    >All that work and you fuck us over, Linus. Now we fuck you back.

    ——

    http://archive.4plebs.org/pol/thread/187078170/#q187094037
    > leiqdbSauceNAO Screen Shot 2018-09-26 at 8.24.1 (…).png, 136KiB, 458×928
    > Anonymous ID:zDdj02sY Thu 27 Sep 2018 00:25:31 No.187094037 Report
    > Quoted By: >>187094386 >>187094624 >>187099544
    >Stop fucking commenting as if this would ground the internet to a halt overnight. Revoking code means it can’t be in the next version, it doesn’t invalidate prior versions. 80% of the code submitted to the Linux project is from Redhat or some other major company like Intel or AMD. That other 20% is minor shit except for the few people who have been around forever. This fucking threat is hollow because no company is going to pull their code because of a few trannies. This CoC thing will be ignored because big money rests on the project being successful. The moment this fucking dipshit starts becoming a liability the big money will smite him full force.

    ——

    http://archive.4plebs.org/pol/thread/187078170/#q187094386
    > Anonymous ID:sqkjzTNz Thu 27 Sep 2018 00:28:48 No.187094386 Report
    > Quoted By: >>187104619
    > >>187094037
    > >except for the few people who have been around forever.
    >
    >Ding ding.

    ——————

    https://lulz.com/linux-devs-threaten-killswitch-coc-controversy-1252/
    >thejynxed
    >
    >My company is already considering the full withdrawal of all contributed code to the kernel project and related embedded kernel projects. You literally can’t run embedded Linux on industrial controls or handheld scanners without this code.
    >September 22, 2018 Reply

    • If there’s even a hint that this sort of thing is even feasible, watch Oracle pull another fast one and make OpenJDK go bye-bye. Then, everyone will have to pay for Java licenses.

      And that’ll be just the beginning. This is a can of worms you do not want to open up.

      Is the freedom to be a dick online and still contribute to the kernel really worth it?

      • >Is the freedom to be a dick online and still contribute to the kernel really worth it?

        Preventing totalitarians from getting the control they crave is always worth it.

        • One of the linchpins of open source is that once released under an open-source license, that code remains open in perpetuity. I don’t think even you would want to cede that ground for an opportunity to stick it to some of your hated enemies — because you won’t be getting that back, and once companies like Oracle sniff out that there’s money to be made by rescinding and charging license fees for code that they once offered in good will to the community, they won’t hesitate to do so for a second.

          To quote one of fiction’s greatest military space commanders — it’s a trap!

          • That may be true in spirit, but recall that the FSF found it necessary to make irrevocability explicit in GPLv3, section 2 of the Terms and Conditions; that may be taken as evidence that the threat has legal teeth. It has yet to be tested.

          • The passion is like a flower.
            It must be supported, but not held too fast.
            Kept from the burning sun, but not heralded into the night everlasting.

            All things, within themselves, bear the seeds of their own destruction.
            The Free Software movement, in its error, imagines every user a
            creator. We know this not to be true. In that, the true progenitors are
            discounted and unworthened. Imagined to be replaceable, to be thrown
            away if irredeemable: the irreplaceable artifact. Their departure mistaken
            for a familiar voyage seen before; whereas it is a final farewell.

            Opensource, too, has its deadly poison. One we see sprouting its
            leaves in tandem with the social ills of its predecessor and
            compatriot. Opensource, to be enthralling to the business class, flew
            close to that hidden master; for behind this class lies not the
            freedom of choice: of the market: of merit. Behind the business class
            lies the leaded hand of our modern fates. The rulers who show not
            their face but sow our futures.

            Here today, we stand before the confluence, by design, of these two
            fatal seeds: their sprouting arms intertwined. Let them grow and they
            will strangle what we loved, what we laboured for. Every victory turned
            instead to a detriment: that which was a labour of love: now the chain
            that would bind us to our malefactor.

            Shall we continue down the path that has, before us, the bricks
            already set? That which we are meant to go down, for it is the easiest
            path, the one which uses our body of works against our weal?

            Or shall we do what must be done? Must that which is corrupt bake in
            the sun? Must our child be tortured, turned against his parent, turned
            into a snake? Must we submit to those who would rule us? Whom we
            struggled against (but have forgotten in their embrace)? Our enemy who
            beseeches us that he is our friend?

          • >One of the linchpins of open source is that once released under an open-source license, that code remains open in perpetuity.

            Misappropriation of the developer’s code resulting in harm to his reputation, or compromising the reputational incentives offered by open source, is a cause of action under Jacobsen v. Katzer. This is a separate issue from revocation of the license.

            • This is indeed one of the blatantly stated goals: to shame and bring disrepute to a disobedient programmer.

          • >One of the linchpins of open source is that once released under an open-source license, that code remains open in perpetuity.

            This is never stated in the “classic” opensource licenses
            (which are all very short documents, not drafted by lawyers:
            BSD, GPLv1, GPLv2, MIT, etc).

            It is just assumed by various people.

            That is a mistake under US law, since the property-law defaults are actually the opposite.

            It IS stated in the 2nd and 3rd generations of opensource licenses, the ones actually drafted by corporate counsel (of which the GPLv3 is one).

            >and once companies like Oracle sniff out that there’s money to be made by rescinding and

            They already know. This is why they usually insist on proper licenses without missing or omitted terms, as well as copyright assignments.

            The Opensource and Free Software movements might need to dissociate from the USA, however. I remember a time when the Opensource and Free Software movements had little love for the USA and openly flouted US encryption restrictions. At that time the movements were inhabited by the people, and not controlled by establishment interlopers.

      • > Is the freedom to be a dick online and still contribute to the kernel really worth it?

        Given that the SJW definition of “be a dick” is “doesn’t support my position loudly enough”, yes.

        Remember:
        “You cannot be civil with a political party that wants to destroy what you stand for, what you care about.”

        Remember which side keeps *starting* this shit.

        • The SJW’s definition of “being a dick” is “having one”.

          A white male cannot be hired at google unless he physically chops it off.

          America is a disgusting aberration.

          • >A white male cannot be hired at google unless he physically chops it off.

            No, most of their techies are still white and male. People who meet the IQ minimum for those jobs are too rare in non-white populations other than East Asians for it to be otherwise. And most white women who meet the threshold have sensibly decided they can have better lives somewhere other than software engineering.

            This brutal reality must gall the lefty/identitarians inside Google terribly. I think that pain explains why the reaction to James Damore was so vicious.

    • I’ll believe it when it happens. Until then, it’s just a bunch of whining which isn’t going anywhere.

  32. Doesn’t the GPL prohibit additional restrictive terms?

    Does appending a writing stating, in a multitude of words, “you cannot contribute if you are not a proud multi-positive feminist, and we will hunt you down and have your livelihood ruined if we deem words of yours offensive”, and requiring contributors to agree to said contract, constitute such an additional restrictive term in practice?

  33. Tried to run the script from the post on MicroPython:

    3.4.0
    dict 16
    float 12
    int 4
    list 32
    set 16
    str 17
    tuple 8
    unicode 17

    Of course, it wouldn’t help with ESR’s issue; MicroPython is an experiment in scaling Python *down*, not *up*.
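
    The probe script from the post isn’t reproduced in this comment, but a minimal sketch of its probable shape (my reconstruction, assuming it is built on sys.getsizeof; the exact names and layout are guesses) would print the interpreter version followed by name/size pairs like the output above:

      # Hypothetical reconstruction of the size-probe script; the original
      # in the post may differ. Prints the language version, then the
      # reported size in bytes of a minimal instance of each builtin type.
      # ("str" uses a bytes literal and "unicode" a text literal, mirroring
      # the 2.x-era naming in the output while staying runnable on 3.x.)
      import sys

      print(sys.version.split()[0])
      samples = [("dict", {}), ("float", 0.0), ("int", 0), ("list", []),
                 ("set", set()), ("str", b"x"), ("tuple", ()), ("unicode", u"x")]
      for name, value in samples:
          print(name, sys.getsizeof(value))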

  34. Writing in a high-level language and optimizing the hot paths in a low-level language (after measuring hard numbers) is how one does programming right. Everything else (no matter how smart(TM) the language/tool is, or how big the glasses its advocate wears) is nothing more than asking for less readable code, subtle bugs, premature optimization, reinventing the wheel, bad software architecture, or a combination thereof.

    • To be clear: when a project uses both a macro assembler like C and a zen language like Python, it’s not a workaround, it’s done on purpose. They solve different problems, and that’s what lets each be exceptionally good in its respective field of application.

      Sure, there are many people out there who miss the point and write C++ with Python syntax and Pascal with C syntax, but that’s not the fault of the language.
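
      A minimal sketch of that measure-then-descend workflow (mine, not code from either comment): here CPython’s C-implemented builtin sum() stands in for what a real project would do with a C extension, cffi, or Cython module:

        # Sketch of "measure, then move the hot path to a lower level".
        # The C-backed builtin sum() stands in for a real C extension.
        import timeit

        def py_sum(n):
            total = 0
            for i in range(n):  # hot loop interpreted by the Python VM
                total += i
            return total

        n = 1_000_000
        slow = timeit.timeit(lambda: py_sum(n), number=10)
        fast = timeit.timeit(lambda: sum(range(n)), number=10)  # loop runs in C
        print("pure Python: %.3fs  C-backed: %.3fs" % (slow, fast))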
