Reposurgeon’s Excellent Journey and the Waning of Python

Time to make it public and official. The entire reposurgeon suite (not just repocutter and repomapper, which have already been ported) is changing implementation languages from Python to Go. Reposurgeon itself is about 50% translated, with pretty good unit-test coverage. Three of my collaborators on the project (Daniel Brooks, Eric Sunshine, and Edward Cree) have stepped up to help with code and reviews.

I’m posting about this because the pressures driving this move are by no means unique to the reposurgeon suite. Python, my favorite working language for twenty years, can no longer cut it at the scale I now need to operate – it can’t handle large enough working sets, and it’s crippled in a world of multi-CPU computers. I’m certain I’m not alone in seeing these problems; if I were, Google, which used to invest heavily in Python (they had Guido on staff there for a while), wouldn’t have funded Go.

Some of Python’s issues can be fixed. Some may be unfixable. I love Guido and the gang and I am vastly grateful for all the use and pleasure I have gotten out of Python, but, guys, this is a wake-up call. I don’t think you have a lot of time to get it together before Python gets left behind.

I’ll first describe the specific context of this port, then I’ll delve into the larger issues about Python, how it seems to be falling behind, and what can be done to remedy the situation.

The proximate cause of the move is that reposurgeon hit a performance wall on the GCC Subversion repository. At 259K commits, it’s bigger than anything else reposurgeon has seen by almost an order of magnitude; Emacs, the runner-up, was somewhere a bit north of 33K commits when I converted it.

The sheer size of the GCC repository brings the Python reposurgeon implementation to its knees. Test conversions take more than nine hours each, which is insupportable when you’re trying to troubleshoot possible bugs in what reposurgeon is doing with the metadata. I say “possible” because we’re in a zone where defining correct behavior is rather murky; it can be difficult to distinguish the effects of defects in reposurgeon from those of malformations in the metadata, especially around the scar tissue from CVS-to-SVN conversion and near particularly perverse sequences of branch copy operations.

I was seeing OOM crashes, too – on a machine with 64GB of RAM. Alex, I’ll take “How do you know you have a serious memory-pressure problem?” for $400, please. I was able to head these off by not running a browser during my tests, but that still told me the working set is so large that cache misses are a serious performance problem even on a PC design specifically optimized for low memory-access latency.

I had tried everything else. The semi-custom architecture of the Great Beast, designed for this job load, wasn’t enough. Nor were accelerated Python implementations like cython (passable) or pypy (pretty good). Julien Rivaud and I did a rather thorough job, back around 2013, of hunting down and squashing O(n^2) operations; that wasn’t good enough either. Evidence was mounting that Python is just too slow and fat for work on really large datasets made of actual objects.

That “actual objects” qualifier is important because there’s a substantial scientific-Python community working with very large numeric data sets. They can do this because their Python code is mostly a soft layer over C extensions that crunch streams of numbers at machine speed. When, on the other hand, you do reposurgeon-like things (lots of graph theory and text-bashing) you eventually come nose to nose with the fact that every object in Python has a pretty high fixed minimum overhead.

Try running this program:

from __future__ import print_function

import sys
print(sys.version)
d = {
    "int": 0,
    "float": 0.0,
    "dict": dict(),
    "set": set(),
    "tuple": tuple(),
    "list": list(),
    "str": "",
    "unicode": u"",
    "object": object(),
}
for k, v in sorted(d.items()):
    print(k, sys.getsizeof(v))

Here’s what I get when I run it under the latest greatest Python 3 on my system:

3.6.6 (default, Sep 12 2018, 18:26:19) 
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
dict 240
float 24
int 24
list 64
object 16
set 224
str 49
tuple 48
unicode 49

There’s a price to be paid for all that dynamicity and duck-typing that the scientific-Python people have evaded by burying their hot loops in C extensions, and the 49-byte per-string overhead is just the beginning of it. The object() size in that table is actually misleadingly low; an object instance’s attributes live in a dictionary with its own hash table, not in a nice tight C-like struct with fields at fixed offsets. Field lookup costs some serious time.

Those sizes may not look like a big deal, and they aren’t – not in glue scripts. But if you’re instantiating 359K objects containing actual data the overhead starts to pile up fast.

Alas, I can’t emulate the scientific-Python strategy. If you try to push complex graph-theory computations into C your life will become a defect-riddled hell, for reasons I’ve previously described as greenspunity. This is not something you want to do, ever, in a language without automatic memory management.

Trying to break the GCC conversion problem into manageable smaller pieces won’t work either. This is a suggestion I’m used to hearing from smart people when I explain the problem. To understand why this won’t work, think of a Subversion repository as an annotated graph in which the nodes are (mainly) things like commit representations and the main link type is “is a parent of”. A git repository is a graph like that too, but with different annotations tied to a different model of revisioning.

The job of reposurgeon is to mutate a Subversion-style graph into a git-style graph in a way that preserves parent relationships, node metadata, and some other relations I won’t go into just now. The reason you can’t partition the problem is that the ancestor relationships in these graphs have terrible locality. Revisions can have parents arbitrarily far back in the history, arbitrarily close to the zero point. There aren’t any natural cut points where you can partition the problem. This is why the Great Beast has to deal with huge datasets in memory all at once.
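To make the shape of the problem concrete, here is a minimal sketch of the kind of node such a graph is made of. It is in Go, since that is the target language, and the field names are hypothetical rather than reposurgeon’s actual types:

package main

import "fmt"

// Commit is a stripped-down stand-in for the kind of node reposurgeon
// manipulates: metadata plus ancestry links that may point arbitrarily
// far back in the history.
type Commit struct {
    Mark    string    // unique identifier within the stream
    Branch  string    // branch this commit belongs to
    Comment string    // commit message text
    Parents []*Commit // ancestry links; no locality guarantee
}

// ancestors walks every parent link reachable from c. Because a parent can
// sit anywhere in the history, a traversal like this can touch most of the
// repository, which is why the whole graph has to be memory-resident at once.
func ancestors(c *Commit, seen map[*Commit]bool) {
    for _, p := range c.Parents {
        if !seen[p] {
            seen[p] = true
            ancestors(p, seen)
        }
    }
}

func main() {
    root := &Commit{Mark: ":1", Comment: "initial revision"}
    branch := &Commit{Mark: ":2", Parents: []*Commit{root}}
    merge := &Commit{Mark: ":3", Parents: []*Commit{root, branch}}

    seen := map[*Commit]bool{}
    ancestors(merge, seen)
    fmt.Println("reachable ancestors of :3:", len(seen))
}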

My problem points at a larger Python issue: while there probably isn’t much work on large datasets using data structures quite as complex and poorly localized as reposurgeon’s, it’s probably less of an outlier in the direction of high overhead than scientific computation is in the direction of low. Or, to put it in a time-focused way, as data volumes scale up the kinds of headaches we’ll have will probably look more like reposurgeon’s than like a huge matrix-inversion or simulated-annealing problem. Python is poorly equipped to compete at this scale.

That’s a general problem in Python’s future. There are others, which I’ll get to. Before that, I want to note that settling on a new implementation language was not a quick or easy process. After the last siege of serious algorithmic tuning in 2013 I experimented with Common LISP, but that effort ran aground because the language was missing enough crucial features that the gap from Python looked impractical to bridge. A few years later I looked even more briefly at OCaml; same problem, actually even worse.

I didn’t make a really serious effort to move sooner than 2018 because, until the GCC repository, I was always able to come up with some new tweak of reposurgeon or the toolchain underneath it that would make it just fast enough to cope with the current problem. But the problems kept getting larger and nastier (I’ve noted the adverse selection problem here). The GCC repo was the breaking point.

While this was going on, pre-GCC, I was also growing somewhat discontented with Python for other reasons. The most notable one at the time was the Python team’s failure to solve the notorious GIL (Global Interpreter Lock) problem. The GIL problem effectively blocks any use of concurrency on programs that aren’t interrupted by I/O waits. What it meant, functionally, was that I couldn’t use multithreading in Python to speed up operations like comment-text searches; those never hit the disk or network. Annoying…here I am with a 16-core hot-rod and reposurgeon can only use one (1) of those processors.
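For contrast, here is a rough sketch of how that kind of CPU-bound comment search fans out across all available cores in Go, with nothing like the GIL serializing the goroutines. The comment corpus and the helper function are invented for illustration; this is not reposurgeon code:

package main

import (
    "fmt"
    "runtime"
    "strings"
    "sync"
)

// grepComments searches commit comments for a substring, splitting the work
// across all available CPUs. Nothing here ever waits on I/O, which is exactly
// the case where the GIL keeps Python threads from helping.
func grepComments(comments []string, needle string) []int {
    workers := runtime.NumCPU()
    chunk := (len(comments) + workers - 1) / workers

    var (
        mu   sync.Mutex
        hits []int
        wg   sync.WaitGroup
    )
    for w := 0; w < workers; w++ {
        lo, hi := w*chunk, (w+1)*chunk
        if lo >= len(comments) {
            break
        }
        if hi > len(comments) {
            hi = len(comments)
        }
        wg.Add(1)
        go func(lo, hi int) {
            defer wg.Done()
            for i := lo; i < hi; i++ {
                if strings.Contains(comments[i], needle) {
                    mu.Lock() // the hit list is the only shared state
                    hits = append(hits, i)
                    mu.Unlock()
                }
            }
        }(lo, hi)
    }
    wg.Wait()
    return hits
}

func main() {
    comments := []string{"Fix GCC bootstrap", "Tweak docs", "Fix ICE in the C++ front end"}
    fmt.Println("matching commit indexes:", grepComments(comments, "GCC"))
}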

It turns out the GIL problem isn’t limited to non-I/O-bound workloads like mine, either, and it’s worse than most Python developers know. There’s a rather terrifying talk by David Beazley showing that the GIL introduces a huge amount of contention overhead when you try to thread across multiple processors – so much so that you can actually speed up your multi-threaded programs by disabling all but one of your processors!

This of course isn’t just a reposurgeon problem. Who’s going to deploy Python for anything serious if it means that 15/16ths of your machine becomes nothing more than a space heater? And yet the Python devs have shown no sign of making a commitment to fix this. They seem to put a higher priority on not breaking their C extension API. This…is not a forward-looking choice.

Another issue is the Python 2 to 3 transition. Having done my bit to make it as smooth as possible by co-authoring Practical Python porting for systems programmers with reposurgeon collaborator Peter Donis, I think I have the standing to say that the language transition was fairly badly botched. A major symptom of the botchery is that the Python devs unnecessarily broke syntactic compatibility with 2.x in 3.0 and didn’t restore it until 3.2. That gap should never have opened at all, and the elaborateness of the kluges Peter and I had to develop to write polyglot Python even after 3.2 is an indictment as well.

It is even open to question whether Python 3 is a better language than Python 2. I could certainly point out a significant number of functional improvements, but they are all overshadowed by the – in my opinion – extremely ill-advised decision to turn strings into Unicode code-point sequences rather than byte sequences.

I felt like this was a bad idea when 3.0 shipped; my spider-sense said “wrong, wrong, wrong” at the time. It then caused no end of complications and backward-incompatibilities which Peter Donis and I later had to paper over. But lacking any demonstration of how to do better I didn’t criticize in public.

Now I know what “Do better” looks like. Strings are still bytes. A few well-defined parts of your toolchain construe them as UTF-8 – notably, the compiler and your local equivalent of printf(3). In your programs, you choose whether you want to treat string payloads as uninterpreted bytes (implicitly ASCII in the low half) or as Unicode code points encoded in UTF-8 by using either the “strings” or “unicode” libraries. If you want any other character encoding, you use codecs that run to and from UTF-8.

This is how Go does it. It works, it’s dead simple, it confines encoding dependencies to the narrowest possible bounds – and by doing so it demonstrates that Python 3 code-point sequences were a really, really bad idea.
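Here is a small sketch of what that looks like in practice (the example string is mine, not anything from reposurgeon):

package main

import (
    "fmt"
    "unicode/utf8"
)

func main() {
    s := "naïve" // a Go string is just a byte sequence; this literal happens to be UTF-8

    // The byte view: len() and indexing see the raw encoding.
    fmt.Println(len(s)) // 6, because 'ï' occupies two bytes in UTF-8

    // The code-point view exists only when you ask for it.
    fmt.Println(utf8.RuneCountInString(s)) // 5

    // Ranging over a string decodes UTF-8 on the fly, yielding byte offsets and runes.
    for i, r := range s {
        fmt.Printf("byte offset %d: %c\n", i, r)
    }
}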

The final entry in our trio of tribulations is the dumpster fire that is Python library paths. This has actually been a continuing problem going back to GPSD, and it has bitten NTPsec pretty hard – it’s a running sore on our issue tracker, so bad that we’re seriously considering moving our entire suite of Python client tools to Go just to get shut of it.

The problem is that where on your system you need to put a Python library module so that a Python main program (or other library) can see it and load it varies in only semi-predictable ways. By version, yes, but there’s also an obscure distinction between site-packages, dist-packages, and what for want of any better term I’ll call root-level modules (no subdirectory under the version directory) that different distributions and even different application packages seem to interpret in different and incompatible ways. The root of the problem seems to be that good practice is under-specified by the Python dev team.

This is particular hell on project packagers. You don’t know what version of Python your users will be running, and you don’t know what the contents of their sys.path (the library load path) will be. You can’t know where your install production should put things so the Python pieces of your code will be able to see each other. About all you can do is shotgun multiple copies of your library to different plausible locations and hope one of them intersects with your user’s load path. And I shall draw a kindly veil over the even greater complications if you’re shipping C extension modules…

Paralysis around the GIL, the Python 3 strings botch, the library-path dumpster fire – these are signs of a language that is aging, grubby, and overgrown. It pains me to say this, because I was a happy Python fan and advocate for a long time. But the process of learning Go has shed a harsh light on these deficiencies.

I’ve already noted that Go’s Unicode handling implicitly throws a lot of shade. So does its brute-force practice of building a single self-contained binary from source every time. Library paths? What are those?

But the real reason that reposurgeon is moving to Go – rather than some other language I might reasonably think I could extract high performance from – is not either of these demonstrations. Go did not get this design win by being right about Unicode or build protocols.

Go got this win because (a) comparative benchmarks on non-I/O-limited code predict a speedup of around 40x, which is good enough and competitive with Rust or C++, and (b) the semantic gap between Python and Go seemed surprisingly narrow, making the expected translation time lower than I could reasonably expect from any other language on my radar.

Yes, static typing vs. Python’s dynamic typing seems like it ought to be a big deal. But there are several features that converge these languages enough to almost swamp that difference. One is garbage collection; the second is the presence of maps/dictionaries; and the third is strong similarities in low-level syntax.

In fact, the similarities are so strong that I was able to write a mechanical Python-to-Go translator’s assistant – pytogo – that produces what its second user described as “a good first draft” of a Go translation. I described this work in more detail in Rule-swarm attacks can outdo deep reasoning.
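To give a feel for how narrow the gap is, here is an invented example (not actual reposurgeon code) of a small Python function and the Go that a mostly mechanical translation of it produces:

package main

import "fmt"

// The hypothetical Python original:
//
//     def tally_authors(commits):
//         counts = {}
//         for commit in commits:
//             counts[commit["author"]] = counts.get(commit["author"], 0) + 1
//         return counts
//
// The Go version below is nearly a line-by-line rewrite: the dict becomes a
// map, the loop survives intact, and only the type declarations and braces
// are new.
func tallyAuthors(commits []map[string]string) map[string]int {
    counts := make(map[string]int)
    for _, commit := range commits {
        counts[commit["author"]]++
    }
    return counts
}

func main() {
    commits := []map[string]string{
        {"author": "alice"},
        {"author": "bob"},
        {"author": "alice"},
    }
    fmt.Println(tallyAuthors(commits)) // map[alice:2 bob:1]
}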

I wrote pytogo at roughly the 22% mark of the translation (just short of 4800 lines out of 14000) and am now up to 50% of 16000. The length of the Go plus commented-out untranslated Python has been creeping up because Go is less dense – all those explicit close brackets add up. I am now reasonably confident of success, though there is lots of translation left to do and one remaining serious technical challenge that I may discuss in a future post.

For now, though, I want to return to the question of what Python can do to right its ship. For this project the Python devs have certainly lost me; I can’t afford to wait on them getting their act together before finishing the GCC conversion. The question is what they can do to stanch more defections to Go, a particular threat because the translation gap is so narrow.

Python is never going to beat Go on performance. The fumbling of the 2/3 transition is water under the dam at this point, and I don’t think it’s realistically possible to reverse the Python 3 strings mistake.

But that GIL problem? That’s got to get solved. Soon. In a world where a single-core machine is a vanishing oddity outside of low-power firmware deployments, the GIL is a millstone around Python’s neck. Otherwise I fear the Python language will slide into shabby-genteel retirement the way Perl has, largely relegated to its original role of writing smallish glue scripts.

Smothering that dumpster fire would be a good thing, too. A tighter, more normative specification about library paths and which things go where might do a lot.

Of course there’s also a positioning issue. Having lost the performance-chasers to Go, Python needs to decide what constituency it wants to serve and can hold onto. That problem I can’t solve; all I can do is point out which technical problems are both seriously embarrassing and fixable. That’s what I’ve tried to do.

As I said at the beginning of this rant, I don’t think there’s a big window of time in which to act, either. I judge the Python devs do not have a year left to do something convincing about the GIL before Go completely eats their lunch, and I’m not sure they have even six months. They’d best get cracking.

CORRECTION: I wrote a tool to measure conversion completeness just after I posted this. Not 37%, 50%! Merge requests from my collaborators accounted for 13%.

318 comments

  1. You need to run your spell checker over this post, because it’s too important to leave this garbled.

  2. I love Guido and the gang and I am vastly grateful for all the use and pleasure I have gotten out of Python, but, guys, this is a wake-up call. I don’t think you have a lot of time to get it together before Python gets left behind.

    It saddens me to have to agree with this sentiment. I actually think the original Python 3 release was the first writing on the wall as far as the overall management of Python as a language by its development team. But back then it was still possible to handwave away limitations like the GIL. Now it’s not.

    1. As a language, especially a learning language, I suspect Python will remain useful for many years where Perl stumbled. Perl was a (useful) mess of kludges from day one; Python remains elegant and productive for a very large set of projects, and can still teach good practice.

      Where Perl failed at code-size scale, Python is failing at resource scale. That’s a failure mode that doesn’t as strongly impact the usefulness of the language.

      1. Indeed. Python is the BASIC of the 21st century, with clear representation of modern programming paradigms and constructs. That makes it ideal for learners, casuals, and developers for whom performance or scale is not an utmost concern.

        But in reality, aside from the most low-level/performance critical stuff, everything’s going to be written in JavaScript. Forget the long-dead dream of the Lisp Machines; our computers are turning into JavaScript Machines.

        1. Python is the BASIC of the 21st century, with clear representation of modern programming paradigms and constructs.

          Except for functional programming. Which is an ever more important part of your toolkit in the multicore age. My spider-sense said “wrong, wrong, wrong” when I learned that Guido desired to and was planning on removing the few functional bits in the language in version 3.

          Forget the long-dead dream of the Lisp Machines; our computers are turning into JavaScript Machines.

          Never! JavaScript is Not Even Wrong, although I’d pick it over Python as a BASIC of the 21st, without any real knowledge of Python.

          1. Never! JavaScript is Not Even Wrong, although I’d pick it over Python as a BASIC of the 21st, without any real knowledge of Python.

            Hell, if we were going with that classification, I’d be tempted to say Lua is the modern BASIC rather than JavaScript. They’re remarkably similar underneath the syntax, and Lua to a large degree requires actually understanding the way the metatables work, if you want to do anything even remotely interesting.

            JavaScript is mostly a lot of syntactic sugar designed to (attempt to) hide how hairy it actually can be.

          2. Never! JavaScript is Not Even Wrong, although I’d pick it over Python as a BASIC of the 21st, without any real knowledge of Python.

            You may not have a choice. Industry currently appears to favor JavaScript as the language of the future for new development. This is because JavaScript enables a single language to be deployed on both the front and back ends. Which was actually also true of languages like Java, but the ubiquity of browser-based applications and the deprecation of applets mean that only JavaScript can truly fulfill this role today.

            And this has huge implications for industry, since it means that back end development need not require special hiring of back-end domain experts anymore; just pull a few devs off your UI team, teach them Node, and have them write your back end.

            1. just pull a few devs off your UI team, teach them Node, and have them write your back end.

              Then, three to six months later, pull your hair out when you realize Node is actually shite, and attempt to solve scaling problems by throwing more and more hardware at the problem only to realize that it’s not something you can fix that way. Finally, hire a single Go dev to re-implement your back-end in a few weeks and write off 75% of your capex.

              I sure like the idea of Node, but people treat it like a humvee when it’s more like a vw buggy.

              1. The computing world is filled with Cessnas that some poor fools have scaled up to the size of a jetliner. Best to stay away.

              2. Then, three to six months later, pull your hair out when you realize Node is actually shite, and attempt to solve scaling problems by throwing more and more hardware at the problem only to realize that it’s not something you can fix that way. Finally, hire a single Go dev to re-implement your back-end in a few weeks and write off 75% of your capex.

                Yes, of course — but industry doesn’t want to hear that.

            2. And this has huge implications for industry, since it means that back end development need not require special hiring of back-end domain experts anymore; just pull a few devs off your UI team, teach them Node, and have them write your back end.

              No, oh God no.

              I’ve spent some time on an ongoing effort to port a non-trivial JavaScript (specifically, Electron-based) application to FreeBSD.

              ‘Back end’ code written by developers without experience of systems or applications development is hugely problematic.

              One example that springs to mind (from an earlier porting experience, not the Electron-based one): a testing framework, designed to be run from the command-line, that is ignorant of $PATH.

              When I raised an issue w/ the developers suggesting their tool could, you know, look in $PATH for binaries I was knocked back. No, I was told, just add the likely locations on a FreeBSD system to their “big ol’ array of pathnames” (which included things like hardcoded drive letters for MS Windows).

              On the Electron porting project, I’ve spent a lot of effort dealing with JavaScript code that blithely assumes that the only two operating systems in the world are Linux and OSX.

              I’ve been recommending ESR’s book The Art of UNIX Programming to folks I know making the transition, so they can start to understand the philosophy behind ‘back end’ development, and how different it can be to the browser based ecosystem.

                1. Hah! No, in all cases, free software projects. I’d actually sworn off attempting to port NodeJS apps until Cypress, which is just so good at what it does that I’m willing to bear the pain again.

              1. When I raised an issue w/ the developers suggesting their tool could, you know, look in $PATH for binaries I was knocked back. No, I was told, just add the likely locations on a FreeBSD system to their “big ol’ array of pathnames” (which included things like hardcoded drive letters for MS Windows).

                You know how you solve that? Write a library that looks up and splits $PATH, publish it to NPM, and say “Here, npm install this to get your big ol’ array of pathnames. It’s the Douglas Crockford preferred approach.”

                We’re talking about a community that needs a library for fucking left_pad and is_odd.

            3. just pull a few devs off your UI team, teach them Node, and have them write your back end.

              A lot of front-end guys would require a lot more re-training than just pointing them at Node. Languages are not that hard to learn, compared to design patterns and best practices, which are completely different between the browser and the backend of web sites. In fact, one of the reasons why PHP developers have a reputation for being crap is that most of them used to start as front-end devs, and just little by little, mixed in some PHP into their HTML code.

              And yeah, I get that front-end web coding uses MVC frameworks and such now, but the needs of maintainable server-side web development are much bigger than a language change.

            4. I fully agree, Jeff. Let’s look at the big picture: business is chock full of tiny boring requests like please can you add a phone no. 2 field? Which means at a minimum you hire one junior backend developer, one junior frontend developer, or a senior full stack developer who will be bored and quit, and one analyst who can reject stupid requests so you don’t end up with 5 different features solving the same business problem differently and all slightly wrong.

              This is too expensive a team for a small corporation. They often want a one-person team.

              So what business always wanted is tight coupling, with the crudest example being pulling a database table onto a form with the mouse and getting a UI list. Asp.NET Web Forms, Delphi, FoxPro.

              But even though that is hair-raisingly bad, we still need one person teams and fairly tight back-front end coupling.

              Europe understands this better, because we have more small corporations and more one-person teams, in fact often one person supporting 5-6 corporations’ custom development. Hence the French wakanda.io (previous to and unrelated to the movie) as the elegant example of JavaScript frontend, backend, and a development environment so there is basically one tool to learn, any smart analyst can do it. A cute little example is https://picolisp.com/wiki/?home which is really one guy’s toolset for whipping up a lot of business apps in Lisp, similar close coupling. Other examples are strongly coupled frameworks made for one app, again, the typical case (over here) of one person writing a big app, such frameworks exist behind Odoo and Compiere. All this is about costs really.

              Sometimes this is hard to explain to Americans. 10 years ago I contacted the Django team and tried to explain I want to use what you call the autogenerated admin interface as the app itself, but I need more features. I don’t intend to hand-write a frontend. Why? Because I am not a startup who wants to reach 200K users with a beautifully crafted UI. I just want something handy for 4 service technicians to enter data into. So a modern FoxPro basically, and the Django admin is pretty neat for that but needs more features. Somehow the message did not come across.

              Anyway. Business always wants their FoxPro of $current_decade. Small business especially.

              It might not necessarily mean that everybody had to follow. The people who did not follow Asp.NET Web Forms, Delphi or FoxPro might also not follow this.

    2. >Do you think you’ll start recommending something other than Python in How To Become a Hacker?

      Not yet. It’s still a terrific learning language. What has changed is that it’s now less competitive in production use.

      1. I am actually in the process of writing a computer science curriculum for my daughter’s school for 6-8th graders, and was ruminating on whether to teach it with Go vs. Python. I largely landed on Python because:

        a) nothing I’m teaching is going to run up against the aforementioned limitations
        b) no need for them to deal with static typing of compilers at this stage
        c) they’re going to learn on Raspberry Pi’s, and Python has excellent GPIO stuff available for the Pi.

        It is a terrific learning language, and I still use it for tons of “glue” stuff. I do enjoy Go more, I just rarely actually use it.

        1. >It is a terrific learning language, and I still use it for tons of “glue” stuff.

          I’d make the same call in your situation.

  3. How is it using roughly 1/5 MB per commit?

    I kinda doubt the GIL will ever get fixed and even if it was you still wouldn’t be anywhere close to as fast as C++.

  4. Have you looked into Julia? It promises to have all the ease of use of Python, but the speed of C++.

  5. Eh, I’ve always had the sense that you only used Python when you didn’t particularly care about speed.

    1. Define “speed”.

      Any project is going to be a mixture of 1) time to develop (including debugging and testing), 2) execution time in production, and 3) time to re-work or extend the code in the future (as things change, more use cases arise, etc.) How much time and resources are consumed in each of these three phases will vary enormously depending on your particular project. I’d say, if your job is not to distribute production code to large numbers of “civilians”, then it’s very likely that phases #1 and #3 are going to dominate the total amount of time spent, and hence “speed”.

      Take the case of scientific users: their job is not to write software, but to produce analysis on datasets. Writing code is merely a tool to do their job– the difference between a commuter, and a professional driver, if you will. They’ll spend a lot of time in phase #1, and then will have to (want to) take the base code and extend it to related data sets or analyses in the future (probably after the original author has graduated and left)– phase #3. For them, the time spent compiling code which turns out to have errors in it will dwarf the time “savings” during their production runs, most likely. And a language which is, if you will, “conversational”, like Python, is much easier to understand and maintain (for non-specialists). Python will handle lots of things that are confusing and difficult to do, for people who think their main job is not “coder”, but “biologist” (or whatever). You can even embed it into notebooks with honest-to-God English paragraphs talking you through the reasoning and displaying the output and graphs right there for you. It doesn’t matter if you’re running a Mac or a PC or Linux, either. And the Python devs have made sure that, under the hood, mostly the libraries and built-ins are really C programs with a wrapper, so the performance is better than it has any right to be, really.

      I do a lot of Python programming as exploratory work, to get my thought process 100% clear about what I want done and to surface corner cases, etc. It’s “good enough” for most of my use cases, as I spend a lot of time just working through, say, normalizing this particular pathological data set. The code doesn’t go anywhere (although the cleaned-up dataset might). Python is faster overall, even if the actual processing time is 40x slower than C or Go. If phase 2 turns out to be a problem, then I refactor the smallest possible part of the program into a compiled language after I’m happy with the functionality of the Python code (I prefer C, my boss prefers C++; although I’m seriously considering switching to Go, for all the reasons our host has already mentioned). The big exception is when I’m doing embedded work, where it’s easier to just go straight to C (in most cases).

      I’d also agree that Python is the best choice for the first “real” language for students to learn– they can learn almost all the important concepts, like control flow, data structures, I/O, recursion, etc. and save the residuals (explicit typing, memory management, direct reduction to assembly/machine language, etc.) for later. Most people won’t ever need to cover those, actually. Indeed, most won’t even need recursion, and also would be very well advised to leave things like IPC, cryptography, packaging, and so on to the experts.

      That said: I’ve always hated the Python 2/3 split, particularly because of the Unicode crap that I can’t avoid, and because I work in networking. The devs were **just about** to start fixing some of the painful, frustrating parts of the I/O interface– instead, they got diverted to the conversion to Python 3, and we ended up with some unsatisfying, partial fixes in 2.7 and then chaos in 3.x. Plus, they’re retrofitting in some Java-like syntax and mindset, and I hate Java with the fire of 10,000 suns.

  6. the – in my opinion – extremely ill-advised decision to turn strings into Unicode code-point sequences rather than byte sequences.

    Even worse than that was the decision to make the sys.std{in|out|err} streams Unicode text streams instead of byte streams. Even if this makes a kind of sense if your program is interactive, so the streams are connected to a TTY, it makes no sense at all if those streams are pipes. And of course, since the interpreter creates those streams before your code even runs, you have no way of telling the interpreter “Hey, knucklehead, this program has to work with pipes so please don’t do Unicode streams this time!” So you end up with kludges and hacks instead. (This was one of the issues we had to deal with in the Python 3 port of reposurgeon.)

      1. I’ve tried all three – Sublime extensively, as it was the standard editor for pairing on code at Lonely Planet. Neither is _bad_, but they offer only a very small subset of the functionality offered by Emacs.

        And neither is an option if you’re working on console-only systems, over poor connections, on resource constrained machines, etc. etc.

        1. I’ve wanted to get behind Atom as it was at least modular and open source, but it’s a damned Electron app, which means it’s an absurd resource hog.

          I’d love for something more modern to replace Emacs as a good option for code editor on both console and GUI, but I don’t think there are too many that care about this set of use cases.

        2. Yeah, but Eric isn’t, and most of us aren’t, so why Only Use Emacs?

          (I mean, I’ve done deployment to horribly constrained systems; dev on one, run on the other.)

        3. And neither is an option if you’re working on console-only systems, over poor connections, on resource constrained machines, etc. etc.

          For that, most developers these days use vim. Vim won the vi/emacs editor wars of old; the new wars are between vim and the various IDEs and IDE-wannabe GUI editors.

          I still use Emacs myself, but I have to endure funny looks when my coworkers wonder why I’m using some ancient thing they last saw on old DEC iron (older developers), or some alien thing no one understands (younger developers). And collaboration becomes more difficult.

          1. Vi “won” in the only way that matters – it’s guaranteed to be everywhere; so you can dedicate brainspace to knowing it in the sure and certain knowledge that you CAN use it everywhere.

          2. Funny, just about everyone in the *nix world I know uses nano, which is to emacs what vi is to vim. It’s installed by default on a similar number of distros, often set as $EDITOR, and has on screen tooltips to help newbies figure out what to do. Emacs itself is less often used (but the same is true of vim), but it has a number of niches that are unlikely to go away. It’s available out of the box on mac, which makes it my go-to when working at client locations, and it has syntax highlighting for a number of esoteric transpiled languages (e.g. moonscript) which are otherwise unsupported.

            1. This would be the same nano which gratuitously inserted a newline into a wrapped >80-character line in, as I recall, my fstab file?

              It’s clunky in general, but the above sent me screaming back to ed/ex for small changes to files; ed was the first text editor I ever used, back in the days of V6, so the commands and paradigm are still wired into my brain. Otherwise, Emacs Makes All Computing Simple.

              1. So I don’t know why some distros have nano set up with line-breaking on by default. You can disable it at invocation time with the -w flag. Sane distros do have it off by default.

                1. It was either Debian squeeze or wheezy, or Ubuntu 14.04 Trusty Tahr. Debian should be “sane”, prior to their adopting systemd, and it was probably one of those two. Ubuntu, well, that release has been very sane in the many years I’ve used it (now moving to OpenBSD for the obvious reasons as 14.04 LTS ends in March).

                  On the other hand, I can’t even imagine why the author(s) of nano would have such an option, let alone ship it as the default.

                  1. If I recall correctly, I ran into it on a debian box, shortly after they moved to systemd. As for why include it? It’s nice when you’re using mutt or similar to write emails, or other similar types of text files. Making it the default (either at compilation or packaging) was screwy.

      1. The only C I’ve written since leaving school is some trivial Arduino code*, and that’s perfect, because C is the devil**.

        (* And boy, is the Arduino kit broken. Can’t even string-copy correctly as of last time I checked.

        ** I mean, for Everyday Stuff, not systems programming, etc.)

      2. Honestly, the smug sense of superiority that I’m picking up from most Gotards is the main thing keeping me from giving the language any more than a passing glance.

  7. Does anyone have an opinion on Perl 6? Seems interesting as a language, though the ecosystem seems still a toy. Does it have a future?

    1. The ecosystem probably seems a bit toy because all of the existing Perl5 modules work with Perl6, so there isn’t such a big push yet to write specifically for Perl6 yet.

      Haven’t used Perl6 yet myself, but it looks very promising.

  8. I’m curious; do you think Go would be an equally good replacement for Java? Or are there other things that are poised to become Java successors that are better?

    1. >I’m curious; do you think Go would be an equally good replacement for Java?

      I don’t know.

      I know why Go competes well with C; because it’s an easy step up, while avoiding C’s vulnerability to bare-pointer errors and the like.

      I know why Go competes well with Python; because it’s a (slightly less) easy step up and ridiculously faster.

      To know how Go might compete with Java, I’d need to know what Java fans think Java is especially good at.

      1. Java is C++--. It’s a whole lot like C++ except with a few of the hardest to understand bits like multiple inheritance taken out. The other key difference is that all memory is handled by the memory manager – there is no heap that can be explicitly managed with operations like delete; it’s all garbage collected. Misuse of free is the single most common bug in large C++ programs; programmers either free memory too soon (leading to crashes and security leaks when programs access memory that has been reused for something else) or never free it at all, leading to ever-growing programs.

        The other thing that Java got right was to standardize the library right away rather than leaving it as a later exercise. I’m talking about the Standard Edition, not the Enterprise Edition mess that Oracle eventually turned over to the community to attempt to maintain.

        Java was also THE language in higher education for a number of years until Python started to take over. But that wasn’t because of any special advantages of the language, but because of the cross-platform environment. Students could develop Java on whatever kind of computer they had and still be confident that the TAs would be able to run their code, something that doesn’t work as well with languages like C and C++ where there is a different and not fully compatible implementation for each platform.

        Go shares the property of always being garbage collected and it otherwise covers all the capabilities of Java, so it would be a good replacement.

        1. C++ developers haven’t used free() (or its equivalent, delete) since 2011.
          free/delete was deprecated before that, but good alternatives weren’t standardized yet. Today, the rule of thumb is: if you see free() in a C++ program, you see a bug.

          Using a C++ iterator after it has been invalidated, on the other hand, is still a non-trivial issue.

        2. Agree with almost everything here.

          It’s true, but not the full picture, to say that Java has multiple inheritance taken out. While it’s not called multiple inheritance, Java classes can and do inherit methods and fields from multiple sources. i.e. It’s true that you can directly inherit from only one class, but you can implement as many interfaces as you wish.

          Java also doesn’t allow direct pointer manipulations like C/C++ does.

        3. Another substantial difference is in the structure of the STL vs the Java Class Library. I dislike C++ as a language, but appreciate and wish some of the ideas of the STL would spread to other languages.

      2. Java is really good when you have a big team of largely mediocre programmers who need to build really big systems. Here’s a conversation I had with a senior developer of more than a decade of experience a few years back:

        Client: Hey Expensive Consultant (Me), can you help out Senior Developer (SD)? He’s stuck. Again.
        Me: OK.
        SD: I have a problem. This 3rd Party Library I need to use won’t load.
        Me: Dude, WTF? It’s just a Java wrapper over native code. It’s a Windows DLL. A Windows DLL will not run on Debian.
        SD: I don’t understand. It’s Java. Java runs everywhere.
        Me: No! It’s Java calling native code. Native code does not “run everywhere”.
        SD: I don’t understand.

        But here’s the kicker. SD could still add value under appropriate supervision. Can’t readily think of any other language where this would be true.

        1. They must be using a definition of “Senior” of which I am unfamiliar.

          Unless he’s just the oldest guy they have.

          1. Salary bracket.

            The dude had another infuriating habit, as I recall. When he sent somebody an error log, “errors.log” would be an OpenOffice document containing text he copied out of a PuTTY window.

            1. Count yourself lucky it wasn’t an excel spreadsheet with each logline in a different row

              Yes, I am serious
              No, I don’t want to talk about it, I want another ethanol induced blackout instead

              1. It could be worse.

                Our “QA” team sends screen captures.

                Of Putty windows.

                Set to the default 80×25 window size and colors.

      3. Java’s main strength is enterprise development. Which, as Inkstain said, basically means enabling schlubs to be deployed in legion strength and grind out code while more or less not stepping on each other’s toes. It has several features to this end: It enforces a single programming paradigm — OOP — and there is a passel of schlub-accessible best practices around that which enforce modularization, abstraction, code reuse, and loose coupling. It has strong static typing which eliminates type-mismatch bugs at compile time. It has a GC and a well-defined memory model which basically eliminate entire classes of memory bugs for an acceptable performance cost in enterprise work. And JVMs have been built to run on everything from IBM mainframes to smart cards, meaning that a single language and runtime can be used to implement all tiers of a typical distributed application, and a single binary package can, with some caveats, run on any platform that has a JVM.

        There are other accidental characteristics of Java that also make it an ideal enterprise language: for one, it has vast library support; and for another, in the 90s it displaced C++ and Pascal as the most common introductory language for new computer science students, so there are huge numbers of programmers, here and abroad, that understand it — making hiring a few hundred of them a doddle for a manager. Plus it’s backed by Oracle, which no one ever got fired for buying.

        Go is going to have a hard time displacing Java in the spaces where Java thrives. Much better candidates for Java replacements include C# (which has done Java stuff better than Java since the early 2000s, and is now officially open source and cross-platform) and JavaScript.

        1. It enforces a single programming paradigm — OOP — and there is a passel of schlub-accessible best practices around that which enforce modularization, abstraction, code reuse, and loose coupling.

          It has a wonderful side-benefit for IT departments because of this, too: forcing everything to be encapsulated in an Object the way Java does results in SLOC explosion, as endless reimplementations of existing code, etc. result in hundreds or thousands of extra lines, producing a wonderful ProblemFactory, giving managers an eternal excuse for larger headcount and development budgets.

          Java is bureaucracy in programming language form, with everything that entails. No surprise that it’s so well suited to enterprise needs.

          1. Java is bureaucracy in programming language form, with everything that entails. No surprise that it’s so well suited to enterprise needs.

            That, sir, is a brilliant distillation of my thoughts on Java. Thank you. I will steal that for use in various conversations.

            1. > That, sir, is a brilliant distillation of my thoughts on Java.

              Thanks, but to be fair it’s not my original thought. I’m reformulating a quote by Joe Marshall: “Whenever I write code in Java I feel like I’m filling out endless forms in triplicate.”

              Like you, this describes my personal experience and thinking re: Java so well that I’ve kept the idea in my back pocket for these occasions.

          2. Java is bureaucracy in programming language form, with everything that entails. No surprise that it’s so well suited to enterprise needs.

            Actually, that was my feeling about Ada, rather than Java. Java was actually fun to write in. And it had that WORA promise going on, backed by a company that looked like it would be around for a while.

            And then some bad events occurred. Java made some promises that its GUI library couldn’t keep, partially because different hardware and architectures weren’t quite as interchangeable as envisioned. So the face of most applications looked kinda neat, but ran sluggishly. Applets didn’t catch on, partly due to good-enough solutions like Flash and the “I swear we’re not piggybacking on branding” JavaScript. C# arrived, sacrificing some pure abstraction for impure pragmatics. And then Sun went under.

            That might’ve been it for Java, I think, had Eclipse not come along. Eclipse made a lot of that boilerplate less boily. A lot of Java programming became point-and-click, enough of it to keep the legions going. It was even finding bugs for you.

            Also, the JDBC abstraction was just useful enough to drive a critical mass of databases. Turns out most people just needed a website in front of an RDB, and Java provided that well enough. It wasn’t applets that took over; it was servlets.

            So now, I’d say the biggest strength Java has now is its ecology. It has at least two great IDEs. It has Spring. It has Apache Commons. It has huge amounts of class libraries and frameworks. It has enough legacy buy-in now to make COBOL jealous, if COBOL had any feelings. It also now has a more agile release schedule; I’m not sure how that will play out.

            Compared to Go, it’s probably suffering in scaling. That ol’ heap continues to be a pesky problem. It might be better off in the multithreading department than Python; I’m not sure. (I’ve written several good threaded apps, and knew enough of the Swing event model to reliably turn out snappy UIs as long as the customer was willing to wait a bit longer – or they could go with the fast solution that runs slow for the rest of their lives.)

            Go’s biggest challenge, I think, would be replicating that ecology. I haven’t worked with Go yet, and I don’t know how robust Goclipse is, or if there’s any other Go plugin to the 800-pound IDE. Overall, though, Go’s funding is deep, so it’s probably a question of whether funding focuses on that.

        2. > It has strong static typing which eliminates type-mismatch bugs at compile time.

          Java does have that reputation. And Java is somewhat strong with types, but IMO it has a lot of behind the scenes implicit conversions that make it less type strong than its reputation. Maybe learning Ada before Java biased me, but it bothers me how easily this code compiles and works with no complaints.


          public class TypeStrong {
            public static void main ( String[] args ) {
              int x = 4;
              double d = x * 2.0; // silent promotion to double
              // now printf with a string format specifier.
              // Java silently wraps d in Double object wrapper.
              // then silently calls Double.toString()
              System.out.printf ( "d = %s%n", d );
            }
          }
          // prints "d = 8.0"

          1. I’m so used to “x * 2.0” promoting from int to double that that part doesn’t bother me. Especially since it’s not like x is becoming a double; it’s still an int.

            Amusingly (perhaps darkly so), Java resisted autoboxing (promoting primitives to objects) for a long time, and was seen as failing because of that. They finally broke down and introduced it, AIUI, as part of several syntactic sugar enhancements.

            System.out likewise had no printf method until Java 1.5. Even today, I virtually never see it.

            1. You probably don’t see System.out.printf() used very much because most Java these days is server-side, and people are more inclined to use the logging APIs (java.util.logging, Log4J, SLF4J, etc.) rather than writing to System.out.

              String.format() is the equivalent of sprintf, and more useful in most Java work.

              1. Yes, it is more useful, since you’re likely not writing to the console all that often. I have to agree.

                I was more focused on placing a floating point number into a %s with no complaint.

                I like my type strictness like the old, never-said Mr. Miyagi quote:

                “Here, type-strict, same thing. Either you type-strict do “yes” or type-strict do “no.” You type-strict do “guess so,” [makes squish gesture while auto-boxing] Just like grape. Understand?” — !(Mr. Miyagi)

        3. I don’t care how “officially open source and cross-platform” people say it is; C# will always be Micro$oft ick to me.

          Though to be fair, Oracle seems to be learning some of Micro$oft’s tricks; with the new Java 11 JDK, if you download the “standard” JDK like you’ve always done, you can’t use it in production without paying $lots to Oracle. You have to make sure to use an OpenJDK build if you don’t want Oracle Licensing Enforcement coming down on you like harpies.

          1. @Amy Bowersox: I don’t care how “officially open source and cross-platform” people say it is; C# will always be Micro$oft ick to me.

            And that attitude will hobble you as a developer.

            I do my best to draw a distinction between Microsoft as a developer of technology and Microsoft as the old Evil Empire with business practices folks disparaged.

            Microsoft can afford to hire and pay the best, and a lot of very good people work there, producing some potentially superior technology. Microsoft is also trying to shed the Evil Empire image, because the money these days is in cloud services, not Windows and Office, and they have to play nice with everyone to make money.

            Turning up your nose at a tool because of who made it, not whether it’s a good tool, simply deprives you of resources you might need.

            I submit that’s the wrong way to look at things, and may cost you down the road.

            Though to be fair, Oracle seems to be learning some of Micro$oft’s tricks;

            And if you develop in Java you know that and use the OpenJDK build. Oracle got Java when they bought Sun, and have wrestled with the question of how to monetize it. This is an attempted answer. I suspect it won’t work for them, because the alternative is so simple.

            (If you don’t know that, you haven’t been keeping up, and more fool you.)

            But meanwhile, in most areas of computing and software, assume it’s about the money for those doing it, because it generally is.

            >Dennis

            1. And that attitude will hobble you as a developer.

              To be honest, I don’t blame her.

              It’s not some sort of Microsoft is evil thing, but a history of evil has made Microsoft tasteless. C# has some neat features, granted, but the .NET libraries reek of Win32’s design. Talk about bureaucracy in code form!

              1. @Jeff Read: C# has some neat features, granted, but the .NET libraries reek of Win32’s design. Talk about bureaucracy in code form!

                Given that .NET originated on Win32, no surprise. I’m not sure how it could have avoided it.

                But the comment reminded me of a bit in Tracy Kidder’s “The Soul of a New Machine”, about minicomputer maker Data General’s race to build a 32-bit machine comparable to Digital Equipment Corporation’s new VAX system. Tom West, leader of the engineering team tasked to do it, purchases a VAX through suitable intermediaries, puts it in a warehouse, and proceeds to do a teardown to see how DEC handled the hardware. He decides the results reflect DEC’s corporate structure, with unnecessary complexity and message passing that could get in the way of doing the work.

                There’s probably fertile ground for exploration in how the corporate structure of the company making it affects architectural decisions in the product.

                >Dennis

                1. See Conway’s law:

                  “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”

            2. Somehow, I don’t think Our Host would touch C# with a ten-foot pole, either, and I would guess that that reasoning is largely based in the fact that it is Micro$oft ick. Would that “hobble” him as a developer?

              Also, ever heard the aphorism “the leopard doesn’t change its spots”? I’m sure the EEE strategy is still in the back of the minds of many at Micro$oft.

              As for OpenJDK, the blog post I linked to was intended as a warning and a pointer to OpenJDK for those that haven’t yet gotten the message. It just played up the fact that, by subtly changing the license on the downloads that people have “always” gotten from them, Oracle is trying to pull a fast one. Easily evaded once you see it for what it is, but a fast one nonetheless.

        4. I thought you did not think Java had much of a future. Didn’t you call Sun’s open sourcing a Hail Mary Desperation Pass?

    2. There are plenty of JVM languages which are already good replacements for Java. Kotlin, Groovy, Clojure, JRuby, whatever tool fits the job.

        1. Kotlin in particular is already eating Java’s lunch on Android, and is surely going to do so in the enterprise as well (except for high-end shops already committed to Scala). It’s a smooth and gentle upgrade from Java, and IntelliJ even has a built-in Java-to-Kotlin converter. Amazing to see Java code shrink by 90% with one mouse click as all the infamous Java ceremony just disappears.

        1. Downside is, at least for now, complete tooling lock-in to JetBrains. The Eclipse support (which JetBrains contributed either graciously or maliciously, depending on your POV) completely breaks the IDE, and I don’t think that NetBeans supports it at all.

    3. Java is “our” generations’ COBOL. It will live another 40 years.

      The thing about Java is that it’s *everywhere* and there’s about a half a brazilian programmers–many of them in places like India and China where they will work for a few dollars an hour. Java also has tool chain support from IBM and other large vendors.

      And finally a Java “binary” runs on the JVM, not the processor, whereas Go is statically compiled for the target machine. This has large tool chain implications.

  9. ” Test conversions take more than nine hours each,”

    Ah! A reminder about the reasoning leading to the parallel interest in improving UPS devices …

  10. I followed the link on “adverse selection” and was… utterly dumbfounded. An O(N^3) sort algorithm? That’s not a naive sort algorithm. A naive sort algorithm is O(N^2). Getting your sort to be as bad as O(N^3) without it being obvious that you were deliberately pessimizing would require fiendish cleverness.

    1. I can easily imagine writing an O(N^3) sort if I’m not thinking of it as a sort at that moment and I’m reusing functions which were not optimized for big O either.

  11. > extremely ill-advised decision to turn strings into Unicode code-point sequences rather than byte sequences.

    Tcl made the same mistake years earlier, so Python has no excuse. Today the noun I associate with the word “Python” is “fatal string-codec exception” because every other Python program I use fades into the technological background noise where it belongs…until it ruins my day because someone put a funny character in a string and made print statements and sometimes even socket IO fail. Somebody didn’t get the memo about malicious inputs causing unexpected behavior…
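
    For illustration only (not part of the original comment), here is a minimal Python 3 sketch of that failure mode; the stray Latin-1 byte and the ASCII-only stream are hypothetical stand-ins for “a funny character in a string”:

    ```python
    # Hypothetical Latin-1 bytes arriving from a form, a socket, or an old file.
    raw = b"caf\xe9"

    try:
        text = raw.decode("utf-8")   # blows up the moment the bytes are treated as text
    except UnicodeDecodeError as exc:
        print("decode failed:", exc)

    # The mirror image: printing to a stream that can only carry ASCII fails at output time.
    import io
    ascii_stream = io.TextIOWrapper(io.BytesIO(), encoding="ascii")
    try:
        print("caf\u00e9", file=ascii_stream)
    except UnicodeEncodeError as exc:
        print("print failed:", exc)
    ```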

    > Yes, static typing vs. Python’s dynamic seems like it ought to be a big deal.

    In my experience, people who believe they need dynamic typing really just want a terse way to write expressions with exactly the implicit conversions their mental model of the toolchain expects. Rarely do any real problem domain’s type requirements correspond exactly to the precise set of Byzantine rules that such-and-such language’s dynamic type system applies (compare the types used for storing floating-point numbers with the types used to store amounts of currency or lengths of strings). So rarely does this alignment occur that we often want the compiler to detect unintended implicit conversions and flag them as warnings, then build with warnings-as-errors to stop them from passing code reviews…

    If it’s really necessary, you can build a dynamic type system at run time on top of a static one (that is, after all, what most dynamic type system implementations do). Going the other way is harder, because it requires the human mental discipline that you obviously don’t have, or you wouldn’t have needed to move from a dynamic type system to a static one in the first place.

  12. I’m really enjoying modern C++. But I can well understand why it’s not everybody’s cup of tea.

    I really don’t like Go. I have an aversion to garbage collection.

      1. “Prioritizing low latency” generally means de-prioritizing high throughput. This is not very good for something like the reposturgeon, which is all about batch-oriented workloads.

    1. Yes, OTOH reposturgeon has to deal with huge graphs and there’s nothing to ensure that these graphs will consistently be free of cycles, so this is most likely one of the admittedly rare cases where GC is a solid choice.

      1. >this is most likely one of the admittedly rare cases where GC is a solid choice.

        “admittedly rare”?

        Yes, like the “admittedly rare” cases where you care about lowering long-term defect rates, Or security. Or not wasting your developers’ time on resource shuffling that a machine can do better.

        GC is always a solid choice unless you’re doing hard realtime. How can anyone not grok this in 2018?

        1. By “solid” I actually meant something more like “not even worth looking for alternatives”. Modern alternatives to GC have memory safety properties that are comparable to GC languages, and are _better_ for resource safety and ease of writing safe concurrent code. This is because they do not focus on “manual resource shuffling”, but on using generalizations of the RAII pattern to describe how resources will be used in a quasi-declarative way (where ‘resources’ includes but is not limited to memory)– thus enabling varieties of automatic resource management that have near-zero overhead, unlike fully-general GC.

          And this is important not just for RT, but most likely also for enabling high throughput on modern and upcoming hardware, which is generally constrained by a combination of low multi-core utilization, and highly-limited memory bandwidth _per core_ (within a given amount of CPU cycles). Again, this is not immediately relevant to reposurgeon, but it is for just about everything else.

        2. Yes, like the “admittedly rare” cases where you care about lowering long-term defect rates, Or security. Or not wasting your developers’ time on resource shuffling that a machine can do better.

          All of these can be had with RAII in C++ or Rust. And RAII gives you strictly deterministic object lifetimes — meaning zero GC pause, and zero memory overhead.

          1. Memory (de)allocation can still take unexpected amounts of time; you can just sort of predict where in the code path these delays are possible. And they can’t happen in a parallel thread.

            1. > And they can’t happen in a parallel thread.

              What makes you say that? By-default, they probably won’t happen in a different thread. But assuming that your memory allocator is thread-safe, it’s certainly easy-enough to do on-demand.
              If you know that you have a large object that you want to deallocate in a different thread, you could perform a move/swap to an object owned by a thread which does your memory deallocations.
              If you are willing to change your data type even a tad, you could wrap it in a mechanism which does this for you so you could do it with one/some/all instances, everywhere.

        3. I don’t think anyone seriously debates any more whether automation in memory management is a good thing. AMM is table stakes in all but the most extreme environments, where considerations like security or developer time are irrelevant.

          AMM is on the checklist for safety certifications. If you’re not using some kind of AMM, even one you built yourself with C macros (and prove that you use religiously, and have working tools that catch you if you blaspheme), you have to explain why not during the safety audit, and your auditors have to agree with your explanation, or you won’t get a safety certificate. (Safety auditors would prefer you don’t allocate memory at run time at all, but if you must, they prefer it be managed by software).

          The debate is whether GC is a good AMM scheme for general systems-programming use. That is certainly not true in the general case, and possibly not even in the common case. reposurgeon is exceptional in many ways, and being a poster child for GC is possibly one of them.

  13. Just asking, does the Jython implementation of Python have the same issues as the CPython implementation? Have you tried running your tool with it?

    1. Moderator: Never mind. I did not actually realize Jython was a dead project. You can delete both my posts.

      1. a) Jython is not dead, it is only very very slow (currently at 2.7 compatibility),
        b) Jython is Java, so no GIL anywhere, and

        c) of course, this is classical ESR, who ignored the multiprocessing library (which doesn’t have a GIL, because it is based on fork/exec, not on threads), and so his big banging of doors seems kind of funny to anybody who actually knows something about Python (which is strange, because I really thought ESR should know better). I have followed some previous examples of this phenomenon (e.g., leaving Fedora because he did sudo rm -rf /usr/, IIRC), and it is quite entertaining to follow.

        1. > this is classical ESR, who ignored multiprocessing library

          I was perfectly aware of the multiprocessing library. It won’t do for cases where your worker pool needs to share access to a large working set, as you would realize if you’d thought about the design problem for ten seconds and are not an idiot.

          >e.g., leaving Fedora because he did sudo rm -rf /usr/, IIRC)

          Um, no. I’m leaning towards “idiot” now.

  14. What issues did you run into with Ocaml? They’ve added some fairly “crucial” features quite recently, including meaningful support for concurrency. (Rust is also seeing a lot of improvement over time, but Rust is not really an appropriate choice for a program that has to deal with huge graphs, potentially including cycles– whereas that’s exactly where a GC-based language can be appropriate.)

    1. >What issues did you run into with Ocaml?

      Just the big semantic gap. Idiomatic Ocaml doesn’t look or feel anything like Python, which is a big deal when you’re choosing partly to hold down the time cost of translation. I wasn’t expressing any negative about the language itself.

  15. In my world – which is not Eric’s – life consists of prototype & proof of concept in the numpy/scikit/anaconda toolset, deploy at scale on pyspark / h2o / tensorflow.

    Python here is still (or once again) in its original role as a quick & easy glue language, controlling other things that are doing the heavy lifting.

    UPDATE: oops, should have read a couple of paragraphs further to the point where Eric addresses this point.

  16. Python has its problem and is inadequate? Sure, whatever. The replacement is Go? Please, no.

    Not for any technical reason, mind you. But it’s a Google-specific language. All *good* languages with healthy development have always been unowned by a single entity. Go, in stark contrast, is *named* for Google, was entirely created within Google, and all development is made by Googlers. There is *no way* that any non-Googler can come now, later and after the fact, and introduce a non-Google culture into the already established Google culture in the Go project. Even in 100 years, when all the individual Go core developers have been replaced, the pattern would still be there. The origin of the project defines its culture. And the *current* state of the Go core developer demographic defines their allegiance.

    Therefore, I would not trust Go to develop into a reasonable language any further than I would trust C#, or Java in its Sun heyday. Any single entity with entirely different goals cannot be allowed to have such a large influence on a language which has aspirations of industry- and community-wide adoption.

    1. You’re probably right in a general sense, but I don’t think that reposurgeon has been made _dependent_ on Go in any real sense, or that this rewrite has anything to do with “industry- and community-wide adoption” of this particular language. Go just happens to be a pretty good fit for this job. I also expect that parts of reposurgeon could fairly easily be rewritten in some other language as needed (including, e.g. Rust), subject only to the mild hairiness of Go’s “cgo” FFI implementation.

    2. C was developed entirely at AT&T, which at the time was even more an evil monopoly than Google is today.

      In fact I see more Bell Labs culture in Go than I do Google culture. (showAds(), sellPersonalInfo(), and acquiesceToChina() are not yet standard library calls…)

      Committees rarely design good languages; they can only codify and support good existing practice. That first implementation of a language has to come from somewhere; generally, it’s a single organization. Go is no different in this regard from C or C++ (AT&T) or JavaScript (Netscape).

      1. @Teddy > Go, in stark contrast, is *named* for Google, was entirely created within Google, and all development is made by Googlers.

        That’s a really foolish viewpoint made from historical ignorance. See below.

        @Jeff Read > In fact I see more Bell Labs culture in Go than I do Google culture.

        And given the core language design team – Ken, Rob, Russ, et al – this should come as exactly zero surprise. It is Bell Labs culture!

        Go is not Google, it is CSP and Plan 9 made useful.

        1. >Go is not Google, it is CSP and Plan 9 made useful.

          I agree. I think this is a reason I find Go fits my hand well, because I steeped myself in the artifacts of the old Bell Labs culture when I was a noob. In a non-silly sense it’s…a return to my roots.

  17. I must have had too much coffee. I misread the title as a homage to the Bill & Ted movies (albeit 30-odd years later) – “Reposurgeon’s Excellent Adventure and Python’s Bogus Journey”. Move along, nothing to see here…

    It does shed a bit of light onto a problem I’m chewing on, where a translation to Go might be more tractable than plain algo-fu in Python.

    1. >I must have had too much coffee. I misread the title as a homage to the Bill & Ted movies (albeit 30-odd years later) – “Reposurgeon’s Excellent Adventure and Python’s Bogus Journey”.

      You were supposed to.

  18. I think PyPy’s working on the GIL problem. They tried out Software Transactional Memory; found out it was too much work; and have chosen, I believe, to reimplement everything in a multi-threaded compatible way.

    If the CPython devs were to break compatibility with the C API, they can continue to support it via emulation, no? I believe this is what the PyPy folks did.

    I think the Graal/Truffle project has found a way to maintain performance while doing such emulation. If you JIT the C code (or Fortran code) alongside the Python code, then you can do the emulation with less overhead. Writing a C interpreter and Fortran interpreter in RPython or Truffle might be a large undertaking, even though I believe the Truffle folks have partially done it.

    1. Historically, python couldn’t remove the GIL even then, since interpreter state was tracked via global variables, but that was cleaned up with the move to py3k. Now, the problem isn’t so much with the C API as it is with reference counting (which has implications for the C API). PyPy has a special glue layer for CAPI extensions, to avoid GCing anything which an extension has INCREFed, which could be implemented in CPython. Combining that with a move to mark and sweep or similar would then allow elimination of the GIL. But that would be a tremendous undertaking, and likely to be a source of bugs in CAPI extensions for years to come.

      There have been a couple attempts to remove the GIL while keeping reference counting, but you don’t gain anything, because python’s ‘immutable’ types aren’t actually immutable: specifically the reference count is stored on the PyObject, which means that one thread changing the reference count invalidates the caches for the object globally, plus the issues with atomic or thread-safe writes to the refcount. The solution there is to move the reference counting out of the PyObject, using some sort of table for reference to refcount. Again, doing that is a daunting task.
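
      A tiny sketch (illustrative only) of the point about the reference count living in the object header: merely taking another reference writes to the PyObject itself, which is exactly what defeats shared caches and fork-time copy-on-write:

      ```python
      import sys

      x = []
      print(sys.getrefcount(x))   # the getrefcount call itself holds one temporary extra reference
      y = x                       # a plain assignment increments x's ob_refcnt field in place
      print(sys.getrefcount(x))   # one higher than before: the object header was mutated
      ```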

      Bottom line is you can remove the GIL relatively easily, with relatively little work, but doing so decreases single-thread performance by 30% or worse, and the performance penalties scale exponentially with the number of threads.

      1. >Bottom line is you can remove the GIL relatively easily, with relatively little work, but doing so decreases single-thread performance by 30% or worse, and the performance penalties scale exponentially with number of threads.

        Well, shit. Good explanation – fits with my incomplete knowledge of the issues.

        I greatly fear you just wrote the epitaph for Python at any scale bigger than glue scripts. And that makes me, personally, sad.

        1. Python has been run on 64K core supercomputers. You’re equating threads with scaling. Python is quite capable of running multiple processes, has a full suite of tools such as thread-safe queues, means to share data between processes if necessary, etc.

          There’s also a logic problem here. You’re saying that Python only runs at the level of glue scripts. If so, then this has propelled it to being one of the most-used languages in the world. How is this going to change in six months to a year? All the things that Python does successfully will still be there. And if it’s not true that Python only runs at the scale of glue scripts, then that also nullifies your argument.

          The 2/3 transition, Unicode strings and speed were all objections that existed *before* Python began its massive rise in popularity. They’re also objections that have been refuted soundly in the past. In fact, Guido did just this at his PyCon Keynote back in 2012, a talk on “The Myths Of Python”. For instance, you cite the talk by David Beazley, and Guido brought this up in the keynote, pointing out that you had to be David Beazley to find this corner case. ;-)

          Instagram, Paypal, etc. are using Python at scale every day. Instagram often blogs about it:

          https://instagram-engineering.com/copy-on-write-friendly-python-garbage-collection-ad6ed5233ddf

          And to quote from the Paypal Engineering blog:
          “Scale has many definitions, but by any definition, YouTube is a web site at scale. More than 1 billion unique visitors per month, over 100 hours of uploaded video per minute, and going on 20 percent of peak Internet bandwidth, all with Python as a core technology. Dropbox, Disqus, Eventbrite, Reddit, Twilio, Instagram, Yelp, EVE Online, Second Life, and, yes, eBay and PayPal all have Python scaling stories that prove scale is more than just possible: it’s a pattern….
          Occasionally debunking performance and scaling myths, and someone tries to get technical, “Python lacks concurrency,” or, “What about the GIL?” If dozens of counterexamples are insufficient to bolster one’s confidence in Python’s ability to scale vertically and horizontally, then an extended explanation of a CPython implementation detail probably won’t help, so I’ll keep it brief.

          Python has great concurrency primitives, including generators, greenlets, Deferreds, and futures. Python has great concurrency frameworks, including eventlet, gevent, and Twisted. Python has had some amazing work put into customizing runtimes for concurrency, including Stackless and PyPy. All of these and more show that there is no shortage of engineers effectively and unapologetically using Python for concurrent programming. Also, all of these are officially supported and/or used in enterprise-level production environments. For examples, refer to Myth #7.

          The Global Interpreter Lock, or GIL, is a performance optimization for most use cases of Python, and a development ease optimization for virtually all CPython code. The GIL makes it much easier to use OS threads or green threads (greenlets usually), and does not affect using multiple processes. For more information, see this great Q&A on the topic and this overview from the Python docs.

          Here at PayPal, a typical service deployment entails multiple machines, with multiple processes, multiple threads, and a very large number of greenlets, amounting to a very robust and scalable concurrent environment (see figure below). In most enterprise environments, parties tend to prefer a fairly high degree of overprovisioning, for general prudence and disaster recovery. Nevertheless, in some cases Python services still see millions of requests per machine per day, handled with ease.”

          https://www.paypal-engineering.com/2014/12/10/10-myths-of-enterprise-python/

  19. Regarding strings, I’ve always wondered why languages insist on only having one String type. Why not have AsciiString, Utf8String, and Utf16String and then have explicit conversion between types? I mean, we do that with numbers (float, double, int, long, BigDecimal) and we choose the right type for the right job.
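
    For what it’s worth, Python 3 already does a limited version of this: str (code points) and bytes (raw octets) are separate types, and conversion between them is explicit. A small sketch:

    ```python
    s = "caf\u00e9"
    b = s.encode("utf-8")       # str -> bytes, with the codec named explicitly
    assert b == b"caf\xc3\xa9"
    t = b.decode("utf-8")       # bytes -> str
    assert t == s
    # Mixing the two without an explicit conversion is an error:
    # b + s  raises TypeError rather than guessing an encoding
    ```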

    1. Delphi has at least 5 string types now, not including Char, and it’s a bloody mess. They introduced an “AnsiString” to ease the transition to Unicode strings. What happened? Not only did many developers not port their applications over to Unicode, they continued to write new code using AnsiString! After two years, when a compiler rewrite was being considered and removing AnsiString was floated, these same developers screamed, “No! Not yet! We haven’t had time to convert our programs!” And when the crisis was averted, they went right back to AnsiString. Worse, they even lobbied to get AnsiString moved over to the new compiler for Android… sigh….

  20. @esr: My general sense is that I don’t see Python going away because of Go.

    You mentioned developing happily in Python for decades. If you hadn’t created Reposurgeon and watched Python fall over when attempting to deal with really huge repos, you might still be happily coding in Python.

    All languages have strengths and weaknesses, and specific problem domains they address. It’s why there are so many languages out there. I can see serious Python devs who are aware other languages exist reading your post and saying “Dude! Why were you using Python for that? It’s another language’s job!”

    I think of Java and Python as similar in approach. Both are open source and cross-platform, with the goal of being able to run your code on any platform that has a current language runtime. The problem, of course, is that the platform must have a current language runtime. (That means forget Android, and imposes lower limits on the hardware you have to have to be able to support the language.)

    Python tends to get classed as a scripting language, but I think it straddles the line between scripting language and general purpose language. And Python benefits from the fact that hardware gets steadily smaller, faster, and cheaper. You can write applications in Python that get sufficient performance that you don’t have to write in something like C++ that compiles to native code.

    I can see lots of places where Python will still be the language of choice. One area is the Linux distros that have shifted to Python as the configuration language. (As an old shell scripter, I have mixed feelings about that, but it can be and is being done.)

    I think Go is a good choice for a rewrite of Reposurgeon, but I think the underlying problem is that no “one size fits all” language exists. You reached for Python when you started writing Reposurgeon because you had been writing pretty much everything in it for years, but you found out courtesy of the GCC conversion that there were limits to Python that you hadn’t anticipated.

    So a more interesting question might be “What drives language choices, and what sort of questions should devs ask when presented with a problem before they choose the language to address the problem?”

    >Dennis

    1. There are two points, however, that ESR made in the post that are big problems going forward for Python:

      1) the library path hell (which burns me a LOT whenever I try and make Python applications for more than my own personal use)
      2) its multi-threaded performance for CPU-bound tasks, in an increasingly parallel world (our CPUs are growing in parallel capacity much faster than in single-thread performance); see the sketch below.
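
      As a quick, hedged illustration of the second point (timings vary by machine, but the shape does not): CPU-bound work gains nothing from threads while the GIL is held.

      ```python
      import threading, time

      def spin(n=5000000):
          # Stand-in for a CPU-bound task.
          while n:
              n -= 1

      start = time.time(); spin(); spin()
      serial = time.time() - start

      start = time.time()
      threads = [threading.Thread(target=spin) for _ in range(2)]
      for t in threads: t.start()
      for t in threads: t.join()
      threaded = time.time() - start

      print(serial, threaded)   # roughly equal: the two threads take turns holding the GIL
      ```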

      Python won’t die completely any time soon, much like perl isn’t completely dead. It’ll just be relegated to being a good alternative for shell scripts and other small tasks, rather than being used for anything non-trivial.

      Which is a shame because the language is so approachable and friendly in most other ways.

      1. @Aaron Traas:Python won’t die completely any time soon, much like perl isn’t completely dead. It’ll just be relegated to being a good alternative for shell scripts and other small tasks, rather than being used for anything non-trivial.

        I agree with ESR’s points, and yours. The fundamental issue is that Python is increasingly unsuited to be a general purpose language for writing applications.

        The problem is that I don’t think the core Python devs understand that, or the fact that many folks are trying to use it that way and finding out the hard way that they can’t.

        One of the most used applications here is Calibre, an open source “Swiss Army Knife for eBooks.” It’s cross-platform, available for Windows, Linux, and MacOS. The developer, Kovid Goyal, is also a maintainer of the v2 Python branch and has nothing good to say about Python v3. The last I looked, he had an experimental branch with Calibre being rewritten in C# to get around some of Python’s limitations. I’m watching that with fascination.

        >Dennis

      2. I can’t speak to the first problem, but the second problem simply doesn’t exist. It only exists in the minds of people who don’t know/understand how Python implements parallelism.

        The problem is this – so many languages only offer threads that many programmers think threads are synonymous with parallelism. That’s not the case. As Guido rightly pointed out in a talk in 2012, threads were never even meant for parallel CPU tasks!

        Python supports MULTIPROCESSING for CPU-bound tasks, and it works wonderfully. In fact, it’s probably one of the best multiprocessing implementations available in regards to languages’ default libraries. ESR’s problem is that his old, giant codebase is written with multithreading in mind, not multiprocessing. It could have benefited from a rewrite as opposed to a port.
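
        A minimal sketch of that approach (illustrative only, not reposurgeon code): fan the CPU-bound work out to worker processes, so the GIL never enters the picture.

        ```python
        import multiprocessing as mp

        def crunch(n):
            # Stand-in for a CPU-bound task.
            total = 0
            for i in range(n):
                total += i * i
            return total

        if __name__ == "__main__":
            with mp.Pool() as pool:                        # one worker process per CPU by default
                results = pool.map(crunch, [2000000] * 8)  # each task runs in its own process
            print(sum(results))
        ```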

        Python dominates right now in data science, in finance, has a big presence in devops, in web development, etc. It isn’t going to be “relegated to shell scripts”. The idea that it won’t be used for anything “non-trivial” is hard to fathom given that it’s currently powering the likes of Reddit, Instagram and Youtube.

        It’s really strange to be predicting the death of Python within a year when within the last year several articles in the mainstream media appeared asking if Python had become the most-used language in the world and Stack Overflow’s data scientists began a series of articles to try to explain “the meteoric rise of Python” over the past few years.

        1. >ESR’s problem is that his old, giant codebase is written with multithreading in mind, not multiprocessing.

          You are absolutely wrong in two different ways.

          First, my code was not written with either multiprocessing or multithreading in mind. I tell you that as a fact about my design process back in 2010 when reposurgeon originated.

          Second, you have failed to understand something fundamental about why reposurgeon can’t use multiprocessing and must have multithreading. It has to do with the size of the irreducible shared state for the worker process or threads and the locality of modifications to it.

          MP approaches work well when the problem can be carved up in such a way that either (a) the irreducible size of shared state is small, or (b) it’s large but pieces of it have good locality, so transactions can be serialized, with strong invariants remaining true no matter how transactions are ordered. All the big-data cases you describe are like that.

          Reposurgeon’s job is not. The irreducible shared state is gigabytes wide and has shitty locality. If you try to MP-partition a job like that, rather than letting it sit in common memory and be operated on by threads, the best case is that the performance overhead of constantly passing around large chunks of working set will kill you. Far more likely you’ll effectively be reduced to uniprocessor operation.
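
          To make the copying cost concrete, an illustrative sketch (hypothetical data, not reposurgeon internals): with a process pool, every argument and every modified result has to be pickled across a process boundary, so a large shared structure gets shipped back and forth instead of being mutated in place the way threads sharing one address space could do it.

          ```python
          import multiprocessing as mp

          def fix_comment(commit):
              commit["comment"] = commit["comment"].strip()
              return commit                            # the whole modified object is pickled back

          if __name__ == "__main__":
              # Stand-in for a multi-gigabyte in-memory commit graph.
              graph = [{"id": i, "comment": " log message %d " % i} for i in range(100000)]
              with mp.Pool() as pool:
                  graph = pool.map(fix_comment, graph)  # serialized out to workers, serialized back
          ```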

          >It’s really strange to be predicting the death of Python

          It would be if that’s what I were doing. Reread what I actually wrote.

    2. “…If you hadn’t created Reposurgeon and watched Python fall over when attempting to deal with really huge repos, you might still be happily coding in Python.”

      I think this misses the larger point…

      Python was my language of choice for 15 years. But in the last 2 years I’ve largely abandoned it for Go. The tl;dr version is simply that Go does almost everything better than Python. Stated another way, if I pick Python over Go I gain very little and lose a lot. If I pick Go over Python I gain much but lose very little.

      I don’t wish to be a fanboi of Go. If there was something better I’d use it. But the simple fact is that the annoyances of Python steadily accreted over the years to where I was going to use something else.

      None of which is to say there aren’t things I miss from Python. But the big wins with Go tipped the scale pretty far.

      1. @Michael:I think this misses the larger point…

        I don’t wish to be a fanboi of Go. If there was something better I’d use it. But the simple fact is that the annoyances of Python steadily accreted over the years to where I was going to use something else.

        I got the point, and largely agree. My basic question was “What drives language choices, and what sort of questions should devs ask when presented with a problem before they choose the language to address the problem?”

        Python was good enough for many years that it got thought of as a general purpose applications language rather than a scripting language, and got away with it because it mostly wasn’t used for stuff that would encounter its limitations the hard way.

        Now, Go seems to be getting the nod Python used to get as general purpose applications development language, and Python is back to being a scripting language.

        (And as an aside to an earlier poster who put Lua into the same category, I can’t agree. Lua is neat, but you can’t write stand-alone applications in it. It’s specifically intended to be embedded in a stand-alone application. You can write stand alone applications in Java and Python. The question is whether you should.)

        >Dennis

        1. And as an aside to an earlier poster who put Lua into the same category, I can’t agree. Lua is neat, but you can’t write stand-alone applications in it.

          Sure, and that isn’t what I was saying. The discussion was on modern BASIC analogues and Python’s fitness to that set; the same person also mentioned JavaScript’s taking over from LISP Machines due to JS engine proliferation, and someone else responded that JS fits the BASIC-analog classification better than Python. At which point I stated that, if that’s how we’re looking at it, Lua could be a better BASIC analog than JS for reasons of mechanical transparency.

          Otherwise, I say Python is indeed the better fit for that set.

        2. “Python was good enough for many years that it got thought of as a general purpose applications language rather than a scripting language, and got away with it because it mostly wasn’t used for stuff that would encounter its limitations the hard way.”

          Agreed. Maybe the point I was trying to make is that I didn’t slam into a rock-hard performance wall like ESR did. My stuff never taxed Python that way. What I got was the death by a thousand cuts of trying to hack my way thru the brambles of the Python ecosystem and eventually it was one thorn too many.

          Maybe it’s simply that I tried to force Python into being, as you said, a general purpose applications language because the likely alternatives (Java, C++) were just too horrible to contemplate.

          1. What I got was the death by a thousand cuts of trying to hack my way thru the brambles of the Python ecosystem and eventually it was one thorn too many.

            Python’s solution to this problem appears to be virtualenv. For running back end apps on the major cloud providers (the ones I have direct experience with are AWS and Heroku), that seems to work fine: you pick a target Python runtime, run pip freeze in your local virtualenv to build requirements.txt, push it, and let the cloud provider’s build system do the rest.

            The extra overhead on the developer is having to work inside the virtualenv locally and have a separate one for every project. At first I found that somewhat bothersome, but I got used to it fairly quickly. The advantage is that it entirely decouples your particular Python app from the vagaries of your distro’s Python packaging vs. pip vs. whatever else; your app is in its own environment and doesn’t care what’s installed on the rest of the system or how gnarly it is to find it.
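
            The shell version of that workflow is just python3 -m venv plus pip; here is a rough stdlib-driven sketch of the same steps (the “requests” dependency and the POSIX-style env/bin paths are hypothetical):

            ```python
            import subprocess, venv

            venv.create("env", with_pip=True)                      # isolated interpreter + pip
            subprocess.check_call(["env/bin/pip", "install", "requests"])
            frozen = subprocess.check_output(["env/bin/pip", "freeze"]).decode()
            with open("requirements.txt", "w") as f:               # pin exact versions for the cloud build
                f.write(frozen)
            ```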

            1. “Python’s solution to this problem appears to be virtualenv.”

              I’m well familiar with virtualenv. It certainly helps certain problems, but also brings some of its own. At the very least it’s another layer of tooling/complexity that consumes the developer’s valuable time & attention. Maybe it has gotten better, but the whole thing was very brittle for years. I can’t count the number of releases of pip & setuptools that were just plain broken.

              I think programming languages ought to be rated by the number of additional tools & packages you have to download / install / learn / maintain / fight-with in order to be productive. Go is up near the lead, Python is pretty far back, while Javascript is laying face down in the mud at the starting gate.

              1. Maybe it has gotten better, but the whole thing was very brittle for years.

                I think it has gotten better recently, but you’re right that it was broken for years.

  21. I am shocked, shocked to find out that a dynamic scripting language is outperformed by a traditional compiled language.
    Opening point: Go is great. It’s like C with Python syntax.
    As to the death of Python, I think the statements bring up the question: what reasons was it chosen for when the project started? Are those reasons actually any less true for people starting projects today? If one cared about performance they wouldn’t choose Python. If they care about convenience, Python remains easier than Go.
    Minimizing global locks is just a good idea though.

    1. >What reasons was it chosen for when the project started?

      Combination of GC with a rich type ontology. I knew I was going to need both. Besides, it had been my first tool to hand since 1998.

  22. This is kinda what I was trying to get at in the last discussion on the topic. Is Go literally a scripting language? Well, that’s a definitional question. Is it closer than someone raised in the 1990s-era distinctions might realize from the initial read down the feature set of Go? Is it close enough to significantly eat into the market share of things that definitely are scripting languages? Yes.

    Python is still faster to bash together a 100-line script in. But I find that I get to the crossover point in about a week, where even the prototyping process is faster in Go than in Python. I have a lot more refactoring power and confidence in Go and that starts multiplying over the course of a serious project. The resulting prototype is also closer to production quality because I’ve been better able to refactor as I went.

    1. ” Is Go literally a scripting language?”

      I’d be wholly resistant to calling Go a scripting language. But that’s mostly because that term is generally said as a pejorative.

      On the other hand, I’m finding myself using Go for things that would have been done in bash in the past. In fact, my current project is replacing a largish bash script with a proper Go program. And I like it. Feels … powerful.

    2. Jeremy Bowers: This is kinda what I was trying to get at in the last discussion on the topic. Is Go literally a scripting language? Well, that’s a definitional question. Is it closer than someone raised in the 1990s-era distinctions might realize from the initial read down the feature set of Go? Is it close enough to significantly eat into the market share of things that definitely are scripting languages? Yes.

      My basic assumption is a scripting language is one that does not have to be compiled to native code to run.

      Early script languages were interpreters, like the Unix Bourne shell that was the antecedent of bash. In that sense, BASIC could be considered a scripting language. It took some time after BASIC became popular for BASIC compilers to appear, but you could still have it interpreted interactively, then compile when you had it working as desired for production deployment.

      Definitions are blurring. Java and Python both compile to tokenized binaries, actually executed by the language runtime. The advantage is that the binaries are cross-platform, and target a virtual CPU implemented by the runtime. The runtime abstracts away the underlying platform differences.

      JavaScript began as something interpreted by a JavaScript engine in the browser, but as usage expanded and JavaScript routines got larger and libraries were involved, we started to see JIT compilers that compiled JS to machine code on the user’s system to get performance.

      Now we are seeing JavaScript as the output from compilers for other languages like C. You may not actually compile that to machine code for the target, because the target may well have a JavaScript engine installed, like Chrome’s V8 or Mozilla’s IonMonkey, that does JIT compilation, so you just send the JS to the target and let the resident JS engine output the machine code for you.

      I don’t think of Go as a script language, since you are still compiling to native code for deployment, and you don’t have the convenience of an interpreter step to test your code (or just interpret because you don’t need the speed of compiled code for what you are doing.)

      As Go becomes more popular and more devs become fluent in it, it can displace actual script languages because it has the speed and power to handle stuff the script languages fall down on, but the scripting languages won’t go away. The stuff they are useful for will still exist and they will be the right tools for those jobs.

      The question, as usual, is just what the job is, and what tools are the right ones to apply. Not properly understanding the problem domain and choosing the wrong tools in consequence has been the death of unnumbered projects.

      >Dennis

      1. My basic assumption is a scripting language is one that does not have to be compiled to native code to run.

        Ask ten developers what a scripting language is and you’ll get at least three distinct answers. So, it’s not as pessimal as it could be (ask ten developers what OO is and you’ll get about 15 answers), but it’s still a fairly fuzzy term for sure. So I’m not at all passionate about whether we apply the fuzzy label to Go. I’m passionate about more people realizing what Go actually is, so we break down the barriers between “scripting” and “system” language and get more languages playing in this space.

        As much as I like Go, I still definitely have some non-trivial quibbles with it and want to see more people playing in this space of static languages that are almost as convenient as scripting languages, but still have the benefits of static languages too, without being as expensive to work with as Rust. (Which also absolutely has its place in the language landscape and I really like it too. But it’s not playing in the same space as Go.) Like Nim and Crystal, and hopefully a dozen more in the future. Neither of those have enough draw to drag me away from Go today. But I’m definitely happy to be dragged away later by something better.

        Also Go is not done developing. I’m far from the “Go Is Useless Without Generics” camp, but there definitely are places I’ve both wished for the feature, and wished I could download libraries that use it. (One of Go’s biggest holes, IMHO, is the difficulty of just waltzing on to GitHub and grabbing a binary tree library, or an immutable $ANYTHING library, and having it work well, because without generics such libraries basically can’t exist.) If Go does get that, it will be that much more difficult to pry me away.

        (Especially as it has become clear that the Sufficiently Smart Compiler, or its modern incarnation as the Sufficiently Smart JIT, isn’t going to happen and the old scripting languages are going to plateau at 10x slower than C or so, with frequent spikes to worse performance than that, all the while chewing through your RAM to get that fast. And while it’s an accident of history and I don’t believe dynamic languages are fundamentally impossible to have good threading in, it is still the world we live in that all the good dynamic languages are basically impossible to get decent threading out of.)

        1. Things I dislike about Go:

          1) Lack of generics. Go isn’t useless without generics, but their glaring lack just makes you go “oh, come on, there’s no fucking excuse for this”. A ton of CS has been done since C hit it big, there are many implementation strategies for deriving classes of types from other types, pick one and get on with it.

          2) Stupid package naming. In Java, domain-name-based package naming was advisory; I could name my package “come.on.fhqwhgads” if I so chose, as long as it didn’t clash with any libraries I was using. In Go, domain-name-based package naming is mandatory if you want your package to be go-get-able. Which means that you have to decide, up front, whether and where to publish your package online — and God help you if your favorite VCS-hosting service goes the way of SourceForge.

          3) Until Go 1.11 — no versioning. When you ‘go get’ something, it’s literally the latest thing out of GitHub or wherever.

          I just… get the feeling that the Go community doesn’t want to invest in repo infrastructure, and that’s why they’re like “oh, just use github — see, it’s even built into go get” — but they don’t allow the user to change the defaults for ‘go get’. The Go toolchain is opinionated, and if you disagree or find their solutions inadequate — tough.

          It’s kind of a shame because Go thrives in its niche — network applications. It’s not really a systems programming language (having a GC rules it out for that), but it very handily replaces C in one of C’s traditional uses: applications, utilities, and services running on a Unix substrate.

          1. >A ton of CS has been done since C hit it big, there are many implementation strategies for deriving classes of types from other types, pick one and get on with it.

            I’m involved in the generics debate on go-nuts. Designing a generic-type system that meets Go standards of simplicity is a much harder problem than you think it is. Sure, it’s easy if you’re willing to lumber down your language with lots of additional keywords and special semantics for representing interface contracts that is disconnected from anything else in the language. The Go devs aren’t willing to do that and I applaud them for it.

            >In Go, domain-name-based package naming is mandatory if you want your package to be go-get-able

            Yes, this sucks if the residence node of one of your dependencies goes poof. Welcome to distributed systems on unreliable hardware. There is no good solution to this problem short of blessing one central package repository and investing enough in it that it never goes down. Oh, look, now you have a single point of failure! Similarly, the only way you avoid domain-name-based package naming is by having a single-point-of-failure name registry. The Go policy isn’t abdication or laziness, it’s a refusal to pretend that reality is not what it is.

            1. Twenty years ago I met C++ templates, and it gave me something I’d been wanting for years — a way to write a standard data structure or algorithm ONCE and be done with it, instead of having to rewrite it for every new combination of types. Ever since then templates/generics have been a must-have feature for me in a statically-typed language. I don’t ever want to go back to the bad old days.

              So Go has always been dead on arrival for me.

  23. >extremely ill-advised decision to turn strings into Unicode code-point sequences rather than byte sequences

    I think the idea may have been to make it easy for business apps, for programmers who are not really that technical, so they can just take any input string from a web form, stuff it in a database, export it in XML, import it into another app, and it is still displayed right without really having to understand what is going on.

    For business, text is often a stinking mess you don’t really want to touch, just save and display without really doing anything with it. Before Unicode we had to deal with crap like someone calling from the Norwegian subsidiary that the uppercase Ø letters are not displaying properly, when you don’t even speak the language and have little idea what is going on.

    Unicode arrived and we gladly forgot about it all, and basically started to treat text as black boxes we don’t look into, because we can’t: it can be Chinese or Thai or anything in a multinational corporation. So we just save and display and print it out as it is entered, without touching it, and the interpretation is up to the users.

    They may have wanted to make it easy. They may have also botched that, I don’t know.

  24. FWIW, although it is surely a lot of fun to do the conversion, it seems to me to be a solution in search of a problem. You are having a problem with the conversion of the largest repository you can find? Surely that doesn’t demand a solution for the 99% that it works well for? To me the solution is to buy a little time on an AWS machine with an insane amount of memory and be done with it. Especially since it is a one-and-done program.
    I understand that the nature of quadratic growth doesn’t always make this a viable solution, but from the problems you describe it is a solution here. Extra hardware is always cheaper than custom software.

    1. > To me the solution is to buy a little time on an AWS machine with an insane amount of memory and be done with it.

      I hear this a lot.

      No, the cloud isn’t the answer. Cloud VMs have poor memory latency; that would make my test runs longer, not shorter.

    2. > Especially since it is a one and done program.

      I think it’s worth being very clear that it is not a one-and-done program. It’s a tool that may require many runs to get just the right tuning for your use case, modulo the specific pathology to be wrestled. That tuning time should be maximally spent feeling out the edges of the problem, not waiting for processes to complete or die of resource starvation.

  25. > reposurgeon hit a performance wall on the GCC Subversion repository. 259K commits, bigger than anything else reposurgeon has seen by almost an order of magnitude; Emacs, the runner-up, was somewhere a bit north of 33K commits when I converted it.

    > The sheer size of the GCC repository brings the Python reposurgeon implementation to its knees. Test conversions take more than nine hours each

    These numbers boggle my mind. Why does reposurgeon have so much trouble with repos that are so small? Serious question.

    The Linux kernel is a “medium” git repo and it’s 3x larger. I work with svn and git repos that are 30x to 400x larger than Linux. I won’t count the 400x larger repo as it’s pretty unusable in git, but there’s still a two-order-of-magnitude gap between “the biggest known reposurgeon project, ever” and “repos that I routinely slice and dice as I move from project to project.”

    What is reposurgeon doing with all that time? What are the top three profiling hotspots? Which feature of reposurgeon contributes the most to the cost?

    I have to do really heavy editing (like, file-content-changes heavy) to get anything that isn’t SHA1 or zlib into the profiling top two. A full commit history dump on the Linux kernel takes 25 seconds; manipulating it according to the DVCS migration HOWTO and shoving it back into the git repo takes maybe 10 times that depending on the heaviness of the edits. Author and commit reference mapping requires a lookup table with fewer than a million entries–utterly trivial on modern hardware. A diffstat of the entire history is 3 million file records, it takes 14 minutes to run, and it involves less CPU work than SVN property and merge conversion. Still ~40x less than 9 hours…for a 3x larger repo.

    The gap is not because of the implementation language. I’m using shell commands and ad-hoc Perl scripts. Except for the git tools themselves, everything is slow and single-threaded, and somehow it all still runs rings around reposurgeon. One of us is missing something important.

    Perhaps there is some larger workflow issue? Do the 9 hours include svn-fast-export or cvs-fast-export runtime? Do you do a test conversion, say “whoops there’s a problem at commit 5432 out of 259000” and do the entire conversion again, instead of fixing it in-place in the output git repo? Is reposurgeon rewriting embedded RCS tags?

    1. >These numbers boggle my mind. Why does reposurgeon have so much trouble with repos that are so small? Serious question.

      You think this is slow? Fact: Even the Python version reads Subversion dumpstreams faster than the native Subversion importer does – and that latter is written in C.

      >What is reposurgeon doing with all that time?

      In the Python version the code that gets hammered seems to be object allocation and string-bashing. There were serious O(n**2) hotspots at one time but Julien Rivaud and I smashed those out back in 2013.

      I’m going to be very interested to see where the usage spikes are in the Go version.

      1. > You think this is slow?

        Yes, I think this is slow. git filter-branch feeds every commit to a shell instance for mangling, and doesn’t take that kind of time to rewrite the commit history even when the repo is much larger. If your conversion time is dominated by the performance of handling dumpstreams then I think I understand the 9-hour problem.

        I divide the conversion work into two phases: one just translates SVN literally into git, so I get 259000 commits on a single branch and a parallel branch containing a tree of metadata corresponding to each content commit. The second phase reads just the metadata and mangles it as required to rearrange the blobs into trees, branches and tags. I built it that way because the first phase that moves the blobs from SVN to git takes a few hundred to a few thousand hours on my repos (most of this is SHA1 and zlib and IO), but it never needs to be repeated once it’s done. The second phase (the one that needs some human input and seems to do the interesting things reposurgeon does) usually grinds through hundreds of thousands of commits per hour, though the “human input” thing usually means that it only gets to work in batches of fifty thousand commits or so.

        With my workflow, it’s never more than an hour per test run, and after a few repetitions of that hour that unit of work is done (assuming something horribly wrong isn’t discovered later), and I move on to the next one.

        When conversion is complete, I let git filter-branch remove grafts and git gc get rid of anything that was removed from history. I save the output of git show-ref as a checkpoint at every edit step (it’s also a handy log of the commands I used), and restore it if I do something wrong part way through.

        > In the Python version the code that gets hammered seems to be object allocation and string-bashing.

        Sure, those are the slow parts of Python, and therefore the slow parts of almost every Python program. What are the slow parts of reposurgeon? What’s allocating all the objects and bashing all the strings?

        1. >What are the slow parts of reposurgeon? What’s allocating all the objects and bashing all the strings?

          Ask me again when the Go translation is done. One of the side effects will be better profiling tools. Python’s aren’t very good – or, at least, if there’s a way to get information as fine-grained as I need out of them I’ve never found it.
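
          (For context, and not as a claim about where reposurgeon’s actual hot spots are: the stock cProfile/pstats tooling only reports per-function aggregates, which is roughly the granularity ceiling being complained about here.)

          ```python
          import cProfile, pstats

          def hotspot(commits):
              # Stand-in for metadata string-bashing.
              return [c.strip().upper() + "\n" for c in commits]

          profiler = cProfile.Profile()
          profiler.enable()
          hotspot(["commit %d " % i for i in range(100000)])
          profiler.disable()
          pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)   # function-level rows only
          ```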

          1. > One of the side effcts will be better profiling tools. Python’s aren’t very good

            Fair enough. I badly miss perf on every platform that doesn’t have some equivalent.

          2. > Python’s [profiling tools] aren’t very good – or, at least, if there’s a way to get information as fine-grained as I need out of them I’ve never found it.

            Stupid olde-skool trick?? Build a private copy of the python interpreter with all the debugging hooks in place (“-g” et al.); run it under gdb and set breakpoints at the places in the interpreter where you think the hotspots are.

            Obviously, this will be nearly an order of magnitude slower than “real” Python, so you should do this with your second-gnarliest repo conversion.

            1. >Are you saying you’ve never profiled the Python code?

              I absolutely have. Julien Rivaud and I found some hidden O(n**2) operations and reduced them, back in 2013.

              We got things tuned to the point where it was hard to see hot spots, except one in mergeinfo processing that only kicks in on *very* large repos. I’m pretty sure that the allocator overhead was dominating everything else.

  26. Why can’t model airplanes use Diesel fuel like the Big Rigs?

    Python is a microcode interpreter not unlike Java. It CANNOT perform like a compiled language until/unless there is a native Python chip (which won’t exist – there are Java, Forth, and Pascal chips, even if obsolete). Saying a compiler is 40x more efficient than an interpreter is comparing a skateboard to a muscle car.

    That said, Python jumped the shark with 3.0. Forget all the other stuff: a division is a choice between truncation and precision. When X/Y on ints returns 1/3, either you know it will be truncated and the modulo might be needed, OR, if you convert it to some kind of float, it won’t be exact, as in 0.333 (x3 = 0.999, not 1.0!). Python made a bad choice here.
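
    For reference, the Python 3 semantics being objected to, in a small sketch:

    ```python
    print(7 / 3)     # 2.3333333333333335 -- true division always yields a (binary) float
    print(7 // 3)    # 2                  -- floor division when you want truncation
    print(7 % 3)     # 1
    print(-7 // 3)   # -3                 -- floors toward negative infinity, not toward zero
    from fractions import Fraction
    print(Fraction(7, 3))   # 7/3 exactly, if neither truncation nor rounding is acceptable
    ```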

    Still, for a quick program, prototyping, or low load programming Python optimizes programmers’ brain time.

    I think Python needs to be forked. Maybe Quocatyl (think feather Boa) where one would be more geeky, C-like, hackers need to know, and the other will go toward what BASIC was.

  27. Go won’t eat their lunch. Go is still not mainstream. C#/dotNet is still more of a force.

    What are iOS or Android apps coded in? Go? Anywhere?

    Webserver backends?

    To extend what I said above, I agree Python is at a crossroads. Does it want to be Java, Javascript, BASIC, or does it want to be C or Go? Or will it fork?

    My sole objection to Go so far is there should be the language, and there should be the standard library, like C and libc. Printf is NOT part of the C language.

    Go’s memory allocation with GC is part of the lang, not the lib. That is violating a boundary that should never be crossed.

    1. Wherever you draw that line is going to be arbitrary. Unless you say the minimum to be Turing complete, but then the only things C supported would be function calls, subtraction, and the ternary operator. I do lean toward a minimal language set. I love C, but I’ve always thought they missed the boat with things like memcpy being a function. Moving data structures around should be part of the language because there are so many assumptions that can be made that a true memcpy function can’t. And that is why it usually ends up a built-in in many versions of C.
      Most languages have some way to allocate memory that is not a function call. Should C have a stack_malloc() function that has to be called to dynamically allocate stack memory for local variables? Or a compile-time static_malloc() for statically allocated heap memory? The fact that dynamically allocated heap memory is a function is more of a special case. And can anyone imagine statically allocated stack memory? Forth maybe?

      1. I believe C now allows variable length automatic arrays. And of course alloca, which may be a Unix thing rather than C.

  28. @Amy Bowersox: Somehow, I don’t think Our Host would touch C# with a ten-foot pole, either, and I would guess that that reasoning is largely based in the fact that it is Micro$oft ick. Would that “hobble” him as a developer?
    If he chooses not to use a tool because of who made it, yes, it would, but I think he may surprise you. Eric has always been a technologist first. I’ve known him since before he became famous and still had a $DAYJOB.

    The question is “What is the right tool for the job?” and the problem is determining just what the job is. Failure to properly understand the problem to be solved has doomed many projects.

    Also, ever heard the aphorism “the leopard doesn’t change its spots”? I’m sure the EEE strategy is still in the back of the minds of many at Micro$oft.
    The leopard may not change its spots. People do. So do companies. That you don’t wish to believe that does not make it untrue.

    I’ve been using computers and watching this since Microsoft was the developer of a version of BASIC, and was asked by IBM to come up with an OS for the original IBM PC. Back then, IBM was the Evil Empire, and trade publications had stories about IBM sales reps threatening to have DP managers fired if they didn’t specify IBM. Microsoft was small potatoes indeed. It grew from there.

    MS is a publicly held company, and the key metric the financial markets care about is the price of the stock. MS had posted regular double-digit increases in revenues and profits, and got a stock price in the stratosphere. At one point, Bill Gates was the world’s richest man, based on the value of his MS stock. He stepped aside at an opportune moment – he went out a winner. It became Ken Ballmer’s new mission as CEO to support the price of the stock, which was a challenge as the market was changing and MS’s business model had to change too. What the financial markets reward is growth, and the question was where growth would come from. MS had been the poster child for a growth company, but was in the process of becoming a mature company. Mature companies throw off great gobs of cash, but don’t have stock prices in the stratosphere. The market for Windows was saturated – pretty much every machine that could run Windows did. The market for Office was similar. There was revenue in upgrades and replacements, but far fewer new sales. Where would growth come from?

    These days, the money is in cloud services, and MS must co-exist in a multi-architecture market place and play nice and cooperate with everyone, and it knows it.

    I’m sure the EEE strategy is still in the back of the minds of many at Micro$oft.

    It may be, but if they want to keep their jobs they won’t put it into practice. MS CEO Satya Nadella is firmly behind the new approach, and he’s not going anywhere. MS’s numbers are good, thank you, so the board is unlikely to fire him.

    And as an example of how much MS has changed, see https://www.zdnet.com/article/microsoft-open-sources-its-entire-patent-portfolio/

    It wasn’t all that long ago that most folks would have considered that unthinkable.

    Like I said, it’s about the money, and the way to make it these days in the tech market is pretty much the opposite of the old days of account control and locking in the customer.

    As for OpenJDK, the blog post I linked to was intended as a warning and a pointer to OpenJDK for those that haven’t yet gotten the message. It just played up the fact that, by subtly changing the license on the downloads that people have “always” gotten from them, Oracle is trying to pull a fast one. Easily evaded once you see it for what it is, but a fast one nonetheless.

    I concur, but it’s on the developer to keep up and be aware. I’d be a bit surprised if readers here who develop in Java for a living and deploy commercial installations weren’t aware of it.

    (And trying to pull fast ones has been part of the tech marketplace for as long as I’ve been paying attention, which is over 30 years. I find a lot of it hilarious.)

    My basic objection to your original post is that it comes across here as “Microsoft? Ewww! I’ll get cooties!” No, you won’t, and the sooner you shed that notion the better off you’ll be.

    >Dennis

    1. >If he chooses not to use a tool because of who made it, yes, it would, but I think he may surprise you. Eric has always been a technologist first.

      I don’t trust closed-source tools, period; the downstream risks of becoming dependent on them are too high. This is me speaking as a technologist – it’s not a Microsoft/anti-Microsoft issue.

      I don’t know about C#. Is the toolchain open-source? If so, it is theoretically possible that I might use it. Though unlikely.

        1. I know a guy who was a college professor. He trained his students using Silverlight and built an amazing architectural application dependent on MS databases and whatever constituted Silverlight’s “special sauce.” It could literally take a building apart into pipes and wires and bricks, kind of like what we saw on Max Headroom, but in 3D color.

          Then MS gave Silverlight the axe. Goodbye student’s prospects for work. Goodbye miraculous application.

          When it comes to proprietary programming languages, the best things in life are free.

      1. > I don’t trust closed-source tools, period; the downstream risks of becoming dependent on them are too high

        You mean like becoming dependent on Python, or GCC?

      2. As others stated, the .NET and C# toolchain is open source now, but you’re probably honestly better off using Java; C# was based on Java in the first place, and Java has much better support in the open-source world, in particular for running on non-Windows OSes – something C# is capable of, but much of its ecosystem depends on Windows.

    2. Steve Ballmer’s name is Steve Ballmer.

      He’s not cool enough to be named Ken.

      (He’s not cool enough to be named Steve either, but this is where we are.)

  29. @esr: I don’t know about C#. Is the toolchain open-source? If so, it is theoretically possible that I might use it. Though unlikely.

    C# and F# and other things are part of the .NET framework, so the answer is probably yes. MS has open sourced .NET, and MS engineers are major contributors to the Mono project.

    This means it should be possible to write cross-platform code in C# that runs on Windows and Linux, because the code is actually executed by the .NET framework.

    You may not ever use it, but there isn’t a reason you shouldn’t beyond “wrong tool for the job”.

    I’m less fussy than you about whether to go closed or open source.

    The issue with closed source and proprietary is “Will the vendor still be around and maintaining and supporting it X years from now?” I don’t expect MS to suddenly wither and blow away.

    The issue with open source tends to be direction, and whether the project gains enough traction that someone other than the original devs gets involved and can pick up the reins if the founders leave. Most open source projects never gain traction and become abandoned.

    The things that are making you transition Reposurgeon to Go are examples of what I mean by direction. Guido has stepped back from actively leading Python development, and the folks holding the reins now don’t seem to realize the stakes of the game they are playing. Since their living probably isn’t dependent on it, what sort of club might get their attention?

    An advantage to commercial software that is sold for money is that the folks selling it have an incentive to pay attention to the customers, fix bugs in a timely manner, and implement features the customers want, because if they don’t, people stop buying their software and they go under.

    A fundamental problem with open source is the disconnect between the developers and the folks that use it (cough Mozilla cough…)

    And most things I can think of that are now open source are what happens when software becomes a commodity, and there’s no longer a way to make a decent profit selling it.

    OSes and development toolchains are now commodities, so…

    >Dennis

    1. I was sitting here pondering a sci-fi reference to C#.

      Note: I use C# and IronPython a lot.

      C# and its relationship to .NET is like War of the Worlds. C# represents the weak-ass martians in the giant walkers, and .NET represents the walkers themselves.

      C# is a wonderful language. Its recent updates to allow basically dynamic-like typing with the ‘var’ keyword, along with the ‘dynamic’ keyword, make it a strongly-typed pseudo dynamic-typed C++-like language, with GC, on a VM.

      But it’s forever chained to the monolith that is .NET. If you shed the .NET framework, you shed all of the usefulness of C#. It’s still a fun language to use, but the utility of it is gone. C# requires .NET.

      This doesn’t bother me, since I’m strongly tied to the current business environment using exclusively windows computers, but it’s still an issue with C# as a language, which probably won’t ever get resolved.

      1. Just like Java is chained to the monolith that is the JVM. With Roslyn/Mono there isn’t really a big difference other than C# being better than Java at the same job.

        I often write code that generates and emails reports as .xlsx files and I don’t have a bad conscience about it. It is a zipped XML file, an ECMA standard, well documented, and implemented everywhere from most Android phones (which come with Excel installed) to LibreOffice being able to read and write it. What speaks against it? Well, maybe that it can be overkill. But I would show a report of 28 (long) lines, which is an 11-kilobyte file. Not really a hog to email. Unzipped it’s 68k, of which 29k is the actual data; the rest is mostly style. Yes, I like manager reports to have some style and not just plain CSV files. Why not? Any Linux server can parse it with openpyxl etc.

        And if it hadn’t had one? After unzipping it, the data is in sheet.xml, with a really basic structure telling you that the value of the H15 cell is 1062. If it was a string, one more step is required: reading sharedStrings.xml. Parsing it is entry-level programming homework.
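
        (A rough sketch of that homework in Go – hedged: it assumes the usual xl/worksheets/sheet1.xml layout, ignores shared strings and styles, and “report.xlsx” is a made-up file name:)

        package main

        import (
        	"archive/zip"
        	"encoding/xml"
        	"fmt"
        	"log"
        )

        // cell and worksheet mirror just enough of the sheetData/row/c/v structure.
        type cell struct {
        	Ref   string `xml:"r,attr"`
        	Value string `xml:"v"`
        }

        type worksheet struct {
        	Cells []cell `xml:"sheetData>row>c"`
        }

        func main() {
        	zr, err := zip.OpenReader("report.xlsx")
        	if err != nil {
        		log.Fatal(err)
        	}
        	defer zr.Close()

        	for _, f := range zr.File {
        		if f.Name != "xl/worksheets/sheet1.xml" {
        			continue
        		}
        		rc, err := f.Open()
        		if err != nil {
        			log.Fatal(err)
        		}
        		var ws worksheet
        		if err := xml.NewDecoder(rc).Decode(&ws); err != nil {
        			log.Fatal(err)
        		}
        		rc.Close()
        		for _, c := range ws.Cells {
        			fmt.Println(c.Ref, c.Value) // e.g. "H15 1062" (raw value; string cells need sharedStrings.xml)
        		}
        	}
        }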

        No, I really don’t feel bad about this at all. Should I? It definitely does not look like vendor lock-in at all.

      2. “C# is a wonderful language. … But it’s forever chained to the monolith that is .NET.”

        A pity, that.

        Having spent many days trying to fix broken installs of .NET on some rattle-bang Windows box, I came to despise the very smell of it.

        Microsoft can do really good work when they want to. The design of C# testifies to that, while .NET is the counterexample.

        1. I think the root cause of that bad experience is the Windows DLL-Hell rather than any fault of .NET itself. It’s risking your sanity to look under the covers of how Windows 10 or recent server versions manage this phenomenon, but from the perspective of the user, being able to install programs that have hard and conflicting shared library requirements (and not just .NET or DirectX) and have them Just Work is pretty nice.

    2. I see it the opposite way, having used closed source languages before. The person developing the product does so as a 9-5 job. They don’t actually *use* the product. Worse, as one former “chief scientist” for a proprietary language put it, every idea has to be run by executives and a business case made for it. In other words: will putting X hours into feature Y generate us a sufficient profit?

      Now take Python. No one is paid to work on Python full time, not even Guido. The people contributing to Python are using it at their day jobs. They, unlike the 9-5er, are actually the ones dependent on Python for their living. They’re stuck using whatever they build. As Guido put it, the only question he had to ask was “Will this make the language better?”, not return on investment, sales and marketing issues, etc. The last proprietary programming language I used added just enough of a feature to add a checkbox to their feature comparison table and claim they “had” that feature, even if it paled in comparison to the competition. For instance, their IDE added “Git and Mercurial support”, but for LOCAL REPOS ONLY. No pushing to a remote repository, handling merges, etc. And several years have passed and they’ve never added this functionality in, because the marketing team, which seems to really run product development, has its little feature-matrix checkbox now. The developers don’t care, because they don’t have to use their language/framework/IDE to develop standalone software like their customers do.

      I’ll take the open source development model any day for language and language tool development over the closed source model. I’d like every contributor to be a user and features added on technical merit, not marketing fluff or ability to drive sales.

  30. > That “actual objects” qualifier is important because there’s a substantial scientific-Python community working with very large numeric data sets. They can do this because their Python code is mostly a soft layer over C extensions that crunch streams of numbers at machine speed. When, on the other hand, you do reposurgeon-like things (lots of graph theory and text-bashing) you eventually come nose to nose with the fact that every object in Python has a pretty high fixed minimum overhead.

    Have you tried accelerated graph libraries, like Graph-Tool library (with C++ backend), or SNAP.py (Python wrapper around Stanford Network Analysis Platform library)? Those may have the same advantages as using NumPy for vectorizable data.

    There is also Numba – a Python compiler, though it is geared mostly towards the same work NumPy and SciPy are best at (and is compatible with NumPy). Perhaps a reorganization of the data would also help, from the commonly used “array / list of structs / dicts” layout to the easier-to-vectorize “struct of arrays” (also known as inside-out objects).
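
    (A hedged sketch of that “struct of arrays” reorganization – written in Go since that’s where the project is going; the commit field names are invented:)

    package main

    import "fmt"

    // Array-of-structs: each record is its own object, the layout reposurgeon-like
    // code naturally falls into.
    type commitAoS struct {
    	Timestamp int64
    	Author    string
    }

    // Struct-of-arrays: one object holding parallel columns, which is friendlier to
    // caches and to NumPy-style columnar processing.
    type commitsSoA struct {
    	Timestamps []int64
    	Authors    []string
    }

    func main() {
    	aos := []commitAoS{{1538000000, "alice"}, {1538000100, "bob"}}

    	var soa commitsSoA
    	for _, c := range aos {
    		soa.Timestamps = append(soa.Timestamps, c.Timestamp)
    		soa.Authors = append(soa.Authors, c.Author)
    	}
    	fmt.Println(soa.Timestamps, soa.Authors) // [1538000000 1538000100] [alice bob]
    }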

    > The most notable one at the time was the Python team’s failure to solve the notorious GIL (Global Interpreter Lock) problem. The GIL problem effectively blocks any use of concurrency on programs that aren’t interrupted by I/O waits. What it meant, functionally, was that I couldn’t use multithreading in Python to speed up operations like comment-text searches; those never hit the disk or network.

    Don’t alternative Python implementations like PyPy and Cython avoid the GIL?

    1. >Have you tried accelerated graph libraries, like Graph-Tool library (with C++ backend), or SNAP.py (Python wrapper around Stanford Network Analysis Platform library)?

      No, and it’s not going to happen now with the pure Go translation at 56%.

      Anyway, I would have taken a lot of persuading that overcoming the impedance mismatch at the Python-C++ boundary was easier than just translating the whole program into one other language. Seams like that are hell on your complexity budget and downstream defect rates.

      >Don’t alternative Python implementations like PyPy and Cython avoid the GIL?

      No. If they did, you’d hear a lot of buzz about concurrent Python on these platforms; ain’t happening. The PyPy people are working on the problem, though, which is more than one can really say about the CPython crew.

  31. Any news on this?:

    http://archive.4plebs.org/pol/thread/187078170/#187093805
    > Anonymous ID:sqkjzTNz Thu 27 Sep 2018 00:23:36 No.187093805 ViewReport
    > Working with a group of 5 other devs thus far to rescind our code. Some of whom have been major contributers.
    >Was hard to find a decent attorney with tech based experience due to the fact that a lot of heavy weighters are going to be extremely upset. Drafting legal docs to make it as impactful as possible.
    >
    >All that work and you fuck us over, Linus. Now we fuck you back.

    ——

    http://archive.4plebs.org/pol/thread/187078170/#q187094037
    > Anonymous ID:zDdj02sY Thu 27 Sep 2018 00:25:31 No.187094037 Report
    > Quoted By: >>187094386 >>187094624 >>187099544
    >Stop fucking commenting as if this would ground the internet to a halt overnight. Revoking code means it can’t be in the next version, it doesn’t invalidate prior versions. 80% of the code submitted to the Linux project is from Redhat or some other major company like Intel or AMD. That other 20% is minor shit except for the few people who have been around forever. This fucking threat is hollow because no company is going to pull their code because of a few trannies. This CoC thing will be ignored because big money rests on the project being successful. The moment this fucking dipshit starts becoming a liability the big money will smite him full force.

    ——

    http://archive.4plebs.org/pol/thread/187078170/#q187094386
    > Anonymous ID:sqkjzTNz Thu 27 Sep 2018 00:28:48 No.187094386 Report
    > Quoted By: >>187104619
    > >>187094037
    > >except for the few people who have been around forever.
    >
    >Ding ding.

    ——————

    https://lulz.com/linux-devs-threaten-killswitch-coc-controversy-1252/
    >thejynxed
    >
    >My company is already considering the full withdrawal of all contributed code to the kernel project and related embedded kernel projects. You literally can’t run embedded Linux on industrial controls or handheld scanners without this code.
    >September 22, 2018 Reply

    1. If there’s even a hint that this sort of thing is even feasible, watch Oracle pull another fast one and make OpenJDK go bye-bye. Then, everyone will have to pay for Java licenses.

      And that’ll be just the beginning. This is a can of worms you do not want to open up.

      Is the freedom to be a dick online and still contribute to the kernel really worth it?

      1. >Is the freedom to be a dick online and still contribute to the kernel really worth it?

        Preventing totalitarians from getting the control they crave is always worth it.

        1. One of the linchpins of open source is that once released under an open-source license, that code remains open in perpetuity. I don’t think even you would want to cede that ground for an opportunity to stick it to some of your hated enemies — because you won’t be getting that back, and once companies like Oracle sniff out that there’s money to be made by rescinding and charging license fees for code that they once offered in good will to the community, they won’t hesitate to do so for a second.

          To quote one of fiction’s greatest military space commanders — it’s a trap!

          1. That may be true in spirit, but recall that the FSF saw it necessary to make it explicit in GPLv3.0, section 2 of the Terms and Conditions – which may be taken as evidence that the threat has legal teeth. It has yet to be tested.

          2. The passion is like a flower.
            It must be supported, but not held too fastly.
            Kept from the burning sun, but not heralded into the night everlasting.

            All things, within themselves, bear the seeds of their own distruction.
            The Free Software movement, in it’s error, imagines every user a
            creator. We know this not to be true. In that, the true progentors are
            discounted and unworthened. Imagined to be replaceable, to be thrown
            away if irredeemable: the irreplacable artifact. Their departure mistooken
            for a familiar voyage seen before; whereas it is a final farewell.

            Opensource, too, has its deadly poison. One we see sprouting it’s
            leaves in tandem with the social ills of its predecessor and
            compatriot. Opensource, to be enthralling to the business class, flew
            close to that hidden master; for behind this class lies not the
            freedom of choice: of the market: of mertit. Behind the business class
            lies the leaded hand of our modern fates. The rulers who show not
            their face but sew our futures.

            Here today, we stand before the confluence, by design, of these two
            fatal seeds: their sprouting arms interwined. Let them grow they will
            strangle what we loved, what we laboured for. Every victory turned
            instead to a detriment: that what was a labour of love: now the chain
            that would bind us to our malefactor.

            Shall we continue down the path that has, before us, the bricks
            allready set. That which we are ment to go down, for it is the easiest
            path, the one which uses our body of works against our weal?

            Or shall we do what must be done? Must that which is corrupt bake in
            the sun? Must our child be tortured, turned against his parent, turned
            into a snake? Must we submit to those who would rule us? Who we
            struggled against (but have forgotten in their embrace)? Our enemy who
            beseeches us that he is our friend?

          3. >One of the linchpins of open source is that once released under an open-source license, that code remains open in perpetuity.

            Misappropriation of the developer’s code resulting in harm to his reputation, or compromising the reputational incentives offered by open source, is a cause of action under Jacobsen vs. Katzer. This is a separate issue from revocation of the license.

            1. This is indeed one of the blatantly stated goals: to shame and bring disrepute to a disobedient programmer.

          4. >One of the linchpins of open source is that once released under an open-source license, that code remains open in perpetuity.

            This is never stated in the “classic” opensource licenses.
            (Which are all very short documents, not drafted by lawyers)
            (BSD, GPLv1, GPLv2, MIT, etc)

            It is just assumed to be by various people.

            Which is a mistake under US law since the property-law defaults are actually the opposite.

            It IS stated in the 2nd and 3rd generation of opensource licenses that were actually drafted by corporate counsel (of which the GPLv3 is one).

            >and once companies like Oracle sniff out that there’s money to be made by rescinding and

            They already know. This is why they usually insist on proper licenses without missing or omitted terms, as well as copyright assignments.

            The Opensource and Free Software movements might need to disassociate with the USA however. I remember a time when the Opensource and Free Software movements had little love for the USA and openly flouted US encryption restrictions. At that time the movements were inhabited by the people, and not controlled by establishment interlopers.

      2. > Is the freedom to be a dick online and still contribute to the kernel really worth it?

        Given that the SJW definition of “be a dick” is “doesn’t support my position loud enough”, yes.

        Remember:
        “You cannot be civil with a political party, that wants to destroy what you stand for, what you care about.”

        Remember which side keeps *starting* this shit.

        1. The SJW’s definition of “being a dick” is “having one”.

          A white male cannot be hired at google unless he physically chops it off.

          America is a disgusting aberration.

          1. >A white male cannot be hired at google unless he physically chops it off.

            No, most of their techies are still white and male. People who meet the IQ minimum for those jobs are too rare in non-white populations other than East Asians for it to be otherwise. And most white women who meet the threshold have sensibly decided they can have better lives somewhere other than software engineering.

            This brutal reality must gall the lefty/identitarians inside Google terribly. I think that pain explains why the reaction to James Damore was so vicious.

            1. > This brutal reality must gall the lefty/identitarians
              > inside Google terribly. I think that pain explains
              > why the reaction to James Damore was so vicious.

              I don’t think they *do* notice it. They probably don’t interact with the “real” techies–the guys in the back who troubleshoot their code with debuggers or logic probes (allegedly Google is also doing some work at the hardware level, and not just cellphones and tablets). Hell, most of them are probably in different buildings and on different floors. The EE/CS types don’t go to the parties much, skip the soft-skill crap and generally are heads down doing important work.

              So when a Damore pops up it’s double insulting–first he’s questioning their sacred worldview, secondly he’s doing it from inside the wire where that was supposed to be eradicated.

              1. One thing we learned in the aftermath of Damore’s firing is that there are certain internal Google+ forums you’re pretty much required to follow if you want a future in the Goolag. And in those, the SJWs run wild, with daily posts expressing the usual hatreds of whites, cis-het men, etc.

                Maybe the hard core types can keep their heads down, forgo promotions, and risk getting a bad stack ranking, but … well, the theory of social justice convergence says it won’t work for long. Certainly their search results have become … sub-par.

                This could also result in a near-sudden death scenario for the company, its only serious money maker by far is advertising, and it needs the very best minds to stay one step ahead of the scammers. If enough of those get purged, find greener pastures, or decide they don’t want to touch the company with a 10 foot pole, it could be bad.

              2. It’s… a bit more complicated than that.

                At Google, there are two classes of rank-and-file: those who build systems, and those who maintain them. The former are the actual software and hardware engineers. The latter are called “site-reliability engineers” in Google jargon. I was an SRE.

                I can tell you that a lot of SREs are, in fact, SJWs. Liz Fong-Jones is SRE. So was Tim Chevalier. I can also tell you that, by and large, there seemed to be this unstated assumption that SREs weren’t “real” techies. At best, they seemed to be lower on the totem pole than the actual software engineers.

                Another thing about SREs is that, when they’re not troubleshooting problems, they’re either waiting around for another problem, or they’re trying to automate away their own jobs. Naturally, this job leaves them a lot of free time to do things like, oh, I don’t know, wage the culture war.

                Note that there are also SJW software engineers. Jaana Burcu Dogan is one: she works (sort of) on Golang. She was the primary force pushing for the code of conduct to be adopted.

    2. I’ll believe it when it happens. Until then, it’s just a bunch of whining which isn’t going anywhere.

  32. Doesn’t the GPL prohibit additional restrictive terms?

    Does appending a writing stating, in a multitude of words, “you cannot contribute if you are not a proud multi-positive feminist, and we will hunt you down and have your livelihood ruined if we deem words of yours offensive”, and requiring contributors to agree to said contract, constitute such an additional restrictive term in practice?

  33. Tried to run the script from the post on MicroPython:

    3.4.0
    dict 16
    float 12
    int 4
    list 32
    set 16
    str 17
    tuple 8
    unicode 17

    Of course, it wouldn’t help with ESR’s issue, MicroPython is an experiment in scaling Python *down*, not *up*.

    1. I thought of micropython too. Unfortunately, it still has a GIL, and doesn’t have the gil-thread-contention fixes that CPython has, so its performance tanks if you start a second thread. Since it doesn’t have multiprocessing or subprocess, that means you’re down to os.fork to get decent parallelism.

      1. What do you mean unfortunately? It has GIL fortunately, for compatibility with CPython. But as anything else in MicroPython it’s configurable. Set MICROPY_PY_THREAD_GIL to 0, and there’s no locking in the interpreter, all locking belongs to you.

        > Since it doesn’t have multiprocessing or subprocess

        Of course it has, it’s just not a bloated-monolithic type of project. Search for “micropython-multiprocessing” and “micropython-subprocess” on PyPI. (Surely, they offer a subset of CPython’s functionality, just the same as uPy itself, and are WIP.)

  34. Writing in a high-level language and optimizing the hot paths in a low-level language (after measuring hard numbers) is how one does programming right. Everything else (no matter how smart(TM) the language/tool is or how big the glasses its user wears are) is nothing more than asking for less readable code, subtle bugs, premature optimization, reinventing the wheel, bad software architecture, or a combination thereof.

    1. To be clear. When a project uses both a macro assembler like C and a zen language like Python, it’s not a workaround, it’s done on purpose. They try to solve different problems and that’s what lets them be exceptionally good in their respective fields of application.

      Sure there are many people out there who miss the point, and write C++ with Python syntax and Pascal with C syntax, but that’s not the fault of the language.

  35. As an old-timer of the C persuasion, I’ve been teaching myself Go in an effort to stay relevant and see what all the fuss is about. There are a lot of things I like about the language, but I find it infuriating that I can’t implicitly convert an int32 to an int64 without the compiler complaining loudly about it. I get the whole type-safety thing, and yeah you can shoot yourself in the foot with implicit conversions in C, but for crissake if I want to stick an int32 into a variable declared as int64 I should damn well be able to without jumping through any hoops. That, and the fact that an unused variable is an error instead of a warning, causes more swearing than anything else I’ve ever done in any language (I often comment out a block of code for testing purposes, and the compiler complains every single time because I forgot to also comment out the declaration of any variables that were used in the commented-out block).

    Oh, and the fact that the language forces me to use “K&R” style bracing (if I can call it that when referring to Go). Don’t even go there.

    Now, you kids GET OFF MY LAWN!
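
    (For anyone hitting the same two walls, the accepted workarounds are an explicit conversion and the blank identifier – a minimal, hedged sketch:)

    package main

    import "fmt"

    func main() {
    	var small int32 = 42
    	var big int64

    	// big = small      // won't compile: int32 never converts to int64 implicitly
    	big = int64(small) // the widening has to be spelled out

    	n := 7 // pretend the code that used n is commented out for testing
    	_ = n  // assigning to the blank identifier silences "declared and not used"

    	fmt.Println(big)
    }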

    1. > I often comment out a block of code for testing purposes, and the compiler
      > complains every single time because I forgot to also comment out the declaration
      > of any variables that were used in the commented-out block).

      Are you saying that Go has eliminated one of the most useful constructs that C++ implemented over the top of classic C, even if you’re not actually writing in C++: the ability to declare variables as close as possible before their first use? Surely not having to write:

      foo()
      {
          int i;
          /* 100 lines of code */
          for (i = 0; i < 10; ++i)  /* where was 'i' declared again? */
              /* body of loop */
      }

      and instead be able to write:

      foo ()
      {
          /* 100 lines of code */
          for (int i = 0; i < 10; ++i)
              /* body of loop */
      }

      should be good programming practice in any language that allows it. If it's not possible in Go, then that echoes one of the criticisms in "Why Pascal is not my favourite programming language".

      1. No, you can still do that. I’m talking more about something like this:

        n, err := strconv.ParseInt(str, 10, 0)
        if (err != nil) {
        // Do something useful
        }

        and then commenting out the if (err != nil) block. The compiler will complain that err is defined and not used, and I hate that.

        1. You can presumably write,

          if n, err := strconv.ParseInt(str, 10, 0); err != nil {
          	// do something stupid with n
          	_ = n // n is scoped to this if statement; use it (or blank it) so the compiler stays quiet
          }

          which makes it easier. I don’t like the multiple return value convention at all though.

          I think Go misses the mark though, partly for the absurdities around semicolons, braces and the error return convention, which are perverse and irritating, but mostly for failing to support generic programming from the very start. Take what C++ does and make it sane.

            1. I looked at Rust’s documentation and I don’t see it. How do you write (for example) a generic accumulate in Rust?

              (Plus Rust is obnoxious to the eye and its evangelists’ evangelism is disturbing.)

          1. >I think Go misses the mark though, partly for the absurdities around semicolons, braces and the error return convention, which are perverse and irritating, but mostly for failing to support generic programming from the very start. Take what C++ does and make it sane.

            go-nuts is trying to develop a generics design that meets Go standards of simplicity. It’s not an easy problem.

            What absurdities around semicolons and braces do you mean?

              1. > Weirdly enough, I think I solved that problem about 90 minutes after writing this.

                From the link:

                > conclusion I’m not happy with, because it requires a feature-cluster that I thought I was glad to be leaving behind in Python. That is this:

                > The simplest and most effective way to solve the generics problem is to embrace operator overloading and the kind of magic method designations that go with it.

                Yeah, a surprising conclusion.

                Similar surprising insights can be applied to Python’s problems mentioned in the post.

                For example, the trivial and obvious way to get around the GIL is to make (expect) the user to use explicit synchronization when he accesses the same objects from different threads. What happens if the user forgets to? He gets a nice and honest segfault, the same way as in C. No wonder the “CPython developers” chicken out from such a solution. Interested parties can find it in MicroPython, the unbloated Python implementation. It offers a GIL too, for user convenience.

                As for CPython, it standardized on another solution to the GIL problem – “actor model” style message passing. When you do that, there’s no need for finicky “threads” fiddling in one address space; you can use processes as well, and indeed the standard library offers everything needed for that.
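
                (A hedged sketch of what that style looks like natively in Go – not CPython’s multiprocessing API, just the same message-passing idea: workers share nothing and only communicate over channels:)

                package main

                import "fmt"

                func main() {
                	jobs := make(chan int)
                	results := make(chan int)

                	// Four workers; no shared mutable state, hence nothing to lock.
                	for w := 0; w < 4; w++ {
                		go func() {
                			for j := range jobs {
                				results <- j * j
                			}
                		}()
                	}

                	go func() {
                		for i := 1; i <= 8; i++ {
                			jobs <- i
                		}
                		close(jobs)
                	}()

                	for i := 0; i < 8; i++ {
                		fmt.Println(<-results) // squares arrive in whatever order the workers finish
                	}
                }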

              2. Generic programming is not about ‘min’ and ‘max’. It’s about defining morphisms from types to types, and from types to functions. (This may sound mathy, but it’s practiced every day by corporate code grinders in banks and insurance companies, thanks to languages like C# and even Java.) To do that, you need to say things like for any two types X and Y, define a new type over X and Y. Because if the type unknowns are implicit, what happens when you want to define a type or function and specify that some — but not all — of the members/args should have the same type? You can’t do this with your proposal. You have to have type metavariables, and that means you have to extend the language with additional constructs to properly support generics.

                1. >You can’t do this with your proposal

                  You’re missing the context. What the various generic-type proposals have been foundering on is the requirement to express interface contracts – all the proposals for that have been overweight and un-Go-ish. What I have done is describe a way to specify contracts that is transparent, lightweight, and probably sufficient.

            1. The combination of the expectation of a semi-colon statement terminator and the auto insertion of such at the end of every non-blank line. And presumably the invalidity of an empty statement. In combination forcing formatting. It seems unnecessary. Obviously not a hill to die on.

              The Go generics discussions are interesting. But I wonder if they are sufficiently abstract.

              1. “The combination of the expectation of a semi-colon statement terminator and the auto insertion of such at the end of every non-blank line.”

                I can’t imagine why anyone would think any of that is a problem. You write your code, leaving out semicolons everywhere except at compound statements. Break long lines as needed. The code runs. Zero. Problem.

                “And presumably the invalidity of an empty statement.”

                What does this mean? I leave blank lines everywhere I need them. I have no desire to write lines that consist of nothing but a semicolon.

                “In combination forcing formatting.”

                Nothing is forced. But pragmatism beats religiosity. The ‘go fmt’ command is one of its very best features.

                All the above is well known to anyone who has actually written significant amounts of go code.

                1. It’s not a question of ‘well known’. It’s a question of taste. Those design decisions have a bad smell. If trivial stuff smells bad, what else is there?

                  (obviously you don’t write an empty statement, the compiler inserts one where you didn’t write one and then breaks as a consequence. But it forces you to write redundant ‘{‘. )

                  You like Go, well great. At least it’s not corrosive.

                  1. “(obviously you don’t write an empty statement, the compiler inserts one where you didn’t write one and then breaks as a consequence. But it forces you to write redundant ‘{‘. )”

                    Still have no idea what you’re talking about. I’ve written a few thou lines of Go, I’d guess. I haven’t broken the compiler yet. And I don’t see a single redundant { anywhere in my code.

                    I follow lots of Go blogs & mailing lists & such. About every 2 weeks a new article appears with a title in some variation of one of these:

                    – ‘I’m too Smart for Go’

                    – ‘ Go Isn’t C/C++, Therefore It’s Bad’

                    – ‘Go Isn’t Java, Therefore It’s Bad’

                    – ‘I’ve Never Written Any Go And Understand None of It, But I’ve Decided It’s a Horrible Language’

                    – ‘Go Isn’t Haskell, All New Languages Should Be Haskell’

                    – ‘Favorite Feature Missing, Can’t Write Code Without Favorite Feature’

                    Generally they’re full of it.

                    1. Don’t know about those complaints, but mine are:

                      – Go is not a Lisp, and I vastly prefer Lisps.

                      – I’m tired of checking return values as the only error checking mechanism, I want exceptions as an option.

                      – I don’t trust the Goolag on multiple dimensions, political, and to not drop its support of the language, which I’m not sure it would survive.

                      The last are true several times over for Rust, which is quite explicitly SJW infested, and thus dangerous to join the community for those who are not anti-fragile.

                      But mostly, I’d rather program in a Lisp + C (or for Clojure, maybe learn enough Java for when that makes sense). For Common Lisp and Scheme, I’ve got a large choice in implementations, which is further security for my code.

  36. This is Go,

    func Square(x int) int {
    	return x * x
    }

    this is Go,

    func
    Square(x int) int {
    	return x * x
    }

    this isn’t,

    func Square(x int) int
    {
    	return x * x
    }

    The opening brace becomes a redundant token, mandatory in both content and position. This arises because the parser rewrites the not Go example as,

    func Square(x int) int ;
    {;
    return x * x;
    };

    rather than

    func Square(x int) int {;
    return x * x;
    };

    which is Go again. And all to save you typing semi-colons.

    It’s a pimple on the face of something that appears otherwise quite nice. But it’s off putting. And since Go doesn’t attempt what I’m abstractly interested in, which is generic programming, I’m not pursuing it. I’m sure it’s great for you though.

    1. func Square(x int) int {
      	return x * x
      }

      I’ve written one-line methods and I’ve written others with way too many lines and indents and not a single one of them looked any different than this. It’s what you get when you run ‘go fmt’. (It’s also what you get when you use a decent IDE). Everything else you wrote is contrived and completely a non-issue – to anyone who actually writes in Go.

      Any saving of typing is a good thing, especially when the thing being typed was unneeded in the first place, as Go has now demonstrated.

      There are plenty of things about Go that can be debated. I might join in. But including ‘go fmt’ with sensible and opinionated choices was the wisest thing I’ve seen a language designer do in a very, very long time.

      1. I don’t think you read my original message. This is a detail, it’s a smell, it’s an unnecessary constraint on the user to save some tiny amount of Go compiler developer time. And I find the style Go forces me to read less readable than the alternative. You probably don’t even see it, which is fine, many C users write Go style braces.

        The bigger personal niggles are the error return convention and the failure to embed generic programming at the core of the language. Not sure about ‘defer’ yet.

        Anyway I think this is a blind alley and I’m looking forward to hearing how Reposurgeon responds to some Go.

        1. This is a detail, it’s a smell, it’s an unnecessary constraint on the user to save some tiny amount of Go compiler developer time.

          And these are all legitimate points, to those of us who aren’t entirely happy with the Worse is Better New Jersey way of doing computing.

          I, for one, find code in the function def\n{\nfunction code\n}\n style to be massively more readable, although it does have the problem of making less code visible on a screen. Go obviously has a strong opinion here, but it’s just that, an opinion.

        2. > This is a detail, it’s a smell, it’s an unnecessary constraint on the user to save some tiny amount of Go compiler developer time.

          Oh, I see. You completely misunderstood the motivation of that constraint.

          No, the Go devs didn’t make that rule to save themselves implementation time. They made it because they were badly scarred by the Great C Brace Style wars of the 1980s and were absolutely determined not to host a repeat. That’s also why gofmt shipped with the compiler.

          >the failure to embed generic programming at the core of the language.

          It’s being worked on. In fact, I’m working on it.

          > I’m looking forward to hearing how Reposurgeon responds to some Go.

          Quite well so far. I will post about this.

          1. It shouldn’t be a brace war, we have tools that will transform from one to the other and back, it’s bizarre to mandate it. Perverse. But I guess you either see the wit or not. An SJW might consider it gatekeeping…

            But nothing compared to sort.

            1. Standard diff tools won’t consider the transformed code to be the same. This affects Git and other VCSes.

              Git already struggles with different line endings.

              1. You transform out of storage, you edit, you transform back, you return to storage. I think it would be a bad idea to measure equivalence at a semantic level.

                1. Then why is it wrong to mandate a brace style? You transform it yourself if you don’t like it. Automatically on file opening and closing if you want to put in the effort to extend your editor.

        3. “I don’t think you read my original message. “

          No, I did read it. You’re the one who used the words “absurdities” and “perverse” to describe Go.

          “This is a detail, it’s a smell, it’s an unnecessary constraint”

          No, it’s a very helpful and wise feature of the language/toolset. That’s my ruling, as one who actually writes Go code.

          1. Aspects of Go but not the main issue _I_ find with it right now. And I think you either swing that way or you don’t.

        4. > And I find the style Go forces me to read less readable
          > than the alternative.

          Readability is a function (largely) of habit. You get fast at what you read most.

      2. Any saving of typing is a good thing especially when the thing being typed was unneeded in the first place

        I agree about the semicolons. My question is, why do I still have to type the braces? Go can infer where they should go, so why doesn’t it?

        1. “My question is, why do I still have to type the braces? Go can infer where they should go, so why doesn’t it?”

          A proper answer would have to come from the designers, but here’s my take…

          Getting rid of the braces would have put Go (too) firmly in the camp of Python-esque significant whitespace. I suspect pragmatism won out here in that there are too many “systems programmers” who are just violently opposed to the notion. (Having written thousands of lines of Python, I don’t think that position is very rational but it is nevertheless a very real thing.)

          That said, being a recent convert from Python, the braces bother me far less than I expected. Given the combo of ‘go fmt’ so the braces are always in the correct (i.e. consistent) position, and a good IDE to mostly take care of them automatically, they are to me a non issue.

          Most language designers claim to adhere to the explicit-is-better-than-implicit school. Yet they violate it at every turn – from necessity. I suspect braces are a considered case of leaning toward explicit.

          1. Getting rid of the braces would have put Go (too) firmly in the camp of Python-esque significant whitespace.

            I would buy this (if, in fact, it’s an actual motivation–as you say, a proper answer would have to come from the designers) if Go didn’t also mandate a particular indentation style. Once you’re enforcing consistent indentation, you’re enforcing a particular pattern of whitespace, so making it significant is no longer an issue, it seems to me.

            I suspect braces are a considered case of leaning toward explicit.

            But the explicitness doesn’t extend to semicolons, apparently. I understand “explicit is better than implicit”–after all, that’s one of the items in the Zen of Python, too. But if you decide that boilerplate delimiters in code need to be explicit, then it seems to me they should all be explicit. Python decided they should be implicit–more precisely, it decided that having whitespace and indentation determine the delimiters was explicit enough. And it decided that for both logical statement breaks and block entry/exit markers. That seems to me to be consistent–as would a requirement to explicitly include logical statement breaks (semicolons) and block entry/exit markers (braces). But requiring the latter without requiring the former doesn’t make sense to me. Granted, I’m apparently an outlier in this respect.

            1. >But requiring the latter without requiring the former doesn’t make sense to me.

              The situation is not quite as symmetrical as you imply, though. Requiring explicit braces banishes the dangling-else ambiguity. I assume that was the motivation here.

              I have a half-formed theory that one of the goals of Go’s syntax design was to make writing Go codewalkers easy. It’s simpler than it needs to be for fast compilation, where you’re routinely going to throw something like a LALR(1) or higher-order parser at it. It seems designed to be readily parsable with naive recursive-descent.

              This makes sense if you want code-analysis tools and IDEs to be easier to write than they are in C.
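
              (A hedged illustration of the dangling-else point, with f and g as stand-in functions:)

              package main

              import "fmt"

              func f() { fmt.Println("inner else") }
              func g() { fmt.Println("outer else") }

              func main() {
              	a, b := true, false
              	// In C, `if (a) if (b) puts("then"); else g();` is legal, and the else
              	// silently binds to the inner if. Go's mandatory braces make the intended
              	// binding part of the source text:
              	if a {
              		if b {
              			fmt.Println("then")
              		} else {
              			f()
              		}
              	} else {
              		g()
              	}
              }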

              1. Requiring explicit braces banishes the dangling-else ambiguity.

                If you allow nested if statements on the same line, yes. Python avoids this by requiring nested if statements to be in indented block form.

                1. As long as everyone is indenting with spaces or tabs at the same length. I had a hellacious experience transferring tabbed Python between Linux and Windows machines in the early 2000s. Between the line feeds and the tab size differences, the stuff I got from the other team was ambiguous enough that it wasn’t easy to determine what was supposed to be going on.

                  C style braces can be ugly, but they’re unambiguous.

                  1. As long as everyone is indenting with spaces or tabs at the same length.

                    Yes, Python struggled for a long time with this. Python 3 no longer allows mixing of tabs and spaces. Also there are now decent tools available to enforce PEP 8 formatting, which helps a lot.

                      That actually was on my list of reasons I delayed migrating to Python 3. The original defence of whitespace delimitation was ‘if it looks right, it will run right’. Python 3 broke this, which was a royal pain in the Windows world where decent editors are hard to find.

                      Python 2 used true tab stops, which eliminated the off-by-one errors. That is, a tab character advanced the pointer to the next multiple of #tab-width (default 8), which is the same behaviour an editor has, so ‘ \t’ would come out to the same length as ‘\t’ (unless #tab-width was set to 1). The only complication was if someone used an editor with a tab width other than 8, mixed tabs and spaces, and didn’t set the #tab-width compiler directive to the width used.

                      The line right below the #encoding line in python 2 should always be the #tab-width line, since smart editors can parse it automatically and it at least tells the user what width to set.

                      Honestly, this still occasionally bites me in python 3, when I’m using the REPL. I do little enough of that work that I haven’t worried about it recently, but I used to patch the old tab-parser back in when compiling CPython.

  37. @esr:

    Julia is well on its way of displacing python in the math+science+engineering+simulation+data sciences domain. I haven’t been following the reposurgeon project but from what I read here, I suspect it might be an equally-good choice (or better!) though for different reasons.

    Julia = REPL and JIT-compiled LLVM, Dynamic, garbage collection, Metaprogramming, Lisp-style macros, Coroutines, comparable speed to Go, python-like syntax and productivity, more powerful large-scale vector and matrix support than matlab, and much more.

    1. Wishful thinking. Julia is a rather narrow-scoped language, catering to those dudes who start counting from 1. It never will become a generic programming language like Python for large target audiences.

      1. Why do you give a shit if it counts from 1? You’ve been counting from 1 your whole life.

        > It never will become a generic programming language like Python for large target audiences.

        How the fuck do you know that?


  38. > Julia is a rather narrow-scoped language

    What can you do in Python that you cannot do in Julia?


    > It never will become a generic programming language

    It already is. The scientific computing and data sciences community have already switched to Julia (at least those who write serious programs operating on gigantic datasets wherein the entire program is in Julia)


    > large target audiences

    esr and the problems he is working on is not “large target audiences”. More like “top 1% of the top 1% of the top 1% audiences” solving complex-enough problems that almost mandate using the right tool for the job.

    1. It sounds like Julia is already a niche language, something used by the scientific computing and data sciences community.

      I would be surprised if Julia has already displaced Python or R, though. The last time I attended a Big Data conference (which was, admittedly, about a year or two ago), Julia wasn’t even on the radar….

      For me to be better convinced that Julia is “general purpose”, I’d have to see examples outside of the scientific and data sciences communities — ranging from both significant projects, and little one-off scripts.

  39. Looks like they’re finally working on a better concurrency model for CPython. 3.8 introduces the interpreters module, which lets you run multiple copies of the interpreter in the same address space. One of the stated future plans is to change the GIL to a local interpreter lock, specific to each instance of the interpreter. At that point, proper concurrency within a process will be possible. We’ll have to wait and see how easy it is to use with shared memory.

    1. “CPython. 3.8 introduces the interpreters module, which let you run multiple copies of the interpreter in the same address space.”

      On the surface this sounds like a heavyweight solution. But progress, I guess.

      Do they have channels?

      1. It’s PEP 554 if you want to look at it. Channels are already implemented; I think the plan is to get queues and a few other things going before 3.8 gets its first RC out. Currently, I think it is pretty heavyweight, since only the singletons are shared between interpreters, so all modules and the like get duplicated. It’s slightly less memory-heavy than multiprocessing, but only just. If it doesn’t cause any severe breakages in the C API or other problems, then maybe they can get it to share some resources which are largely immutable.

  40. I once developed a proper (though very basic and only partly complete) top-down RPG game engine with Python and pygame (rudimentary 2D graphics with tile-based maps, no animation, only block movement), but with NPCs, conversations/dialogues, object interactions and such, with minimum effort – and got it to a stage where I could develop it further, but sadly lost interest in the project circa 2011. I cannot imagine any other language which would have allowed me so much fun with so little effort.

    Python is good like that; if golang has this kind of simplicity and moreover being a compiled-to-machine code language I am definitely interested.

    1. >Python is good like that; if golang has this kind of simplicity and moreover being a compiled-to-machine code language I am definitely interested.

      I don’t think Go is quite as friendly to exploratory programming as Python is. But it comes impressively close, considering it’s a compiled language without a REPL and doesn’t yet have GUI support as rich as Python’s.

        1. “But it does not appear that Go has any “preferred” desktop GUI toolkit as of now, or maybe I am wrong.”

          The Gtk3 and Qt wrappers seem to be fairly mature. There are several “alternative” GUI kits that appear to be at a state of usability. But Go is pretty lacking here, and Google has no motivation to plug that particular hole.

      1. Initial impressions: writing Go seems a lot more like writing C (with training wheels) than writing Python. Maybe I am wrong, but it compiles to machine code and also seems to operate at a lower level than Python (pointers etc.). Obviously not as low-level, but definitely comparable in terms of thinking.

        But on the other hand, it compares reasonably with Python in terms of availability of richer primitive types, GC etc.

        Finding it an interesting experience and definitely different from what I expected of it.

        1. > Writing Go seems a lot more like writing C (with training wheels) than writing Python.

          D’oh. Of course! It makes no sense at all to compare Python and Go. Python is an interpreted, easy-to-use, at-your-fingertips language. Which you can compile if you want. Most people don’t – Python is fast and cool for them the way it is.

          Go is yet another boring compiled language, stuck in that straitjacket, with wishful thinking about cutting into C and C++, C#, and Java while itself being cut into by Rust, etc.

          Beyond that, comparing apples and oranges is always fun.

          1. >Go is yet another boring compiled language, and in that straitjacket with wishful thinking of cutting at C and C++, C#, Java and being cut itself by Rust, etc

            Ain’t wishful thinking. Go has got serious traction in server-side stuff at big data centers, places that can no longer tolerate C/C++ defect rates due to unmanaged memory. This is, of course, exactly the use case Google built it for.

            1. > server-side stuff at big data centers, places that can no longer tolerate C/C++ defect rates due to unmanaged memory

              Good, good. But in my list there were also Java and C#. I wonder how real Go’s penetration is into corporate lands that they have ruled for decades (not just a single one!).

              > Google built it for.

              I thought it was built by God’s prophets, to whom he communicated C and Unix (and who made stupid mistakes recording his will on punch cards). Then, when they grew old, they acquired hubris and built Plan 9, wanting to surpass him, and that’s where Go’s ideas originated. And then they were bought wholesale by Google.

              1. >Good, good. But in my list there were also Java and C#. I wonder how real penetration of Go into corporate lands ruled for decades (not a single one!) by them.

                I don’t know. I don’t have a lot of visibility into what languages Go is displacing, I just hear a lot of reports that its use is ramping up in exactly the places one would expect after examining the language and its libraries.

                If I were asked to prognosticate, I’d bet on C and C++ taking the most serious hit from Go. For two reasons – that’s where the manual-memory-management problem is most serious, and those are the languages from which the transition cost to Go is lowest. Java and C# are probably less vulnerable.

                >I thought it was built by God’s prophets […]

                Not an entirely inaccurate description. But the role of Google’s strategic needs in channeling their design effort is quite obvious as well.

                1. > If I were asked to prognosticate, I’d bet on C and C++ taking the most serious hit from Go.

                  C, maybe — but not C++. C++ has solved the memory management problem in an entirely different way — with RAII and the use of value semantics where possible.

                  There are numerous other gotchas that can affect a C++ program, including the tricky bits surrounding reference aliasing and undefined behavior. These problems are largely solved by Rust.

                  Because of its weak type system and mandatory garbage collector, Go is not even in the running in the fields where C++ and Rust thrive. Again, even inside Google, where Go was developed explicitly to be a C++ replacement, Go has displaced far, far more Python code than it has C++ code. Google’s bread and butter is still written in C++.

                  And Google is even using Rust for critical components of some of its OS and virtualization technologies.

                  Go is great to give to twentysomethings working in trendy internet marketing shops to churn out microservices that are faster and easier to deploy than Python or Node microservices. For the real world of nuts-and-bolts systems programming, however, it’s a non-starter.

                  1. >C++ has solved the memory management problem in an entirely different way — with RAII and the use of value semantics where possible.

                    I don’t believe this. Leaky abstractions don’t get less leaky because you’ve piled chrome on top of them.

                    1. Perhaps. But I can’t think of many abstractions that are leakier (in a very real sense) than pervasive use of garbage collection. In fact, even pervasive refcounting (as found, e.g. in Swift) is quite problematic. This is the kind of problem that Rust solves with relative ease, and Go/Swift etc., by and large, don’t. (Again, this is not to say that Go isn’t a very sensible choice for reposurgeon! But other domains would be quite different.)

                    2. >Perhaps. But I can’t think of many abstractions that are leakier (in a very real sense) than pervasive use of garbage collection.

                      This is bullshit. Pure, unadulterated bullshit. As is proven by comparative defect rates.

      1. Thanks. I will explore those links. The first thing for me, I think, is to find the time to browse through the basic Go tutorials and get started with simple stuff.

      2. Yesterday, just to get the hang of Go, I wrote a simple command-line Hangman game as a learning exercise, and yes, the effort required is slightly greater, but nothing out of the ordinary, especially since I already have some grounding in other languages, including C, and I am not unfamiliar with working at lower levels than Python.

        But for someone whose introductory language is Python and who does not know anything else at a lower level, I can anticipate the difficulty, especially when introduced to concepts like pointers, references and fixed-length arrays.

  41. So, I had a crazy thought a couple days ago.

    As I understand it, the usual approach to making Python run faster is to profile and then move your hotspots to C. Trying to move your hotspots to Go would be tricky, as then you have two garbage collectors possibly stepping on each other’s toes to worry about. At that point, maybe just using C’s manual memory management would be safer.

    But what about a Python implementation written in Go? Why couldn’t that work? I don’t think the competing-GCs problem above would apply; garbage collection could just be left to the Go runtime, which would be aware of how the Go-Python interpreter’s objects are being used and of when it is and is not safe to harvest them. You shouldn’t have to worry about a GIL, as you’d have Go’s concurrency tools to bring to bear. When you’ve hit the limit of what can be achieved in Go-Python, your profiling-identified hotspots could be dropped down to native Go without having to dance across a C FFI.

    You’d get to keep the nice features of Python (friendliness to beginners, ease of exploratory programming, large ecosystem of useful libraries, REPL), and gain speed and concurrency without the sacrifice of memory safety that comes when you have to go to C.

    Since this seems like a wonderful idea, but AFAIK nobody smarter than me is working on anything like it, there’s probably some painfully obvious, crucial problem that would keep it from working. I’m just not sure what it is yet.

  42. I’ve had a similar experience with Python, though in a slightly different context (instead of batch processing we were trying to hit 24 frames a second – but the problems were similar). I’ve spent quite a while developing some novel (and expensive) algorithms for the rigging and deformation of computer-animated characters. Autodesk’s Maya was the target platform.

    Out of the box you can target Maya with either Python or C++. You could certainly write your own bindings for their C++ libraries in another language, but C++ is notoriously annoying as an FFI target. Had I known how complex the project would become I would have done so, but hindsight and all that…

    Since we were doing R+D on a hard unsolved problem I valued flexibility over performance at first, so we prototyped in Python. I knew Python would be _way_ too slow for a final implementation, but for working out the algorithms we needed a very fast-to-write flexible language to experiment with.

    In fact, years later, we wound up with laboriously hand-tuned C. I’m a big fan of _not_ writing C most of the time, but this is that rare bit of code that simply cannot be fast enough, where things like branch mis-prediction and cache coherence become dominant concerns once you’ve wrung what you can from improving the algorithms, and you spend days profiling just to change a few lines of code. I’m not aware of a language other than C that is really appropriate for that sort of thing (Fortran wasn’t a good candidate, though we do use Lapacke in a couple of places.)

    But I was a bit surprised at _just_ how slow Python (or rather the standard implementation of Python, which we could not swap out, because the whole point of using it was that it was in Maya) was, especially when we created garbage. My very first toy prototype was written very functionally and it was dog slow. I re-wrote it to mutate once I needed to interact with it a bit more, and that sped it up 10-fold, but it was still dog slow.

    Fine – I didn’t expect that Python would be a speed demon, and it had served its purpose as a prototyping tool for the algorithm, but a lot of not very performance-heavy code had grown around it, code that was much easier to write in Python than in C or C++. And though the algorithmic code was the hardest to write, the rest of the code was an order of magnitude larger in LOC.

    So I figured I would be able to “alternate hard and soft layers.” I re-wrote the algorithmic stuff in reasonably efficient C, compiled it to shared libraries, used ctypes to call into those libraries and… saw things get a bit slower. This baffled me at first, because the C code was replacing many millions of flops (once per animation frame, so hopefully at least 24 times per second) that had previously been done in Python.

    But then I realized that I was creating new Python objects to hold the results of the calculations, and it turned out that creating (and then collecting) a single Python object was more expensive than doing thousands of flops not just in C, but in Python (remember that this was slow compared to doing the math in pure Python.) In all fairness, Swig was involved (thanks, Autodesk, that was lazy) and Swig is performance Chinatown, but I’ve since seen similar issues in cases not involving Swig. Python really, really argues for mutation if you want performance, and I avoid mutation until I have to have it. (On a side-note I bet you could solve your performance problems by pre-allocating things, but then why use Python?)
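
    (To make the pre-allocation point concrete, here is a minimal ctypes sketch of the pattern I mean. The library name and the deform() function are hypothetical, but the shape is the point: Python allocates the result buffer once, and the C side fills it in place, so no new Python objects are created per frame.)

    import ctypes

    lib = ctypes.CDLL("./libdeform.so")      # hypothetical shared library
    N = 3 * 100000                           # say, xyz coordinates for 100k points
    out = (ctypes.c_double * N)()            # result buffer, allocated exactly once

    def per_frame(in_buf):
        # the C side fills the existing buffer in place; nothing new for the GC to collect
        lib.deform(in_buf, out, N)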

    So I figured out how to snaffle pointers out of Swig objects and hand them to my C libraries, re-wrote those libraries so they used the same memory layout Maya does, and that helped a _lot_. Like at least a thousand-fold (which is absurd, but hey, you don’t complain about a thousand-fold performance improvement). Enough that we did that for a year or so of R+D, and I breathed a sigh of relief. I was still going to be able to alternate hard and soft layers.

    But eventually Python caught up with us again (I’ll spare you the gory details, but we spent some time with cPython and a sampling profiler.) Once we got into serious optimization it became clear that Python would be our bottleneck if we used it for anything, even for glue code. So C++ or C for everything it was.

    There is a trade-off between expressiveness and performance. But thousands of times as slow is unnecessarily slow. I love my Perlisisms: “Giving up on assembly language was the apple in our Garden of Eden: Languages whose use squanders machine cycles are sinful. The LISP machine now permits LISP programmers to abandon bra and fig-leaf.”

    Well, things have changed since then, but Python implementations ought not to pretend they are not naked, these days.

    1. >@ESR Is this related:

      Distantly. Author says he was inspired by pygoto…to try the exact opposite approach.

  43. > This is bullshit. Pure, unadulterated bullshit. As is proven by comparative defect rates.

    If you count poor, nondeterministic performance as a defect, GC languages don’t fare too well.

    Once again, Rust plugs most of the holes in C++’s memory abstractions. If you are disciplined about using C++ (for example, never using new or delete and always using an appropriate smart pointer type), you will get most of the way there in that language as well.

    The idea that we need a garbage collector to mitigate the gotchas around memory management was true when Java came out, but RAII and the STL came along in the late 90s and made that knowledge obsolete. These days, Ruby on Rails kiddies are writing embedded and kernel-level code in Rust that’s just as performant as old C hands’ code, but with lower defect rates. And these kids have the advantage of not having to unlearn accumulated bad habits.

    1. >RAII and the STL came along in the late 90s and made that knowledge obsolete.

      There are so many preposterous claims in Jeff’s paragraph that I’m having trouble counting. I’m posting this as a warning to others not to be bullshitted.

      RAII can’t solve the problem because, as its alternate name “Scope-Based Resource Management” hints, it only works for allocations with a short enough lifetime to be allocated and deallocated within the scope of a single function. It does not do a fscking thing about the problem of managing persistent structures.

      STL can’t solve the problem because it’s chrome layered over a language that has bare pointers and no bounds checking on array access.

      1. Persistent structures tend not to need managing? And scope extends beyond a single function, even if that function is main.

        ISTM that almost every garbage-collected language misses the opportunity to use scope and destructors to ensure the management of non-memory resources. Go has ‘defer’ but it’s not really clean. C# bolted something on later in its life. And of course C++ has had garbage collection since C++11, but providing it is not mandatory and I think few bother. In practice memory management is just not an issue.

        Judging from the questions asked on Reddit, American Universities are teaching what they call C++ to beginners but what they are actually teaching is C with iostreams and new/delete in place of printf and malloc/free. Pointers and manual memory management should be at the end of a C++ course. They would be much better off, IMO, using Go for this type of approach. And teaching algorithms and data structures in a more abstract way.

        1. Python: with (built in since version 2.6)

          Common Lisp: unwind-protect

          Scheme: dynamic-wind (also has additional semantics in the presence of first-class continuations)

          The CL and Scheme versions are more like primitives with which you can use macros to build syntactic constructs to define scopes in which resources are “live” (and after which they are deallocated). Resource allocation and release are not their only use.
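
          (For the Python case, a minimal sketch – with a made-up file name – of how the with-statement ties a non-memory resource to a scope:)

          with open("log.txt") as f:    # hypothetical file; the handle is closed
              data = f.read()           # when the block exits, even on an exception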

      2. Scope-based resource management extends to other scopes as well — for example object scopes. Such objects would be destructed when their parent objects are. If you want an object to survive the scope of its enclosing function, create a smart reference to a heap-allocated object and return or store it somewhere. The object will be destructed once the last reference to it falls out of use. In C++14 this can be done without new or delete.

        Together with smart pointers, RAII provides a complete solution to object lifetime management that’s strictly deterministic and performs well even under memory pressure (which GC does not). The only problem is that cyclical data structures cannot be handled well with strong references alone; intelligent use of weak references will solve that problem as well. And, as paul2718 mentioned, these patterns generalize to non-memory resources such as file handles, database connections, etc.

        As to your other point, C++ supports bare pointers and unchecked array accesses — problems largely solved by Rust in safe mode — but that doesn’t mean you have to use them. You could probably write a reposurgeon port in C++ without using any of these features (use std::vector for arrays and the at() accessor). It is mostly a problem when interfacing with legacy code, but when starting afresh with a rewrite of a Python code base, that shouldn’t be much of an issue.

        1. > If you want an object to survive the scope of its enclosing function, create a smart reference to a heap-allocated object and return or store it somewhere.

          Aaand now you’re back in manual-memory-management land. Still bullshit.

          RAII, like many other quack nostrums in computing, is a self-deceptive way of pretending that hard problems are not hard.

          1. Nope! Because you used a smart reference (rather than a bare pointer), you don’t have to manage the referenced object manually. In the case of std::shared_ptr, the referenced object will automatically be destructed once the last shared_ptr to it goes out of scope. And you don’t have to manage the shared_ptr itself because it has value semantics.

            You would do well to bone up on modern C++ (and not “C with classes”). It really is a different, and relatively safe, language.

            1. Modern C++ will do absolutely nothing IIRC to help you with invalidated STL iterators, which are no better than dangling pointers really.

              Also, name one commonly used API which does not use raw pointers. POSIX is bad here and Win32 is really, really bad here. (You are saying those are C APIs? That’s beside the point. Those are the APIs you’ll have to interact with for real-world C++ applications.) Also, modern C++ libraries (such as the best of them for the UI, Qt) use raw pointers.

        2. RAII or region approaches cannot possibly be a _complete solution_ to resource lifetime management, precisely because of the well-known issue wrt. possibly-cyclical reference structures – and no, weak refs are _not_ a solution in the general case, either. Your claims are quite overstated here, especially wrt. reposurgeon itself. There will always be a place for GC, or for something functionally equivalent to it (e.g. for something like Rust, an in-memory “entity component system” with facilities for automatic resource cleanup.)

    2. > RAII and the STL came along in the late 90s and made that knowledge obsolete

      This is of course heavily overstated, to the point of being highly misleading. C++ did not feature a comprehensive set of easy-to-use “smart pointers” prior to the C++11 standard; moreover, C++ is in fact still memory-unsafe to this day, and no set of “coding guidelines” exists even today that can be statically checked so as to achieve some sort of safety. This is in clear contrast to Java/C#, and even more so to Rust. (The Go memory-safety story is not entirely clear, BTW. It seems to be comparable to Java/C#, in that it’s “safe” in single-threaded code but prone to “data races”, and thus to unpredictable behavior, in concurrent code. This may have been good enough in the late 90s, but is clearly not so today. Rust addresses this; it does not feature a guarantee of “safe”, bug-free concurrency, but the facilities it does provide make the task a lot more feasible than it might otherwise be.)

  44. If I have a burning desire to work with multiple 10GB files in the dataset for a physical simulator, Go sounds like a good way to do this, no?

  45. Coming from the “run-of-the-mill business application programmer” faction, I strongly disagree that it was a bad idea for Python to make the default string type be a sequence of unicode code points. That choice protects against programmers making mistakes involving conflating the byte length and unicode character length of a string, or involving treating the nth byte as being the same as the nth character.

    I’m not a Go user, but the syntax required to get the nth character of a string in Go looks horrifying: https://stackoverflow.com/a/15020162/1709587.

    If you actually need to work with sequences of bytes… just use the bytes type. I’ve never understood why so many people treat that like it’s some great burden that makes programming more complicated, when to me it seems like using the type system to indicate whether what a variable stores is “arbitrary bytes” or “text” makes a great deal of sense and makes code more comprehensible. Your characterisation of the ideal way for things to be – namely:

    In your programs, you choose whether you want to treat string payloads as uninterpreted bytes (implicitly ASCII in the low half) or as Unicode code points

    … is exactly how Python works, except that it also helps you out by letting you denote which of those things you’re doing using the type system. In what possible way is that a bad thing?
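
    (A small illustration of why the explicit boundary helps: byte length and character length really are different quantities, and the type tells you which one you are counting.)

    >>> data = b"caf\xc3\xa9"          # raw bytes off the wire
    >>> text = data.decode("utf-8")    # explicit boundary: now it's text
    >>> len(data), len(text)
    (5, 4)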

    1. I find this a pretty insightful reply. *Every* Python programmer who migrated to Python3 went through this unicode quarrel. Again, the problem is not binary vs unicode; the problem is that they made unicode the default. I’m sorry, but it means that a typical programmer who writes something like:

      name = input("What's your name?")
      print("Your initial:", name[0])

      writes an application which works anywhere in the world, not one which works only in a few (albeit well-populated) countries and gives invalid results anywhere else. Sorry again, but that’s a boon, not a curse. And let’s face it – all those web apps are like that. (Console apps which want to tabulate data, too.)

      The “drawback” of that is that low-level and mid-level layers/frameworks have to be written with stuff like b”foobar” *everywhere*. I got over it; my only concern is/was – what about other users, will they get that code which should work with byte strings should work with them explicitly? Ahem, they would, and they do. Look at the alternatives – Go doesn’t even provide exceptions, Rust likewise, plus the borrow-checker curse (bliss), Julia wants array subscripts to start from an arbitrary number, and who knows what else. There are compromises everywhere. Python’s is not the worst one.

      1. The problem is not with how Python handles strings and unicode now, nor even with making unicode default (using b’ ‘ everywhere isn’t that big a deal, and no worse than remembering to use ‘ ‘ instead of ” ” in Java and the like). The problem was specifically the migration from python 2. Python 3k dropped support for u”” as a prefix to mean unicode, and did not restore it until Python 3.3. They removed the ‘unicode’ name, and changed what ‘str’ means. They said “it’s okay, we’ll give you a tool to automate the conversion”. That tool is idiotic. It can make dumb conversions for you if you wrote your Python 2 code wrong, but if you were actually properly using unicode before? It will break everything.

        What do I mean by that? Consider a network application which also has a UI, the UI uses unicode strings, and the network layer uses bytes. It was written long enough ago that the programmer didn’t bother putting b” everywhere that’s bytes, but did put u” everywhere that’s unicode. 2to3 won’t remove b” prefixes, after all that’s the way to indicate you actually mean bytes, but it will happily strip the u” prefixes. Oh, and unprefixed strings get left alone, after all the programmer was too stupid to properly label them as unicode, but that’s what everything is, right? So now you have an application where none of the strings are differentiated. It’s actually impossible to fix the output from 2to3 without looking back at the original source code, and is actually faster to just do it by hand. (Similarly, it leaves the name str alone, and converts all unicode references to str references).

        Here’s what they should have done. Remove the ‘str’ name. Use ‘bytes’ and ‘unicode’. This removes the ambiguity in ‘str’, but a programmer trying to quickly make an old library work can put “str=unicode” or “str=bytes” at the top of a file. At the very least, don’t remove ‘unicode’, since that’s how you can write a library that works the same in 2.7 or 3+ (it’s fairly typical to see “if PY3: unicode=str” somewhere near the top of a file that got converted). The u”” prefix should never have been removed. Then they should have actually put some time into 2to3 to make sure it doesn’t break properly written libraries. In fact, the assumption should have been that all libraries are properly written, rather than no libraries are properly written. Since unicode lacks the decode function and bytes lacks the encode function, it’s usually obvious when that’s not the case (and it matters). Unprefixed strings should have a b”” added (unless __future__.unicode_literals was imported), prefixed strings should keep their prefix. ‘str’ should be translated to ‘bytes’, and ‘unicode’ should be left alone (with the above changes, or changed to str as is).
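
        (For reference, a minimal sketch of that compatibility-shim pattern, placed near the top of a module:)

        import sys

        PY3 = sys.version_info[0] >= 3
        if PY3:
            unicode = str    # keep the 2.x name alive under Python 3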

        Because I was maintaining a large 2.7 codebase, and because of the absolutely botched way they handled the migration, I didn’t move to Python 3 until 3.5 came out. Template formatting for bytes was the last feature they dropped from Python 3 whose absence prevented the move (it came back in 3.5). 3to2 is still broken, 10 years on.

        1. FWIW, I agree with the overall thesis of this reply, and agree with about half the details. (The other half I don’t disagree with – I just haven’t thought about them deeply enough to be sure.) The 2->3 transition was botched, and converting codebases was made needlessly hard.

          I don’t think that’s the extent of ESR’s argument, here, though. As far as I can tell, he’s making a far more sweeping argument that Python’s conceptual model of using different types for unicode strings and byte strings is *inherently* inferior to Go’s model of having a single type for both that is interpreted differently depending upon what functions you use to manipulate it. *That* thesis is what my instincts and my few years of app dev experience tell me is a very wrong one. Maybe I’m wrong and ESR has some wisdom on the matter that I lack, but he doesn’t persuade me of that in this post.

          1. Ah yes, I think you are correct. I also think you are correct that differentiating the types is a good idea. Consider that the reason Python 3 split the types was that many programmers were writing code using the wrong type, which would break predictably when used in an international context or whatnot.

            If you have a use-case where being able to access data as both unicode strings and raw-bytes is helpful, it’s also relatively easy to implement in a documented, and case-specific fashion. I’ve mostly found it useful in network and database libraries, where the data is stored in the database encoded in UTF-8. If you want to send it over the wire, just send the raw bytes, if you want to display it, use the decoded accessor.

      2. You got over using b’ ‘ everywhere? Bully for you.

        But riddle me this: if 0.0 == 0, then why can’t b’a’ == ‘a’?

        I wrote in Python because the code looked clean and the dynamic type checking told me if I fucked up.

        1. > then why can’t b’a’ == ‘a’?

          Because in CPython, it’s unknown what encoding is used by textual strings. In MicroPython the above gives:

          MicroPython v1.9.4-888-ge2202287b-dirty on 2018-12-26; linux version
          Use Ctrl-D to exit, Ctrl-E for paste mode
          >>> b'a' == 'a'
          Warning: Comparison between bytes and str
          False

          And given that in MicroPython, strings are guaranteed to be utf-8, I’m indeed considering adding an option to make bytes and str comparable.

          1. That’s an implementation detail, and it used to work, in 2.x, even bytes against unicode strings.

            It’s obvious it’s an implementation detail when you realize that in 3.x, two different (unicode) strings could have different internal representations and still compare fine.

            No, this was a fucked-up CONSCIOUS 2-to-3 design decision by the web weenies that were running the asylum at the time.

            1. > it used to work, in 2.x, even bytes against unicode strings.

              Impossible. Just ask your Chinese or Swahili friends. Just don’t ask a typically arrogant English-speaking person, who thinks that the whole world uses the hardcoded Latin script, whereas it’s the opposite – the whole world doesn’t, only a small portion does.

              1. You’re entitled to your own opinions, not your own facts. And your opinion that this didn’t “work”, simply because of the fact that non-ASCII characters cannot be represented in ASCII-compatible byte strings, is entirely wrong for a huge class of problems that Python is still, albeit somewhat less, useful for.

                1. >And your opinion that this didn’t “work”, simply because of the fact that non-ASCII characters cannot be represented in ASCII-compatible byte strings, is entirely wrong for a huge class of problems that Python is still, albeit somewhat less, useful for.

                  This is true. Several of my projects are among the proofs.

                  1. > This is true. Several of my projects are among the proofs.

                    But of course! Everyone wrote cute nonsense hacks in Python2 which crashed when used by somebody else somewhere else.

                    Like (all coincidences are accidental), one guy wrote a repository converter. What could possibly go wrong? It crashed on the first commit whose author had umlaut in the name. But of course, the original author of the cute hack didn’t imagine such names exist (or blatantly ignored the fact that they exist).

                    Stop whining, guys. You can’t reliably mix operations on characters and bytes even in C.

                    1. >It crashed on the first commit whose author had umlaut in the name.

                      No. Do yourself a favor and stop talking shit about things of which you are ignorant; you’re just embarrassing yourself, not convincing anyone.

                      There’s no reason for reposurgeon to care about the encoding of author names, and it doesn’t – it treats them as uninterpreted byte strings. It has never crashed on an umlaut (my regression tests check this exact case). That’s because in your terms, it never does operations on “characters” at all, other than interpreting decimal ASCII digit strings as numbers in certain contexts. Not my first rodeo, kid – I know how to stay out of i18n trouble.

                      Now, if you call reposurgeon’s edit command to modify the authorship data in a commit, the editor it calls out to will care how the byte strings should be rendered for display. But that’s a different issue entirely from reposurgeon’s requirements; all reposurgeon wants is a string type that holds uninterpreted bytes. Python 2 was good for that; Python 3, not so much.

                      Go gets this right. For almost all operations, strings are uninterpreted byte sequences. Exactly one (1) operation in the core language – iteration over strings – knows that UTF-8 is the language’s preferred encoding and yields runes (Unicode code points). Any library function may choose to interpret a string as UTF-8, but few outside of the fmt library (printf/scanf equivalents) actually do that. If you want to deal with an encoding other than UTF-8, there are codec libraries to convert to and from UTF-8.

                      Python 3 gets it wrong. This is a detailed description of how wrong.

                    2. > There’s no reason for reposurgeon to care about the encoding of author names

                      Who said reposurgeon? I said “one guy” and “a repository converter”. keithp wrote his cute cvs converter long before you started to hack on it. And seeing how nicely it failed and how boring it was to hack on (C software!), people who needed to do conversions back then just rewrote it in Python – Python 2 by then.

                      But yeah, this whole talk degenerates into the typical functional-vs-imperative argument: “you can use tail recursion, so explicit loops are superfluous in a language” vs “how, how could there be no loops in the language, it’s so uncomfortable!”

                      So, amen to Go doing it “right”. Python3 then does it without cowardly compromises – it enforces the view that bytes and strings are conceptually different things, and requires explicit conversions between them (strong typing, whoa!).

                    3. >keithp wrote his cute cvs converter long before you started to hack on it.

                      That doesn’t crash on umlauts either. Why would you think it might? All it’s doing is pumping bytes from CVS masters to a git fast-import stream; there’s no reason for it to care about encoding any more than reposurgeon does.

                      Go ahead, send me a CVS repo with any randomly-encoded crap you like. cvs-fast-export will eat it and smile.

                    4. > >keithp wrote his cute cvs converter long before you started to hack on it.

                      > It doesn’t crash on umlauts either.

                      Who said it does? I said that people wrote tools in Python, like, forever, preferring it to various other lingos. And some of the tools written in Python2 exhibited erratic behavior due to some folks’ desire to do things like unicode_str == byte_str, etc., which sometimes worked and sometimes didn’t (and sometimes crashed).

                      Python3 fixed that so it categorically never works, but people still remember the good old days of debauchery with a trace of a tear.

                    5. >Who said it does?

                      Then I have no idea what you’re talking about. And I don’t think you do either.

                    6. You talk about hacks, and the correctness of not allowing mixing of strings.

                      But allowing comparisons of different types of objects (e.g. bytes and strings) without throwing exceptions is a freaking huge hack — it goes against the strongly but dynamically typed paradigm and puts the comparison magic methods in a different class than all the others.

                      But it’s a useful hack and should be kept. So should the automatic machinery that makes 0 == 0.0 true, even though many equality comparisons with real numbers will fail for naive programmers. Both those are much hackier than, and at least as error-prone as, the removed implicit string coercion for comparisons, which exposes the party-line “correctness” argument for the turd it is.

                      In fact, the lack of byte to string coercion would be acceptable without the comparison hack, but then a lot of other stuff would break.

  46. Say, I just wandered in here.

    I’m not a programmer, can someone answer this teacher-to-student?

    Why not run multiple copies of the Python interpreter process on the same machine?

    Would you have to install 2 or 16 or whatever different pythons? Or is it even possible?

    But if it has inter-process communication you can do your concurrency manually.

    1. >Why not run multiple copies of the Python interpreter process on the same machine?

      Because the problem can’t be partitioned in such a way that multiple processes with disjoint memory spaces are useful.

    2. So, to give a slightly less terse answer, you don’t have to install multiple copies of python on the system, and it is certainly possible to split tasks between processor cores. However, there are technical limitations that make using multiple threads or multiple processes challenging.

      The first issue is the Global Interpreter Lock (GIL). The GIL prevents multiple cores from executing CPython code at the same time, in the same address space. There are a couple of ways around that, and there’s work that may loosen the restriction when CPython 3.8 ships (replacing the GIL with a VM Interpreter Lock). So that means running multiple interpreters in separate address spaces. The data is too large to copy between processes, so sending the data via IPC is out. The alternative is shared memory, which Python supports. It can be set up by creating an editable buffer and then fork()ing, with each process retaining read/write access to the buffer, or it can be set up by opening a shared file under /dev/shm.
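
      (A bare-bones sketch of the /dev/shm route, just to show the mechanics – the file name is made up, and all of the hand-rolled coordination described below is still on you:)

      import mmap, os

      SIZE = 1 << 30                                               # a 1 GiB shared region
      fd = os.open("/dev/shm/repograph", os.O_CREAT | os.O_RDWR)   # made-up file name
      os.ftruncate(fd, SIZE)
      buf = mmap.mmap(fd, SIZE)                                    # MAP_SHARED by default on Unix

      if os.fork() == 0:         # a child worker sees the same bytes...
          buf[0:5] = b"hello"    # ...but any locking or ownership protocol is up to you
          os._exit(0)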

      Either approach hits the same issue. You must do your concurrency manually. Because CPython expects there to only ever be one Interpreter executing bytecode, in one thread, at a time, it’s assumed bytecodes execute atomically, and there are few efficient concurrency primitives provided, since they’re largely unneeded. This means when a worker process needs to access memory outside its area, it must notify the worker which ‘owns’ that section of the shared buffer, wait till it gets a green light, make the changes it needs, and then resume normal operation. Conceptually, that’s fine. In practice, it’s likely to be a source of errors in the program; errors which are difficult to detect since the program is now non-deterministic. There are more sophisticated dispatch models which might help, but they incur additional overhead on speed, which undermines the performance gain you get from multiprocessing.

      If I were doing the project, starting today, I’d probably do it in Python like you’re suggesting. I’d use one of the quality graph manipulation libraries that’s implemented in C to get the linear speed needed, and use a dispatch system for dealing with non-local data. Of course, that assumes you’re starting from scratch. Given that esr has an essentially functioning algorithm already developed, reimplementing it in a faster and more memory efficient language is likely to require far less effort than trying to adapt it into a parallel one. As a general rule, if you want something to be multithreaded, you need to design with that in mind from the beginning.

      1. >As a general rule, if you want something to be multithreaded, you need to design with that in mind from the beginning.

        And sometimes the dataset has properties that mean you just can’t do that.

        I’ve written about the effects of bad locality in the data before. The data reposurgeon manipulates is best thought of as a big attributed graph. If that decomposed into pieces with no mutual references, you could spawn a worker thread or process for each piece, process them concurrently, and all would be good.

        Alas, repository data is the worst possible case in the other direction. Any node can have implied references to nodes going arbitrarily far back and close to the root. If a surgical operation needs to fix up or chase references there is no predicting how far away in the graph you have to look or mutate things. There’s no way to chunk the data that would prevent operation-order-dependent side effects from rippling arbitrarily across it.

        1. As long as you’re inclined to talk about it – how often do references go more than N steps up the graph, for values of N that would permit useful chunking?

          I tried searching your blog for every article tagged “reposturgeon”, and searching for “graph” and “arbitrar”, and skimmed the articles titled “Python speed optimization” and “Ugliest… repository…conversion…ever” and didn’t see anything you may have written about this specifically.

          I’m wondering if it’s possible to table such references as exceptions, and process them later, even if it means recalculating some parallel-calced tree data later. Trouble is, I’m only guessing at your graph building algorithm. (Sorry for monday-morning qb’ing…)

          1. I believe the “operation-order-dependent side effects” are part of the issue here. You probably have to resolve the exception before continuing, or do something clever. In either case that’s going to add complexity (and sources of error).

            There are a couple solutions that come to mind, depending on how often pieces ‘up tree’ need changing as a result of ‘down tree’ commits.

            If the change can be described in a reversible, reproducible way, then you could use a message queue with change requests, letting the local worker continue with independent work until the reply arrives. This avoids the locking issues since each worker only ever looks at its own data.

            If the issue is mostly in figuring out which ‘worker’ owns the sections of data that need changing, either you propagate the messages from worker to worker, or you peek at a read-only copy of the data to figure out where all is affected.

          2. >As long as you’re inclined to talk about it – how often do references go more than N steps up the graph, for values of N that would permit useful chunking?

            Think about branching. In the presence of branches, just finding a commit’s parent can have arbitrarily high nonlocality in the event sequence.

            >I’m wondering if it’s possible to table such references as exceptions,

            Even if it were, now think about merges.

            When you start feeling horrified, you will be beginning to get it.

            1. Yeah, that’s just it – not horrified so far. And I trust your judgment on account of your having futzed with this a lot more, therefore, I probably don’t get it yet.

              Branching: how many people are branching from near root and then trying to commit back? My worst-case scenarios tend to involve branching early, finding something shiny elsewhere, returning to the project after several years and electing to give up, re-branch, and start over. Is this not customary? Am I missing a common use case?

              Merging: same deal. Forks that went that long without a merge tended to become permanent, either effectively separate projects or dead ends. Plus, merging is naturally horrible, but it’s a known horror. If it scared YOU sufficiently, you wouldn’t have started reposurgeon in the first place.

              The cases I think of as most likely to upset my half-formed premises are beasts like emacs or GNU C or Unix itself – projects old enough to have pensions. How many separate branches existed simultaneously for each one, and were still active?

              The more I think about this, the more convinced I am that this is an art dark enough to warrant a book someday…

              1. >The cases I think of as most likely to upset my half-formed premises are beasts like emacs or GNU C or Unix itself – projects old enough to have pensions. How many separate branches existed simultaneously for each one, and were still active?

                The worst case I’ve ever seen was the NTP reference code from before NTPsec. It was typical for it to have from about a dozen to as many as twenty branches active simultaneously, each with a lifetime before merge of less than 100 commits. The DAG was so complicated that you couldn’t track any of the individual branches in a gitk display, it would just burn your eyeballs to try.

                These huge old projects are exactly the performance wall that forced the move from Python to Go.

  47. I see this thread just can’t be put to rest. Makes me remember a funny comment posted at http://esr.ibiblio.org/?p=8161#comment-2075248 :

    >> “CPython. 3.8 introduces the interpreters module, …

    > Do they have channels?

    Here’s the very old response: http://dalkescientific.com/writings/diary/archive/2009/11/15/100000_tasklets.html . 2009. “with the code Pike posted, current Go is roughly 1/2 the performance of current Stackless Python”.

    So, that’s it. Node.js? Python’s Twisted did that crap years before. Channels? Stackless Python is from 1998. Not sure it had channels from the start, but it’s such an obvious concept that Stackless probably had them even before Pike whined about the lack of innovation in his utah2000 pamphlet, only to come up with a monocultural rehash of old ideas, Go, a decade later.

    Of course, nowadays Go’s concurrency is probably faster. But that’s only because Google pours money into Go, while guys like ESR, instead of furthering Python, betray it with vigor.

    Let me conclude this with a fine, productive technical argument: Go is cancer. Don’t give in, don’t betray your languages for it.

  48. Silly question about your Python object-creation performance penalty — did you try __slots__?
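
    (For anyone unfamiliar: __slots__ replaces the per-instance dict with a fixed attribute layout, which cuts per-object overhead considerably. A sketch, with made-up field names:)

    class Commit(object):
        __slots__ = ("parents", "committer", "comment")    # no per-instance __dict__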

  49. Common Lisp suffers from being defined as the intersection of the commonly-used Lisp dialects back in the day, rather than the superset of them. Anything that was handled differently between dialects is simply not there. As such, if you want to use it, odds are good that you’re going to end up needing to implement a big chunk of what should be the standard library yourself. If you want Lisp, use Scheme. Scheme is actually functionally complete.

    There is one variant of Python I have seen which fixes the GIL problem… sort of…
    What it does (so far as I understand it) is snapshot the program’s state, then run all the threads in their own memory space, and then merge the changes. If there’s a conflict it rolls back and re-runs the threads under the GIL; otherwise it keeps the new state and continues. It’s not perfect, but it does get you some amount of multi-threading that actually speeds things up. Other than that, your only option is forking and explicit communication pipes, which works well but is much less simple to write.

    1. You’re thinking of pypy-stm. It actually uses the underlying OS copy-on-write support to avoid the need to do full snapshots. The only problem is that single-threaded speed takes about a 20% hit. This means running a computation in 2 threads, assuming fully orthogonal data, takes 200% CPU time and runs at 160% of the single-threaded speed. Due to the poor performance, pypy-stm has been discontinued.

  50. How is the conversion going?

    How is work on Go generics going? I noticed you and other people talking about it earlier in the thread.

    1. >How is the conversion going?

      I’ve been working hard on it recently. Consequently, there’s only one lump of Python code left, but it’s a doozy – the second stage of Subversion dump interpretation, by far the most complex part.

      >How is work on Go generics going?

      They’re floundering. The proposal with the most backing from the core developers is so horribly complex and at odds with the style of the rest of the language that nobody can quite swallow it.

      I came up with a very simple and effective alternative – adding exactly one new keyword to the language, “implements” – but it sank because it requires operator overloading, which that crew has a cultural allergy to. I’m not a big fan myself, but it turns out to be a very elegant way to define the kinds of type classes you need for contracts.

      I don’t know what’s going to happen over there. I may go back and push my idea again if it looks like they’ve stalled out because all the other options are too ugly to live.

      1. >I don’t know what’s going to happen over there. I may go back and push my idea again if it looks like they’ve stalled out because all the other options are too ugly to live.

        Wouldn’t it be better if you forked the Go source (I honestly don’t know for sure if it’s open source, sorry if it isn’t), then wrote your own version of generics into it as a test case, then sent a patch back with your code documentation after you’ve thoroughly tested it?

        1. >Wouldn’t it be better if you forked the Go source (I honestly don’t know for sure if it’s open source, sorry if it isn’t), then wrote your own version of generics into it as a test case, then sent a patch back with your code documentation after you’ve thoroughly tested it?

          It is open source, and I could in theory do this. The problem isn’t technical, it’s political – the Go devs are opinionated people designing an opinionated language and I do not think achieving that kind of fait accompli would win them over.

          1. >It is open source, and I could in theory do this. The problem isn’t technical, it’s political – the Go devs are opinionated people designing an opinionated language and I do not think achieving that kind of fait accompli would win them over.

            Right, that’s probably fair.

            But… in the event that they aren’t able to find a solution to this problem, your case will be greatly helped if you have working code as a proof of concept. It pays to be prepared. Just saying.

            1. > But… in the event that they aren’t able to find a solution to this problem, your case will be greatly helped if you have working code as a proof of concept. It pays to be prepared. Just saying.

              It depends entirely on the core community, its goals and priorities, and its relationship or lack thereof with the language’s users and how this has worked or not in the past. This sort of approach by users has been tried with Clojure and has abjectly failed, even with things as simple as apparently clean patches for clear bugs, which can languish for years.

              It’s gotten so bad that much if not nearly all of the user community has given up trying to work with the core community, many after wasting a great deal of effort, and many have largely given up on the language altogether.

              The Golang core community is hardly as bad; for example, blame for the quasi-debacle of the module mess clearly rests on the user-community members who just didn’t grok Go, nor paid attention when the core said things were showstoppers. But the way the core community has been completely wedged over generics for years is a very bad sign, and being so tied to Google is a steadily greater risk for its users as the company gets more converged.

              1. >But the way the core community has been completely wedged over generics for years is a very bad sign

                That it is.

                1. So, I want to make something clear: I speak from a position of ignorance here. I’m new to the open source community and I’m not sure how important generics are. All I can do is speak of solutions outlined in your essays and HOWTOs.

                  However, I’m not trying to throw your own work back in your face here. I speak of your work in the way that I understand it. If I’ve understood it poorly, please tell me how.

                  I don’t know much about the Go community, but if they are having serious, continuing problems in their approach to an issue, either the issue is intractable and has no obvious solution, or there is something wrong with the community itself.

                  If it is the first case, all they need to do is solve it the open source way that you described in your essay “The Cathedral and the Bazaar”, and the solution should present itself to someone.

                  If it is the second case, I honestly don’t know how to approach that problem. The best solution I can see is for someone to fork the Go source, develop a new language from it, and build a new community from scratch, but that may just be me speaking from ignorance.

                  Honestly, aside from those solutions I outlined, I have no idea how one could solve the problem, or even whether one shouldn’t just wait and see if the problem solves itself. But from what H said about the core community, that seems unlikely.

                  1. >However, I’m not trying to throw your own work back in your face here. I speak of your work in the way that I understand it. If I’ve understood it poorly, please tell me how.

                    I think you understand it, but aren’t quite reckoning with the failure modes of having the open-source ideal executed by human beings.
