Term of the day: builder gloves

Another in my continuing series of attempts to coin, or popularize, terms that software engineers don’t know they need yet. This one comes from my apprentice, Ian Bruene.

“Builder gloves” is the special knowledge possessed by the builder of a tool which allows the builder to use it without getting fingers burned.

Software that requires builder gloves to use is almost always faulty. There are rare exceptions to this rule, when the application area of the software is so arcane that the builder’s specialist knowledge is essential to driving it. But usually the way to bet is that if your code requires builder gloves it is half-baked, buggy, has a poorly designed UI or is poorly documented.

When you ship software that you know requires builder gloves, or someone else tells you that it seems to require builder gloves, it could ruin someone else’s day and reflect badly on you. But if you believe in releasing early and often, sometimes half-baked is going to happen. Here’s how to mitigate the problem.

1. Warn the users what’s buggy and unstable in your release notes and the rest of your documentation.

2. Document your assumptions where the user can see them.

3. Work harder at not being a terrible UI designer.

4. Watch the issues list/user’s forum/mailing list, and actually respond.

5. When someone tells you it requires builder gloves, believe them. And fix it so it doesn’t.

Becoming really good at software engineering requires that you care about the experience the user sees, not just the code you can see.

96 comments

  1. “If you want to use this code while Stayin’ Alive, don’t forget your Bee Gees.”

    I’ll show myself out.

  2. 4. Watch the issues list/user’s forum/mailing list, and actually respond.

    5. When someone tells you it requires builder gloves, believe them. And fix it so it doesn’t.

    1. >4. Watch the issues list/user’s forum/mailing list, and actually respond. 5. When someone tells you it requires builder gloves, believe them. And fix it so it doesn’t

      Well done. Just added those.

      I especially like your point 5. It’s not just something I should have written, it’s in the style I would have used.

    2. 4. Watch the issues list/user’s forum/mailing list, and actually respond.

      4a. Use the forum yourself periodically, to test that it’s still operating.

      I could call this “I have no mouth and I must scream”, if it weren’t so long a phrase. Finding a mission-killing bug in software, reporting the bug in the forum, and then finding the forum itself is bugged is the most disheartening thing I see in tech support: not only am I stuck, but the team I need to reach is blissfully unaware of any problem. On their end, all lights are green, the software’s great, and their low usage numbers are solely due to ignorant lusers or malicious suits.

      To compound this, I’ve even seen tech support get sniffy when I suggest testing their reporting mechanism, as if I’d insulted their intelligence.

  3. The timing is amusing… I’m having some trouble today with a tool I wrote for my own use, but it’s been so long since I’ve needed it that I’ve forgotten just enough to trip myself up.

  4. Isn’t that what containers are *really* for? I’m kind of joking but kind of serious. If your installation instructions are a mile long and require lots of obscure incantations, you’ve failed; but if you put it all in a container, with all of the weird incantations baked into the Dockerfile, then someone can just pull it down and run it…

    What I’ve found in practice though, in 33 years of maintaining a software suite that is always being made easier to install, is that whenever you finish ironing out a particular set of edge cases that naive installers trip on, you immediately open up a new stratum of even *more* inexperienced users that have even sillier problems. It’s turtles all the way down.

    1. >whenever you finish ironing out a particular set of edge cases that naive installers trip on, you immediately open up a new stratum of even *more* inexperienced users that have even sillier problems. It’s turtles all the way down.

      True. Anyone who can’t stand this best find a different line of work.

      1. > True. Anyone who can’t stand this best find a different line of work.

        Around the turn of the century, I had just started my first real programming job – working for a small company in New Zealand, Netwin, cutting C on their commercial POP3 server.

        I recall one support ticket, from a sysadmin who had heard that we got significantly better performance from our SMTP server on Linux than we did on Windows ME (surprise, surprise). Anyway, he’d downloaded our Linux build, but couldn’t get it to work.

        It took us a little while to figure out that he hadn’t changed OS; he was trying to run the Linux build of DSMTP on a Windows ME box.

        1. > … sysadmin
          > …
          >It took us a little while to figure out that he hadn’t changed OS; he was trying to run the Linux build of DSMTP on a Windows ME box.

          You keep using that word. I do not think it means what you think it means.

          1. > You keep using that word. I do not think it means what you think it means.

            I swear, the person in question was professionally employed as a system administrator by an actual ISP.

            It’s not like small-town ISPs are all bad though. A few years earlier, while at university, I lived next door to Alan Brown of MITS and ORBS fame. We even had some Cat 5 running between the buildings (through open windows!) while one of my flatmates was doing some contract work for him. I recall Alan being *very* clueful, and also very friendly and helpful towards us as teenage larvae.

            1. > I swear, the person in question was professionally employed as a system administrator by an actual ISP.

              Inconceivable! :-)

      2. Well, except that whether you can stand it or not, you’ll still have to deal with it as a user even if you’re not dealing with it as a developer. So you may as well be a dev and make sure that your software has both a default Fisher-Price mode and a way to turn it off. If you don’t, someone else will make an equivalent that has a Fisher-Price mode with no off switch, and you will have to use that. So anyone who can’t stand this just has to learn to deal with it.

        This is how we lost the GTK ecosystem: MATE was hostile-forked off of GNOME when that project went off the rails, but nobody bothered to hostile-fork a GTK2+i off of the main GTK line as an alternative to GTK3. Since GTK2 is no longer maintained, pretty much the whole ecosystem has now switched, despite the fact that GTK3 is not an acceptable replacement for GTK2.

          1. Considering that it’s already getting us barfulation like client-side decoration on XFCE – previously a bastion of relative sanity – I’d venture “not good, if not fatal”.

            1. Wayland pretty much forces client-side decoration, so get used to it. And Xorg is deprecated and barely maintained.

                1. Ubuntu uses GNOME by default, again, now. So there are lots of users on GNOME.

                  But what about my post was GNOME-specific? KDE is fully on board with Wayland, as are both of the major toolkits.

              1. Great.

                Honestly, my beef with CSD is mainly that I’m locked into the author’s idea of what good titlebar decorations look like, and increasingly that looks like “flat, no actual titlebar, iconify, maximize, destroy in that order in the upper right corner”.

                I know some people like it. Personally? Give me a titlebar – I *like* the “roll up window” feature. Also, I want iconify and destroy pretty far apart. Preferably on opposite corners, a la NeXT, which got that right IMHO. And maximize? I use it so rarely that having it as a menu option is more than enough. (I run things fullscreen on a phone, and if I’m stuck on a losing 1366×768 laptop, but on a 1080p/1200p/1440p/4k display? Why?)

              2. >Wayland pretty much forces client-side decoration, so get used to it. And Xorg is deprecated and barely maintained.

                I was going to ask how a compositing display server protocol is supposed to enforce CSD (other than the trivial case of an application being able to draw whatever it wants in the window area it’s actually given access to, in any windowing system), but I found something interesting in the Wikipedia article on Wayland: Wayland does not require CSD. Weston (the reference implementation) does, but KWin will do decoration server-side. I’m still not sure what prevents a DE running on top of Weston from rendering each client window in a slightly bigger window of its own, drawing decorations that the end-client doesn’t get to touch, and specifying that compliant programs will render CSD to /dev/sundontshine.

        1. A question for our host that I meant to ask:

          A lot of the justification used for all the deliberate lossage (loss of theming, client side decorations, et. al) in GTK3 and similar fiascos is “Corporate developers want to brand their own apps with their own look, distros want to do the same, etc. CSD helps developers brand things as they want, and user theming can interact with app-branding themes in ways that break an app’s UI. The user then submits a bug report to the app developer, and the developer is sad, so they file a bug report with the toolkit for letting users break their branding theme.”

          So the user wants to put his own brand on his desktop. The developer wants to put his employer’s brand on his app, or else the employer will be sad and not give him a raise. The toolkit dev finds that there needs to be one party in control of theming (per app), or apps break. As he interacts more with application developers than users (who tend to file bug reports with apps/DEs/distros more than with toolkits, if they’re even aware of what a toolkit is, and if they even file bug reports) he decides to give the app developer what he wants.

          This seems to be a significant failure model for middleware, and especially UI middleware: the middleware developer has more contact with the developers of user facing applications than with end users, so when there’s a conflict between the interests of developers and end-users, developers tend to win. And the perverse thing with UI middleware is that it’s middleware in the programmatic sense, but in a much more important sense, it forms the user-facing part of user facing applications.

          Do you have any thoughts on this tendency or on how to counter it?

          1. >A lot of the justification used for all the deliberate lossage (loss of theming, client side decorations, et. al) in GTK3 and similar fiascos is “Corporate developers want to brand their own apps with their own look, distros want to do the same, etc. CSD helps developers brand things as they want, and user theming can interact with app-branding themes in ways that break an app’s UI.

            Assholes. I have zero interest in their “branding”.

            If this is why our toolkits have been losing functionality, the people who made those decisions should be drawn and quartered.

            >Do you have any thoughts on this tendency or on how to counter it?

            This is what hostile forks are for.

            1. >Assholes. I have zero interest in their “branding”.

              >If this is why our toolkits have been losing functionality, the people who made those decisions should be drawn and quartered.

              That sentiment is by no means alien to my thoughts. Still, a potential pick-your-battles solution (in a time when there are many fires to extinguish) might be to convince the offending toolkits (GTK is the one that’s given me the most grief, given that GNOME2/MATE has been my preferred DE since I started using Linux) to create “utility” and “branded app” branches of their toolkits. The “utility” branch would be meant for stuff like DE components, text editors, etc., and anything where the developer did not have specific branding in mind, and would provide fine user control of theming, with no features to help developers with branding their apps. The “branded app” branch would be for any app where the developer wanted fine control over the visual style of the app, particularly for branding. This is probably the best solution, assuming that the GNOME team, et al., are being honest about their motives for crippling toolkits.

              OTOH, I’m not sure that GTK3 isn’t just the result of the same cathedralish mindset that gave us systemd, with the branding thing being a fig leaf.

              As an aside, the snarky part of me wants to integrate /bin/mail into systemd as systemd-maild (there’s a method to my madness, don’t recoil in horror here), submit it to the systemd project, and declare “Zawinski’s law has been fulfilled, so expansion is no longer essential to the survival of systemd. Please stop growing”. Or maybe jwz is the one to do that.

              >>Do you have any thoughts on this tendency or on how to counter it?

              >This is what hostile forks are for.

              Yeah. Unfortunately, the MATE project forked the DE (GNOME 2), but not the toolkit (seems to have been a manpower issue). MATE and the two primary non-GNOME GTK DEs (XFCE and LXDE) have by now gone to GTK3, because GTK2 is unmaintained, and LXDE is working on porting itself to QT (which seems to be the toolkit these days with the most slow-and-steady, don’t-fix-what-ain’t-broken development style).

              What do you think would be required to get a “GTK2+i” project off the ground, given that MATE seems to have been manpower-constrained from doing that themselves? Do you think that MATE, XFCE, and LXDE have the manpower between them to hostile-fork GTK? Or do they just all need to jump ship to QT? I’m a bit worried about the latter leading to a monoculture where there isn’t any alternative to jump to if something similar happens to QT in the future. Are there any other sources you can think of for manpower to fork GTK if MATE/XFCE/LXDE can’t do it between them?

              1. >What do you think would be required to get a “GTK2+i” project off the ground, given that MATE seems to have been manpower-constrained from doing that themselves? Do you think that MATE, XFCE, and LXDE have the manpower between them to hostile-fork GTK? Or do they just all need to jump ship to QT

                I don’t have enough knowledge about this area to venture opinions, alas.

      3. A long, long time ago, which you probably remember better than I do, programmers were allowed to scoff at clueless users who didn’t even read manuals. Then things changed, and first the idea of user-friendly programming was introduced. This was something most could accept, because they could still keep their sense of superiority and interpret it as being condescendingly friendly to the clueless.

        Then some genius, or evil person, or evil genius, introduced the terms “usability” and “usable software” instead of “user-friendly”.

        This came to many programmers as a shock, an insult. It is one thing to be asked to be friendly to lusers, and another thing to be told that your software is unusable if clueless people cannot use it.

        The reason I call it genius, or evil, or evil genius is that it was, and I think calculatedly, an arrow right in the middle of the “feels”: one’s professional pride and whatnot. It seems to have been calculated, not a coincidence. It is a very big middle finger, saying that if users make mistakes or do not understand even a user-friendly and documented interface, it is somehow your fault.

        The genius part is that this big insult, I think, made programmers really think about what they want. Those who still wanted to develop end-user-facing software had to learn to swallow their pride, their sense of superiority. Those who did not want to, or were unable to, moved into those subsets of programming, kernel, system, server, where their customers are themselves programmers or sysadmins and so are expected to have some amount of clue, and where they might say things like “hey, this e-mail server is a tad hard to configure” but would not call that a lack of usability.

        1. Although this is a bit of a different topic, as builder gloves are worn by builders, people with a clue, not end-users. Thus the problems are likely different.

          Usability from the end-user perspective is about having to make decisions that should have been made for them. A classic example is trying to run KDE in 1997 or so and sending an e-mail with a picture attachment: it asks whether you want MIME/base64 encoding, some other encoding, or no encoding at all. Most people will shrug and choose the last one, which is the wrong one.

          These decisions could be made automatically, by detecting things or just having the common cases as (changeable) defaults. Another good solution is having someone in between, like corporate IT, who configures things for the users so they do not have to make such choices. For this reason usability was not a huge problem in corporate offices. Besides, it was accepted that users might need training to “operate” the software they are expected to work with. Operating it in the sense that a worker operates a lathe. The worker serves the machine, not the other way around.

          It was a problem mostly in home computing, where there were no such intermediaries. Thus, programmers had to learn a perspective that was very alien to them. Of course if you know what you are doing, you want to have options. But if not, then not. Like, with an .mp3 music file, most people most of the time just want to play it. They want the most straightforward way: click or tap on it and the sound comes right out of the speakers. Every kind of decision or option is confusing. Whereas for programmers themselves, especially if they are also sound technicians or used to developing software for sound technicians, it is not so obvious at all. Even when they just listen to music, they are likely to use playlists… so the default action from their perspective might be “add to playlist”…

  5. If you feel the need to explain anything to your testers, outside of the manual provided with the software, I figure that’s a sign of builder gloves too?

    1. >If you feel the need to explain anything to your testers, outside of the manual provided with the software, I figure that’s a sign of builder gloves too?

      Ayup.

  6. ESR,

    While you’re on a roll with defining new terms, did you ever get anywhere with “ideomania?”

    1. >While you’re on a roll with defining new terms, did you ever get anywhere with “ideomania?”

      Where is there to get?

      1. You tell me. You enthusiastically latched on to the word as soon as I proposed it and talked about writing a post on the concept. Go back to this post and search the comments for “ideomania” if you’ve forgotten: http://esr.ibiblio.org/?p=8448

        1. >You tell me. You enthusiastically latched on to the word as soon as I proposed it and talked about writing a post on the concept.

          I haven’t forgotten. But when I tried to block out a post in my head it just seemed like repeating things I’ve already said. I’ll give it some more think time.

          1. Okay, here’s a couple of thoughts:

            1) We had a disagreement over how to define it, so specifying exactly what the term means is probably the best place to start. My comment on that was here: http://esr.ibiblio.org/?p=8448#comment-2281957

            2) I’m mainly interested in how and why an ideomania spreads and, most importantly, how to cure or inoculate against it. Without that last part the concept doesn’t really do much good.

  7. Interesting concept. I think a lot of open source software suffers from early users who quickly learn the ins and outs of the software so well and don builder gloves, so the developer who gets feedback only from these kinds of users rarely finds out the real usability issues, as it becomes some kind of closed loop. Further, new users with genuine issues are unfairly assumed to have not read the “manual” and hence find it extremely annoying when the established users are aggressive or hostile in the software community mailing lists/forums when the new users point out that the software is “hard” or “difficult” to learn or is in some way non-obvious. Even documentation suffers from this kind of defect, with implicit assumptions of pre-existing knowledge.

    I think this “builder firewall” must be broken by the developer(s) from time to time to actually get useful feedback from new users, especially when releasing major versions. The established users are already so used to the quirks of the software that they actually think that donning builder gloves to use that particular software is the norm.

    1. Linux *distributions* suffer from this. Until I worked professionally in areas where we had to actually sell stuff to customers, I had no idea about the whole realms of stuff that simply aren’t addressed. Ubuntu would not meet the standards of the products I’ve worked on at some of the most fundamental levels.

    2. I think a lot of open source software suffers from early users who quickly learn the ins and outs of the software so well and don builder gloves, so the developer who gets feedback only from these kinds of users rarely finds out the real usability issues, as it becomes some kind of closed loop.

      Indeed. Open source is, after more than three decades, still a massive usability FAIL, and can only watch as the taillights of Apple and Microsoft recede into the fog.

      To build usable software, you have to conduct UI research with users representative of your customer base. UI/UX design is also an endeavor best not undertaken by software engineers, who can’t seem to take their damn builder gloves off. Experts in human psychology, design, and ergonomics should be consulted instead.

      These people like to be paid for their services. Like, a lot.

      If you want polished, usable software — buy proprietary.

      1. >If you want polished, usable software — buy proprietary.

        COVID-relevant counterexample: Proprietary videoconferencing sucks. And not just because it sells your life to adtech vampires or spies on you for the Communist Chinese; the UIs on these things all suck badly compared to Jitsi, which my non-techie family actually likes.

          1. Teams is a focus-stealing piece of shit, at least on Windows.

            On Linux, since I use XMonad, I had to fsck around for a while to finally get notifications to *not* take over the entire screen….and it doesn’t just use the built-in notifications a la notify-send like any sensible piece of software…. Oh, and I still need to figure out how to get the notifications to not steal focus, but haven’t had the patience to mess with that because I only really get notifications when I’m actually, yaknow, busy.

            It also is impossible to dial in if you have to have both (unless you pay extra $$$ every month for that), which, yes, is still a relevant case.

            Once it’s actually installed and configured, it’s not terrible, but it still crashes frequently enough for me to notice on Ubuntu 18.04….

            1. Teams is a focus-stealing piece of shit, at least on Windows.

              I use it on a Mac. I have not known it to steal focus.

            1. If you work inside a corporation, yes it is. Outlook is a “one-stop shop” for business communications. People write emails with it, schedule meetings and appointments, and even book conference rooms. Exchange is also ubiquitous, as it’s a doddle for corporate IT to administrate and was designed to work seamlessly with Outlook. As a business worker (which most of us are going to be anyway, because it’s not like we’re making tons of money with our open-source output) you rely on Outlook, and the constellation of related Microsoft communications products, to stay in touch with and reachable by your coworkers — especially now.

              Open-source devs, by and large, still think in terms of “mail clients” and “mail servers”, which just goes to show how clueless and out of touch they are with what business purchasers are looking for — and why Outlook and Exchange totally ate their lunch.

              1. Well, there’s “it’s an advantage” as in “there is selection pressure to do it”, and there’s “it’s an advantage” as in “it doesn’t tend to turn your product lineup into a ball of kludges.”

                Now that Win16 and Win9x are gone, MS software is much more stable than it used to be. *Windows* nowadays is rock solid. But, in my experience, most of the remaining crashes in MS software these days are related to the incestuous level of integration within the Office suite and between Office and other software.

              2. Open-source devs, by and large, still think in terms of “mail clients” and “mail servers”, which just goes to show how clueless and out of touch they are with what business purchasers are looking for — and why Outlook and Exchange totally ate their lunch.

                aaaaand here you demonstrate a clear case of “missing the point”

                Open source people tend to prefer programs that do one thing and one thing well. This means a mail client that does mail and only mail. This means a mail server that does mail and only mail. This means LDAP and WebDAV and all the other things independently doing what they do best.

                MS Outlook and Exchange, by failing to do one thing, are difficult to describe, difficult to use, and difficult to maintain.

                footnote: GNOME Evolution is the open source world’s clone of Outlook, with all the functionality and crisis of identity. Novell also had some groupware server that cloned Exchange… I don’t remember the name of it.

          2. Microsoft Teams on my laptop is currently using 330 MB of RAM and will spike to using a full CPU core when switching focus to it, both undeservedly. As a product to use it’s reasonably decent. As a piece of software it’s pretty obviously crap under the hood.

      2. Open source is, after more than three decades, still a massive usability FAIL, and can only watch as the taillights of Apple and Microsoft recede into the fog.

        Upon trying to posit that open source UI sucks, you pick Microsoft and Apple as your examples of something supposedly superior? Are you high? These two companies produce the worst UIs in the world, proprietary or not.

        1. True, these days they’re nowhere near the usability of Windows 95 or System 7.6. And the Windows 8 experiment has left the entire Windows UI a confused, schizophrenic mess. But common open-source UIs still introduce too much friction compared to what Apple and Microsoft offer, and offer little in the way of benefit to most users. Not just to moms sending emails and browsing Facebook, but to actual practicing developers (most of whom, if they target Unix with their code, use Macs).

          1. If Windows 95 is your idea of a high point in UI design (I grew up on it, so it more or less is for me), then MATE is about the best DE that has ever existed, being the ultimate refinement of that style.

          2. You should check out MATE and GNOME. They blow everything Apple or Microsoft have ever done clear out of the water.

            1. Certainly MATE blows Apple, at any point in its history, out of the water. I wouldn’t quite say that about MS, the Win9x/2k/XP-classic-mode interface is quite good, and, AFAICT, pioneered most of the elements that GNOME2/MATE refined, though recently the MS UI has been much weaker. Even there, I have to give MS some credit: Win8 was the most serious attempt that anybody has made so far at turning mobile devices into portable desktop systems. Their methodology was all wrong (instead of trying to merge desktop and mobile UI, they should have developed an OS that could present a mobile UI on the go and a desktop UI when docked), and they ended up weakening their desktop UI in the process, but at least they actually *tried*.

              GNOME3, on the other hand, doesn’t so much blow anything out of the water as it just blows. I am perhaps being a bit unfair: a lot of the critical elements are still there (at least in classic mode), you don’t have to stick with the insane default configuration, and many of its regressions WRT GNOME2 come in through GTK3, which MATE now uses too. The GNOME *project*, OTOH, has not much good that can be said for it, and is flying the entire GTK ecosystem into the ground through its management of GTK3.

        2. Fun 4-minute fast-forward video where someone starts with MS-DOS 6.22 and does back-to-back updates to Windows 8. Though there’s the occasional crash along the way, software largely keeps working and settings are preserved across updates.

          I can’t even depend on being able to do upgrades between Ubuntu LTS versions, especially if I’ve changed any settings. I finally yanked my add-on sound card because it was too much of a pain to get working reliably every time I performed some form of upgrade.

      3. @Jeff Read
        “Indeed. Open source is, after more than three decades, still a massive usability FAIL, ”

        I think you confuse “familiarity” with “usability”. There is a saying that there is only a single intuitive UI, and everything else is learned (and that saying probably overstates even that one). I still remember people claiming MS-DOS was so good an OS and MS Windows 3.0 had such an intuitive UI.

        Shiny UIs are what you create for unmotivated new users doing simple tasks. It is well known that when people are motivated, no UI is too arcane.

        The worst UI ever created was the telephone keypad for texting, yet high-schoolers could text blind (phone under the table) and used it prolifically.

        But this is the Hole-Hawg drill story from Neal Stephenson again. Yes, proprietary companies are good at creating shiny UIs. But users always struggle to go beyond the straightforward tasks in the ads. Real tools tend not to have shiny UIs, but they have real functionality.

        An example is statistics. There are some really nice proprietary interfaces for statistical software; SPSS comes to mind. That being “true”, the software itself is a disaster, and everyone who really needs statistics switches to R, which is 100% open source. Also, proprietary packages are years behind R in modern statistical techniques, because everyone publishes their tests in R.

        The same with Matlab and Python, especially in machine learning.

        1. Oh, hello. Emacs user here, and as such I understand exactly where you’re coming from and I used to agree. I got into Emacs back when learning it or vi was considered a rite of passage among Unix nerds. We thought of these editors as Real Tools used for Real Work.

          And then TextMate came along. That was when everything… changed. TextMate showed that it was possible to get Real Work done with a beautiful, intuitive[0] UI that conforms to Apple’s UI guidelines. It was even extensible, much like vim or Emacs. Then, the motivation you speak of to learn something like Emacs just vanished among devs. Hackers even started extolling the merits of paying for a high-quality text editor (TextMate was proprietary). Vim hung around in part because the Ruby on Rails community liked it. But today, most developers (51% or so) use Visual Studio Code, which is a successor to Atom, which is a successor to Sublime Text, which is a successor to TextMate. Both Atom and VSCode use an “open core” model and are not 100% open source.

          I’m still on Emacs just because it’s what I’m familiar with. Had I been born ten years later, I might be a happy Visual Studio Code user who couldn’t be bothered to learn old, arcane tools like Emacs or even vim.

          Ease of use is always a win. Always. I don’t care how skilled or motivated your user base is. Ease of use is not about shininess, it’s about reducing cognitive load to free the user’s brain up for more important stuff. And that has benefits even for highly-skilled technical workers, most of whom — if they were running Unix workstations before — switched to Macs around when Mac OS X came out. Even Neal Stephenson switched to Mac at about that time and hasn’t looked back since.

          Once a way to get the power of R in a clean, smooth UI emerges, statisticians will embrace it. Jupyter looks promising, but any company that can improve on Jupyter substantively, just in terms of UI, will make a mint.

          [0]Yes, yes, I know. “The only intuitive interface is the nipple” and all that. But there is a spectrum of intuitivity. Mousing is more intuitive than command keys. Recognizing is more intuitive than remembering. And using global conventions (global keybindings like Cmd-X, Cmd-C, and Cmd-V for cut, copy, and paste; menu bar at the top; menus start with File, Edit, etc.) is more intuitive for people already familiar with those conventions. Again, reducing cognitive burden is key.

          1. Part of the issue here is that I suspect that different brain-types have different kinds of interface that they consider intuitive. There seem to be some people for whom an Apple-style dock *isn’t* a massive usability fail. You seem to be one of them (admittedly, Apple’s dock is less of a usability fail than that of the late, unlamented Unity).

            Another is that I/O restrictions can affect what types of interface constitute usability fails: Docks aren’t bad if you’re restricted to the use of a trackpad, but it still boggles my mind that anyone could consider it too much of a portability hit to bring a *real* mouse along with their laptop (in which case a dock is a usability fail). Even a Bluetooth mouse will take full advantage of a Win9x/MATE style interface, and a corded mouse carries the additional advantage of not running out of battery at inconvenient times.

            1. Hey, Unity is less puke-worthy than GNOME. And there’s still an Ubuntu spin with it, it’s just not the official default anymore.

            2. If you’re taking your hand off the home row, or certainly off the keyboard, it’s a productivity fail for just about any task. The time-in-motion needed to hunt down an option in a top-level menu is more than the time-in-motion for keyboard shortcuts.

  8. A lot of scientific/academic programs require builder gloves.

    Take, for example, the model that caused most of the countries in the world to destroy their economies: Neil Ferguson’s Covid-19 model

    It seems it was impossible to run outside his own lab in its original 15,000-line C++ file form. Numerous people from Microsoft/GitHub have been refactoring it and fixing things so that it can be used by others.

    The original appears to be utter garbage, see this for some examples:

    https://lockdownsceptics.org/second-analysis-of-fergusons-model/

    1. I spent a little time looking into the code. Oh good lord, it is awful.

      First of all, it appears to be primarily C, rather than C++, and particularly poorly-written C at that. ESR, if you’re ever feeling too sane, take a look at https://github.com/mrc-ide/covid-sim/blob/master/src/SetupModel.cpp and recoil in horror.

      Aside from almost everything being in global variables, it mixes binary file parsing with internal data structure initialization. I’m only slightly experienced with C++ (and barely at all with C), so I don’t know if some of the idioms are normal/acceptable for C, but having a massive 630-line function with loops and multithreading doesn’t seem like a good idea in any language.
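
      For contrast, here is a minimal sketch of the kind of separation being advocated: file parsing in one function, model initialization in another, and no globals. It is written in Rust purely for brevity, with a made-up two-field parameter format, and has nothing to do with the actual covid-sim code.

        use std::fs;
        use std::io;

        // Parameters parsed from the input file; passed around as a value, not kept in globals.
        struct Params {
            households: u32,
            seed_infections: u32,
        }

        // Step 1: file parsing only. Reads the (made-up) binary format and returns plain data;
        // it touches no simulation state.
        fn parse_params(path: &str) -> io::Result<Params> {
            let bytes = fs::read(path)?;
            // Assume two little-endian u32 fields; a real format would need real validation.
            let households = u32::from_le_bytes([bytes[0], bytes[1], bytes[2], bytes[3]]);
            let seed_infections = u32::from_le_bytes([bytes[4], bytes[5], bytes[6], bytes[7]]);
            Ok(Params { households, seed_infections })
        }

        // Step 2: model initialization only. Builds in-memory state from the parsed data.
        struct Model {
            infected: Vec<bool>,
        }

        fn build_model(p: &Params) -> Model {
            let mut infected = vec![false; p.households as usize];
            for slot in infected.iter_mut().take(p.seed_infections as usize) {
                *slot = true; // seed the initial infections
            }
            Model { infected }
        }

        fn main() -> io::Result<()> {
            let params = parse_params("params.bin")?;
            let model = build_model(&params);
            println!("seeded {} of {} households", params.seed_infections, model.infected.len());
            Ok(())
        }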

      Having worked with other scientific code in the past, I’m not shocked, but I am appalled.

    2. @Francis Turner
      “Take, for example, the model that caused most of the countries in the world to destroy their economies: Neil Ferguson’s Covid-19 model”

      The point is, they listened to people like you instead. South Korea and Taiwan took things seriously and had the epidemic under control before it started. Other countries claimed it was all a hoax and not something to worry about.

      In an epidemic it is important to start early. The importance of an early start cannot be overstated.

      BTW, most national health authorities run their own models.

      1. >BTW, most national health authorities run their own models.

        Which I’m sure we should trust just as implicitly as we trusted Ferguson’s fetid pile of crap. And all the other ones, like the IHME, that grossly overpredicted mortality rates and drove the lockdown panic.

        At this point, epidemic modelers would be doing well to have the gravitas and credibility of astrologers.

        1. “At this point, epidemic modelers would be doing well to have the gravitas and credibility of astrologers.”

          The truckloads of bodies and crumbling hospitals in NYC are telling a different story. But it is true that epidemiologists cannot predict human stupidity. And it is well known that human stupidity determines the outcomes of disasters as much as the original cause itself.

          NYC has a severe lockdown and still the covid epidemic is wearing down the health care service. Bodies have to be stored in trucks for lack of space. That is with only 180 deaths per 100k inhabitants and actually stopping the epidemic. The fatality rate would have been several times higher without the lockdown.

          Even with a fatality rate of 180 per 100k extrapolated over the whole USA, covid would kill well over 500,000 people in the USA. That is twice the fatalities of the USA and its allies in the Vietnam war and more than the number of USA fatalities in WW II.
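
          As a back-of-envelope check of that extrapolation, assuming a US population of roughly 330 million:

          \[ \frac{180}{100\,000} \times 330\,000\,000 \approx 594\,000 \]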

          1. > Even with a fatality rate of 180 per 100k extrapolated over the whole USA

            And there’s your problem. You can’t treat the entire US as NYC. NYC is the densest, most populous city in the US. Of course it’s hit hard by the pandemic.

            The rest of the US is decidedly not New York City. Take Georgia, for example. They just started reopening, and yet they are not dropping like flies from COVID-19.

            1. The number of new cases in Georgia skyrocketed as soon as they reopened.

              Reopening now is dangerous and stupid. If any governors should be marched on, it’s the ones exposing their citizens to unnecessary COVID risk, not the ones enforcing sensible lockdown orders.

              But really, everybody should just stay the fuck at home.

              1. > The number of new cases in Georgia skyrocketed as soon as they reopened.

                Once again, you are (deliberately?) ignoring the long incubation period for Wuhan coronavirus. Because of this, a spike right after reopening doesn’t mean that reopening caused the spike.

              2. Where is this “skyrocketing” you’re talking about?

                https://dph.georgia.gov/covid-19-daily-status-report

                There is a large jump from Apr. 26 to Apr. 27, more than a week before the lockdown restrictions began to be lifted (May 8 according to https://thehill.com/homenews/state-watch/495527-georgia-to-lift-stay-at-home-order-for-most-residents-friday); since then the trend, if any, has been mainly downward, admittedly still with large fluctuations (but none that I see justifiably characterized as “skyrocketing” upwards).

                That’s ignoring the response you received when you raised this “point” in the last thread, and were reminded by multiple folks that there is an _incubation period_: even if there had hypothetically been a “skyrocketing” of cases “as soon as” the lockdown was lifted, those cases would have been contracted a week or so _before_ said lifting.

                > everybody should just stay the fuck at home

                And once again Lord Jeff who _has_ a job that lets him WFH demands that all the peasants who _don’t_ have such jobs “stay the fuck at home” anyway.

            2. “NYC is the densest, most populous city in the US. ”

              Population density only affects the speed of transmission. It has no effect on the number of people who get infected.

              As long as R > 1, eventually, everyone will get infected. Just as, before vaccination, every child would get the measles, no matter how low the population density was. Nowadays, influenza reaches every corner of the country.

              Given the state of health care in the USA, the fatality rate due to covid-19 would be higher in rural areas than in NYC.

              1. >Given the state of health care in the USA, the fatality rate due to covid-19 would be higher in rural areas than in NYC.

                It’s not. I think I know why.

                Initial load matters a lot to the severity of viral diseases in general, and for COVID-19 in particular; the odds that the COVID will kill you are proportional to the number of virus particles in initial exposure.

                People in dense urban areas are more likely to get high viral load on initial exposure. NYC’s subways are the absolute worst case.

                1. “Initial load matters a lot to the severity of viral diseases in general, and for COVID-19 in particular; the odds that the COVID will kill you are proportional to the number of virus particles in initial exposure.”

                  Indeed. But rural people do visit large social gatherings, sports, fairs, marriages. The region in North Italy that was devastated by covid-19 contained many villages.

                  However, when studying the factors that affect case fatality rates, the factor that explained the most was “effective containment measures”. Population density only explained the speed of the spread:
                  https://www.medrxiv.org/content/10.1101/2020.04.21.20073791v1.full.pdf

                  See also:
                  https://www.researchgate.net/publication/340680894_COVID-19_in_Northern_Italy_An_integrative_overview_of_factors_possibly_influencing_the_sharp_increase_of_the_outbreak_Review

                  Density (densities are national levels, I assume)
                  https://ourworldindata.org/grapher/covid-19-death-rate-vs-population-density

                  1. > Density (densities are national levels, I assume)

                    Which makes that graph absolutely useless for answering questions about potential let alone actual differences between urban and rural areas within a given nation.

                    > But rural people do visit large social gatherings, sports, fairs, marriages

                    Except nobody in the US is doing that, especially “rural people”. As I’ve said multiple times, the exceptional “large social gatherings” you’ve seen (and _again_ this is CNN itself, not exactly an alt-right source) are the LA beachgoers, and NYC Central Park, and Chicago “house parties”.

                    Quit mixing in your just-so stories with random newspaper clippings you’ve collected and then claiming to have important insights into what’s going on here.

                    1. “Quit mixing in your just-so stories with random newspaper clippings you’ve collected and then claiming to have important insights into what’s going on here.”

                      Except, they were serious research papers.

                      And I always wonder why it is commenters here that tell me that all will be well, when the medical staff on the ground is worried sick about what is coming at them. What do you know that the medical staff does not know?

                      About the susceptibility of the rural population: Rural Americans tend to be older and in worse health than urban Americans. Also, health care capacity in rural America is (much) worse. Many localities in remote areas do not even have doctors.

                      Many small communities around the United States don’t have a full-time doctor — and in Alaska, many aren’t connected by road. Instead, they rely on community health aides, a physician who visits a few days out of the month, and either commercial or medevac flights to larger urban centers during emergencies.

                      https://www.vox.com/2020/3/28/21197421/usa-coronavirus-covid-19-rural-america

                      The article has many more facts about the spread of the coronavirus in rural America.

                    2. >About the susceptibility of the rural population:

                      You can talk about susceptibility all day long, but COVID-19 deaths per capita are still lower in rural areas than in urban ones. In fact COVID-19 excess deaths in rural areas are low enough to be hard to distinguish from measurement noise. The medical system is not failing these people. Yet.

                    3. > Except, they were serious research papers.

                      If by “serious research papers,” you mean “Fabio-Gramscian circle jerk so idiot NPCs like Winter can point to them and say, ‘See? Reality does have a liberal bias! We lefties are always right! See? See? The science is on our side! The solution to everything really is global communism administered by the UN! You have to give up all your property and freedom so we can pretend to solve these problems while the elites get everything and you get nothing! See? See? Please Mr. Xi can I have a seat on the Politburo? Pretty please? I helped you take over the world by killing America and the West! You have to give me a seat! It’s only fair!'”

                    4. > And I always wonder why it is commenters here that tell me that all will be well,

                      I never said that. (What is it with lefties jamming things in other guys’ mouths? It’s weird.) I said you were improperly conflating rural and urban numbers from the US, improperly (at least via the graphs you claimed were relevant to your point; I’m not accusing the graph-makers of making such claims) comparing US numbers so conflated with other conflated national numbers, and improperly comparing cherry-picked regional numbers with postulated US numbers (“But rural people do visit large social gatherings, sports, fairs, marriages. The region in North Italy that was devastated by covid-19 contained many villages.”, implying that _because_ rural Italy has been devastated, rural US “must” similarly be in terrible shape (despite _actual_ numbers showing the opposite)).

                      > Rural Americans tend to be older and in worse health than urban Americans.
                      Right, because rural babies are always mailed back to the cities to be Dickensian factory workers, and old city dwellers always move out to the farms on retirement (no doubt because farmwork is so easy compared to city life, just what a physically infirm older city dweller is up for).

                      _That’s_ what I meant by “Quit making up just-so stories”, fabricating smack about US life from nothing and passing it off as an established axiom.

                      > Also, health care capacity in rural America is (much) worse. Many localities in remote areas do not even have doctors.

                      Dude, I _grew up_ in rural America, and our health care was way better than the crap I have had to deal with since moving to Chicago. You are making up shit _again_.

                      http://esr.ibiblio.org/?p=8632#comment-2399388 Last paragraph, the bit about how much your cheese shop sucks.

                    5. @esr
                      “You can talk about susceptibility all day long, but COVID-19 deaths per capita are still lower in rural areas than in urban ones. In fact COVID-19 excess deaths in rural areas are low enough to be hard to distinguish from measurement noise. ”

                      Slower, not necessarily less:
                      Montana Town’s High COVID-19 Death Rate Is A Warning For Rural America
                      https://www.capradio.org/articles/2020/04/23/montana-towns-high-covid-19-death-rate-is-a-warning-for-rural-america/

                      Rural Counties Seeing Faster Growth in COVID-19 Cases, Deaths
                      https://www.usnews.com/news/healthiest-communities/articles/2020-04-30/coronavirus-cases-deaths-growing-at-faster-rates-in-rural-areas

                    6. > [Winter] Rural Counties Seeing Faster Growth in COVID-19 Cases, Deaths

                      From that article:
                      “In both rural and urban areas, roughly 4% of people who had COVID-19 died from it, meaning the fact that there are more infections in urban areas is likely what’s driving the higher death tolls there, the analysis notes.”

                      So the overall death rate (per infection, not per population; yes I get that these are different) is still apparently stable, and not “spiking” or “worsening” generally in rural areas.

                      “Recent increases in case counts are likely tied in part to better testing, which Morgan notes has been in short supply in rural communities.”

                      So again, any “rising numbers” don’t necessarily mean things are dramatically “worsening” in all of our rural areas, we are just getting a clearer picture of what the baseline has recently been.

                      > Slower, not necessarily less:
                      Montana Town’s High COVID-19 Death Rate Is A Warning For Rural America

                      That article points to a map at
                      https://geodacenter.github.io/covid/map.html

                      When I linked to the Michigan Department of Health and Human Services map of Michigan covid-19 cases, Alsadius promptly accused me of “sneaking something by” because of course you need to take population density into account as well. (The fact that that map looks almost exactly like a population map of Michigan went without mention.) Now when people say that rural areas are likely to be less hard hit than US cities due to (among other things) drastically lower population density, you respond that it’s not population density that matters, it’s “effective containment measures” mumble mumble. Then you link to a map that by default is set to show population density.

                      And yes, Toole looks bad: oh my gosh the red is starting! What if it spreads! But it looks “bad” on the density map precisely because of its low population density… If you toggle the map to Confirmed Deaths, it practically disappears, and on both maps it is surrounded by huge swaths of land with much tinier numbers (assuming we don’t consider Toole county’s six deaths already pretty tiny). The article itself said this particular spike happened in ONE nursing home (people already at high risk for any infection, and for such an infection to prove fatal). Focusing on this one county (really one nursing home in one county) and ignoring the vast majority of the rest of the map seems like cherry-picking to me. (The local maximum is a local maximum, wowzers!)

                      Don’t get me wrong, I really like this map, it is really helping me get a better perspective on things; and I also like the scatterplot at https://ourworldindata.org/grapher/covid-19-death-rate-vs-population-density you linked to. But I don’t think they’re showing what you’re claiming. The US “as a whole” may be in the bad part of that scatterplot… but by rights you need _two_ points in that plot: one for US urban areas like NYC (10k people per km^2 by Google, and about 2000 deaths per 10^6 by the geodatacenter map), about an inch due north of Monaco; and a second for the vast non-urban bulk of US land area — looking at https://www.census.gov/dmd/www/pdf/512popdn.pdf, say 25 to 100 people per km^2, and (again by geodatacenter) somewhere between 0 and 100 deaths per 10^6 people — somewhere in the blob around Norway to Paraguay.

                      Now even if you don’t, someone will chime in here and say “but if you divide those other countries along urban / rural lines the US will still fail on both counts!” I don’t buy that either. You’ve been going on about how hard all those “rural villages” in Northern Italy were hit, so I finally went and googled it (https://www.reddit.com/r/MapPorn/comments/eqgmkx/population_density_map_of_italy/). Northern Italy has a _minimum_ population density around 500 people per square mile; that’s not a “rural area” in the US, that’s a freaking _city_ here.

                      The more I dig through data like these (supposedly backing the “left” point of view), the more it looks like what one would basically expect: cities or areas with high population density have been hit hard, areas with _genuinely_ low population density _usually_ not so much. Yes, it’s great that New Zealand and South Korea successfully contained everything instantly with minimal disruption to society; they also have very small fractions of our population, overall population density, land area, and (most importantly I suspect) _variations_ in population density within relatively short geographic distance.

                  2. > However, when studying the factors that affect case fatality rates, the factor that explained most were “ef-fective containment measures”. Population density only explained the speed of the spread:

                    Containment measures also only (directly) affect the speed of spread. But high speed of spread *does* affect mortality by overwhelming the medical system.

                    Rural healthcare systems do have less capacity, but they’re also under less load, both because people in rural areas tend to spend less time within 6 feet of multiple people to begin with, so COVID isn’t spreading as fast there, and because the really frail elderly, who are the ones that are dying, tend to have been in nursing homes in town long before COVID hit, exactly because there aren’t the medical resources to support them where they live.

                  3. > Population density only explained the speed of the spread:
                    https://www.medrxiv.org/content/10.1101/2020.04.21.20073791v1.full.pdf

                    That paper is completely irrelevant to the “rural vs urban” argument you were making.

                    From Sec. 3.2:
                    “However, instead of considering the population density of the country as a whole, we took the population density of the most affected region or city in those countries.”

                    That is, for the purposes of their analysis, the US is treated as having the population density of New York City itself. Table 1 on the next page shows that the _smallest_ population density they considered was Bergamo, Italy, at around five _thousand_ people per square mile. That’s FIFTY times the population density of the US as a whole. The authors themselves are comparing _countries_ to one another, are only comparing countries based on the “most affected cities” in those countries, and are making NO attempt to compare urban vs rural rates within any given country.

                    (That’s before mentioning that it’s a preprint of an article that has not been peer reviewed, making your “serious research papers” assessment a rather generous one.)

                2. > People in dense urban areas are more likely to get high viral load on initial exposure. NYC’s subways are the absolute worst case.

                  You and Winter are both talking out of your hindquarters. The two big factors in determining which areas get the really high mortality rates are the medical system being overwhelmed (so that borderline cases die), and spread in nursing homes (where high old-age mortality takes its toll, a factor in both Italy and NYC).

                  The high rate of spread accompanying high population density helps overwhelm the medical system (which is where Winter is wrong), and the nursing home patients where the disease is running wild aren’t *themselves* riding the subway, so they’re not going to get subway-level viral loads.

                  1. @Jon Brase
                    “The high rate of spread accompanying high population density helps overwhelm the medical system”

                    Indeed, and that depends not only on the population density, but also on the available number of hospital beds and ICU beds per 100k. Also, the accessibility of these hospitals is a big factor. NYC was mediocre on these. Many regions are worse.

                    But even with good medical care, a lot of people die.

                    One aim of the lockdown in many places (not necessarily the USA) was to let the epidemic die out before it reached the nursing homes. That was not an unmitigated success.

                    1. > Indeed, and that depends not only on the population density, but also on the available number of hospital beds and ICU beds per 100k.

                      But the denser the population, and the faster the spread, the more beds per 100k you actually need. Let me go back to something you said upthread:

                      “As long as R > 1, eventually, everyone will get infected.”

                      But the thing is, R (not R_0) will remain > 1 *longer* in places with higher population density, because each infection will transmit the disease to more people. That means you need a higher degree of herd immunity to bring R below 1.
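
                      (A quick worked version of that, under the usual homogeneous-mixing approximation: R_eff = R_0 times the fraction still susceptible, so R_eff only falls below 1 once the immune fraction exceeds 1 - 1/R_0. That threshold is about 33% at R_0 = 1.5 but about 67% at R_0 = 3, which is the sense in which denser places need more herd immunity before the thing burns out.)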

                    2. “But the denser the population, and the faster the spread, the more beds per 100k you actually need.”

                      And did you check the available ICUs in the rural areas? Would there be enough? Or is this just your gut feeling (having no experience in the matter)?

                      “That means you need a higher degree of herd immunity to bring R below 1.”

                      Nice try, but you do not have the data telling us that this is all enough to not do a lockdown in the rural areas, let alone the cities in the Midwest.

                      You are gambling with a lot of lives with NO data and a lot of mights and maybes.

  9. The first example that sprang to mind of users being (mildly) burned by software is the classic bit about someone trying to use vi, being bewildered by modes, and then being unable to even quit the program.
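
    (For the record, the escape hatch is Esc followed by :q! and Enter, which quits without saving.)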

    Are we saying vi / vim’s developers are on the hook to keep users from burning themselves while trying to edit a simple text file?

    To what extent is any UI obligated to avoid burning new users? I’m okay with “do not destroy the user’s content, or through inaction, allow it to be destroyed”, but how much further should the software go? All the way back to the nipple?

  10. Slightly off-topic, but Eric, what are your thoughts on the current state of Rust? Have you given it a look after your earlier review of it? From your earlier review, Rust seems to be a programming language that in itself requires builder’s gloves.

    I’ve recently been thinking of learning another language, and Rust seemed an obvious candidate, but it looks conceptually quite hard to grasp and doesn’t seem to offer anything so unique or special that learning it is worth it. I tried going through the tutorial but gave up because it seems hard to grok beyond the trivial examples. I will try to look at it once again.

    One minor irritant for me, apart from the language itself (to be fair, most modern programming languages seem to be adopting this model): Rust seems too tightly tied to its all-in-one build, distribution, and dependency management system, Cargo. I very much prefer to install my software from my distribution’s package manager, from one source.

    1. Oh, I just started on the Rust tutorial, used the print! macro instead of the println! used in their tutorial, and found that it doesn’t auto-flush the buffer. I had to google it, and I now begin to understand what you meant by Rust requiring elaborate rituals before performing anything. Here’s the link to the issue: https://github.com/rust-lang/rust/issues/23818

      I would say Rust is an example of a programming language that requires builder gloves.

      Why should it not simply work as printf() does in C? I mean, what is the point, and why so much elaborate discussion about this “feature”?

      1. >Why should it not simply work as printf() does in C? I mean, what is the point, and why so much elaborate discussion about this “feature”?

        There actually is a point. Autoflushing isn’t in the spirit of Rust as I understand it. Rust-thinkers are allergic to abstractions that don’t have zero or at least very low and completely predictable costs.

        1. Let me slightly correct myself. C printf doesn’t automatically flush the buffer either unless you have a newline character. But what happens is that in most languages a call to any function reading stdin automatically flushes the stdout buffer prior to reading. This is standard expected behaviour. Also, the simple use case of displaying a prompt and getting input on the same line as the prompt is not documented in the Rust tutorial, which blithely uses println!, so I got burned by using print! instead of the println! macro. What I find more annoying is the assumption by a senior Rust developer in the thread linked above that programmers are generally unaware of output buffering of stdout. Truth is Rust has non-standard behaviour in this regard and it is not well documented where it should be.

          I shouldn’t need to google such a simple issue.

          1. > Truth is Rust has non-standard behaviour in this regard and it is not well documented where it should be.

            Nonstandard? According to whom? Buffered I/O is codified in the C standard library, which Rust happily avoids hewing to. Rust can set its own standards, and its developers have chosen not to make C’s buffered I/O behavior the default.

            If you mean that on POSIX systems the buffering behavior you describe should be the default, then consider that:

            1) POSIX is broken;

            2) It is much easier to reason about code where the I/O behavior you want is open-coded rather than where it is assumed to take place as part of the standard-library implementation.

            1. I think I made the point clear that neither C nor C++ flushes the buffer with a call to printf() alone (except when a newline is sent, in which case the newline causes a buffer flush). However, when you read from stdin, the output buffer is flushed automatically. This works this way in most languages, except Rust. Hence by definition Rust’s behaviour *is* non-standard, in this regard.

              In the Rust tutorial, they use println! instead of print! (probably to avoid explaining this basic issue), and hence people like me, trying out Rust for the first time and going through the tutorial, who naturally try to use print! instead of println!, run into this issue of not getting a prompt when the print! is placed before stdin().read_line.

              Printing a prompt and getting user input on the same line is very common behaviour. The tutorial should have addressed this issue with the print! macro.

              To see my point, try the tutorial by simply replacing the println! with the print! macro. I certainly got confused initially and needed to google how to flush the buffer manually.
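
              For anyone else who trips over this, here is a minimal sketch of the manual-flush workaround (the prompt text and variable names are purely illustrative):

                use std::io::{self, Write};

                fn main() {
                    // print! leaves the prompt sitting in the stdout buffer...
                    print!("Enter your name: ");
                    // ...so flush explicitly before blocking on stdin.
                    io::stdout().flush().expect("failed to flush stdout");

                    let mut name = String::new();
                    io::stdin().read_line(&mut name).expect("failed to read line");
                    println!("Hello, {}!", name.trim());
                }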

              1. > This works this way in most languages, except Rust. Hence by definition Rust’s behaviour *is* non-standard, in this regard.

                Atypical, yes. But there’s no document that says all programming languages have to work that way, so it’s not non-standard in the way Jeff is understanding your use of the term.

      2. Here are my thoughts:

        Rust does itself a profound disservice by not having a standardized semantics. The semantics of a Rust program is “whatever the Rust compiler outputs today when run on that program”. This despite the fact that Rustaceans pride themselves on using a language that is better defined (and thus less prone to UB) than C.

        The people who actually give a shit about safety — those in industries like military, aerospace, medical, etc. — want to see a well-defined standard which codifies what each and every program line should do, so that they can reason about the program’s behavior in a rigorous way before ever submitting it to the compiler — and they have some assurance that the program will not do something catastrophic or even unexpected, no matter who wrote the compiler or what version it’s on. This is why Ada is so important in some of these industries — and even why they still prefer carefully coded C or C++ to Rust. With appropriate standards and static-analysis tools, it’s possible to code in these languages reasonably safely while still benefiting from the fact that they’ve been battle-tested for as long as most programmers have been alive.

        That said, Rust’s safety guarantees are very much appreciated, and borrow-checking is the future of systems programming. Getting to that future, however, will take some paperwork.

    2. >Slightly off-topic, but Eric, what are your thoughts on the current state of Rust? Have you given it a look after your earlier review of it?

      I have not, mainly because Go has proven well-suited to my flavor of need for a C replacement.

      I know what kind of project could motivate me to take another swing at Rust, but no such project has captured my interest. That could change, but seems unlikely to in the near-term future.
