Dragging Emacs forward

This is a brief heads-up that the reason I’ve been blog-silent lately is that I’m concentrating hard on a sprint with what I consider a large payoff: getting the Emacs project fully converted to git. In retrospect, choosing Bazaar as its DVCS was a mistake that has imposed unnecessary friction costs on a lot of contributors. RMS gets this and we’re moving.

I’m also talking with RMS about the possibility that it’s time to shoot Texinfo through the head and go with a more modern, Web-friendly master format. Oh, and time to abolish info entirely in favor of HTML. He’s not entirely convinced yet of this, but he’s listening.

You might think “Huh? Emacs already has a git mirror. What else needs to be done?” Quite a lot, actually, starting with lifting Bazaar commit references into a form that will still make sense in a git log listing. Read the recent emacs-devel list archives if you’re really curious.

Fixing these things is important to me as part of a larger project: cracking Emacs out of an encrustation of practices and history that has made it seem insular and archaic to a lot of younger hackers who grew up with the faster pace and the techniques of the web.

RMS did too good a job. Because Emacs can be a total environment that you never have to step out of, the culture around it has tended to become inward-looking and hold on to habits that smell two decades old now.

My favorite quote about this is from Text Editors in The Lord of the Rings:

Emacs: Fangorn

Vast, ancient, gnarled and mostly impenetrable, tended by a small band of shepherds old as the world itself, under the command of their leader, Neckbeard. They possess unbelievable strength, are infuriatingly slow, and their land is entirely devoid of women. It takes forever to say anything in their strange, rumbling language.

Fortunately, RMS recognizes that this points at a real problem. Some of his senior devs don’t get it…

And if the idea of RMS and ESR cooperating to subvert Emacs’s decades-old culture from within strikes you as both entertaining and bizarrely funny…yeah, it is. Ours has always been a more complex relationship than most people understand.


    1. >Does that mean you will be rewriting reposurgeon in elisp?

      Heh. No.

      It will be playing a starring role in the conversion, though.

  1. @esr –

    At the risk of going too far off-topic, this reminds me of a question I have been meaning to ask you publicly for some time:

    If you accept Paul Graham’s ordering of the strength of computer languages, and if you accept his claim that Lisp is the “strongest” language commonly available –

    and since you are (arguably) a Lisp hacker “old as the world itself”, and have advocated for the study of Lisp by all serious hackers –

    why didn’t you just exhaustively use and promote Lisp, instead of Python?

    No snark intended. If this is too far afield of this discussion thread, I would respectfully request and encourage you to blog about it separately.

    1. >why didn’t you just exhaustively use and promote Lisp, instead of Python?

      Because Lisp doesn’t have implementations that are truly production quality for the kind of work I need to do. The range of library support in Lisps is very weak – batteries are, as Python people like to say, not included. There’s also a persistent problem with them lacking a full set of bindings to ANSI/POSIX C facilities.

      The sad result is that while you can do beautiful things in Lisp, you end up doing them miles distant from where most of the action is. I keep hoping somebody will solve this, but every promising start seems to fizzle out short of the point where it makes sense for me to invest.

  2. @esr: “I’m also talking with RMS about the possibility that it’s time to shoot Texinfo through the head and go with a more modern, Web-friendly master format. Oh, and time to abolish info entirely in favor of HTML.”

    If you succeed at this item we shall all surely owe you a great debt.

    I guess some of us just never “got” Texinfo.

  3. No, I don’t doubt for a second that Stallman would favor such a move, and do what he can to make it happen. Stallman has long suggested changes to Emacs that never happen because it would upset too many apple carts. I’m still waiting for the long-promised switchover to Guile to happen.

    More recently he has proposed that Emacs work more like a word processor and/or an IDE like Eclipse — but of course few of these changes make it into the final product because of the stiff-neckedness of the Emacs community.

    But this is different because it HAS to happen, or Emacs risks losing all its developers, not merely failing to attract new ones.

    why didn’t you just exhaustively use and promote Lisp, instead of Python?

    The Lisp community is arrogant and insular, therefore toxic. When it comes to serving user needs and actually getting shit done, the Python community has it all over the Lispers.

    1. >I’d vote for Markdown over HTML, if this were a democracy ;)

      Aaarrrrrgggghhh. Markdown sucks the anus of a syphilitic camel.

      It’s not that any individual dialect of Markdown is bad, it’s that there are individual dialects, about seventy skajillion of them, all mutually incompatible. Curse the originator for underspecifying the format and then sleeping while it fragmented.

      asciidoc does markdown’s job better than markdown does.

  4. Moving from [tex]info to HTML without losing features would require sufficiently smart HTML and HTML-index generation, a sufficiently smart console HTML viewer, and probably wrappers (e.g. for install-info as used in RPMs).

    I like that with Emacs I can jump to a function’s description in the appropriate info file (in the info viewer in Emacs), thanks to the index in said info file (e.g. glibc.info).
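
    For the record, a sketch of that workflow from the elisp side, assuming the glibc manual is installed in info form (the symbol name is just an example):

    ```elisp
    ;; info-look implements the index-driven lookup behind `C-h S'
    ;; (info-lookup-symbol): it consults the manual's index and jumps
    ;; to the node documenting the symbol at point.
    (require 'info-look)
    (info-lookup-symbol "printf" 'c-mode)
    ```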

  5. Texinfo is one reason digging through GNUish documentation is sheer torture to me. Shoot it in the head. With a .50 BMG.

  6. Also, I’m all for modernisation, but the day Emacs becomes ‘more like Eclipse’ is the day I fork a version that isn’t.

    You’re right. Emacs should be more like Visual Studio. :) (Except in the case of the latter’s crashiness…) The point is that IDEs are a Good Thing, and the community resistance to making Emacs more IDE-like has held it back both in terms of power and in terms of user base.

    One thing that’s an absolute must-have for large-scale software engineering on modern projects is refactoring tools. Unfortunately, to do this right the code editor needs real knowledge of the syntax and semantics of the target programming language — which Emacs has always lacked, due in no small part to Stallman’s own obstreperousness. He deliberately obfuscated the interface between the front and back ends of gcc because he feared that proprietary software companies would write one or the other and keep the source code closed, diluting gcc’s value as a free software program. This basically killed interoperability with gcc — leveraging its parser in other applications like IDEs, for instance.

  7. And if the idea of RMS and ESR cooperating to subvert Emacs’s decades-old culture from within strikes you as both entertaining and bizarrely funny…yeah, it is. Ours has always been a more complex relationship than most people understand.

    From interviews with you and Stallman, I gathered that you were not on speaking terms with each other. Is that something that has changed recently, and if so what caused that change?

    1. >From interviews with you and Stallman, I gathered that you were not on speaking terms with each other.

      I’m always on speaking terms with RMS. Whether he’s on speaking terms with me depends on many things, apparently including the phase of the moon and random quantum fluctuations.

  8. Perhaps we’re using different definitions of ‘IDE’.

    What I’m particularly afraid of is that IDEs are often used as a crutch for inferior stacks. E.g. back in the WinForms days, there was rather a lot of boilerplate necessary to get a WinForms app going. That was mitigated by having the IDE generate a lot of the code.

    This led to less pressure to fix the language & libraries, and programmers generating code they didn’t understand (nothing wrong with that as a learning tool, plenty wrong with production code to be maintained by others).

    Some IDE features – integrated debugging, navigation, refactoring – would be excellent. But, in the case of Emacs, I see these as being per-language features rather than Emacs features per se.

    I would say that keeping Emacs programmable and general purpose, then building those extensions on top of it, would make a lot of sense. Plus you could skip the bloat if using Emacs for non-IDE tasks.

  9. >why didn’t you just exhaustively use and promote Lisp, instead of Python?

    A wise(ish) man once said,

    “Lisp’s perennial problem of lacking a standardized OS binding for portability is solved by the Emacs core, which in effect is its OS binding.”

    I always understood the first part of that sentence to be an explanation of why Lisp isn’t more prevalent. Then again, why hasn’t that perennial problem ever been adequately solved?

    And as long as I’m asking Emacs questions based on my reading of TAOUP, there’s this passage:

    “Perhaps the most conspicuously dispensable part of the Emacs design is Emacs Lisp. It is essential to what Emacs does that it features what we nowadays call an embedded scripting language, but Emacs would be little different in capability if that language had been Python or Java or Perl.”

    Has anyone ever attempted a variant of Emacs using a different embedded language? Would doing so produce any significant benefits?

    Just curious; apologies if out of context.

  10. Count me as yet another person who thinks that killing texinfo would be a thousand times more important than modernizing Emacs – but then, I haven’t used Emacs in ten years or so, specifically because the online documentation is state-of-the-art for 1985. I’d vote for a replacement that isn’t identical to HTML5 or EPUB, but is trivially convertible to both; markdown and wikitext are both examples of how to do this, although markdown has horribly wrong link syntax and wikitext is painfully difficult for writing tables. One caveat – a successful documentation project needs a style guide just as badly as it needs a format specification; info sucks specifically because there’s so little consistency in the organization of different project docs, compared to man pages.

  11. “Whether he’s on speaking terms with me depends on many things, apparently including the phase of the moon and random quantum fluctuations.”

    So you can either know whether he is on speaking terms with you, or else you can hear from him, but not both?

    “Has anyone ever attempted a variant of Emacs using a different embedded language? Would doing so produce any significant benefits?”

    That would depend strongly on how much of what you consider to be Emacs is the stuff written in elisp. As for benefits, it would let people use a scripting language that has use outside of editor customization to customize their editor.

    “But I say unto you, Love your enemies, bless them that curse you, do good to them that hate you, and pray for them which despitefully use you, and persecute you; that ye may be the children of-”

    Wait, is RMS and ESR working together a good enough excuse to quote from the Sermon on the Mount?

    “Aaarrrrrgggghhh. Markdown sucks the anus of a syphilitic camel.”

    Holy redditshat! Eric and I actually agree on something!

  13. and their land is entirely devoid of women.

    I’m not sure gender neutrality is a good idea, but the rest are.

    I forget how long ago it was – I know it was over a decade(!) – that I first said info is nearly useless and should be replaced with HTML (so that even lynx could browse it if you needed text-only – and there was the www package for emacs). There was a crippled stand-alone info, so every time I had to go to some document for emacs, gcc, or some other GNU thing, I would find myself trying to remember the arcane navigation. GNUtrality was not good.

    At least HTML is ok (shoot anyone who suggests embedding javascript). Maybe the FSF should discard its 8-track tapes and jump to mp3 players too.

  14. @strongpoint: “Has anyone ever attempted a variant of Emacs using a different embedded language? Would doing so produce any significant benefits?”

    Yes, they have. There are *many* emacs variants. A lot of them use the emacs design and key assignments but don’t implement a macro language. Others do both but don’t necessarily use lisp as the embedded language. One critical difference is the nature of the embedded language: GNU Emacs is essentially a lisp interpreter for a dialect of lisp, and most of emacs is written in that embedded language. Other emacs variants implement a macro language as part of the editor, but the editor isn’t written in it. There is an assortment of editors written in Python, but I’m not aware of anyone trying to rewrite emacs in it.

    The emacs variants I’m aware of are listed here, with Craig Finseth’s regularly updated list of emacs implementations as the canonical source: http://texteditors.org/cgi-bin/wiki.pl?EmacsFamily

    See http://texteditors.org/cgi-bin/wiki.pl?PythonEditorFamily for a list of editors in Python.

  15. @strongpoint: “Has anyone ever attempted a variant of Emacs using a different embedded language? Would doing so produce any significant benefits?”

    Arguably an image-based Smalltalk with the development tools installed is just that thing. It is usually specialized around coding in Smalltalk, but it doesn’t have to be. At one job, one of my responsibilities was making VisualWorks an IDE for an application development package we sold.

  16. @strongpoint: “Has anyone ever attempted a variant of Emacs using a different embedded language? Would doing so produce any significant benefits?”

    I’ve long thought of implementing an Emacs-like editor using Python. A&D regular Russell Nelson invented an embedded language for his Freemacs editor called MiNT (MiNT is not TRAC), which superficially resembles Lisp but isn’t.

  17. Peter: Forget it. Clang is BSD-licensed. RMS would never sit still for GNU Emacs depending on something that’s not GPLd.

    1. >RMS would never sit still for GNU Emacs depending on something that’s not GPLd.

      Not true by demonstration. Emacs builds with GIFLIB for interpreting GIFs. It’s under a permissive license.

    1. >Have you looked at rST? Do you find it inferior to asciidoc?

      I’ve looked at it a little. They seem approximately equivalent to me.

  18. Is it Texinfo the human-writable language you want killed or the info page system? I haven’t written any texinfo myself, so I don’t know how it is to write, but I enjoy the compiled output. Navigating and reading info pages in Emacs is very efficient (once you’ve learned the keys) and I’d hate to lose that. Info pages, due to their restrictiveness, impose some structure on documentation that is often lacking on HTML pages.

    1. >Is it Texinfo the human-writable language you want killed or the info page system?

      I think both should be killed immediately.

      >Info pages, due to their restrictiveness, impose some structure on documentation that is often lacking on HTML pages.

      No feature info provides is worth the huge cost it imposes. Info documentation is isolated from the web! That’s just unacceptable in 2014. It makes younger hackers point and laugh, and with good reason.

      The structure you want can be achieved in HTML, assisted by a few special keybindings in a browser embedded in Emacs.
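
      To make that concrete, a hypothetical sketch, assuming an elisp-level HTML renderer such as eww (the bindings are illustrative, not a finished design):

      ```elisp
      ;; Give the embedded browser info-style navigation: `n' and `p'
      ;; follow the page's rel="next"/rel="prev" links, much as info's
      ;; next/prev node keys do.
      (with-eval-after-load 'eww
        (define-key eww-mode-map (kbd "n") #'eww-next-url)
        (define-key eww-mode-map (kbd "p") #'eww-previous-url))
      ```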

  19. @Esr —

    As long as we are talking about LISP vs. Python and bindings: what do you think of Clojure as solving the LISP modern libraries problem?

    1. > What do you think of Clojure as solving the LISP modern libraries problem?

      I don’t know. I haven’t looked at it carefully enough to evaluate yet.

  20. Maybe the FSF should discard its 8-track tapes and jump to mp3 players too.

    Ogg Vorbis players.

  21. HTML instead of Texinfo would be great. I guess emacs would need to integrate the WebKit or Blink layout engine.

    One thing that’s an absolute must-have for large-scale software engineering on modern projects is refactoring tools. Unfortunately, to do this right the code editor needs real knowledge of the syntax and semantics of the target programming language — which Emacs has always lacked, due in no small part to Stallman’s own obstreperousness.

    @Jeff, well, there is the Semantic parser (Tools > [ ] Source Code Parsers (Semantic)) in Elisp, part of the CEDET project, nowadays built in and distributed with GNU Emacs (at least as of v24).

    Some IDE features – integrated debugging, navigation, refactoring – would be excellent. But, in the case of Emacs, I see these as being per-language features rather than Emacs features per se.

    @Duncan: Emacs has GUD, Grand Unified Debugger

    1. >@Duncan: Emacs has GUD, Grand Unified Debugger

      Which, by the way, I originally wrote. :-) I haven’t touched that code in many years, though.

  23. HTML instead of Texinfo would be great. I guess emacs would need to integrate the WebKit or Blink layout engine.

    Nonsense. Emacs must work in both terminal mode and GUI mode. This means that Emacs would need, at the very least, the ability to load different HTML layout engines, such as Gecko or WebKit for GUI mode, and something like Lynx or ELinks for text mode.

  24. By the way, Asciidoc also provides compilation to info pages (see e.g. Documentation/Makefile in git.git). On the other hand, the Asciidoc compilation chain is quite long and a bit fragile, and requires many third-party tools (xmlto, makeinfo, docbook2x, dblatex, …).

  25. Daniel, the fact that Texinfo documentation requires either a dedicated program or emacs to view it makes it unusable for things that aren’t emacs itself. Whenever I need to look at gcc documentation, I invariably come out frustrated because I’m not an emacs hacker.

    The documentation format isn’t, strictly speaking, something that needs to be jettisoned if you run every Texinfo doc through something like doclifter to turn it into HTML for actual people to use. The info browser itself, on the other hand, is an active blight on the community and should be terminated with extreme prejudice.

  26. There are emacs-w3m (which uses w3m) and the Emacs/W3 package on ELPA (written in Emacs Lisp).

  27. > Aaarrrrrgggghhh. Markdown sucks the anus of a syphilitic camel.

    I was surprised by this (I was going to make the same suggestion) and it might lead me to re-evaluate markdown. I use a dialect for missing features, and it felt like a smell at the time I made the decision, but I couldn’t quite put a finger on why.

    On the other hand, I dislike HTML as a master format for anything. What makes markdown appealing to me is that it can be naturally read and written in its native form, while being trivially convertible to any distribution format you like. HTML is a pain in the ass to both read and write; I find the markup to be a distraction from producing the content. I’m not sure about its convertibility. My destination formats are usually plaintext, html, and latex/pdf.

    I’ll look into asciidoc and maybe rst for future projects.

  28. @A: “On the other hand, I dislike HTML as a master format for anything. What makes markdown appealing to me is that it can be naturally read and written in its native form, while being trivially convertible to any distribution format you like. HTML is a pain in the ass to both read and write; I find the markup to be a distraction from producing the content. I’m not sure about its convertibility. My destination formats are usually plaintext, html, and latex/pdf.”

    HTML is a preferable format for *displaying* information, but there’s no requirement that you write in HTML markup. Tools are available to convert what you write into HTML for presentation. One I’ve been looking at is Pandoc, an open source package written in Haskell, which can take input in markdown, reStructuredText, textile, HTML, DocBook, LaTeX, MediaWiki markup, OPML, or Haddock and output a wide variety of other formats, including HTML. If you need a custom writer, it’s scriptable in Lua.

    See http://johnmacfarlane.net/pandoc/

  29. @Jay Maynard: “The info browser itself, on the other hand, is an active blight on the community and should be terminated with extreme prejudice.”

    +1

    I use Windows as well as Linux, and I’m not normally in emacs on either platform. I have an assortment of stuff in Info format. I had a *lot* of fun finding a console mode info viewer that worked in Windows. The “official” Win32 port is broken and unlikely to be fixed. (Someone on another list pointed me at a patched version I’d have been unlikely to find otherwise.)

  30. > HTML is a preferable format for *displaying* information, but there’s no requirement that you write in HTML markup.

    I may have misinterpreted what ESR meant by master format. I was thinking of it as the documentation equivalent of source code, e.g. checked into github along with the source or something. True, you could write in whatever form you wanted and convert to html, then check that in, but you can’t check it back out, convert to a reasonable composition format to edit, then convert back to html and expect reasonable results. Especially not when different people using different tools are involved.

    > Tools are available to convert what you write into HTML for presentation. One I’ve been looking at is Pandoc

    I use Pandoc for some documents. It is good.

  31. @A: “I may have misinterpreted what ESR meant by master format. I was thinking of it as the documentation equivalent of source code, e.g. checked into github along with the source or something.”

    I agree with your definition. For that, I’d probably use XML as a storage format, relatively easily output in whatever display form you like. But I wouldn’t assume you would write in raw XML. You would write in something that would output well-formed XML, and check that in. When you checked it out to do updates, you would use the same tool to make the changes.

    And on those lines, I don’t think I’d want an HTML engine in emacs. The emacs paradigm that got us info assumed you would boot emacs on login, and it would be your shell and you would do everything from within it. For how many people is that still true? I’m fine with HTML as a display format, and if I need to access the HTML from within emacs, emacs can spawn an external browser appropriate to my environment to display it. If I’m not in emacs (the usual state), I just run a browser to view it.

  32. @esr (re: Clojure)

    I don’t know. I haven’t looked at it carefully enough to evaluate yet.

    Clojure is thought to solve the Lisp library issue by using the JVM as a platform, hence providing access to the JVM’s types and libraries. IMHO, this is not as useful as it looks. It requires you to basically know two languages, Java and Lisp, and then be able to read Java library documentation whilst translating it in your head from Java to Lisp.

    That’s okay, I’ve done stuff like that before – for example with wxPython, which often requires you to read the wxWidgets documentation, translating into Python as you go, because its own documentation leaves much to be desired – but there is enough friction there to make me want to steer clear of it as a productive development platform.

    1. >Clojure is thought to solve the Lisp library issue by using the JVM as a platform, hence providing access to the JVM’s types and libraries.

      I knew that much. What I don’t know is whether I should consider it a well-constructed Lisp.

  33. > It requires you to basically know two languages, Java and Lisp, and then be able to read Java library documentation whilst translating it in your head from Java to Lisp.

    I’ve done some stuff in Clojure which depended heavily on Java libraries. It was no problem, even though I had never written a single line of Java code. IMO the biggest pain point is the ages it takes for the JVM to start up. Also I wish Clojure was more like arc (unhygienic macros ’n’ stuff) and had ponies.

  34. I’m impressed that you got RMS to listen to anything. I’ve always put him in the same category as John Draper.

  35. “Ours has always been a more complex relationship than most people understand.”

    Excellent. The world gets better when smart people work together.

    Yours,
    Tom

  36. FWIW, I use rST for a lot of my documentation needs. There’s a lot to be said for plain text in repositories.

    BTW, there is now a Clojure-like substance that runs on Python instead of Java:

    http://docs.hylang.org/en/latest/

    I don’t really know anything about it except that the developers use one of my Python libraries.

  37. I can’t help but want to ask – not entirely seriously, but not entirely joking – why save emacs at all?

    (Also, isn’t it already not really getting new users?)

  38. One of my favorite Emacs features (besides being extensible in Elisp – cf. sierotki.el) is TRAMP – Transparent Remote Access, Multiple Protocols – which includes stuff such as SSH-ing via a gateway and editing files via sudo.
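
    For instance (a sketch; host and path names are made up, and the one-liner multi-hop syntax assumes a recent TRAMP):

    ```elisp
    ;; Edit a root-owned local file via sudo.
    (find-file "/sudo::/etc/hosts")
    ;; Reach an inner host through an SSH gateway with one file name.
    (find-file "/ssh:gateway|ssh:user@inner:/var/log/syslog")
    ```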

  39. ABCL (Common Lisp), SISC, and Kawa (both Scheme) also solve the Lisp library problem the same way — by building on top of the JVM — and they employ the semantics of familiar Lisps and a syntax that won’t give you hives.

    The Clojure community appears to have been colonized by people similar to the type of person who favors Ruby on Rails, which does not incline me to want to participate in it.

    There’s also a persistent problem with them lacking a full set of bindings to ANSI/POSIX C facilities.

    For a while, Guile was my go-to scripting language — because it comes with a full (or nearly full) set of POSIX bindings and it’s Scheme. It was held back by its performance, being an interpreter based on SCM before version 2.0, but Andy Wingo’s work integrating a much better VM for Guile 2.x has raised its performance bar from “at least as good as Ruby” to “at least as good as Python”.

    Plus the fact that it’s embeddable and FFI-able — as I like to say, “Guile goes with everything.” It’s alive and actively developed — the only thing really holding it back is a lack of marketing. Maybe when Guile Emacs becomes an actual thing it will get more street cred. As the most practical utility and scripting Lisp it certainly deserves it.

    For code that needs to be really fast, Gambit-C is out there as well, and it integrates very easily with C code. It doesn’t come with all of Guile’s “batteries” but there are third-party libraries and bindings for it.

  40. > and time to abolish info entirely in favor of HTML.

    To the guy who, as I recall, does not even use a web browser? Well, good luck with that.

    Unrelated, almost, but if anyone is considering ordering an FSF-approved Gluglug laptop: Don’t fucking do it, the guy behind it comes across as an awful scam artist. Either that, or he’s as completely clueless about business as a slug is about salt mines.

  41. It’s important to distinguish between Texinfo and Info. Nearly all .info files are generated from .texinfo files, but the latter do have other uses.

    Info should be shot. It irreversibly mashes out every single detail of formatting that cannot be rendered on an Apple //c. It can’t even adapt to a monospace display that isn’t 80-column. And at least man pages get to use actual boldface.

    But Texinfo is quite decent. The one annoying thing about it is the presence of the @include statement, which means that if I just extract the .texinfo file from some package and try to process it on its own, it usually won’t work, thanks to package maintainers’ cute games with “version.texi” files.

  42. If you think Markdown sucks the anus of a syphilitic camel because of its fragmentation, you might consider txt2tags as an interesting choice: first of all, it’s much more expressive and logical in comparison to markdown. In addition, you can create new syntax and macros on the fly, so you can easily expand it to suit your needs. And since the extra syntax definition is contained in the same document, it can’t be fragmented like all the markdown variants.

  43. I thought Eclipse might kill Emacs, but it’s just too miserable to program extensions in Eclipse. I therefore continue to use Emacs for anything but Java editing. Eclipse’s architecture imposes a rigid providers/subscribers divide, whereas with Emacs it’s easy to make custom adjustments to your local installation.

    Agreed about killing the info program as a browser. I don’t know enough about texinfo the format to say anything about it.

    Regarding the Markdown sub-thread, don’t overlook reStructuredText (reST). Unlike Markdown, it’s fairly complete for the kinds of things you would want to write in a user manual, e.g. sections, indexes, and tables of contents. I’ve used it on a few projects and found it to work well. Oh, and it has a spec.

  44. @sigivald:

    [W]hy save emacs at all?

    I’m going to attempt a heretic’s answer (I worship at the altar of Bill Joy, so my soul’s forfeit, yet esr lets me play along) –

    Way, way back (late 80’s??), there was an issue of Dr. Dobb’s Computer Journal (after they had upgraded from black-and-white tabloid format to glossy magazine, but eons before the Web) whose theme was “Programming Editors”. The key idea was that a new programmer would “imprint” on his/her first serious editor as part of the process of hatching out as a hacker, and would forever remember editor commands and features at the level of muscle memory. This freed them from having to think about editing at all except at the abstract level of changes to the text, some of which would happen automagically to suit the programming task at hand. (The cover picture was of some tiny ducklings following their “mommie” programmer.)

    There is quite a lot of truth in this premise. (As an anecdote: every computing environment I have to use for more than about a day, I find some way to get ViM or one of its work-alikes loaded onto it. ’Cause my fingers remember how to do it, and I don’t have to struggle with just changing some words while I struggle with learning everything else. Also notice others on this thread have commented upon their own preferred keybindings and command sets, almost always learned very early in their histories.)

    IIUC, emacs does this extremely well. Because it is an extensible editing environment (and not merely a tool for whacking on streams of ASCII characters), as specialized programming tasks evolved, emacs evolved along with them, and carried its core ideas and techniques (keybindings, the idea of a “word” vs. “non-word”, etc.) along with it. It’s much lower mental friction when your mail reader, your documentation browser, your editor, and your command interpreter all share the same method to “repeat the last thing I did that contained ‘foo’ in its command”.

    So, as long as there are hackers who have ever learned emacs, emacs will never die. And shouldn’t!

    New learners? I dunno – we will have to leave the answer for that to some of the “true believers”. :-D

    1. >IIUC, emacs does this extremely well.

      That’s correct, as far as it goes. But there’s a power advantage as well as a muscle-memory advantage. Nowhere but Emacs can I type a single key sequence and have it mean “do the next operation in the version-control cycle on this file”, and have that work regardless of the version-control system I am using.
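
      (For the curious, that sequence is `C-x v v`, which runs `vc-next-action`. The elisp equivalent, with a made-up file name:)

      ```elisp
      ;; `vc-next-action' asks the VC backend for the file's state and
      ;; performs the next logical operation -- register, check in, or
      ;; check out -- whatever the underlying VCS is.
      (with-current-buffer (find-file-noselect "~/src/foo/bar.c")
        (vc-next-action nil))   ; nil = no prefix argument
      ```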

      The Eclipses and IDEs of the world have some of this kind of power, but they’re all tied to a single workflow and a single selection of tools. There is no other way but Emacs for me to embed procedural knowledge in my editor and then carry it with me across multiple toolsets, operating systems, and the span of a quarter century.

      For software engineers doing the most demanding kinds of work – the kind that demands knowledge accumulated over periods best measured in spans of at least five years – there will always be demand for a tool like this. Whether it will be called Emacs is a contingent detail.

  45. @John D. Bell “The key idea was that a new programmer would “imprint” on his/her first serious editor …
    …As an anecdote, every computing environment I have to use for more than about a day I find some way to get ViM …”

    This is no doubt a true premise, but there are exceptions of course. I learned Vim at age 45 after 2 decades of using typical Windows mousey editors. Now I get irritated at anything that doesn’t follow vim keystrokes.

    My impression from places like reddit, etc. is that very few late-model programmers are using anything other than very modern stuff (e.g. Visual Studio, SlickEdit, Eclipse, XCode). I spend most of my time in Wing Pro for Python, but use a vim keyboard personality – best of both worlds.

    Emacs will never die, but its day of major relevance has passed, IMHO.

  46. FWIW, here is the Windows perspective… one piece of software that Microsoft does well is development tools. Visual Studio is light-years ahead of any other tool I have used, especially when supplemented by the various plug-ins like ReSharper. A while ago I did a little work in XCode and it was laughably bad compared to what you can do in Visual Studio. And of course Eclipse is the usual steaming pile of poop that you would expect from any desktop Java-coded application.

    Not only is the basic Visual Studio tool excellent, but it is highly customizable using a language that programmers are familiar with, and an API that exposes pretty much everything (including a semantic model of the program you are editing — hence the ability to build add-ins like ReSharper and CodeRush).

    The built-in refactoring tools and generative tools are fabulous, and flawless in execution, and the plug-ins add a whole slew of new capabilities.

    Part of the reason why is that the primary language it is designed for — C# — is much more tractable than the C and C++ that tend to be edited in emacs, meaning that the tool can have a very deep understanding of the program being edited.

    You Unix-y people should hold your nose and try doing some windows dev using this tool — it’ll show you what you are missing, and it will help you set a goal for what your tools should aspire to.

    1. >You Unix-y people should hold your nose and try doing some windows dev

      You misspelled “You should try being spit-roasted in the pit of Hell – it starts being fun when your skin crackles.”

  47. BTW, I have read some papers on Ada development environments, which also look excellent; however, I have never personally worked in that environment, and I’d be interested in the experience of those who have, especially if they have worked in both.

  48. We’ve always attributed the dominance of Windows to network effects. I find myself wondering, however: if Visual Studio is that good – and I have no reason to doubt her, especially given the level of love the programmers I deal with at work have for it – is it, perhaps, the bit of pig lipstick that keeps Windows from falling into the abyss?

  49. @Jay Maynard –

    is [Visual Studio], perhaps, the bit of pig lipstick that keeps Windows from falling into the abyss?

    Which brings the discussion full circle – how do we make the development toolset and methodologies for *nix as good or better than Windows?

  50. (Has? (Lisp syntax))

    Ahhh… false. I’ll go false.

    Clojure’s syntax is Lisp-ish? But since arrays and hashmaps are top-level constructs, Clojure just can’t help but incorporate them into the language’s syntax in weird ways. Lambda formals, for instance, are enclosed in brackets rather than parens.
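
    A tiny illustration of that last point (a sketch; the function itself is arbitrary):

    ```elisp
    ;; Scheme/CL/Emacs Lisp style: lambda formals in parens.
    (lambda (x) (* x x))
    ;; The Clojure equivalent puts the formals in a bracketed vector:
    ;;   (fn [x] (* x x))
    ```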

    So it depends on who you ask, I guess. Some people even consider Ruby “an acceptable Lisp”, but people who’ve worked with Lisps long and intensively tend to see Clojure as outside the circle of Lispiness inhabited by CL, Emacs Lisp, and Scheme.

  51. Which brings the discussion full circle – how do we make the development toolset and methodologies for *nix as good or better than Windows?

    For starters, 86 the slavish adherence to “the Unix way” — which isn’t even what developers prefer anymore. In fact, Windows encourages a method of system design which better reflects the best practices of how complex software is actually built: instead of sockets, pipes, and text files, components communicate with well-specified, structured, discoverable APIs — across a systemwide bus if the messages pass a process boundary (that’s COM in Windows, dbus in Linux). This makes adding a component as simple as wrapping and publishing its entry points through the provided mechanism.

    Secondly, Unix development tools have historically targeted just hackers, or people who want to be hackers. That needs to stop. New workflows need to be developed which make even the least talented developer more productive, and tool developers must work closely with and solicit feedback from these developers. For whatsoever you do to the least of today’s developers, that you do unto tomorrow’s l33t hax0rs.

    For example, at a bare minimum, it should be possible to start from nothing and build a complete application without ever touching a terminal window. Terminals and command lines are death. If you want to be a Linux developer, you are still “expected” to be conversant in bash first, which adds something else to the long list of things to learn if you want to get started. That’s toxic to a thriving developer community. Maybe it wasn’t such a big deal in the 1970s, when the alternatives were mainframes that only specialized personnel could develop for, but in the 2010s we have nine-year-olds writing apps for the iPhones they got for their birthday and we want our development suite to be just as accessible to those as it is to Unix old hands.

    It also helps if the graphical tools you do develop aren’t mazes of twisty little passages. I’m looking at you, Eclipse.

    Note that while Microsoft has famously provided tools like Visual Basic and Microsoft Access to help nonprogrammers develop applications (albeit simple task-specific ones), they haven’t neglected the high end — even John Carmack swears by Microsoft’s toolset these days.

  52. @Jeff Reed –

    Terminals and command lines are death. If you want to be a Linux developer, you are still “expected” to be conversant in bash first, which adds something else to the long list of things to learn if you want to get started. That’s toxic to a thriving developer community.

    That’s very interesting – but wrong. Our esteemed host has described his preferred screen layout, and the majority of it is a command-line tool or two. (I admit to ignorance as to whether Eric uses one of the X-windows emacs variants, or just runs emacs in an xterm clone.) I wouldn’t discount him as a member of “a thriving developer community.”

    I’m even going to teach a class at Penguicon 2014 (you’re welcome to attend) teaching CLI use of Linux, starting with bash. Real world example – I’m in Toledo, OH; I’m supporting a server in greater Detroit (at least 75 min. drivetime away), in an after-hours emergency; the firewalls between us only permit port 22 traffic, and not X (even with tunneling). Now what do you do?!

    (I am reminded of that famous rejoinder by 2nd-A supporters which I could misquote as “You’ll get my command lines back when you pry them….” Not what you meant by death, I suppose.)

    1. >“You’ll get my command lines back when you pry them….”

      I shit you not; I once actually said to Guido van Rossum, when he had expressed his dislike for a certain feature he had previously put in Python, “You’ll take away my lambdas when you pry them from my cold, dead fingers.”

      Then, because he was not at the time as naturalized to the U.S. as he is now, I had to explain this.

      He never removed them.

  53. @esr
    > You should try being spit-roasted in the pit of Hell – it starts being fun when your skin crackles.”

    Perhaps, but even Dante brought back some useful information from his trip to the realms of the Dark Lord. Think of it more as being a CIA operative infiltrating a Wahhabist terror cell….

  54. @John D. Bell
    > Real world example – I’m in Toledo, OH; I’m supporting a server in greater Detroit … in an after-hours emergency; the firewalls between us only permit port 22 traffic,

    Of course the problem with this argument is that Windows has a much better command shell than bash anyway — powershell. It isn’t used much by programmers because it isn’t needed; the other tools are much better (BTW, including the excellent interfaces to a wide variety of source control systems with a fairly consistent user experience, to Eric’s point).

    However, one place where powershell is used commonly is in these sorts of system maintenance tasks you mention John. I doubt you are doing much development over that connection after all, and if you are, I’d suggest that something is broken in your process.

    Part of the reason why is that the primary language it is designed for — C# — is much more tractable than the C and C++ that tend to be edited in emacs, meaning that the tool can have a very deep understanding of the program being edited.

    Visual Studio is not primarily designed for C#. It is inherently a multilanguage tool; it started off as basically an amalgam of the former products Visual C++ and Visual Basic, and I seriously doubt the core components of the tool have changed so drastically since. While it got caught up in the .NET branding maelstrom of the early 2000s — being renamed Visual Studio .NET — Microsoft eventually realized that, while fine for LOB applications, C# and the .NET runtime have limitations which prevent them from being optimal for various applications still of interest to Windows developers and backed away from pushing them as the successor to the Win32 and COM runtimes. This is why Windows 8 Modern applications are still based on the C++/COM-based WinRT, rather than committing to a .NET-only runtime.

    That said, your statement about the advantages of the code editor knowing the language more deeply than Emacs does still applies — even for C++.

  56. “There is no other way but Emacs for me to embed procedural knowledge in my editor and then carry it with me across multiple toolsets, operating systems, and the span of a quarter century.”

    @esr: Kill The Buddha.

  57. Jessica: “Windows has a much better command shell than bash anyway — powershell”

    Merciful $DEITY, NO!!!

    What bash can do in 25 keystrokes takes, if you’re lucky, 250 in powershell. If you’re unlucky, it takes 2500. When it comes to COBOL fingers, COBOL is a piker compared to powershell. Powershell comes across as some CompSci professor’s theoretical exercise instead of something in which to get real world work done. I’d rather install emacs and code up what I need in elisp than work in powershell again. Hell, I’d go so far as to install Hercules and a bootlegged MVS/SP with ISPF just so I could code in REXX and use an editor I can handle than use powershell again.

  58. There’s an immense amount of wisdom in Henry Spencer’s quote, “Those who do not understand Unix are condemned to reinvent it – poorly.” The entire Windows API is a case study in how right Henry is. The same goes for powershell.

  59. Of course the problem with this argument is that Windows has a much better command shell than bash anyway — powershell.

    What the fuck. I was willing to dismiss your previous comments about Visual Studio as mere idiocy, but this one just puts you straight into the troll category in my view.

    PowerShell is what you get when some MS employee looks at Perl for a day, and decides to reinvent it while mashing it together with Ye Olde COMMAND.COM. The results are not pretty. Sure, it might be better than the old shell on Windows, but it has nothing to compare with Bash whatsoever.

  60. > Ours has always been a more complex relationship than most people understand.

    No, we’ve oft imagined you two actually enjoy each other’s company.

    After all, you’re both hand kissers, you both claim “love but not monogamy” as a credo, and you’ve shared girlfriends.

    Why wouldn’t the natural conclusion eventually come true: elimination of the, er… “middle woman”?

  61. @esr
    > Point me at a replacement tool with the right properties, first.

    The closest thing I can think of would be Acme, but it’s a little too mouse-heavy to be a comfortable switch from Emacs. Easily as extensible and not bound to a specific language for extending, however all commands (custom or otherwise) require mouse interaction rather than key chords.

  62. @Jessica Boxer “Of course the problem with this argument is that Windows has a much better command shell than bash anyway — powershell.”

    Jessica, totally agree with your view that some cross pollination between unix and windows tools could be beneficial – to both camps (and Eric ought to know better than to so quickly dismiss it).

    I live deep in the Ozark foothills of Arkansas, but I do application support on a creaky Windows box in the Canary Islands off the northern coast of Africa. My choice of tools: a cygwin sshd daemon on the far end, which I log into using PuTTY on this end. Yes, windows-to-windows I prefer ssh. It’s the only thing that is still responsive and powerful on a long skinny network pipe and doesn’t disturb the console user. When I have to do something mousey I use TeamViewer and lots of patience.

    As to powershell, if you can point me toward an introductory tutorial for it that is less than 500 pages, I might be interested. I learned bash from reading stuff not much longer than a leaflet. The powershell designers should acquaint themselves with words like “brevity” and “simplicity”; frankly, I think it was designed by over-educated morons.

  63. To respond to a few comments, esp. as it pertains to the question about the Windows development environment, I’d first like to reference some of the major problems with gcc. Take a look at the following video on clang from 1:25 to 4:00 (I recommend the whole talk, but this explains some of the major ways in which gcc really sucks):

    http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Clang-Defending-C-from-Murphy-s-Million-Monkeys

    A related problem is figuring out where something you’ve defined is used, or where something you are using is defined/implemented. There are tools out there to do this, and everything I’ve run into in Linux land is pretty crummy. It doesn’t handle different symbols with the same name well. It doesn’t handle working inside of macros very well. Etc.

    @esr:
    > You misspelled “You should try being spit-roasted in the pit of Hell – it starts
    > being fun when your skin crackles.”

    My first “real” software development environment was Microsoft Visual Basic 5.0, back in ~1997. I made that software do stuff that it sure as hell wasn’t designed to do. (Among other things, I wrote something which was sorta like IRC/NetMeeting, designed to enforce Robert’s Rules of Order for large group meetings … it seemed innovative at the time – I was 16.) The only thing I’ve run into that is nearly as easy to get working with is the Qt library and GUI designer.

    Most Unix-y software and libraries are poorly documented and difficult to use. There certainly isn’t any good GUI support for automating the workflow you want to do when it comes to them. Seriously. Even the man pages for the standard C library are lacking. Once again, the best counter-example I can think of being the Qt library.

    Another good example: data processing. The Unix workflow is designed to work on streams of data with a single record per line. Grep, gawk and sed are the workhorses here. This is extremely powerful when looking at log files. However, if you are attempting to work with data that isn’t single-table tabular, you’re pretty much doomed. There are many other data layouts in practice. The tree (most well typified by an XML document) can be processed with tools implementing XPath or similar, but those are the exceptions, not the rule. Multi-table documents (like a database) are pretty much unsupported unless you want to write your own wrappers around the CLI for SQLite or some such thing. If you do that, you no longer have ASCII source, so you’ve lost any benefit from that whole field of thinking.

    I’ve not worked with Microsoft’s debugger, though everybody I know who has says that it is much *much* better than gdb. It wouldn’t surprise me.

  64. Regarding Windows Powershell, I don’t think it is perfect by any means, but it has one huge advantage — it is typed. Unlike pretending that everything is a string, as Bash does, it actually treats data as typed information, and consequently allows structure and semantics in the data in a way that Bash has to bend like a pretzel to achieve.

    Most of the day-to-day stuff people do with Bash, I’d do with a GUI, so the volume of keystrokes doesn’t seem all that relevant to me.

  65. “Point me at a replacement tool with the right properties, first.”

    That’s the problem, right there…you have the one tool, and you’re dependent on it.

    1. >That’s the problem, right there…you have the one tool, and you’re dependent on it.

      Yes, being dependent on it would be a problem – if the tool weren’t open source and highly portable.

      Would you criticize any other kind of master craftsman for carrying his own tools with him through multiple jobs?

  66. “Would you criticize any other kind of master craftsman for carrying his own tools with him through multiple jobs?”

    In the case of physical tools, like a hammer, no. The difference here is that you’ve loaded emacs with your knowledge, your style, your approach to writing software. You don’t get to consider other approaches that might possibly be better. You also might start forgetting why you configured emacs the way you did, and get lost when you are finally forced to make changes in the face of new software concepts.

  67. > Windows has a much better command shell than bash anyway — powershell.

    Seconded, to a limited point. I used powershell extensively at my old job. I was the only person on my team who understood it intuitively or used it for non-trivial tasks, including people whom I otherwise consider more competent than me. There is just no culture in the Windows world of using the command line to get things done, even when that’s the best way to do it. The concept of the administrator as a specialized semi-developer is absent. PS is what you use when you have no other alternative.

    PS is baroque. It’s better than bash, IMO, but I attribute my ability to use it at all to my previous exposure to bash in particular and the Linux command line culture in general.

  68. @esr

    So, what would it take to get you to seriously consider dropping your existing investment in emacs and switching to something else? Of course, that something else might not exist yet, or the answer might be “nothing, I’m too invested”, but it would be useful to have some idea of the magnitude and direction of what hackers think would constitute the right thing before someone starts yet another editor project.

  69. @John D. Bell: “@Jay Maynard –
    is [Visual Studio], perhaps, the bit of pig lipstick that keeps Windows from falling into the abyss?

    Which brings the discussion full circle – how do we make the development toolset and methodologies for *nix as good or better than Windows?”

    We don’t. We can’t. The development model doesn’t support it.

    In the earliest days of *nix, tools were written to scratch a particular programmer’s itch. This meant a lack of commonality in interface and options, and you had to remember things like “Does this command require ‘-’ as an option delimiter?” Too much of that still remains. Things get done because a developer feels like doing them. User demand may be irrelevant.

    Visual Studio is a commercial product that developers *pay* for. Programmers maintaining it don’t work on what they feel like adding: they work on what they are paid to do by MS, based on MS’s idea of what the developer community needs VS to be, and everything I’ve heard indicates MS tries to listen to developers.

    For *nix development environments to match Windows, the developers who use them would have to be seen as *customers*, the programmers who maintained and enhanced them would have to be focused on what the *customers* wanted, and there would have to be a way to *pay* those programmers to write that code.

    Absent “Do this, and we will pay you money to get it”, I don’t see the underlying issues being addressed.

  70. My experience may be relevant. I was a professional .NET developer for over a decade, doing a mix of contracting, permanent employee work, and eventually co-founding a software company targeting .NET (but with server infrastructure built on Ruby on Rails).

    I’d worked with Rails since late 2006, using it extensively since mid 2007. Before that, my first dev role was hacking wildly cross-platform C code for mail servers and the like, back in ~ 2000. My experience with Linux dates back to high school, when I started using it as my preferred personal OS in 1995 (I still have a copy of the Walnut Creek Linux Developers CD-ROMs on my shelf).

    Back in 2011 I switched careers away from the Microsoft stack; now I focus on development in Ruby and Ruby on Rails, and I’ve never looked back. Yes, Visual Studio has some features, especially surrounding online documentation and refactoring, that I miss from time to time.

    But everything else is better.

    * The licensing is sane, and much easier to understand, and there is far better support from the community than from MS. Have you ever tried calling Microsoft to enquire about the specifics of SQL Server licensing? Don’t. The experience will make you hate humanity more, and it will rot your brain, and it will waste half a day.

    * The communities are more knowledgeable (on average … there are some very smart cookies on the Microsoft stack).

    * Bugs are fixable; there’s no more ‘submit a bug to Microsoft Connect and pray’, I can fix problems myself (don’t underestimate how liberating and how much more productive this is).

    * My dev environment is configured with a few lines of script from a vanilla Mint installation. Shit doesn’t keep crashing on me day in, day out (ask me about how stable ReSharper makes Visual Studio).

    * I don’t get the Microsoft bait & switch (“WinForms! No, WPF! No, Silverlight! No, Metro!”) every couple of years. Don’t underestimate how toxic this is, both from a career perspective, and the perspective of trying to get shit done ( see Spolsky’s Fire and Motion: http://www.joelonsoftware.com/articles/fog0000000339.html ).

    * The working environments are better (my observation is that, in Melbourne at least, Microsoft and Java shops are more likely to be Dilbertesque sausage factories than other places).

    * The scripting environments are better. If CMD.EXE is a joke, Powershell is worthy of Monty Python.

    * I can if I need to work meaningfully on an ancient netbook (long story). Microsoft dev tools require *all* the latest hardware for an experience that approaches pleasant.

    So … what do I miss from Windows, now that I’m running Linux and Emacs as my do-everything environment? Not much, actually. I’d like:

    * Better refactoring / visualisation tools for my language of choice (fast becoming Racket) and my mainstay languages (Ruby, CoffeeScript).

    * Some kind of tighter browser integration … like a CoffeeScript SLIME that interacts with a running browser? I don’t know exactly what this would be like but I’d know it and like it if I saw it :) Perhaps it’d be a lot like Guiser.

    * Handy remote debugging for non-Lisp languages. In Lisp there’s always Swank, but it’d be nice to be able to easily attach to a running Ruby process, say, and interrogate it. Ties into the above point, perhaps they’re not actually separate at all.

    * Useful multi-monitor / presentation support in StumpWM (my window manager of choice; the mainstream Cinnamon and Gnome choices already handle this with better grace than Windows, from what I’ve seen at presentations at the local Ruby meetup). It’s easy enough to set up with some Lisp that calls xrandr – see the sketch after this list – but guys, it’s 2014 …

    * … errr, that’s it really.

    Speaking as a Windows refugee: don’t let the Microsoft apologists here fool you. The grass *is* greener on the other side, and it’s Free too. Flee! Flee while you still can!

  71. Regarding possibly choosing a new doc source format, some great things about markdown:

    1. just about everyone already knows it (it’s used on stackoverflow, github, reddit, etc.)

    2. it looks nice in source form

    3. it’s easy to write

    4. [Pandoc](http://johnmacfarlane.net/pandoc/)

  72. “Well, Powershell fans, explain to me why this script is needed to recursively take ownership of files and folders in Windows.”

    Not a fan; PS sucks, it just sucks less than some alternatives. That said, assuming that nightmare of a script is needed (I’m not sure if there’s a better way off the top of my head), I’m going to answer: “a gross lack of decent, or even basic, command-line tools.” That’s not the shell’s fault, any more than a linux system missing /bin/chown and /bin/chmod would be bash’s fault.

    On linux systems I tend to use ipython when I have something non-trivial to do. I think it sucks less than either.

  73. Well, Powershell fans, explain to me why this script is needed to recursively take ownership of files and folders in Windows. The rest of you, read it and weep, or barf, or throw grenades, or something else appropriate.

    It is intended to solve a complex and interesting problem: how to take ownership of files you don’t have read access to. Remember that the file permission system in Windows, based on ACLs, is more sophisticated than traditional Unix file permissions; and in particular you can’t read the ACL of a file you don’t have read access to.

    The standard PowerShell tools for claiming ownership of a file attempt to modify the ACL in situ, by reading the file’s ACL and then writing it back with the changes made. This script instead clobbers the ACL with a new one that sets the owner and the other desired permissions, and does so recursively over a directory tree.

    Note the use of an embedded C# program defining a class which does the ACL clobbering.

    In all fairness to Unix, Unix admins are perfectly willing to drop to a more powerful language like Perl, Python, or C if they find a task that’s too heavy for bash to handle. But PowerShell is supposed to be a more powerful and easier to use shell that doesn’t require you to drop to a “real programming language” to get shit done; if you have to embed a C# program in your shell script to do a routine admin task, that may mean that PowerShell is only effective against certain use cases that Microsoft considered; for the overlooked use cases you’re almost SOL.
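
    In all fairness the other way, here is roughly what “dropping to a more powerful language” looks like on the Unix side for the same task: recursively handing a directory tree to a new owner. This is only a sketch in Python – the user name and path are made up, and in practice you would usually just run chown -R:

        import os
        import pwd

        # Look up the new owner's uid; run as root. "alice" and the path
        # below are hypothetical.
        uid = pwd.getpwnam("alice").pw_uid

        for dirpath, dirnames, filenames in os.walk("/srv/shared"):
            os.chown(dirpath, uid, -1)  # gid -1 leaves the group unchanged
            for name in filenames:
                os.chown(os.path.join(dirpath, name), uid, -1)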

    Unix tools tend to be a lot more open-ended than their counterparts on other operating systems. Which is why advanced users gravitate towards Unix. It’s a Schelling point for what a set of power user tools should look like; and that’s why “those who do not understand Unix tend to reimplement it poorly”. It’s the same for Visual Studio: it’s great for most developers, but the moment you have an unusual workflow (and unusual workflows are exceedingly common), the nice prepackaged solution becomes a ball and chain.

  74. @John D. Bell – computational strength or expressive power is often simply not needed, or at least not beyond what a few handy generator expressions containing lambdas can deliver. I loved the LISP approach for a while and was put off when Reddit was rewritten in Python; nowadays I understand better why. I have been struggling to form this into something resembling a theory; here is what I have so far:

    Theoretically, we use computers to compute. Practically, we use them to process data, we use
    them to automate repetitive work. To do the same operation many times over, as it is easier
    than doing it manually.

    Thus if you look at the evolution of an imperative language, the basic structure is the loop. A machine that can do nothing but loop is still useful. A good example is an industrial machine drilling a hole in the upper left corner of a metal sheet, repeating it endlessly. Loops are the difference between manual work and automation.

    Making our industrial machines more intelligent, we introduce conditions. A good example is
    email filters. IF sender = Bob THEN put it into the Friends folder. An email filter is not a
    programming language, but combined with loops – applying a filter to a large number of existing
    mails – begins to resemble one.

    Then we improve it with variable assignment and subroutines, and basically we have a useful imperative language for data processing, for automating work.

    How do we improve such a language further? Starting at the basics, you get rid of the loop, replacing it with mapping: map-car-ing a dataset onto a function, lambda function, or code block. This is what Python generator expressions using lambdas are for. If anyone remembers CA-Clipper: this is why we loved aeval (evaluation of a code block over every element of an array) and used it everywhere.
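
    A minimal Python illustration of that shift (sheets and drill() are hypothetical stand-ins for the dataset and the operation):

        # The bare imperative loop ...
        results = []
        for sheet in sheets:
            results.append(drill(sheet))

        # ... the loop replaced by a mapping, generator-expression style ...
        results = list(drill(sheet) for sheet in sheets)

        # ... or map-car style, with an explicit lambda.
        results = list(map(lambda sheet: drill(sheet), sheets))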

    By doing so, we have made 95% of the improvement that makes sense for a usual kind of data-processing script, one that is mainly made for automation, not computation.

    Granted, Python fails at one feature: you cannot write multi-line lambdas or code blocks. Ruby improved on this.

    Looking at the Rails codebase, which is fairly elegant and expressive, it seems to me that replacing loops with multi-line code blocks evaluated over sets of data is simply expressive enough for the usual kind of data-processing script that does not compute much.

    And the rest of what LISP has to offer seems to be a luxury at this point, one that does not improve productivity or reliability much.

    And all this because we mainly automate repetitive manual work rather than compute. In practical life we all end up being closer to industrial or data-processing automators than to mathematical computer scientists.

  75. @John D. Bell @Jessica Emacs and Visual Studio seem to be based on the same assumption: the programmer has a carefully set up and customized home environment, which is different from the user environment the program is released for. This seems to be changing.

    Consider your average web developer, who logs in to a web server that may be hosted half a planet away and has to debug a problem there. There is no time for the usual process (downloading the files, replicating the problem, etc.) because it has to be fixed yesterday; so, to hell with it, he has to do it live – or at any rate in a test environment on site, not at home.

    So in a Windows environment the work will be done through an RDP desktop; the Linux folks will probably scoff at VNC and use a text-only editor through SSH.

    Now in the Windows environment this really sucks. Of course you don’t want to burden a server with Visual Studio, so you use something lightweight: you RDP in and you suddenly have French keybindings, you can’t find the goddamn , and so on. The best solution is to standardize on something light and simple, like Notepad++ or the FAR Manager (spasiba, Eugene Roshal!), where you can perform basic editing tasks with Ctrl, Alt, cursor keys, Del, etc.

    I don’t know if Linux environments are better at this – using your local keyboard layout through ssh, or even using your local Emacs while editing files through SFTP and still being able to execute and test them remotely. If such problems are solved, then perhaps the separate programmer and user environments can be kept.

    I know, for example, that Microsoft supports remote execution of PowerShell scripts, but I was always too lazy to learn how; just running the editor through RDP was easier. Especially since it is better to save files on the server, as the server admins are much more careful about regular backups than I am with my laptop.

  76. If strong typing in the shell is a priority, you could try running Hell, a Haskell-based shell REPL. It integrates well with shelly, a library for shell-like scripting in Haskell.

    The catch is that the overall OS is still *ix-like, so the strong-typing property is perhaps less than useful – you end up working with text strings a lot (although it’s quite possible to ameliorate this à la PS by relying instead on structured APIs and even IPC systems such as D-Bus). But it does make for very good integration with “real powerful programming”, much as PS is sometimes supposed to – and Haskell’s lightweight yet intuitive syntax is especially helpful here.

  77. @John

    > 4. [Pandoc](http://johnmacfarlane.net/pandoc/)

    This is an example of what’s wrong with Markdown, not what’s right:

    1. Link markup requires two kinds of bracketing syntax AND correct ordering, giving the writer at least two ways to get it wrong. Mediawiki syntax to do the identical job is just [URL text].

    2. The required ordering is wrong. If the URL were specified first, it would be trivial to make a bare link by omitting the link text markup. Instead, you have a situation where different markdown implementations sometimes parse missing link text this way, and some barf. Moreover, most Markdown parsers fix this problem by looking for anything that might be a bare URL and making it a link to that URL – which means that they didn’t really need the Markdown syntax in the first place. (And good luck posting an intentionally unlinked URL!)

    3. The required bracketing syntax breaks silently on URLs that end with ‘)’, even though this is very common in wiki URLs. A huge number of links from Reddit to Wikipedia are posted broken specifically because of this. Needless to say, the problem would be just as bad in library documentation.

    4. Markdown parsers in actual use (which are the real standard for the language syntax) seem to think that underscores in an URL mean that you want broken italics in the middle of a broken link.

    5. It’s very common for sites that use Markdown to abuse the URL markup for local extensions (image inclusion, collapsible sections, “spoiler” markup), so in practice links might not even be portable. That frustrates the basic goal of just sharing documentation on the Web.

    Even if you like other features of Markdown (and some of it is sensible), link formatting is a huge blight on basic usability. Better to take the good ideas and incorporate them into a language without the basic problems.

  78. @Jessica

    >Of course the problem with this argument is that Windows has a much better command shell than bash anyway — powershell. It isn’t used much by programmers, because it isn’t needed, the other tools are much better

    This is a clear example of how both open source and Microsoft developers are out of touch with business reality. Seriously, it is as if there were a wall between practical business and the programming world.

    Consider the following, extremely common example: write a script that generates a basic sales report (sales per customer per item, every week, from an SQL stored procedure), formats it nicely into Excel, and SMTP-emails it to the boss. Given that it is a literal, step-by-step automation of what the average “junior controller” does at work, it is more common than sliced bread, at least if you subscribe to my view of programming as the automation of manual work.

    Another (internal) requirement is to do it fully programmatically and use no report-designer tools, because 1) they usually cost money, 2) spreadsheet output is usually an afterthought in them, and 3) they are never fully flexible; there are always some things they cannot do.

    Given that this is something that should be extremely common, you would think there is some common, canonical programming language and library for it. Turns out that is not the case – and PowerShell is the closest substitute. Even when using PowerShell, I have to roll my own library, both in PS and in SQL, because even T-SQL, after something like 27 years of business usage (Sybase, 1987), does not have built-in functions like last_day_of_last_month(), as if reports with columns like “this month”, “last month”, “difference” had not been the most basic features of management reports since about forever!
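
    (For what it’s worth, in a general-purpose language that function is a two-liner. A Python sketch:)

        from datetime import date, timedelta

        def last_day_of_last_month(today=None):
            # The first day of the current month, minus one day, is the
            # last day of the previous month.
            today = today or date.today()
            return today.replace(day=1) - timedelta(days=1)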

    So not only do open source developers not understand basic business needs; even Microsoft doesn’t. Even Microsoft thinks you should either use inflexible tools like Reporting Services or, if you want to do it programmatically, feel free to reinvent all the wheels.

    In this crapfest, PowerShell is still just about the only correct solution, at least in the Microsoft world. At least it works: after a week spent reinventing the wheels, you can write a script that puts the output of a stored procedure into an Excel file and sends it out, with about an hour of work afterward.
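
    To give a feel for the shape of the job, here is a rough Python sketch of the whole pipeline, with sqlite3 standing in for the real stored procedure, the openpyxl library for the Excel output, and a local SMTP relay; every name in it is made up:

        import smtplib
        import sqlite3
        from email.message import EmailMessage
        from openpyxl import Workbook

        # Pull the weekly figures (query and schema invented for illustration).
        conn = sqlite3.connect("sales.db")
        rows = conn.execute(
            "SELECT customer, item, SUM(amount) FROM sales"
            " GROUP BY customer, item"
        ).fetchall()

        # Format the result set into a spreadsheet.
        wb = Workbook()
        ws = wb.active
        ws.append(["Customer", "Item", "Total"])
        for row in rows:
            ws.append(list(row))
        wb.save("weekly_sales.xlsx")

        # Mail it to the boss through a local SMTP relay.
        msg = EmailMessage()
        msg["Subject"] = "Weekly sales report"
        msg["From"] = "reports@example.com"
        msg["To"] = "boss@example.com"
        with open("weekly_sales.xlsx", "rb") as f:
            msg.add_attachment(f.read(), maintype="application",
                               subtype="octet-stream",
                               filename="weekly_sales.xlsx")
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)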

    Before PS, I must imagine most firms did it with VB. Now that is something I don’t even want to know about *shudders*

    Half of the world is glued together with VB, Excel and Access, it is horrible, and it took the genius people in Redmond 20 years to make a better tool for that, PS, and then they marketed it as a sysadmin tool – when actually it should be marketed as a business reporting tool, to throw out all the horrible VB-glued systems.

  79. @Shenpen
    > Now in the Windows environment this really sucks.

    I can only assume you are talking from what you read on the web rather than from having actually done this. Visual Studio is excellent for this sort of scenario. A small proxy (the remote debugging monitor) is installed on the server, and your local Visual Studio connects to it directly across a low-bandwidth wire (it even works over a POTS modem). It gives you full debugging capabilities against applications installed on a live production web server. What you can do here is nothing short of super cool.

    > write a script that generates a basic sales report, …into Excel and SMTP emails it to the boss. …Given that this is something that should be extremely common, you would think there is some common, canonical programming language and library for this.

    Again, you are mistaken. There are a million ways to do something like this. Excel can do it directly with a scheduler to do the emailing, Excel automation access via C# makes a programmatic solution simple, but the correct way to do this is with InfoPath, a tool designed specifically for doing exactly this sort of thing.

    But regardless, my goal is not to defend MS here; they have many, many faults. My claim is that Visual Studio is the best development environment I have used, and that PowerShell is a much better shell than bash because it is typed (and, to some of the comments above, due to the strong reflection capabilities of C# it is also very, very easily extensible while maintaining the strong typing). Let’s face it: bash has worse typing than Javascript, and that is saying something!

    And really, you all don’t see how cool it is that you can embed a chunk of code from a strong programming language directly into your script and have it compiled and executed dynamically, so that you get the benefits of scripting, and some of the special syntax that scripting languages provide, combined with the benefits of a strong programming language, seamlessly, in one package? Bash can do some of that for sure, but it is beautiful and seamless in powershell.

    You might not like Windows, and it is certainly not without flaws, but like I said to Eric, even Dante escaped the fiery pit of hell with some useful actionable information.

  80. @Jessica compiled code for scripting is really a bad idea: if you lose the source code, because you thought it was a one-off or whatever, you cannot modify the executable anymore. I never understood why people in the C# or Java world write compiled code for things that are not systems programming or suchlike, but are merely business logic scripting. Even in open source, Jörg Janke of Compiere (the first open source ERP product) writes business logic in Java. WTF? Any time there is a small change in logic, recompile the client?

    And why should people who write business logic – who are typically not very technical – care about stuff like classes, types and inheritance? Even for technical people: whenever I have my accountant hat on, my programmer hat is only halfway on, so I don’t really want to think too deeply about technical matters while writing business logic. Even the game developers got smarter than that and separate the compiled engine from the AI scripts, which are interpreted and usually lightweight, duck- or weakly-typed, etc. Because even people who have no problem with explicitly typed compiled code just don’t want to deal with all that when they write AI logic or suchlike.

    InfoPath is meant for forms, not reports, as far as I know. I haven’t checked the mail-from-Excel path, but if it exists, it must mean you preformat a sheet first and pull in the data later, instead of putting the formatting on top of the data.

    I have investigated ideas like that and it ended up messy. This is not an Excel problem but rather a basic problem of tabular spreadsheet logic itself: if you don’t know at formatting time how many lines you will have, you must put stuff like totals in the top row, not the bottom row. So the proverbial “bottom line” becomes the top line. Not good.

    Even if you compromise – preformat most things except totals and add only the totals programmatically – you have the problem of not knowing the row number of the totaling row, so calculations with totals (the executive summary page) cannot be preformatted anyway. This all suggests to me that it is better to generate the formatting after the data is read, and not preformat anything.
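
    That last point is easy to show in code. A sketch with Python and the openpyxl library (the data is made up): write the data first, then compute where the total row lands, so the bottom line really is the bottom line.

        from openpyxl import Workbook

        rows = [("widgets", 120), ("gadgets", 80), ("gizmos", 45)]

        wb = Workbook()
        ws = wb.active
        ws.append(["Item", "Amount"])
        for row in rows:
            ws.append(row)

        # Only now do we know where the totals go: header row + data + 1.
        total_row = len(rows) + 2
        ws.cell(row=total_row, column=1, value="Total")
        ws.cell(row=total_row, column=2, value="=SUM(B2:B%d)" % (total_row - 1))
        wb.save("report.xlsx")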

  81. @Jessica Boxer “…and that PowerShell is a much better shell than bash because it is typed…”

    It seems to me this is a misnomer. I don’t see PowerShell as a shell any more than I see Python as a shell. (i.e. Just because something has a REPL that I can type into doesn’t make it a shell). Bash is a shell and so is cmd.exe.

    Now PowerShell seems like a worthy replacement for VBS. But I can scarcely imagine anyone actually using it as their go-to tool to do quick admin tasks. I’ve never seen any demo or tutorial that really showed it in this light.

  82. Before PS, I must imagine most firms did it with VB. Now that is something I don’t even want to know about *shudders*

    This was doable in Windows Scripting Host, or from a macro inside Excel itself. And some shops generate Excel reports from Python scripts on Linux boxen. And even Visual Basic wouldn’t be that bad — it got its terrible reputation precisely because it is a tool aimed at non-developer business users. As soon as you break out of the “automating simple tasks” mold and need to do something at a deeper level, VB falls down and breaks.

    (Hypothesis: Microsoft is so beloved by enterprises and so loathed by everyone else in part because it’s run by people like Shenpen, who feel in a position to tell the rest of us what programming really is and which use cases are really important, dismissing all the rest as irrelevant academic piffle. The result is tools that work fine and may even be simpler to use as long as you color within the lines; the moment your needs diverge from the expected norm, disaster!)

    The deeper problem here is — as you said — this sort of repetitive task constitutes the entire job of a “junior controller” or somesuch. The reason why humans are doing this sort of job is because bosses don’t even understand that it can be automated in the first place. So they hire a twenty-year-old intern to do the work, and if he is a clever sort of twenty-year-old he crufts up a script in an hour and spends his work days fucking off and playing World of Warcraft. Easy money.

  83. @Shenpen
    > compiled code for scripting is really a bad idea,

    It can be, which is why embedding the source in the script and compiling it dynamically is such a powerful solution. For example, I wrote a program that lets you double-click on a C# program text with a particular extension, and it compiles it and executes it. Powershell allows you to do precisely this too. I imagine you could do this in bash as well, with a here document piped into the C compiler, but here it is built in to the tools. For example, the tool I wrote doesn’t even write the compiled code to a file; it just compiles it in memory and executes it.
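
    (The same trick transposed to Python, just to make the shape concrete; the embedded snippet is invented:)

        import textwrap

        # Compile a source string in memory and run it; nothing is
        # written to disk.
        source = textwrap.dedent('''
            def greet(name):
                return "Hello, " + name
        ''')

        namespace = {}
        exec(compile(source, "<embedded>", "exec"), namespace)
        print(namespace["greet"]("world"))   # -> Hello, world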

    > InfoPath is meant for forms, not reports, as far as I know.

    It is a workflow engine. It is specifically designed for the type of situation you describe.

  84. @Jeff Read
    > practical extraction and reporting language.

    Indeed, the Internet glue. One has to admire it though, it takes a huge amount of skill to come up with a worse language than Visual Basic, or a language more obfuscated than minified javascript. It is indeed a perfect illustration of why keystroke count is a very poor measure of a programming language.

  85. Now PowerShell seems like a worthy replacement for VBS. But I can scarcely imagine anyone actually using it as their go-to tool to do quick admin tasks. I’ve never seen any demo or tutorial that really showed it in this light.

    I have — but it was depressing. I attended a meetup on Microsoft’s campus for Windows Server admins that had an intro to PowerShell. These guys make more money than I do as Windows Server admins and consultants, and they all seemed a bit out of it. Shell scripting seemed a new revelation to them, a boon from the gods in Redmond. I think some of them had a hard time understanding what variables were.

    And really, you all don’t see how cool it is that you can embed a chunk of code from a strong programming language directly into your script, have it compiled and executed dynamically, so that you get the benefits of scripting, and some of the special syntax that script language provide combined with the benefits of a strong programming language seamlessly, in one package? Bash can do some of that for sure, but it is beautiful and seamless in powershell.

    Seriously, Jessica? Seriously? Scsh is beautiful and seamless; embedding entire C# programs in-situ into a shell script is a messy hack, and nothing any Unix admin who’s written awk, perl, or python scripts inside a shell script to munge data isn’t already familiar with. It still requires you to be familiar with two programming languages, which is too much for a junior Microsoft admin’s poor little brain to cope with. If you’re trying to upsell the benefits of PowerShell as an easier and more powerful shell, showcasing the same sorts of hacks Unix admins have been using for decades as “beautiful”, “seamless”, and superior is not the way to do it.

  86. @Jeff Read
    > Shell scripting seemed a new revelation to them,

    Hmmh, I know lots of sysadmins who do excellent work with PS and make effective, maintainable scripts; some even write unit tests with high code coverage. There is this idea that glue stuff is not real software and isn’t subject to the same rules as all other software development, which is why most glue stuff is flaky and broken, and also why people put up with things like perl.
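
    (To make the “same rules” point concrete, here is a made-up Python example of unit-testing a tiny piece of glue logic; the function and cases are invented:)

        import unittest

        def parse_size(text):
            # Glue logic: turn "4K" or "2M" into a byte count.
            units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
            if text and text[-1] in units:
                return int(text[:-1]) * units[text[-1]]
            return int(text)

        class ParseSizeTest(unittest.TestCase):
            def test_plain_number(self):
                self.assertEqual(parse_size("512"), 512)

            def test_suffix(self):
                self.assertEqual(parse_size("4K"), 4096)

        if __name__ == "__main__":
            unittest.main()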

    > Scsh is beautiful and seamless;

    I am unfamiliar with this tool. I’ll check it out.

    > embedding entire C# programs in-situ into a shell script is a messy hack,

    Being able to write functions within a program in the suitable tool is hardly a hack. I’m afraid you are stuck in a “managed software” gestalt.

    > showcasing the same sorts of hacks Unix admins have been using for decades as “beautiful”,

    Sorry Jeff, big fan and all, but really, if you think programming in awk is anything like programming in a serious, properly typed, powerful language like C#, then I’d respectfully submit that you are wrong. I’d say to you that one of the scariest things I have heard of is the growing leakage of javascript into the server side of code, in things like Azure and node.js. I really don’t get this idea that giving less information to the compiler (like types, for example) is somehow a good thing. In fact it is part of the thing I mentioned above: the idea that “quick and dirty” software is somehow not subject to the same challenges as regular software, or the idea that this piece of “glue code” won’t have a painful, maintenance-heavy lifetime of decades.

  87. @Robert:

    > > 4. [Pandoc](http://johnmacfarlane.net/pandoc/)

    > This is an example of what’s wrong with Markdown, not what’s
    > right: {snip}

    Note, for clarity, I was referring to Pandoc being a reason for
    using Markdown. :) But for the record, I think the Markdown syntax
    for links is great:

    1. The part important to the reader comes first

    2. the part in square brackets looks a bit like a button. It
    looks “linky”.

    3. The url part *is* parenthetical:

    “[important to reader](and also, here’s the link)”

    and so, it’s sensibly put in parentheses. :)

    BTW, your notes about parsing problems with urls and so forth
    are largely solved by Pandoc. These days, I think of Pandoc as the
    “standard Markdown parser”, if it can be said there is such a thing.

  88. @Jessica, my problem with powershell has always been exactly what you have touted. I don’t want a stream of objects with types, I want text. Partly, this is because I have written several lexers and parsers, so I understand how simple and powerful text streaming is. Having written things in MongoDB’s aggregation pipeline as well, I prefer text to object streams.

    Really, it comes down to having a required format for input and output, or simply accepting bits. For simple things the overhead is annoying, and for complex things it is too verbose, too generic, and too limiting.

    Finally, reading a text filter program is simpler and easier to understand than an object filter.
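
    A sketch of why the text model stays comfortable, in Python (the field layout is invented): the filter knows nothing about its neighbors except that they speak lines of text.

        import sys

        # Read lines on stdin, keep the ones whose third field is ERROR,
        # and write them to stdout. Composes with anything that speaks text.
        for line in sys.stdin:
            fields = line.split()
            if len(fields) >= 3 and fields[2] == "ERROR":
                sys.stdout.write(line)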

    That deals with the differences between bash and powershell, but the other advantage bash has is that all the unix tools speak the same language, and it is assumed that new tools will as well. On windows, many applications live in their own bubble. There is no way to get useful output into powershell at all, so powershell languishes in a “when I need to automate only a Microsoft utility, and not talk to anything else, ever” space. The whole point of a glue language is to stick different things together, and powershell fails.

  89. Sorry Jeff, big fan and all, but really, if you think programming in awk is anything like programming in a serious, properly typed, powerful language like C#, then I’d respectfully submit that you are wrong.

    I didn’t say that at all. I realize that they’re quite different. Awk is designed to work with shells to enable solutions to be pulled together quickly; C# is designed as a standalone programming language. If what you are doing is beyond a shell’s capability and you need to drop to C#, write a fucking C# program, compile it into an .exe or .dll and make it a dependency of your script. If you really expect this code to have a lifetime of decades, it should all be version-controlled anyway, so why not go so far as to separate the concerns the right way instead of stitching everything into a Frankenstein-style monstrosity and calling it beautiful and seamless?

    One of the problems is that C# suffers from what I call the “public static void main” problem: the fact that to get it to do anything at all, you have to have all this syntax and all these concepts like namespaces, classes, and public static methods. In order to do anything, you have to write a lot of code. That script has a solid page of C# right in the middle of it, and all it does is change file permissions in a rather simple and braindead way. Whereas with a tool like awk or perl, what you need is often a one-liner. Python occupies a sort of happy medium between compactness and readability, which is why many hackers favor it and entire admin tool suites are written exclusively in it.

    I’d say to you that one of the scariest things I have heard of is the growing leakage of javascript into the server side of code, in things like Azure and node.js.

    Node is comedy gold — because it’s designed like a Microsoft tool: completely circumscribed in its abilities, optimized for certain use cases and nothing else. So that it’s holy hell to get it to do what you want when what you want isn’t what its creators anticipated. And a lot of new development tools are like that — they’re created by this Y Combinator crowd who think the web is the platform and so don’t know how to Unix, nor do they feel they should ever need to. The entire Unix stack should therefore be reimplemented, poorly, in JS and Ruby.

    In fact it is part of the thing I mentioned above, the idea that “quick and dirty” software is somehow not subject to the same challenges of regular software, or the idea that this piece of “glue code” won’t have a painful, maintenance heavy lifetime of decades.

    Right, because entire, long-ish chunks of C# code embedded directly inside shell scripts are so much easier to maintain. As I said above, if your code really has a lifetime of decades, it should live in a directory of its own under version control and not in a single mashed-up file.

    And if hacky shell scripts are your undoing, tear them out! Better to enter the kingdom of heaven having rewritten them in a sensible language than be thrown with your code as-is into Gehenna, where the bugs die not and the firefighting never ends.

  90. Sigivald said, “Also, isn’t it already not really getting new users?”

    As a data point, I’m a relatively recent Emacs convert (for about 2 years). Mainly through being forced to use Windows for too long, I’ve used Visual Studio, Eclipse and Source Insight, even when developing code that is to be run on an embedded Linux machine.

    I prefer living in Linux, and writing Bash scripts, but it took me a while to wean myself off of IDEs that would navigate easily through the code.

    Eventually I decided to get Windows off my desktop for good, and that meant learning about Emacs, Ctags, CScope and friends. It was quite a learning curve, but worth the effort, because I am more productive and able to cope better with different workflows, unlike those other IDEs, which seem to fail, or at least make life hard, as soon as you stop doing things their way.

    But I would not want to go back, and probably will never have to, because Emacs will work anywhere, even on a text console over ssh. And now that I know what I am doing inside Emacs, I find there is nothing I can’t make it do.

    Lately I have added autocomplete and Flymake to my Emacs config and I don’t think there is a Windows IDE that can give this functionality in the arcane embedded environment I am currently working in.

    I’m glad to see this post and see that Emacs has some life in it yet.

  91. @ Jeff Read It is intended to solve a complex and interesting problem: how to take ownership of files you don’t have read access to.

    Note the use of an embedded C# program defining a class which does the ACL clobbering.

    That complex and interesting problem is entirely solved with “C:\Windows\System32\SetACL.exe” (maybe this one: http://helgeklein.com/setacl/ ), so that powershell script is essentially a wrapper to walk a directory tree and call setacl.exe on each directory. Even allowing for error traps and the like, it’s somewhat remarkable that this takes ~200 LOC. Unless I’ve missed something there, one could do this in between 15 and 25 lines of vbs (and maybe even slightly less in cmd.exe batch), and it would probably be a lot more human-readable, because you don’t have to look through all the boilerplate to see where the real work is done.

    So I think Jay Maynard still has a point there.

  92. Interestingly, recent versions of emacs have been adding more and more IDE-like features, including, yes, project management, source code navigation and smart auto-complete. It’s not unreasonable to expect that these features will eventually also be available in the terminal – indeed, there’s no reason why emacs should not be able to replicate the well-known ‘feel’ of MS-DOG text-mode environments for, e.g., BASIC, Pascal and C/C++, some of which have open-source re-implementations already (RHIDE, FreePascal).

  93. > Emacs will work anywhere, even on a text console over ssh

    Instead of using Emacs over SSH (in text console or via X Window forwarding) I use TRAMP – transparent remote access (via ssh / scp-ing files back and forth, and editing them locally).
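
    (For anyone who hasn’t tried it, a typical TRAMP invocation looks like this – the host and path here are made up:

        C-x C-f /ssh:user@example.com:/var/log/syslog

    Emacs then edits the remote file with your local configuration and keybindings, shuttling the contents over ssh/scp behind the scenes.)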

  94. @Jeff Read
    > C# is designed as a standalone programming language.

    Regardless, it is still quite as expressive and functional as a scripting language. Certainly a “program” has about half a dozen lines of boilerplate, but usually it is more of a class than a program you are building, and FWIW, the tool I built automatically inserts all that boilerplate for you.

    The advantage, as someone else pointed out, of using dynamic compilation is that, as in a script, the source is readily available. For a lot of small problems that is cool, eliminating a lot of infrastructure and boilerplate. A simple, one-package type of deal. I agree that anything significant should be in a source control system, but that doesn’t eliminate the one-package advantage.

    Of course big programs need bigger solutions, but not all problems are big.

    > Right, because entire, long-ish chunks of C# code

    Longish chunks of anything are difficult to maintain. Shortish chunks of C# are easy to maintain, because C# is intrinsically a better language (for many purposes) than most script languages.

    > And if hacky shell scripts are your undoing, tear them out!

    Nonetheless, if our domain of discussion is shell scripts, your point is moot.

  95. Jessica,

    You’re missing the point. I actually don’t think C# is terrible. And I don’t think a dynamically compiled, full-featured programming language is a bad thing for admin tasks — Python is dynamically compiled and full-featured, and lots of admins use it![0]

    The point is — if you’re regularly embedding pages of a more powerful language in your shell scripts to compensate for the deficiencies in your shell script language, then a) claims that your shell scripting language is more powerful than Unix scripting, where this has been the norm for decades — and the languages that are “up a level” from shell are compact while still remaining powerful and readable — are suspect; and b) you really do need to think about how to refactor your code into sensible modules to make it easier to debug and maintain, and you may need to rethink if shell is what you should be coding in at all.

    [0] You’ll actually find agreement from — of all people — Richard Stallman. Back in about 1995 he advocated that people adopt real programming languages over “scripting languages” because while scripting languages were easy to write in and speedy to implement, eventually people would want the power of a real programming language so it makes more sense to implement that from the get-go. He summarized these thoughts in an essay called Why You Should Not Use Tcl. His proposed replacement for Tcl was Scheme, provided in a GNU project library called GEL, which eventually became Guile. I honestly believe that Guile may have been Stallman’s best idea since the GNU project itself.

  96. Oh, come on, Jessica. if you’re going to hammer a language for keystroke count being the wrong measure, why stop at Perl? Get thee to APL! :-)

    Keystroke count is only one measure of a programming language’s usability and maintainability. To be sure, there are plenty of terrible languages, concise and verbose both. There are also plenty of elegant ones.

    The problem with powershell’s verbosity is that it’s there for no good purpose other than to make programmers type more.

    And for the kinds of things scripting languages are good at, typefulness is a hindrance, and a pain in the ass, rather than a help. I don’t care what type something is if I’m scripting a task. I just want the damned thing to work, as easily and as unobtrusively as possible. If the scripting engine can’t handle whatever conversion is needed, then it can bitch at me. Otherwise, just do the damned job.

    (Anyone want to turn FORTH into a full-fledged scripting language? Shouldn’t be that hard…)

  97. For what it’s worth, I don’t think there’s anything at all wrong with C# that a native code compiler wouldn’t fix. .NET needs to be stood up against the same wall as the info browser and shot, preferably with the same bullet to preserve ammunition.

    Anyone who disagrees with me can try getting Mono programs running on an Itanium.

    typefulness is a hindrance, and a pain in the ass, rather than a help. I don’t care what type something is if I’m scripting a task. I just want the damned thing to work, as easily and as unobtrusively as possible.

    This is not a very far-sighted attitude. The whole point of type checking is to prevent your tasks from going wrong in some way. With modern features such as type inference and well-designed ad-hoc polymorphism (using type classes or interfaces), you don’t need to care about your types much – the language support does most of the work. By contrast, type conversion _is_ the sort of thing that’s best done manually.

  99. Jakub Narebski wrote: “Instead of using Emacs over SSH (in text console or via X Window forwarding) I use TRAMP”

    Yes, either can be better, depending on the circumstances. It is this flexibility which makes me confident that however confined I am by resources, I can make Emacs work. Which means I can use the same editor all the time.

    It’s the same with Git: it doesn’t matter if the project uses some other source control, I can generally use Git at my desk for my own purposes.

    Incidentally, I have started using i3 after our host blogged about it. I am detecting a pattern.

    1. >Incidentally, I have started using i3 after our host blogged about it. I am detecting a pattern.

      Please describe the pattern you are detecting, for my entertainment if for no other reason.

  100. @Jay Maynard
    > The problem with powershell’s verbosity is that it’s there for no good purpose

    Verbosity in the right measure leads to clarity. I reject the idea that the core syntax is super verbose, though the object names tend to be descriptive (and consequently long).

    But I consider that an asset. You have to remember that you are talking to a person who considers using the variable name “i” as a loop counter to be a sign of both laziness and poor programming. Give your loop counter a name so that you know what it does and what it means. Really, the days of 80-character-wide terminals are done. Variable names are supposed to convey the semantic content of the data they store.

    Of course excessive verbosity is damaging too. Like most writing, great code is neither obscure nor voluble.

    > (Anyone want to turn FORTH into a full-fledged scripting language? Shouldn’t be that hard…)

    Oh yes, thanks for reminding me, FORTH does indeed manage the almost impossible task of being worse than Javascript.

  101. BTW, I think I have reached my limit of Microsoft advocacy here. I can hear Eric’s gag reaction 800 miles away…

  102. Verbosity in the right measure leads to clarity. (It’s why I avoid some common C idioms, such as “if (!foo)” instead of “if (foo==0)” .) It is possible to be too terse as well as too verbose. The problem is that excessive verbosity leads to obfuscation from sheer density. Powershell suffers from this in spades.

    Those 40-foot-long names, five classes deep, also lead to more errors, from the simple likelihood of typing things incorrectly as well as from programmer fatigue.

    Yes, variable names need to be descriptive. They just don’t need to be descriptive in excruciating detail. Naming a loop counter “i” is lazy (though if it’s a one-off, it’s excusable; I often do bash one-liners of the form “for i in *.cpp; do […]; done”). Naming it “counterForTheCurrentInventoryItem” is perverse, and yet it seems to be exactly what Powershell loves to see.

    As for typefulness in scripting, if you need it to write a script, you’re doing something wrong.

  103. Forth is very easy to abuse into write-only-ness, but it needn’t be write-only. Those who make it write-only, and many others too, need to keep clarity and factoring in mind, as well as simply write less and chuck moore.

  104. And I wanted to address the bit about “the days of 80 character terminals are done” separately.

    I, probably more so than anyone else in this discussion, can be fairly accused of having an 80-column mind. I still use 80-column terminal windows, and not just for programming. Some of that is just habit. However, a lot of it is to maximize my ability to take in information in one glance, instead of having to study over it. ISTR reading somewhere that 80 columns turns out to be the optimal width for taking in information quickly. Wider than that, and you have to start to work at it. I have no trouble believing it; when I do use terminal windows wider than 80 characters, the perceived workload goes up.

    You shouldn’t disdain FORTH. There is much to be learned from an understanding of the principles it fosters. One of them, ubiquitous in the classic FORTH environment and not so much any more in these days of OS-level files of FORTH code, is that if your word (FORTH function) takes up more than a 16×64-character block, the native unit of the standard FORTH text editor, it is too complex and needs to be refactored. (Though the FORTH community didn’t use the word “refactor”; they were doing it long before the term was invented. The good news is that FORTH is not just helpful in that process; it’s built for it, all the way down.) Bigger than that, and you can’t take it in all at once.

    How easily can you take in a 1600-line C++ function with indents stretching out far enough that you run off the edge of a 200-character window? Or, for that matter, Powershell? Programmers these days have forgotten how to write for readability, and thus screw their cow orkers – or, more importantly, their future selves – out of laziness.

    “I have only made this letter longer because I have not had the time to make it shorter.” That goes for programming, too.

  105. @Jay
    > I, probably more so than anyone else in this discussion, can be fairly accused
    > of having an 80-column mind.

    I’m right there with you, Jay. I get twitchy when my code exceeds 80 columns; out to and beyond 120 I better have a damn good reason for it.

    My reason is purely code readability. I scan code in two-dimensional visual blocks, and long lines break my flow something fierce. I’m also able to fit a whole bunch more buffers on the screen without line wrapping.

    I try to keep to this even in Java, but it is a language seemingly designed to scoff at such attempts.

    I have a good developer friend who is the avatar of the exact opposite; newlines are anathema to him, and his C++ code regularly reaches 240+ columns. Mind-boggling.

  106. > I have a good developer friend

    Do you mean that he’s a good friend of yours who is a developer, or a friend who is a good developer? I’d doubt the latter if he’s writing 240 column lines ;)

  107. @Duncan Bayne
    > Do you mean that he’s a good friend of yours who is a developer, or a
    > friend who is a good developer?

    Hah! Good friend, good programmer. Real sharp cookie; fully groks C++ kind of guy. But very lone wolf, never worked on a large project in a medium-to-large team, so he has antisocial code habits and a deep resistance to adaptation.

  108. Though the FORTH community didn’t use the word “refactor”; they were doing it long before the term was invented.

    Actually there’s a lot to suggest that the term comes from the Forth community; Chuck Moore himself used to speak of “factoring” a word into two or more smaller words, at least as far back as the 80s and certainly well before Martin Fowler appropriated the term. The “factor” terminology is most suited to Forth, where the formerly long word is replaced directly by a composition of the smaller words. It also gives its name to Factor, the programming language.

  109. Ooh, thanks for the pointer, Jeff. That’s about what I was thinking in terms of when I suggested FORTH as a scripting language.

    I was somewhat disappointed to see the latest development version dates back to last July, but the blog does seem to be active.

  110. As for typefulness in scripting, if you need it to write a script, you’re doing something wrong.

    I think Jessica’s ideal scripting environment would be something like Mono’s “csharp” command, which fucks the shit and just gives you a C# repl and scripting environment of sorts.

  111. “(Anyone want to turn FORTH into a full-fledged scripting language? Shouldn’t be that hard…)”

    PLEASE!!! Moore invented FORTH so he could control a telescope… He was the LAST person ever to use the language to actually do something useful with it. (The 5,000,000 programmers who learned the language after him spent all their time diddling around with the environment, adding cool words and playing with the results.)

  112. Someone is obviously utterly clueless about the utility of Forth in embedded systems, where I made a living coding Forth for a decade or so. Ever used something shipped by truck or rail? Guess what, many of those shipments were weighed on scales running Forth. Do you eat? Many ag scales run Forth, too.

  113. >Because Lisp doesn’t have implementations that are truly production quality for the kind of work I need to do. The range of library support in Lisps is very weak – batteries are, as Python people like to say, not included. There’s also a persistent problem with them lacking a full set of bindings to ANSI/POSIX C facilities.

    I urge you to look at newLISP. I have found it to be an extremely “batteries included” version of LISP. It could use a larger community, but it has met my needs very well over the years. It has all those things you mentioned as lacking. http://newlisp.org/

  114. More on newLISP:

    It is the only LISP I know of where we have one-liner competitions, like the Perl folk. In fact, if perl were a clear, readable LISP, it would approach newLISP. newLISP anaphoric variables are similar to those of Perl. regexes are built in, natch.

    The documentation for newLISP is very complete. It has beginner tutorials, a comprehensive manual, and all sorts of examples of how to do various practical and fun things with newLISP. The CodePatterns document especially.

    http://www.newlisp.org/index.cgi?Documentation
    http://www.newlisp.org/CodePatterns.html

    newLISP is a LISP designed with a very Unix sensibility. Function names are terse. The format function works the same as printf. The date function works the same as that in C. Many functions are just direct layers over the C library functions. There are functions to ease file IO, but in general, a C programmer can dive right in and not have to learn new string formats, etc. Oh yes, string formats use all the escapes of C strings. Regular LISPs use Common LISP conventions, which are alien to C and Unix.

    The philosophy of newLISP is very pedal-to-the-metal: let you do what you want. It lets you embed assembly code easily. How long has it been since a LISP offered that? Linking in a C library is so easy it is amazing. Example here, a packet sniffer using libpcap:

    http://www.newlisp.org/syntax.cgi?code/sniff.txt

    Here are some of the “batteries included”:

    http://www.newlisp.org/code/modules/

    A few years ago there was no postgres module. I hacked one up in an hour. It really was that easy.

    Check out the Tips and Tricks section too:

    http://www.newlisp.org/index.cgi?Tips_and_Tricks

    Plus, newLISP is small and fast. Very comparable to python.

    newLISP is reliable, and has repeatable run-times due to its unique garbage collector.

    newLISP follows the principle of least surprise, if you are a Unix person. But it doesn’t leave Windows or Java programmers out in the cold. It can even compile to JavaScript these days, so kids can play with it in their browsers. newLISP has excellent Java integration.

    The community is very friendly and responsive; problems get fixed and features implemented amazingly quickly.

    newLISP incorporates concepts and ideas from the entire history and spectrum of computer languages. Don’t be surprised if you see one or two winks and nods to rexx and algol in it.

    So, batteries included LISP? Give newLISP a try. I liked it. You might like it too. I don’t think you’ll ever see an OpenGL tea-pot demo coded in fewer lines of LISP code than the one in newLISP.

    And last? I love how cross-platform it is. I develop on Linux, and my code runs on Windows, Mac, AIX, Solaris, OS/2, and Unix.

  115. @esr “Please describe the pattern you are detecting, for my entertainment if for no other reason.”

    That I seem to end up configuring my working environment the same as you do.

    I’m not just following all your advice, either. I chose Emacs and Git on my own. I tried tiling window managers a long time ago but didn’t get on with them, until I tried i3 at your recommendation. I bought a Happy Hacking keyboard shortly before you started the keyboard page on G+. And I started using Python about the same time you did (I think you influenced me somewhat there).

    I don’t know what it means – possibly we are both discovering the same maximally productive tools. In any case I am inclined to pay attention to other tools you are using that I might not have discovered for myself yet.

  116. @Jeff I actually wondered a decade ago what the Reporting in PERL might mean. I ended up assuming it is the kind of reporting a sysadmin would generate from server logs. Not the accounting kind of reporting, because originally PERL was meant to work from text files, databases were an afterthought, and invoices and inventory entries and suchlike went into databases much earlier than say logs did / do. Anyway the term kept on confusing me for a while.

    Of historical interest: does anyone know the earliest, or one of the earliest, PERL scripts that were actually used for generating some kind of report?

    Your jab about me being a bit too pushy about defining what is relevant and what is not is well deserved. Although I tried to emphasize that I meant only one subset of programming…

    I had the idea a few years ago that programming should be separated into three distinct disciplines, professions, precisely to stop people pushing ideas that work in one field onto other fields where they don’t work so well. I was annoyed by people pushing unit testing and functional transparency on me, because I am used to my inputs being, basically, a 10GB database as a whole, in the sense that any function or unit can look up a setting or a piece of data in any table and behave accordingly. So it was clearly not for my field.

    Anyway I came up with three fields, three professions:

    – Programmers who care mostly about the algorithm itself and its complexity, who really need algorithmic expressivity, are in the computer science / math / academic subset. They need tools like LISP or Haskell.

    – Programmers who care mainly about the technology, be that the computer hardware itself, or the peripherals, or the various kinds of libraries based on them, or various kinds of communication specifications like SMTP or HTML5, are in the field I would call technical programming. They mainly need a batteries-included approach, like Java, C# or Python.

    – Programmers who aren’t actually programmers, in the sense that the majority of their knowledge is domain knowledge of the application area, and the programs they write are more like algorithmically simple scripts where even types are often just a hassle, and whose major competitive advantage is the lack of need for specifications in a given domain (like myself), don’t really have a name, but this is what is commonly called business logic. I would also put into this category those who write the scripts in computer games that the game engine interprets. What we need is tools that are fully programmatic, because nothing else is flexible enough, yet already optimized for the common case, with the database and presentation rather closely tied together (this is debatable; I may be too old-school here). We are the kind of people who loved CA-Clipper, FoxPro, and inversion-of-control environments like Rails. (Not COBOL; I guess that is one generation earlier and somehow different kinds of people. I know some old, 55+ COBOL guys and they are actually closer to the hacker phenomenon than to my domain-oriented subset, at least superficially: they play musical instruments, they never wear ties or jackets, they have big beards, and they have, and are generally interested in, deep technical knowledge… it may just be generational though.)

    (BTW I consider this domain-driven career approach – major in Foo, minor in programming, work on automating Foo, replacing people who do Foo manually – a very safe career choice, recommendable to young people in high-unemployment times, but it can sometimes be boring…)

  117. @Jessica

    >I have heard of is the growing leakage of javascript into the server side of code

    You mean stuff like Wakanda? (I love that; it was clearly designed by and for people with a mindset similar to mine – already optimized for the common case.) The idea is to blur the separation between model, controller and presentation a bit. I know, I know, all the “real programmers” are adamant about separating these with mile-high walls, but consider this: the common case is that whenever I define a record or model for storing, say, book titles for a public library, I will also want to provide a form for entering such a record, and a list form for listing, filtering and searching them. These, although in different layers, go together.

    Sure, the “real programmers” are right that they should be separated enough that the data can also be presented on a smartphone, or to another program that queries it through a web service, all right. But this is not the common case. The common case is the employee sitting down and keying in the data, and this is the presentation that will usually be created first. So it is a good idea if I don’t have to switch between languages and paradigms when I define the model, the logic and the basic, common presentation: the form, the list. That would just slow me down and introduce tiny, annoying mistakes.

    Thankfully we got to the point where we can kill the HTML or the XML, in the sense that it is now possible to define a good basic web-based presentation entirely in JavaScript instead of mixing it with HTML or XML. That is good: one less thing to worry about and get confused by. Now isn’t it great if the data model and the server-side logic are in the same language? I can sit down and define model, logic and basic, common presentation in one fluid go. I guess in large projects different people may care about the model, the server-side logic, and the client-side presentation, but we don’t all work at such big enterprises.
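
    A toy sketch of the idea in Python (everything here is invented): define the model once, and derive both the entry form and the list view from the same definition.

        # The model: field names and types, defined exactly once.
        BOOK_MODEL = [("title", str), ("author", str), ("year", int)]

        def entry_form(model):
            # Derive a data-entry "form" (console prompts) from the model.
            record = {}
            for name, typ in model:
                record[name] = typ(input(name + ": "))
            return record

        def list_view(model, records):
            # Derive the list/report view from the same definition.
            print(" | ".join(name for name, _ in model))
            for record in records:
                print(" | ".join(str(record[name]) for name, _ in model))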

  118. @Jeff

    >The reason why humans are doing this sort of job is because bosses don’t even understand that it can be automated in the first place.

    I ended up giving up trying to explain to mine the difference between configuring, setting up and making use of an existing feature vs. developing a new one. When asked “Can you set up a…?”, I no longer try to explain that there is currently no such thing to set up but that I can whip one up in a few days, so basically yes. Previously I used to say “requires some development”, but it turned out he took that as a “no”, because for him “development” was black magic done only by weird wizards in far-away places; it should take months or years and end in an official release with a version number, not a quick 50-line customization by a rather normal office guy wearing a sports coat, holding a business school degree and practicing speaking at Toastmasters. Because the kind of programming projects he oversees, like satellite radio stuff, are like that.

    Oh BTW have I told you how much I hate the captains of the ERP industry who wrap everything into the vague mystique of “industrial best practices” and “business process knowledge” when in reality it is rarely more complicated than read some shit, add it up, and either output the sum or tell the user “no, you may not do that” ? One day all this fraud will come crashing down and I will be right under it.

  119. I’d rather code in Clojure than newLISP, and I’m allerjic to Clojure.

    newLISP manages to take the quaintly retro feel of Emacs Lisp and add some regressions on top of that. For example, its “unique garbage collector” basically consists of freeing objects that go out of scope much like C++. But the downside is, you can never, ever, take a reference to an object. Ever. Some objects, like lists and strings, may internally be treated as references by the runtime, but these are always copied when you enter a user function. Lists cannot have cycles, and cannot share a tail. This is to prevent aliased pointers in the runtime so that a variable which goes out of scope is guaranteed to not have aliases floating around elsewhere. But the net effect of this is to have overly restrictive, slow memory management. At least C++ lets you manage object lifetimes as you see fit. And at least Ada, while imposing constraints on aliasing, lets you explicitly mark a pointer as aliased.

    Add to that no lexical scope, therefore no closures and no true lambdas, and you have something which should be properly called notreallyLISP.

    [text]Also, this is a string in newLISP. I’m not even kidding.[/text]

  120. @Jay Maynard
    > You shouldn’t disdain FORTH.

    I don’t disdain it, and I don’t think that it has nothing to teach us. I just think that its gestalt leads to a level of obscurity even worse than the dreadful javascript.

    BTW, just as an example of what I am talking about (wrt Javascript), I was writing this thing that put a table in a pager with next and prev buttons. There was a variable declared as “var pageNum = 1” that stored the page currently displayed, and a Next button to get the next page. (As you see, I am on the *cutting edge* here.)

    So the Next button onclick says something like “pageData = getPageData(pageNum+1);” which makes an ajax call to get the data for the next page. However, I start up the page, click Next, and the server keeps getting a request for page 11. It took me a while to track it down. Anyone wanna guess why? It is the essence of javascript crappiness.

    > is that if your word (FORTH function) takes up more than a 16×64-character block,… it is too complex and needs to be refactored.

    To me this is an interesting argument. It is kind of like saying that Hitler reduced the cancer rate amongst European Jews, which is undoubtedly true. Finding a small benefit from a gross restriction is not a net benefit.

    For sure, keeping functions small has benefits, but it also has horrible disadvantages, and 64 is far too small a line length, and 16 too few lines. It leads to dense, unreadable code. In terms of reading code, I think it is important to think about how code is really read. Generally speaking, when you are trying to understand a function, you don’t read every character. First you want to understand the overall control-flow shape, and then you focus in on the individual part that you are interested in. So really, for the first part you are just looking at the first few characters of every line (and the whitespace in particular), and then, when you are microscoping, you look at the line as a whole, and so have much higher capacity for longer lines.

    In regards to refactoring: based on the etymology, I suppose what you describe in Forth is strictly “refactoring”, but in the context of a strict agile model, refactoring absolutely demands high-coverage unit testing. You need to be able to demonstrate that your refactor did not change the functionality of the software being refactored, and you need to be able to show that it has semantic equivalence even in the various boundary cases and other special cases. So “reorganize the code, run the program, it still works” is not the same as refactoring at all.

    And, I might add, to get high coverage unit tests, you need a proper testing architecture in place, including a facility to do various IoC type tests for proper isolation. Something that is shockingly hard in languages like Forth, C and C++.

    > Programmers these days have forgotten how to write for readability, …out of laziness.

    I do agree that readability is nowhere near high enough on the measure of code quality. And I do agree that many programmers write almost impenetrable code. But I don’t think it is worse today at all, it might be better. When I look at old Linux Kernel code it is horrible with p’s and i’s and all sorts of obscurity everywhere. And I’m not sure it comes from laziness either. I think that is part of it, but I also think it comes from an infused culture, a belief that dense is better.

    Actually I am surprised I agree with you as much as I do Jay, I thought we’d be at opposite ends on this one. I think clarity should be close to the top of the goals when coding, and I think that there is a happy medium between too concise and too verbose.

    In Visual Studio there is a refactor called “extract method” which pulls the selected lines of code out and puts them in a separate method (deriving the parameters that need to be passed from the context). Refactors like this are awesome for simplifying code.
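
    By way of illustration, here is the same idea transposed to Python, as a hand-rolled before/after sketch (hypothetical names, not tool output):

    # Before: one function mixes the arithmetic with the formatting.
    def report_before(orders):
        total = 0
        for order in orders:
            total += order["price"] * order["quantity"]
        return "Total: %.2f" % total

    # After "extract method": the summation becomes a named, testable unit,
    # with its parameter (orders) derived from the surrounding context.
    def order_total(orders):
        return sum(o["price"] * o["quantity"] for o in orders)

    def report_after(orders):
        return "Total: %.2f" % order_total(orders)

    print(report_after([{"price": 2.50, "quantity": 4}]))  # Total: 10.00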

    > “I have only made this letter longer because I have not had the time to make it shorter.”

    Indeed, but only make it shorter so long as doing so does not make it more obscure.

  121. Don’t discount Markdown just because there are problems.

    asciidoc never got much traction for whatever reason, but markdown has. And you *want* those people to help write your docs, etc.

    If you discount Markdown you risk losing a chunk of your potential contributors.

  122. @jsk

    >I try to keep to this even in Java, but it is a language seemingly designed to scoff at such attempts.

    The language or the culture? IMHO the culture of Java is that if you have an object-relational mapper you are going to call it ObjectRelationalMapper; the culture of Python is that you are going to call it ORM.orm, because that culture does not assume other people are stupid. (The latter is not a hypothetical example but taken from an earlier version of Fabien Pinckaers’ TinyERP / OpenERP.)

    A culture of verbosity often bothers me more than actual language issues, as it makes it harder to figure out what other programmers actually mean to do.

  123. Jeff Read, when you come up with a Scheme interpreter that is as fun to use and batteries included as newLISP, let us know. Half the things you said about newLISP are wrong; the other half are irrelevant to getting in and getting the job done.

  124. For speed, newLISP is very comparable to Python; it even beats Python in quite a few benchmarks.

  125. (Not COBOL, I guess that is one generation earlier and somehow different kinds of people – I know some old, 55+ COBOL guys and they are actually closer to the hacker phenomenon than to my domain-oriented subset, at least superficially: they play musical instruments, they never wear ties or jackets, have big beards, and have, and are generally interested in, deep technical knowledge… may just be generational though.)

    Ah, COBOL. The great promise of COBOL was that it was supposed to be just what you asked for: an English-like language for nontechnical businesspeople to write business logic in, “optimized for the common case”. And yet — here we are now, when only wizards with arcane, long-forgotten skills and the un-businessmanlike personalities to match will touch it, and only then for princely sums. So much for that dream, eh?

    Thing number one is that a tool which is optimized for a perceived common case is not optimized for all other cases and may, in fact, be pessimal for an unanticipated case.

    Thing number two is, there’s no such thing as a universal “common case”. People seated at PCs punching data into keyboards may be what you’re optimizing for now, but in a few years they may be eclipsed by something else. That public library may one day attempt to OCR the ISBN of each book and look up its title and author, to save on HR costs because the economy is shit, there’s a Republican or Conservative in power and funding for public knowledge resources is being slashed in favor of national security and hunter-killer drones.

    So when you’ve invested in a database that is optimized for one case and the requirements change, you have two options: attempt to migrate the data into something else, or extend the tools you have for the new case. And the latter option may strain the tools well past the breaking point. Suddenly, a nontechnical businessman is out of his league with this tool that was supposedly meant for him, and an expert who knows the tool’s ins and outs, sometimes down to the bit level, needs to be called in to make the thing work. That’s where COBOL is today. I’ve seen the same thing with Microsoft Access — a friend of mine made hundreds of thousands of dollars customizing Access — this tool supposedly optimized for the common case, again, of nontechnical small business users needing to build simple apps for their LOB — to do some powerful, arcane, insane shit. The requirements blossomed into something Access wasn’t built to handle; a specialist in its internals was therefore needed in order to make it work.

    So much as you would like to put mile-high walls around each type of programmer, that’s not the way it works in the real world. Really, except for maybe demoscene coderz and Oleg Kiselyov, we are all intermediaries between human needs and machine capabilities. That means we need to be intimately familiar with both — sometimes more than we’d like to be. And in order to be able to produce adaptable systems, we need adaptable tools. Which is why systems like Unix and Emacs tend to “win” over the long haul — they are not built for a common case, but let the end users determine their own common cases and fiddle with the dials to turn them into just the system they need.

    And that’s why I harp about separation of concerns: once data is in a database it’s likely to be there for the long haul. Better that it be easily retrofitted with different interfaces than commit to one interface and find itself obsolete as business requirements change. Fun fact: I once prototyped a tool, written in Scala, that could automatically generate forms from an XML schema. The idea was to turn a crank and generate both the SQL DDL and the GUI layout for a particular record type, in such a way that you could keep the database and presentation layers separate while retaining some synchrony between the two. It was also supposed to be able to generate HTML and Swing forms from the same XML spec.
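
    A toy sketch of the idea in Python (hypothetical schema format; the actual prototype was Scala plus XML, as described): one spec, two artifacts.

    SCHEMA = {"table": "book", "fields": [("title", "text"), ("isbn", "text")]}

    def to_ddl(schema):
        # Artifact #1: the SQL DDL.
        cols = ", ".join("%s %s" % (name, typ.upper()) for name, typ in schema["fields"])
        return "CREATE TABLE %s (%s);" % (schema["table"], cols)

    def to_form(schema):
        # Artifact #2: an HTML entry form, kept in sync by construction.
        rows = "\n".join('  <label>%s <input name="%s"></label>' % (name, name)
                         for name, _ in schema["fields"])
        return '<form action="/%s" method="post">\n%s\n</form>' % (schema["table"], rows)

    print(to_ddl(SCHEMA))
    print(to_form(SCHEMA))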

  126. Jeff Read, when you come up with a Scheme interpreter that is as fun to use and batteries included as newLISP, let us know. Half the things you said about newLISP are wrong; the other half are irrelevant to getting in and getting the job done.

    Were you paying attention? Upthread I mentioned guile, which has about Python speed and comes with enough bindings to make it a practical scripting tool. I’ve used it to admin systems, connect to network services and maintain web pages. Racket can achieve even faster speeds and comes with even more “batteries included”. And then there’s Gambit, which can have C extensions bolted onto it fairly easily and comes within a base-2 order of magnitude of C’s speed for compiled Scheme code.

    “Fun to use” is a different matter; personally I find Scheme a lot more fun to use than a language which lacks true closures and adopts bug-prone dynamic-scoping semantics; since Eric is an old Lisp hand, I assume he’d feel likewise.

  127. @Shenpen
    > because that culture does not assume other people are stupid.

    There is a huge difference between stupidity and ignorance. Not knowing something is not a deficit; nobody knows everything. Of course, for common concepts like an ORM it is reasonable to assume people will know the jargon, but most objects within a system are domain-level, and not knowing some obscure abbreviation is not stupidity, it is just not knowing everything.

    The real truth is that using decent variable names and clear code is primarily of benefit to the original writer. Because it is the original writer who reads that code the most, and maintains it the most, and finds, three months later, that the things they knew about the code (the meaning of obscure variable names) seem to have miraculously dropped out of their brain.

    So not stupid, just not knowing everything.

  128. Jeff Read says:

    ” basically consists of freeing objects that go out of scope much like C++.”

    not true, it is a little bit more involved, read here:
    http://www.newlisp.org/MemoryManagement.html

    also says:
    “you can never, ever, take a reference to an object. Ever. Some objects, like lists and strings, may internally be treated as references by the runtime, but these are always copied when you enter a user function.”

    not true, read here:
    http://www.newlisp.org/downloads/newlisp_manual.html#pass_big

    also says:
    “Add to that no lexical scope, therefore no closures and no true lambdas, and you have something which should be properly called notreallyLISP.”

    There are lexically separated namespaces and mechanisms to write lexically separated functions with static variables and replace closures. Read here:
    http://www.newlisp.org/index.cgi?page=Closures

    1. >not true, it is a little bit more involved

      Yes. I think ORO is the most interesting single feature of newLisp, by far. It’s a very clever way to get unlimited-extent types without the problems of garbage collection. I get that Lisp purists hate it, but I’ve been a Lisp-head long enough that I believe I’m allowed to laugh at purism.

      Reading the stuff on memory-management makes me suspect that compilation of newLisp to native code ought to work rather well. Lutz, is anyone trying to implement this?

  129. About “Batteries included” in newLISP:

    Networking, get/put/post/delete of web pages, XML processing, JSON processing, parallel and distributed processing are all built into the newLISP executable. There are no external modules to import. These capabilities are built in.

  130. > Don’t discount Markdown just because there are problems.
    >
    > asciidoc never got much traction for whatever reason, but markdown has. And you *want* those people to help write your docs, etc.

    The problem with AsciiDoc is that instead of one program like Markdown (originally Perl) it uses a set of tools, which is a bit of a PITA to install (I know because Git uses AsciiDoc). OTOH this toolchain-based design allows it to have manpage and info as target output formats.

    Markdown dialect proliferation is not much of a problem nowadays when most use GitHub-flavored Markdown dialect.

    Anyway, lightweight markup languages, be they Markdown, AsciiDoc, reStructuredText or other more esoteric ones, are quite similar, and have the advantage that you don’t really need to know markup to contribute.

  131. Jessica Boxer asked: It took me a while to track it down. Anyone wanna guess why? It is the essence of javascript crappiness.

    And this is why I like strongly-typed languages, and always have enforcement turned on in my VB.NET code.

    (If I accidentally write code that tries to add an integer to a string, I want a compile error that tells me I’m Doing It Wrong, not a guess as to what I might want and silent conversion to string concatenation…)
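
    A minimal Python illustration of the contrast (assuming, as the silent-concatenation hint suggests, that Jessica’s pageNum had ended up holding the string “1”):

    page_num = "1"          # e.g. a value scraped from markup, hence a string
    # In JavaScript, "1" + 1 silently coerces to the string "11",
    # which is presumably how the pager asked the server for page 11.
    try:
        next_page = page_num + 1
    except TypeError as err:
        print("refusing to guess:", err)
    next_page = int(page_num) + 1   # the fix: convert explicitly
    print(next_page)                # 2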

  132. @ESR – It occurs to me that the one combination of people who might actually be able to force a canonical markdown spec is ESR + RMS.

    Think about it: shoot texinfo in the head and reunite a divided community all in one master stroke.

    1. >@ESR – It occurs to me that the one combination of people who might actually be able to force a canonical markdown spec is ESR + RMS.

      Possibly, but screw that. I think asciidoc is better on all levels, and with buy-in by the Linux kernel and git nobody can claim it isn’t meeting the test of serious use.

  133. On the plain-text-ish documentation formats: I personally find Markdown far too limited for far too many tasks. It seems designed around outputting a very small subset of HTML and nothing more; you’re even expected to break out into HTML for certain complex constructs. I struggle every time I want to use fancy Markdown stuff in, for example, GitHub comments. I’ve still never figured out how to paste terminal output in it nicely. In AsciiDoc I just wrap it in ---- listing blocks and I’m good to go.

    AsciiDoc itself isn’t perfect, but I find it far more approachable. The claims about it being “markup free” are, I believe, a lie, but it’s markup that blends in with how humans tend to write plain-text documents. It provides an almost perfect translation of all the DocBook constructs, without requiring the use of XML, which is a huge benefit, in my opinion. The normal asciidoc program (which just needs Python) outputs HTML directly and skips the DocBook step, but there is a2x to go through DocBook as well (or to output DocBook *.xml files for your later processing, if you wish). It’s not tied to HTML any more than LaTeX is tied to PDF; it’s very flexible.

    AsciiDoc also has the peculiar feature that you can manipulate its syntax, especially to retrofit old text files into AsciiDoc output. This is somewhat similar to the Markdown dialect-proliferation disadvantage ESR had mentioned, although from real-world usage it seems that most projects using it just modify their existing text-file documentation to match AsciiDoc’s defaults instead of changing how AsciiDoc works.

  134. > (If I accidentally write code that tries to add an integer to a string, I want a compile error that tells me I’m Doing It Wrong, not a guess as to what I might want and silent conversion to string concatenation…)

    Is the problem one of adding an integer to a string and not giving an error, or is it one of using the same operator symbol for both addition and concatenation?

  135. Vakkotaur on 2014-01-21 at 01:18:32 said: Someone is obviously utterly clueless about the utility of Forth in embedded systems, where I made a living coding Forth for a decade or so.

    There was an entire subculture of medical administration software in FORTH. Dozens of hospitals ran on it, and there were companies whose business was providing related tools and training.

  136. Yes. I think ORO is the most interesting single feature of newLisp, by far. It’s a very clever way to get unlimited-extent types without the problems of garbage collection.

    Many of the problems of garbage collection can be ameliorated by giving the programmer explicit control over the garbage collector. As an example, there have been high-speed iOS action games written in Gambit-C; GC overhead is kept to a minimum simply by forcing a collection on each frame by calling (##gc) in the main game loop. The added time cost of collecting garbage each frame is offset by the fact that not enough garbage accumulates to cause gc pauses later on down the line.
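
    The same trick transposes to other runtimes. A rough Python analogue, purely illustrative (Python’s gc module controls only the cyclic collector; reference counting keeps running underneath):

    import gc
    import time

    gc.disable()            # no surprise cycle-collection pauses mid-frame
    for frame in range(3):  # stand-in for a game's main loop
        # ... simulate and render the frame here ...
        gc.collect()        # pay the collection cost at a moment we choose
        time.sleep(0.016)   # roughly a 60fps frame budget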

    This is a hack around the tradeoffs of the traditional Lisp memory model, but imho it sure beats a hack around the fact that your language doesn’t have sane scoping. Algol’s lexical scoping was such a huge win that it became a sort of Schelling point: other languages gravitated toward it (or as much of it as they could support, for example in the case of C). And the developers of the first Lisp to break with tradition and implement lexical scoping — that’ll be Scheme — paid tribute by humorously naming Scheme’s defining document after Algol’s.

    Such languages tend to either not last long, not attract broad developer communities, or morph into something completely different. As an example of the last category, consider Python: originally, Guido decided that two scopes ought to be enough for everybody. Probably for reasons similar to why newLISP has its, erm, unique scoping rules: it’s easy to implement, and it covers the common case. But as people actually started to use Python for serious work, the limitations of the old two-tiered scoping regime became apparent, and Guido relented and implemented something like full lexical scoping in Python. I say “something like” because the semantics of assignments and the “global” keyword do not interact with the scopes in the way you’d expect. But it’s mostly the right thing.
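
    A quick sketch of that wrinkle (nested scopes arrived with PEP 227 in Python 2.1, the nonlocal escape hatch with PEP 3104 in Python 3):

    def counter():
        count = 0
        def bump():
            # Reading count from the enclosing scope just works, but a bare
            # "count += 1" would raise UnboundLocalError, because assignment
            # makes a name local by default. Hence:
            nonlocal count
            count += 1
            return count
        return bump

    bump = counter()
    print(bump(), bump())  # 1 2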

    And no, contrary to Lutz’s responses, pass-by-global-name-in-a-namespace-you-created-to-hack-around-the-fact-that-your-language-lacks-proper-references is not an adequate substitute for proper references. References are anonymous; you are supposed to be able to pass them around without affecting the global namespace — any global namespace.

    Nor are expander hacks an adequate substitute for true closures, the power of which lies in the fact that the locations referenced by the closed-over variables remain “live” and accessible by the closure, while being hidden from anything outside the closure (except inasmuch as the closure permits access to them). As an example of the kind of power and flexibility this permits, I submit to you Jon Rees’s capability-based security kernel where the capabilities — are lambdas!

  137. Such languages tend to either not last long, not attract broad developer communities, or morph into something completely different.

    By “such languages” here I mean “languages that don’t have sane scoping”, above.

  138. @Jeff but obviously optimization for the common case should be built on top of a properly generalized and open layer, so later optimizations can be added to it when the times change. What I meant was basically: do it already, don’t just ship the general, open layer and consider the job done.

    Microsofty tools of the Access type tend not to optimize for the common case but are _built_ for that exclusively. That is a different story.

    COBOL was never, even in theory or direction, close to what I meant. I did not mean enabling programming “in English” for clueless people, but rather not having clueful people waste their time writing boilerplate for common things. So basically: have the stuff that is a general requirement already built in, or, in other words, never have to tell a client that something that is really a standard in their industry needs to be specified as a requirement, because that makes you come across as clueless in the industry. My example was functions like last_day_of_last_month() or last_day_of_month(-1) or whatever. Everybody with half a brain can figure out how to write one, that’s not the problem, but if we know it is a general requirement it looks really stupid when doing stuff like that has to be included in, say, a report-development quote.
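
    The function itself is trivial, which is exactly the point: it should already be on the shelf. A minimal sketch in Python:

    import datetime

    def last_day_of_last_month(today=None):
        # The day before the first of this month is, by definition,
        # the last day of the previous month.
        today = today or datetime.date.today()
        return today.replace(day=1) - datetime.timedelta(days=1)

    print(last_day_of_last_month(datetime.date(2014, 1, 21)))  # 2013-12-31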

    I mean, for example, I talked with the Django folks on IRC and they did not get that I am evaluating their auto-admin as the GUI of the application itself, not as a background admin for a completely custom-built web frontend. They live in a world of custom-built website-like web apps, and have no clue that the same kind of GUI they meant as a background admin only could be a good candidate for the whole GUI of a business app, if they would add a few more things. Apparently in the open-source web-programming subculture everybody wants to build the next Reddit-like website-app and not the next payroll-type app that merely has its forms displayed inside a browser for ease of installation… they did not even understand what I meant.

    A classic example of building exclusively for the common case instead of optimizing for it is that we can’t extend the development environment of Dynamics NAV; they only do it when they themselves want to make a new feature with it. It is either funny or tragic, depending on your mood, that you can generally point out that this built-in function was added because they wanted to make that feature with it, so now I can make this feature with it. This is not optimization for the common case but building for the common case only.

    I am not that convinced about the long life of data in a database. First, because I am used to shitty database engines and shitty intermediate layers that make large DBs slow; the ugly but effective solution is having detailed transactions in the current year only, with past years archived and kind of zipped into some OLAP-ish form for analytical comparison only. Second, because people tend to throw out their software and then have to migrate their data into a new format, which often means being picky about what to migrate and what to archive and forget. Third, because I saw so many companies go out of business…

    There is nothing inherently wrong with the Unix or Emacs approach when and if the optimization is added at the appropriate time, and the job is not considered done until that is added. The problem is merely considering it finished before that is done. Negative example: pretty much every distro before Shuttleworth came around. Positive example: the distro specially tailored for music-makers. Positive example: whoever had the idea to have Emacs automatically start in cua-mode when installed on Windows. Advanced users will turn that off, but it does not scare away people so much when they try it first…

  139. > I think asciidoc is better on all levels, and with buy-in by the Linux kernel and git nobody can claim it isn’t meeting the test of serious use.

    Err… I think that asciidoc was chosen as a format for git documentation not because it is worse or better than Markdown, but because it provides out of the box (if via fragile tool chain) manpage output.

    1. >Err… I think that asciidoc was chosen as a format for git documentation not because it is worse or better than Markdown, but because it provides out of the box (if via fragile tool chain) manpage output.

      You say this like it’s some sort of counterargument. I use that feature heavily, myself – I no longer write man pages in [nt]roff, ever.

  140. And providing out-of-the-box manpage output is a feature compared to info, which for the longest time totally ignored that manpages are the standard Unix documentation format.

  141. And providing out-of-the-box manpage output is a feature compared to info, which for the longest time totally ignored that manpages are the standard Unix documentation format.

    Indeed. In BSD, ‘man <command>’ is likely to provide useful, detailed documentation, and ‘man -k <keyword>’ will get you pointed in the right direction if you are not sure of the exact page. How infuriating to go back to some Linux distros (looking at you, Debian), where it will produce a stub page that tells you to look at the info page. Didn’t install the *-doc package? Too bad.

  142. @Rob Fisher
    AFAIK, our host continues to use a Unicomp keyboard, not the Happy Hacking Keyboard. All the HHKs use either rubber dome or a “Topre” rubber dome/tactile hybrid. Nothing beats buckling spring, not even the various Cherry switches, IMHO.

    BTW, I also tried i3 and I’ve decided that I simply can’t stand tiling window managers. Also I’ve been hacking in Python since before I read Why Python?, written by our host, but probably not much before. I think late 1999/early 2000 was about the time Python started becoming really, really useful, IIRC.

    1. >AFAIK, our host continues to use a Unicomp keyboard, not the Happy Hacking Keyboard. All the HHKs use either rubber dome or a “Topre” rubber dome/tactile hybrid. Nothing beats buckling spring, not even the various Cherry switches, IMHO.

      Exactly. The layout and compactness of the Happy Hacking keyboard are nice, but I won’t give up my crunchy buckling springs for it.

  143. ESR says:
    “Reading the stuff on memory-management makes me suspect that compilation of newLisp to native code ought to work rather well. Lutz, is anyone trying to implement this?”

    Yes, ORO would be easy to realize in a compiled language. I have not heard of any effort at compiling newLISP. To make tasks faster, newLISP communicates well via two different FFI systems, a very simple and an extended one. But for me it’s not so much about the speed we would gain, but the flexibility we would lose. So when I think about compiling it, it would not be in the traditional way, but would have to keep its dynamic features.

    What attracts people to LISP, and especially to newLISP, is its flexibility in describing data and algorithms. It’s the programming language which least boxes you into a certain type of thinking. You would lose much of newLISP’s dynamic features when compiling the traditional way.

    Regarding purity: lexical scoping and closures by themselves are just two interesting models among many more possible. What counts are the pragmatic aspects: making multiple programmers on a project not step on each other’s code (scoping issues), or allowing data and algorithms to be packaged together in isolation, capturing the state of environments (closures), all for better overall design of a program.

    Those pragmatics can be achieved in many different ways. The fact that ALGOL was the first to implement lexical scoping for function variables doesn’t mean it’s the only “true” method to achieve lexical separation. Closures as implemented by Scheme are not the only “true” way to capture the environment in which a function is defined. newLISP has mechanisms to achieve the same pragmatic goals.

    newLISP even has in-place modification. Code can modify itself, even a running function can modify itself. This wasn’t done to make some Computer Science purists laugh or cry, but to give users one more model of thinking and ways to solve problems in new interesting ways. Problems like modeling living organisms and realizing more of en.wikipedia.org/wiki/Organic_computing.

    [text]Also, yes this is a string but a “special” one, it doesn’t
    need any escaping of certain characters, like quotes, line feeds
    or anything else. Good for including HTML or any other language
    in your code. What you see is what you get. But you have your
    traditional “quoted string” too, which is what is most used in
    newLISP. No kidding ;-)[/text]

  144. @Morgan Greywolf:

    Also I’ve been hacking in Python since before I read Why Python?, written by our host, but probably not much before.

    Me too. I had been playing around with Python for personal stuff for a couple of years, but I vividly remember the first time I used it seriously for business, in the fall of 1999. My boss at the time had written some verilog to implement 8b/10b encoding/decoding on an FPGA. It took him around 4 weeks to write it, and he had spent about two weeks debugging it before he finally acquiesced to my standing offer to help him.

    This was 1999, and while I didn’t find the encoding tables directly on the web, I found a datasheet for an integrated circuit that implemented the encoding/decoding, and cut and pasted the tables out of the datasheet into a text file. Then I wrote a Python script that would read the text file and generate verilog. Then I spent some time massaging the Python to generate verilog that would run faster. When I was finished, I had correctly operating verilog that was running much faster than my boss’s incorrectly operating verilog. My total time investment was about 3 hours. My boss was simultaneously very happy and very unhappy. Unfortunately he was unhappy mostly for the wrong reasons, and I left there a few months later.

    I think I first noticed “Why Python?” around 2003, after typing “why python” into a search engine after being asked that question multiple times. I found it very useful and subsequently forwarded it to several people, creating several converts in the process.

  145. @Patrick Maupin
    @Morgan Greywolf
    Also I’ve been hacking in Python since before I read Why Python?, written by our host, but probably not much before.

    Since 2004 I’ve been using Python wherever I can, based in great part on that same article. Similarly for his quote about wxPython.

  146. “””
    I, probably more so than anyone else in this discussion, can be fairly accused
    of having an 80-column mind.
    “””

    Don’t bet on that.

    “””
    It is the only LISP I know where we have one-liner competitions. Like the Perl folk.
    “””

    You brag like a frat boy who thinks that the essence of manhood is being able to drink until he throws up.

    “””
    to save on HR costs because the economy is shit, there’s a Republican or Conservative in power and funding for public knowledge resources is being slashed in favor of national security and hunter-killer drones.
    “””

    Progressives just can’t fucking help it, they have to inject their idiot politics into every discussion, no matter what.

    “”” ObjectRelationalMapper, the culture of Python is that you are going to call it ORM.orm,”””

    How many expansions of ORM have been in play over the last 10 years?

    Over the next 10?

    “””
    The real truth is that using decent variable names and clear code is primarily of benefit to the original writer. Because it is the original writer who reads that code the most, and maintains it the most, and finds, three months later, that the things they knew about the code (the meaning of obscure variable names) seem to have miraculously dropped out of their brain.
    “””

    I once ran across a piece of code I’d written 9 years previously. I hadn’t even worked for that company for 6-7 of those years.

    It was puking an error to email – it was essentially a monitoring/reporting script.

    I didn’t recognize the file until I opened it. Not only was I able to recognize my code, but I could tell what I wanted to do, and I could see where other people had fixed it over the years. I was also able to fix the problem and get it running again in about 20 minutes.

    It made me happy.

    And yes, it was python.

  147. @Jessica

    >The real truth is that using decent variable names and clear code is primarily of benefit to the original writer.

    Obviously, but over-verbosity is one of the biggest detriments to readability, as it makes it hard to extract meaning, the actual logic, from boilerplate. By the time you get to the next meaningful operation you forget the previous one, because of having to read all the junk.

    My favorite horrible example is MS CRM customizations.

    ConditionExpression cityCondition = new ConditionExpression();
    cityCondition.AttributeName = "address1_city";
    cityCondition.Operator = ConditionOperator.In;
    cityCondition.Values = new string[] { "Redmond", "Bellevue", "Kirkland", "Seattle" };

    All this crap of course means WHERE address1_city IN ('Redmond', 'Bellevue', 'Kirkland', 'Seattle'), but when you have a longer query like this it really makes it awkward to figure out the intent, the meaning.

    I guess people who work a lot in verbose Java / C# cultures learn some kind of internal filtering, the same way people who read bureaucratic reports do, i.e. they learn to skip over the unimportant stuff, reading like “blah blah blah IMPORTANT blah blah blah IMPORTANT”…

    Anyway this is what I mean by that not only the technical details of programming languages matter, but the general cultures they attract and generate.

  148. All this crap of course means WHERE address1_city IN ('Redmond', 'Bellevue', 'Kirkland', 'Seattle'), but when you have a longer query like this it really makes it awkward to figure out the intent, the meaning.

    And that’s why Microsoft invented LINQ. Does their CRM package not use LINQ yet?

  149. @Shenpen
    > Obviously, but over-verbosity is one of the biggest detriments to readability

    That isn’t true at all. I agree that excessive verbosity can reduce readability, but excessive brevity is obviously more damaging. The example you gave is obviously bulky, but perfectly understandable. It is the separation of the expression into parts, rather than the long variable names, that has the biggest impact.

    But as Jeff pointed out, LINQ provides a very simple way to do this kind of thing. And even in the absence of LINQ there are much clearer ways to write something like this. Comparing the worst abuses of verbosity against the most pristine examples of concision hardly makes for a fair comparison.

    And I might add, like many things, it is what you are used to. If you follow the sloppy habit of anonymous variable names in your for loops (which is to say, ‘i’, ‘j’ and ‘k’, or ‘p’ for pointers or ‘s’ for strings) you will not see the benefits of properly named variables, because the small barrier to entry will deter you. Try it for a month. I dare you.

  150. And even in the absence of LINQ there are much clearer ways to write something like this.

    Oftentimes, you don’t have a choice. Microsoft system APIs are often… bureaucratic in the exact manner Shenpen described. C# and LINQ are actually exceptions in the Microsoft world, probably because they were designed by people who weren’t enculturated in the Microsoft way of doing things.

  151. @Duncan Bayne:

    > sounds like your story would make an excellent Daily WTF submission.

    I don’t want to start doing that. I have _years_ of similar stories — could take me weeks to regurgitate them all :-)

  152. > This is a brief heads-up that the reason I’ve been blog silent lately is that I’m concentrating hard on a sprint with what I consider a large payoff: getting the Emacs project fully converted to git. In retrospect, choosing Bazaar as DVCS was a mistake that has presented unnecessary friction costs to a lot of contributors.

    By the way, it looks like there are two main contestants left in the area of OSS DVCS: Git and Mercurial. I don’t think either of them will vanish, because they have different architectures (Git: built around a “filesystem”, scriptable, developed bottom-up, with different reimplementations like JGit; Mercurial: built around an API, with a changeable engine but without Bazaar’s insane number of abstractions and “indirections”, extendable via API including its inner workings), with different tradeoffs, which are good for different things.

    For example: JGit reimplements Git in Java under a different license; Facebook extends Mercurial to be able to continue with its big-ball-of-repository approach.

    IMHO both are here to stay, so choosing Git over Mercurial because of popularity…

    1. >IMHO both are here to stay, so choosing Git over Mercurial because of popularity…

      …makes sense if one of your main concerns is inviting new talent into a project that is overburdened relative to its manpower. You want the one that will present the lowest bar to the most people.

  153. On the matter of long vs. short names, Kernighan and Pike (The Practice of Programming, 1999) recommend descriptive names for globals, short names for locals. You need both. Get a copy of their book and read it; it’s all on page 3. The Old Masters had it nailed a long time ago.

    Of course, I realize how hard it is to actually follow their excellent advice. Thinking up really good names is one of the most difficult things to do when you are coding.

  154. @LS
    > Of course, I realize how hard it is to actually follow their excellent advice. Thinking up really good names is one of the most difficult things to do when you are coding.

    Big fan of both K and R, but not a fan of this specific advice. In fact one might say they are mostly to “blame” for some of the common bad practices in C (along with the fact that originally variable names only had about 8 significant characters). I think that is true, but I could be wrong. I suppose arguably K&R are to “blame” for everything in the C culture.

    But I did want to comment on your second point. I think a lot of times the reason why we have a hard time naming something is because we are not very clear on what it is. Oftentimes coming up with a clear name for something (preferably a short name, but longer in a pinch) is a way to help you understand what it actually represents.

    ‘i’ control variables are a perfect example of this. What does that iterating variable actually mean? What does it represent? Being able to answer that question, and doing it with the simple expedient of a good name, is an excellent way of understanding the code, and consequently putting in fewer bugs.
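
    A small illustration in Python (hypothetical data):

    lines = ["ok", "ERROR: disk full", "ok"]
    # "for i in range(len(lines))" tells the reader nothing; naming the
    # index after what it represents answers the question the loop poses.
    for line_number, line in enumerate(lines, start=1):
        if line.startswith("ERROR"):
            print(line_number, line)  # 2 ERROR: disk full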

    I think this is PARTICULARLY true when it comes to naming larger things like classes, interfaces and so on. Having a clear, simple, short name for something is a strong motivator to keep the class cohesive in function.

    I’d say though that the attributes are, in order of importance, clear, simple and short. Better to sacrifice “short” for “clear” or “simple”.

    I should also say that the whole world of design patterns revolves around this concept. Design patterns are not new, they were not invented by the GoF. The innovation was giving them systematic names, with clear semantics. Doing so allows us to have a vocabulary to discuss these things and have everyone mean the same thing.

    Which is to say names are important. They help us understand. And choosing good names is neither wasteful nor futile. It is a mechanism for thinking and understanding.

    > ‘i’ control variables are a perfect example of this. What does that iterating variable actually
    > mean?

    Usually, although not always, it means that your language is insufficiently extensible or expressive.

  156. Yes, ORO would be easy to realize in a compiled language.

    It already has been; it’s called value semantics in C++. But C++ is flexible enough to allow full reference semantics also.

    To make tasks faster, newLISP communicates well via two different FFI systems, a very simple and an extended one.

    By that you mean an incredibly unsafe FFI and a safer one based on libffi. The safe one is not substantively different in practice from the FFIs available in Common Lisp or Scheme implementations. Which is probably why your site appears to encourage use of the unsafe FFI — it’s fast, easy, and “fun” because it’s unsafe.

    But for me it’s not so much about the speed we would gain, but the flexibility we would lose. So when I think about compiling it, it would not be in the traditional way, but would have to keep its dynamic features.

    Compiling newLISP is going to be a stone cold bitch because you do not really have a procedure abstraction; lambda expressions are self-evaluating and so their bodies must be eval’ed again every time they are invoked. Also, newLISP “macros” are actually fexprs which must call the evaluator in order to get useful results out of them. newLISP really is optimized from the ground up to be an interpreted language and I don’t see efforts to compile it to native code succeeding any time soon.

    Regarding purity: lexical scoping and closures by themselves are just two interesting models among many more possible. What counts are the pragmatic aspects: making multiple programmers on a project not step on each other’s code (scoping issues), or allowing data and algorithms to be packaged together in isolation, capturing the state of environments (closures), all for better overall design of a program.

    Let’s get one thing straight: I don’t give a shit about purity, okay? newLISP is arguably more “pure” than Common Lisp or Scheme, because its evaluation semantics very closely resemble McCarthy’s Lisp 1.5. But there are reasons why Lisp moved away from those semantics: other semantics enabled the construction of more performant and tractable programs.

    Those pragmatics can be achieved in many different ways. The fact that ALGOL was the first to implement lexical scoping for function variables doesn’t mean it’s the only “true” method to achieve lexical separation. Closures as implemented by Scheme are not the only “true” way to capture the environment in which a function is defined. newLISP has mechanisms to achieve the same pragmatic goals.

    No, actually it doesn’t. A newLISP context is a dynamic namespace; it is not a lexical scope. Lexical scopes by definition are delimited by the program text which defines them and their variable bindings are inaccessible outside that construct. The scope introduced by a LET or LAMBDA form in Scheme is valid only within that form. This makes Scheme programs easier to reason about simply by looking at the program text.

    The bindings in a newLISP context, by contrast, are accessible at any time. It is really no different from a top-level namespace of global bindings. It does not have the advantages that real lexical scoping has; other procedures may go in and clobber that namespace to alter the behavior of the programs which use it. The active variable bindings still depend on the dynamic context of the program’s execution — because it’s still dynamic scoping.
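
    The difference is easy to demonstrate in any language with true lexical closures; a minimal sketch in Python rather than Scheme:

    def make_account(balance):
        def deposit(amount):
            nonlocal balance
            balance += amount
            return balance
        def check():
            return balance
        return deposit, check

    deposit, check = make_account(100)
    deposit(50)
    print(check())  # 150
    # No name in any namespace reaches balance from outside; only the two
    # closures can touch it. A binding parked in a global context/namespace
    # can make no such guarantee.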

    newLISP even has in-place modification. Code can modify itself, even a running function can modify itself. This wasn’t done to make some Computer Science purists laugh or cry, but to give users one more model of thinking and ways to solve problems in new interesting ways.

    HOFs are a more tractable approach to enabling dynamic program behavior than is self-modifying code because — again — they are easier to reason about from the program text alone. Earlier we were talking about the importance of code clarity; there’s nothing more obscure than code that actually changes as it runs. There’s a reason why Lisp macros supplanted fexprs (which newLISP “macros” really are).

    That brings me to another thing about newLISP. I think it’s great that you wrote a toy Lisp that’s useful for small jobs. But this blog is not the first online forum to be beset with newLISPers preaching the gospel as if newLISP were some profoundly revolutionary thing when it’s not — it’s a recapitulation of design mistakes that Lispers had already discovered and decided to avoid early in Lisp’s history when they started using it to get real work done. And unlike the communities of languages from similar humble beginnings like Perl or Python, whose BDFLs eventually implemented reference semantics and lexical scoping as the demands of their user base grew, you and the other newLISPers wax oddly defensive when these deficiencies are pointed out. You also play word games with CS terms of art like “lexical scope” and “macro”, in a handwavy attempt to convince people from other Lisp backgrounds that newLISP is at least just as good as, say, Common Lisp or Scheme.

    It’s… weird, and, quite frankly, cult-like. It has the same snake-oil-salesman whiff that Isagenix does. I got the same vibes from the Clojure community as well, but they emanate particularly strongly from newLISP. And to me, that is reason enough to stay far, far away from it for serious work.

  157. On Git and Mercurial: I personally find Git superior for its branches alone. The Mercurial equivalents are just… odd. To be even more precise, I also believe that Git’s allowance (and encouragement) of history rewriting is one of its most powerful features. With only a git rebase, it’s trivial to slice up commits, collapse them, etc., which is excellent when you need to communicate your changes more clearly.

    All that being said, Mercurial is nowhere near as bad as any other non-Git VCSes, it’s not a huge loss. Plus I can use git-remote-hg to clone Mercurial repositories with full bidirectional support…

  158. @Lutz Mueller: “What attracts people to LISP, and especially to newLISP, is its flexibility in describing data and algorithms. It’s the programming language which least boxes you into a certain type of thinking. You would lose much of newLISP’s dynamic features when compiling the traditional way.”

    You compile to native machine code because you need speed and efficiency. The trade-off tends to be that traditional languages designed to be compiled to native code impose complexity because of that. The code must be correctly written and understood by the compiler to be able to generate native code, and is likely harder to write in the first place because of it.

    newLisp’s flexibility in how you think and express your code would be a win here. If you are *going* to compile it to native code, it means you’ve successfully solved whatever the problem was, but your solution needs to be faster and more efficient than an interpreted version. Well, fine. Compile it. You need the dynamic flexibility when you are crafting your solution. Once you *have* a good solution, you don’t. You just need something you can run.

    Should the surrounding circumstances change, and your solution needs an update, you can always change your code in newLisp, and recompile.

    A newLisp compiler might be a fine thing indeed.

  159. To the newLISP guy: one-liner contests and pretending dynamic scope plus namespaces is adequate sounds like you’ve just reinvented Perl 4, except with parentheses rather than dollar signs. Passing “references” by name, too. There’s a reason even the Perl people eventually realized that those are all bad ideas, and why Perl 5 has lexical scope and proper anonymous references (though the GC still sucks).

  160. “That isn’t true at all. I agree that excessive verbosity can reduce readability, but excessive brevity is obviously more damaging. The example you gave is obviously bulky, but perfectly understandable. It is the separation of the expression into parts, rather than the long variable names, that has the biggest impact.”

    Jessica, let me ask you a question: Does your favorite programming environment do autocompletion?

    Mine doesn’t.

    Long variable names, especially long variables that are glued together with dots to denote class memberships, are an unmitigated pain in the ass.

    And no, I refuse to change my programming environment to one that is Windows-only.

  161. Jessica, let me ask you a question: Does your favorite programming environment do autocompletion?

    Both vim and emacs do. In its most basic form this takes the form of selecting completion candidates from a list of words that the editor has already seen you type or found in your buffers. This is actually far more useful than I thought it would be, though it still doesn’t have niceties like showing me parameter number and types.

    But then again you did say “your favorite programming environment”, which in your case I’m not sure responds to editor commands until the SEND key is pressed ;)

  162. @newLISP isn’t writing the “batteries” or at least C wrappers for them for every programming language that gets invented _so_ 20th century? Why not just utilize either the Java or the .NET / Mono ecosystem?

    Wouldn’t it be a better direction to simply reduce the importance of language choice, making the only important choice that this is a JVM-based project, whose parts can be written in Java, Scala, Clojure and whatever else you want?

    Interesting, though, that Microsoft, or more precisely Hejlsberg, more or less explicitly tried to do this: to make languages an optional, secondary choice, where anything is fine as long as it compiles to .NET bytecode and can interoperate with other .NET bytecode. And it did not work out. The reality is that the vast majority of .NET programmers would never touch anything but C#, and I think it is not for technical reasons, but partially because of CV / job-application optimization, and partially because of the ease of using code samples, examples and tutorials from others for ever newer libraries, and of replacing or shuffling around people on a project.

    I think there is a lesson to be learned from the unexpected language dependency of the .NET environment: programming languages are also used for human communication, if indirectly, so it helps if everybody speaks the same language. The same way pretty much everybody, regardless of their native language, tends to write variable names in English, which is communication with each other and not with the computer, they choose the same programming idiom as well, for communication.

    I don’t know what final lesson to draw. Clearly the importance of ecosystems, bytecodes, virtual machines has risen and will rise as opposed to programming languages, idioms. But still it seems there will be a few major idioms, and the rest stay exotic, because human communication benefits from network effects.

  163. @Jay Maynard
    > Does your favorite programming environment do autocompletion?

    Yes it does, and it had a dramatically positive effect on my ability to write software. You might think that it is just a way of typing stuff quickly, but it is not. Excellent autocomplete (as is available in Visual Studio, and even better with some plug ins) changes the way you write and think about software. It isn’t so much for typing quickly (though it is that) but it is also how you look stuff up in software documentation — which is a very large part of modern programming.

    Visual Studio has much poorer autocomplete in Javascript (though it has some, and of course it is understandable, since Javascript is so crappy and intractable). When I code JS, I feel like I am running through mud.

    > Long variable names, especially long variables that are glued together with dots to denote class memberships, are an unmitigated pain in the ass.

    Yes, I understand that better now. You need better tools. However, I’d also remind you that you read code, including your own code, far more than you write it.

  164. @Morgan Greywolf: “our host continues to use a Unicomp keyboard, not the Happy Hacking Keyboard.”

    I know. But very few people seem to care enough to get *any* sort of improved keyboard, so I felt justified in stretching my pattern a little.

    @esr: “The layout and compactness of the Happy Hacking keyboard are nice, but I won’t give up my crunchy buckling springs for it.”

    It feels very nice, nonetheless. I won’t give up the layout for crunchy keys. The pattern is broken.

    But the HHKB people should make a version with buckling springs.

  165. @Rob Fisher “But the HHKB people should make a version with buckling springs.”

    And all of them should make a version that is ergonomic.

  166. “Jessica, let me ask you a question: Does your favorite programming environment do autocompletion?

    Mine doesn’t.

    Long variable names, especially long variables that are glued together with dots to denote class memberships, are an unmitigated pain in the ass.

    And no, I refuse to change my programming environment to one that is Windows-only.”

    Eclipse does autocompletion, but not as well as what I remember in VS. It’s weaker in C++, though. I’ve never liked Eclipse for C++, but I don’t do much C++ coding these days.

    If I had to do a lot of C++ again I’d look at QtCreator which has some form of autocomplete.

    I liked Borland’s IDE Kylix but that died when nobody bought it.

    Nothing seems to work quite as well as VS Intellisense but most will make long variable names a non-issue.

  167. @newLISP isn’t writing the “batteries” or at least C wrappers for them for every programming language that gets invented _so_ 20th century? Why not just utilize either the Java or the .NET / Mono ecosystem?

    C FFIs are necessary if you want to fully interact with the OS. Unless you like living in the Java bubble, they’re a part of life. For scripting and admin tasks, you want to have the whole OS available to you.

    Part of the problem — in the case of Unix — is that C libraries embed no parameter or return type information. That’s why newLISP’s default FFI is so dangerous — it only loads DLLs and gets the addresses of functions out of them, it doesn’t take into account type information. A safe C FFI would require you to specify type information, recapitulating it from the C header file.
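
    Python’s ctypes (itself built on libffi) shows what that recapitulation looks like in practice; a minimal sketch for a typical Unix system:

    import ctypes
    import ctypes.util

    # The shared library carries no type information, so we restate the
    # C prototype by hand: double pow(double, double);
    libm = ctypes.CDLL(ctypes.util.find_library("m"))
    libm.pow.argtypes = [ctypes.c_double, ctypes.c_double]
    libm.pow.restype = ctypes.c_double
    print(libm.pow(2.0, 10.0))  # 1024.0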

    Windows corrects this problem by making a sort of universal FFI which does specify interfaces including type information — COM — an intrinsic part of the operating system. Many programming languages for Windows — including Perl and Python — have modules to support loading and using COM objects.

  168. I come from a Unix, C, and C++ background. I’m comfortable with the level of “risk” newLISP poses. It is fast to program in, fast to run, and has a small system footprint. And I’ve never had an rm -rf moment in newLISP. So I think the “risks” you speak of are more theoretical than actual.

    Scheme actively defangs and neuters LISP, with its enforced lexical scope, hygienic macros, baggage from Common Lisp, and other such things. The slightly increased “risk” from fexpr’s and controllable scope beats the annoying limitations of hygienic macros hands down.

    I happen to like self-modifying code. Takes me back to the old FORTH days. I haven’t yet seen a Scheme that lets me embed assembler code. Yes, self-modifying code CAN be hard to read. So far, in newLISP, it never has. Something about the design of newLISP encourages good clean coding style, on par with Python. Even though we don’t have enforced indentation.

    And unlike FORTH, our short functions are very readable. Chuck Moore, bless him, was compensating for his poor eyesight. Had he better eyesight, I think FORTH would have been a bit more readable.

    I hate any language that “protects” me from myself at the cost of direct access to the system, whether that is a C API, a system-level facility (pipes, sockets), or the raw assembly instructions themselves.

    I use newLISP to do bit-banging over serial lines and for embedded devices. It is a dream for such work.

    newLISP is real-world.

  169. Shenpen: “Why not just utilize either the Java or the .NET / Mono ecosystem?”

    Because they’re write-once, test-everywhere? The JVM and CLR impose a layer of cruft that, in practice, makes the problems of portability they’re supposed to solve worse, not better…or haven’t you had applications break with a Java update? And don’t get me started on trying to run stuff under Mono. Reference my earlier comment about Itanium.

    Jessica: “However, I’d also remind you that you read code, including your own code, far more than you write it.”

    I fully agree. Long variable names glued together with dots are a hindrance, not a help, in this process. There’s a happy medium, and it’s somewhere well south of 40 characters (half a screen line).

  170. >> IMHO both are here to stay, so chosing Git over Mercurial because of popularity…

    > …makes sense if one of your main concerns is inviting new talent into a project that is overburdened relative to its manpower. You want the one that will present the lowest bar to the most people.

    Hmmm… if the selection criterion is the lowest participation bar, then where the repository is hosted (or rather, where it is mirrored) is also important. For small changes the GitHub web flow is almost zero effort: edit a file in the browser and GitHub automatically sends a pull request; I guess you can configure a GitHub repository so that pull requests get sent to the project mailing list (or is that already done?).

    AFAIK neither Launchpad (for Bazaar) nor BitBucket (for Mercurial) provides this…

  171. … though I forgot that Emacs is a GNU project, so non-free infrastructure is a big no. Perhaps it could work for a mirror (as with Git itself).

  172. Because they’re write-once, test-everywhere? The JVM and CLR impose a layer of cruft that, in practice, makes the problems of portability they’re supposed to solve worse, not better…or haven’t you had applications break with a Java update? And don’t get me started on trying to run stuff under Mono. Reference my earlier comment about Itanium.

    You should always test on every platform you support anyway, so I have no idea how this is a downside of Java.

    Java updates do not break apps very often. In any case, for critical apps I deploy with my own JRE as part of the bundle. It costs some disk space, but if I’m not certain of the target environment I make sure my own bundle/exe (using launch4j) is self-contained. If I had a more complicated deployment problem I’d pay for InstallAnywhere.

    CLR wasn’t meant to be cross-platform, although C#/Mono is the best cross-platform language at the moment for all the major platforms (Android, iOS, OSX, Windows). Linux on Itanium is not a significant platform, so I’m not surprised Mono is meh there.

  173. @MycroftJones
    > I hate any language that “protects” me from myself at the cost of direct access to the system,

    I strongly dislike this attitude. Part of the purpose of a programming language is to do precisely that — protect you from yourself. Why? Because we are humans and we make mistakes all the time. Programming languages that pick up on that at various points are extremely effective for reducing bug counts, and consequently the amount of time it takes to complete the software. The thing I have been banging on about here is type systems. They are one of the most effective tools for doing this.

    When you type some code into your editor for the first time, the likelihood is that your first cut will be wrong, and have several bugs. It is plain programmer arrogance to turn off these hedges of protection, not far from the idea of sending out a document without running spell check. This is also why programmers who don’t write deep unit tests are only doing half their job.

    That isn’t to say that there isn’t a reason for accessing the machine at a more bare-metal level. However, a good language provides facilities to allow you to do this in an isolated fashion, where most of the time you get protected from your own mistakes, but you can, if you explicitly say so, throw off the limiting shackles. Two good examples of this are optimizations that turn off array bounds checking and the “unsafe” block in C#.

  174. Nigel: “Java updates do not break apps very often.”

    For you, perhaps. For me, it seems that every time I mistakenly let Java update, one of the two apps I use regularly under it – Mercedes-Benz’s Electronic Parts Catalog and EMC’s Networker Management Console – breaks badly, and it’s hell getting things working again. Because of that, I try very, very hard not to ever update Java.

    Jessica: “Unix was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things.” – Doug Gwyn

    The same goes for programming languages. I want a nanny state programming language exactly as much as I want a nanny state government.

  175. @Jessica Boxer:

    Your comments about type systems and unit tests remind me of the arguments about code re-use. Yes, it can be beneficial. But seldom do the proponents of code re-use, type safety, or unit tests ever talk about the _costs_ associated with these things. These costs can be substantial.

    Do you ever write unit tests for your unit tests? If not, why not? They are error-prone code, too, you know.

    Sometimes, I actually do the equivalent of writing unit tests for unit tests. I might write a testbench that gets synthesized into an FPGA alongside a chip design. The process of synthesizing and placing and routing the FPGA can, in some cases, take several hours. In this instance, it makes sense to do a few basic simulations to ensure that the testbench actually stands a small chance of working before I commit to the time to build the FPGA image. But I don’t do that for every small change, for the simple reason that, compared with the time it would take to build simulation tests for every change, I actually save time by not doing this and accepting the occasional bad build. So, there is a cost/benefit analysis that depends on the size of the change, where it is, and, although politically incorrect in some corporations to bring it up, who is making the change.

    “When you type some code into your editor for the first time, the likelihood is that your first cut will be wrong, and have several bugs.”

    No, as I mentioned above, for many classes of problems the likelihood is that my solution will work perfectly. But even if we accept your assertion, the question is, will type safety and unit tests help me catch the bugs quicker than just running the damn program? The answer is not static. It varies according to the programmer, the language, and what is being attempted.

    Also, I will note that premature unit tests can actually hinder code refactoring, because then you have to rewrite the tests too.

    Most of the coding I do falls into one of two categories: (1) RTL designs, written in Verilog, because Ada-derived VHDL is far too verbose; or (2) code to (somehow) help test chip designs, usually written in Python, sometimes also written in Verilog.

    In the case of the RTL designs, we use all sorts of tools to help insure correctness, including a gazillion hand-written unit and system tests. Because the cost of a broken chip coming back after 6 weeks and $500K mask charges is too high.

    But when I write a wrapper module that allows me to interface with the chip over USB through an FTDI interface chip, do I unit-test that wrapper thoroughly? No. I just try to use the damn thing. Because the cost of failure is that I need to figure out what I did wrong and fix it.

    When I design the board that contains the FTDI interface chip and our silicon, I write a lot of Python to check the schematic netlist before we fabricate the board. If we screw that up, it’s just a 2-week hit and a few thousand dollars, so it’s just me checking it, not a team of 10. I have a library that I wrote and have reused multiple times, and additionally wind up writing a lot of code (sometimes over 5000 lines) that is specific to each board design. Sometimes I add features to the library. Do I have specific unit tests for the library? No. I might go back and make sure a few previous checkers work with it. Do I have specific unit tests for the board-specific code? No. I might make a temporary change to the design file or to the test code to ensure that tests are being executed properly, but I don’t save unit tests for the tests.

    Would type safety help? Hell, no. I’m just winging it. The cost of failure is that I get an unexpected exception when I’m running a test. The cost to fix it is minimal. The biggest of my board checkers probably takes two seconds to run. They aren’t long-running processes that could fail catastrophically two weeks later because of a type problem.

    So, unit tests can be good. Type safety, or at least systems that let programs reason about types, such as lint checkers, can be good.

    But they are both overkill for a large class of problems that are best solved by someone who is an expert programmer, so I would argue that part of the job of programming is to make informed decisions about the level of testing required in any effort, and further argue that, far from it being universally true that “programmers who don’t write deep unit tests are only doing half their job”, sometimes the correct amount of testing to apply is zero.

  176. For you, perhaps. For me, it seems that every time I mistakenly let Java update, one of the two apps I use regularly under it – Mercedes-Benz’s Electronic Parts Catalog and EMC’s Networker Management Console – breaks badly, and it’s hell getting things working again. Because of that, I try very, very hard not to ever update Java.

    /shrug

    The fact that two programs break for you does not mean that most Java apps are that sensitive to JVM changes. In your case I’d put wrappers around them that specified a specific JRE and forget about them if they are THAT fragile. Or use Java Web Start.

    There’s some breakage between major revs, like moving to Java 7 from Java 6 or from a 32-bit to a 64-bit JRE. The jar files included might be incompatible in the latter case. A JRE can be slimmed down to around 90MB.

  177. Jessica: “Unix was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things.” – Doug Gwyn

    Leading to the widespread use of unix on the desktop. Oh wait…the only unix in widespread use is OSX…

    The same goes for programming languages. I want a nanny state programming language exactly as much as I want a nanny state government.

    Lol. Too bad that the so-called nanny languages are widely popular and useful. Your loss…those nanny features save a lot of time and effort.

    /shrug

    The young coders I see today easily transition from Java (Android) to ObjC (iOS) to Javascript to C++ to C# (Unity scripting) to whatever. The fact is, good coders don’t make such silly assertions about not wanting nanny languages, because each language has its use; it’s like saying “I don’t want cordless drills.”

  178. I cite those two programs as examples. The fact is that whenever I speak of encountering problems with programs breaking because of Java updates (yes, even minor versions), the universal reply is “Yeah, that’s part of the cost of doing Java. Live with it.” It just so happens that those are the only two Java programs I use, personally.

  179. @Jay Maynard
    >The same goes for programming languages. I want a nanny state programming language exactly as much as I want a nanny state government.

    This kind of reminds me of an argument gear heads have about anti-lock brake systems. There are circumstances in which anti-lock brakes are not as effective as a highly skilled driver operating at his peak. However, anti-lock brakes still save thousands of lives every year.

    If something works really well 99% of the time, one should not throw it away for the sake of the 1%. On the contrary, you provide a switch to turn off the anti-lock components when you are in the 1% zone.

  180. @Patrick Maupin
    > But seldom do the proponents of code re-use, type safety, or unit tests ever talk about the _costs_ associated with these things.

    Hearing the cost of writing unit tests is a constant drumbeat, hearing the costs of code reuse is common, and the cost of type systems is the subject of massive debate in the programming community. So I think you are just plain wrong here.

    > Do you ever write unit tests for your unit tests? If not, why not?

    No, for two reasons. First, unit tests, by their nature, test themselves. A failed unit test is either a failure of the code under test or of the test infrastructure. Second, unit tests are usually very simple, straight-through code. They don’t have the normal points you would test, which is to say loops, branches, and boundary conditions.

    > will type safety and unit tests help me catch the bugs quicker than just running the damn program?

    Running the damn program with what input data? Certainly if you run the damn program in such a way that it covers all code paths then you are right. But the number of test cases necessary to do that is exponential in the complexity of the program, which is why good unit tests isolate individual parts, so that the test suite is additive rather than multiplicative.

    Running the damn program usually leads to that nightmare “the program is right because it works on my machine.”

    > The answer is not static.

    The answer is static for all programs larger than tiny. So if you are writing tiny programs, I entirely agree. And there are many tiny programs that need to be written. My context is large programs that are hard to get right.

    Which is to say “it runs on my machine” is fine if the only machine it ever is going to run on is your machine.

  181. This kind of reminds me of an argument gear heads have about anti-lock brake systems. There are circumstances in which anti-lock brakes are not as effective as a highly skilled driver operating at his peak. However, anti-lock brakes still save thousands of lives every year.

    There’s an easy way to do threshold braking…apply the brakes until you feel the ABS pulsing, ease off, and you’re pretty much at the threshold.

    Now to drag this analogy kicking and screaming and somehow apply it to memory management…

  182. So if you are writing tiny programs, I entirely agree. And there are many tiny programs that need to be written. My context is large programs that are hard to get right.

    I have an SE degree, and for the most part I roll my eyes at many SE techniques because large programs are fairly rare. I roll my eyes less at test-driven design/test-driven coding, because I agree with it in principle, but it is still hard to apply to some types of software (notably ones that are mostly UI), so I wouldn’t assert universal applicability.

    However, many hackers dramatically underestimate the huge increase in complexity when you have to build a large system; at that scale these techniques (and language choices) don’t just make sense, they make delivering a useful product vaguely on time possible at all. My guess is that this blind spot persists because Linus has been irrationally good at herding cats and being a benign dictator.

  183. @Jessica Boxer:

    Hearing the cost of writing unit tests is a constant drumbeat

    I didn’t say the information wasn’t available, just that the proponents don’t discuss it often. Your previous comment is an exemplar of such behavior — try this magic elixir and it will fix all your problems!

    No, for two reasons. First, unit tests, by their nature, test themselves.

    Really? I beg to differ. But then, when I sit down to write unit tests, it’s a serious (if rare) affair.

    A failed unit test is either a failure of the code under test or of the test infrastructure.

    The problem isn’t tests that report failure. It’s tests that falsely report success.

    Second, unit tests are usually very simple, straight-through code. They don’t have the normal points you would test, which is to say loops, branches, and boundary conditions.

    Wow. Jaw drops. We live in completely different worlds. In my world, the test code is often 20 times the size of the code being tested, and infinitely more complicated in behavior. We would ideally like to achieve 100% coverage, but since that’s impossible, a lot of thought and test code is applied to the process of developing useful constraints for randomized vectors.

    will type safety and unit tests help me catch the bugs quicker than just running the damn program?

    Running the damn program with what input data?

    Whatever data is at hand. I explicitly said the answer to this question is highly context dependent. But as one example where the right answer is just to run the damn program, I am often handed data and asked to do something with it. The data is produced by other tools, and the results are fed into other tools. The only spec that matters is the input file itself, and the description of why the next tool in the chain is barfing on the data. How would you even begin to unit test that? Either it works or it doesn’t. When it doesn’t work on the next chip a year later because the CAD vendor modified the format slightly, it’s usually a 15 minute fix.

    BTW, in case you’re wondering how I know that my massaging of the data in the pipeline didn’t muck the data up, there are tools for that. Plus we re-run all the unit tests on the final netlist.

    Certainly if you run the damn program in such a way that it covers all code paths then you are right. But the number of test cases necessary to do that is exponential in the complexity of the program, which is why good unit tests isolate individual parts, so that the test suite is additive rather than multiplicative.

    But if it would take 30 hours to write good unit tests, and prior experience tells me that any bugs can be figured out and fixed in 15 minutes, then testing would be 30 hours ill-spent. I explicitly acknowledged in my post that there are different kinds of programming. This kind of programming is for in-house use, where any failures will be noticed and dealt with in a reasonable fashion.

    Running the damn program usually leads to that nightmare “the program is right because it works on my machine.”

    I very seldom find that to be the problem. My code works for my coworkers on several versions of Linux and Windows. But then, I mostly write in Python.

    The answer is not static.

    The answer is static for all programs larger than tiny.

    No. It’s not at all just about the size of the program. It’s about the size and character of the userbase as well. That is one of the main points I was trying to make that you seem to be missing or ignoring. I don’t just write code. I write code that writes code. I write code that tests code as well, and as I mentioned, sometimes I write code that tests code that tests code. As I mentioned a few lines up, we actually use tools that verify that multiple representations of the same code do, in fact, behave identically.

    In my environment, programming is a meta problem. It is not at all “here is the program, and here is the program that tests it.” I cannot assume that all production code needs to be tested, or, conversely, that no test code needs to be tested. The first assumption would lead to schedule slips, and the second assumption would lead to bad chips.

    My context is large program that are hard to get right.

    At one point I was the chief maintainer of a modem codebase that was around 200K LOC of pure assembler. I got tired of the macro assembler and linker not giving me reasonable error messages, so I wrote my own macro assembler and linker. I didn’t bother with unit tests for the macro assembler/linker (which totalled IIRC around 10K lines of Python) because the existing assembler codebase was more than adequate. That doesn’t mean that the entire thing was particularly small or particularly easy to get right, although Python helped a lot.

    But this is one reason the Unix way eschews particularly large programs. My current context is an environment with 20 developers, using tools that we pay millions of dollars a year for, churning out reasonably large designs that absolutely have to work right THE VERY FIRST TIME. Don’t for a minute think that I don’t understand either scale or the necessity of unit tests. But that understanding also includes the understanding that sometimes spending time testing is simply counterproductive.

    Which is to say “it runs on my machine” is fine if the only machine it ever is going to run on is your machine.

    And a lot of my programs are like that. But a lot of them have multiple users. They are not polished for sale, but they certainly help us polish the things we do sell. And the code that I occasionally write that actually gets baked into the chip as RTL has the bejeezus tested out of it.

    There is a huge middle ground between “my machine only” and “cannot possibly fail anywhere”, and for a lot of that middle ground, unit testing is superfluous. When it comes to the “cannot possibly fail anywhere”, I would wager we actually do at least as comprehensive a job of unit testing as you do, because for chip companies, the cost of failure on the chip itself is extremely high, both in direct dollars and in opportunity cost.

  184. Wow. Jaw drops. We live in completely different worlds. In my world, the test code is often 20 times the size of the code being tested, and infinitely more complicated in behavior. We would ideally like to achieve 100% coverage, but since that’s impossible, a lot of thought and test code is applied to the process of developing useful constraints for randomized vectors.

    You do. In most software engineering products, the test code may be long, but it is almost always very straightforward: a bit of setup and teardown code, and then a long string of assertTrue, assertEq, assertThrowsException, etc. statements which are the actual tests themselves. If it’s complicated, you’re doing it wrong. Unit tests generally do not “fuzz” the class or method under test by attempting to generate random input; they exercise common cases and corner cases which may be known or likely to cause the program to snag if not coded properly (for example if testing an int-to-string function, you might test a few positive numbers but also zero, negative numbers, and the most negative and most positive integers for the machine’s ‘int’ word size).
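
    In Python’s unittest, that style looks something like this sketch (int_to_str is a hypothetical unit under test, stubbed here so the example runs):

        import unittest

        def int_to_str(n):
            # Hypothetical unit under test; stand-in implementation.
            return str(n)

        class TestIntToStr(unittest.TestCase):
            def test_common_cases(self):
                self.assertEqual(int_to_str(7), "7")
                self.assertEqual(int_to_str(42), "42")

            def test_corner_cases(self):
                self.assertEqual(int_to_str(0), "0")
                self.assertEqual(int_to_str(-1), "-1")
                # Extremes for a 32-bit int word size.
                self.assertEqual(int_to_str(2**31 - 1), "2147483647")
                self.assertEqual(int_to_str(-2**31), "-2147483648")

        if __name__ == "__main__":
            unittest.main()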

    Unit tests are intended to be fast enough to be easily desk-checked by a developer before he/she checks in a change.

    “Just running the damn program” is almost never an option because it’s tedious to run the damn program and then feed it data that might make it fail. You can automate running the damn program with integration and performance tests, which come later in the test cycle and are intended to catch bugs that unit tests miss.

    Unit tests are yet another of those things that the Lisp community gave us, as they’re pretty much an automation of the Lisp programmer typing things in a REPL and checking the results. It’s not a big surprise that this testing discipline came to us by way of Smalltalk.
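
    That REPL lineage survives almost literally in Python’s doctest module, which replays a captured interpreter session as a test. A minimal sketch:

        def gcd(a, b):
            """Greatest common divisor.

            The test is literally a transcript of a REPL session:

            >>> gcd(12, 18)
            6
            >>> gcd(7, 5)
            1
            """
            while b:
                a, b = b, a % b
            return a

        if __name__ == "__main__":
            import doctest
            doctest.testmod()  # re-run the sessions, compare the output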

  185. MycroftJones, what does Genera — a Common Lisp implementation running on dedicated Lisp hardware — have to do with newLISP?

    You are not building a good case for newLISP not being a cult.

  186. Symbolics LISP (now Zeta LISP) let you program right down to the bare metal. The Common Lisp implementation came later. The essential parts of the Genera system are still written in Zeta Lisp. Common Lisp isn’t up to the job.

    The “spirit” of the original Symbolics Lisp is very well preserved in newLISP. Get results, get it done.

  187. @Jeff Read:

    Unit tests are intended to be fast enough to be easily desk-checked by a developer before he/she checks in a change.

    You realize that these days, with a dynamic language, that’s often a really low bar, right? There are a lot of mid-to-large-sized packages that can have all their unit tests run in minutes or even seconds. (By contrast, chip-level tests on gatelists could sometimes take weeks to run.)

    Unit tests generally do not “fuzz” the class or method under test by attempting to generate random input; they exercise common cases and corner cases which may be known or likely to cause the program to snag if not coded properly (for example if testing an int-to-string function, you might test a few positive numbers but also zero, negative numbers, and the most negative and most positive integers for the machine’s ‘int’ word size).

    This approach directly contradicts the idea that the purpose of unit testing is to test things at the lowest possible level. One time about 5 years ago, a co-worker (since “laid off”) decided for some insane reason that he had to code his own multiplier. This is an approach straight from the early ’90s, when you couldn’t necessarily expect the one the synthesizer would build for you to have the right area/speed tradeoff. This was the ugliest code possible, so I decided to write my own unit test for it. This particular multiplier was 18×24 bits, so that’s only 4.4 trillion possible input combinations. Yes, there were subtle errors on a very small number of input combinations. No, there’s no fricking way they could have been found by testing at a higher level, or by hand-writing a few simple asserts. No, testing isn’t in my job description. Yes, I often have carte blanche at work, because helping to build a working chip is in my job description, and chip companies usually walk the walk when it comes to believing that finding bugs is a good thing.

    “Just running the damn program” is almost never an option because it’s tedious to run the damn program and then feed it data that might make it fail. You can automate running the damn program with integration and performance tests, which come later in the test cycle and are intended to catch bugs that unit tests miss.

    Again, I mentioned there are lots of types of programming, and I just gave an example at the opposite end of the spectrum, where I tested the bejeezus out of a multiplier — the smallest possible unit. But I suspect there are many more instances where “just running the damn program” works than you imagine. Kent Beck said that he “rediscovered” his TDD methodology by reading in an ancient book that the right way to test was to manually code up the expected output file for one set of input data that the program was designed to work on, and modify the program until it matched bit-for-bit. That’s basically what I did with my assembler/linker and 12 MB of DSP macroassembler source.

    Unit tests are yet another of those things that the Lisp community gave us, as they’re pretty much an automation of the Lisp programmer typing things in a REPL and checking the results. It’s not a big surprise that this testing discipline came to us by way of Smalltalk.

    IMO (both from my own reasoning and from carefully parsing the utterances of testing experts), there isn’t or shouldn’t really be a hard and fast distinction between various kinds of testing, other than whether the tests are what I term “regressionable,” i.e., capable of being run inside a regression framework. Tests are non-regressionable when they require programmer input, or when the output requires programmer interpretation (when they are not self-checking), or, in some cases, when they take far too long to run in any reasonable subset of the regression suite.

    People who think in terms of “unit”, “integration”, and “system” tests often needlessly lock themselves into a three-level hierarchy that depends less on the actual testing needs of the system and more on external factors. “System” means stuff the customer cares about. “Integration” means Joe and I hooked our code up and it all works together. “Unit” is relegated to the lowest possible independent code module. Some people assign still more meanings or magic incantations to these, such as that “unit” tests are allowed to be white box, but higher levels aren’t. You yourself magically associated “unit” with “must run quickly.” That doesn’t really make any sense in my universe. Where I work, we have “confidence tests” that are run to make sure that nothing major breaks when you check something in, so that developers are _probably_ not adversely affecting other developers, but you can bet your bottom dollar that most of the tests in the longer-running regression are essentially testing units, because otherwise, the complexity explodes to the point that the coverage drops.

    In the chip world, companies also tend to lock themselves into somewhat false distinctions between “verification” tests and “validation” tests. The distinction is somewhat blurred when you can emulate the chip before you build it and run the validation test suite on the emulation. To highlight the fact that the distinction is somewhat arbitrary (usually based on how the tests are constructed — verification is done through simulation, and validation is done by putting an actual chip on an actual board and testing it), an ex-boss of mine used to always say “valification”.

    Wow. Jaw drops. We live in completely different worlds.

    You do. In most software engineering products, the test code may be long, but it is almost always very straightforward: a bit of setup and teardown code, and then a long string of assertTrue, assertEq, assertThrowsException, etc. statements which are the actual tests themselves. If it’s complicated, you’re doing it wrong.

    Yeah, I’ve seen and written such tests. And often they are good enough. But, as you can see from my multiplier example, sometimes this method is insufficient. Yet, sometimes it’s unnecessary. It really depends on context, and despite what Jessica has said, the context can vary considerably even within a single large system. Even Kent Beck says:


    I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don’t typically make a kind of mistake (like setting the wrong variables in a constructor), I don’t test for it. I do tend to make sense of test errors, so I’m extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.

    Perhaps unlike Kent, I occasionally get paid for writing tests. It’s never in my job description, but several times in my career I’ve had my boss tell me something like “I’m nervous about X. Could you look into it and make sure it doesn’t bite us in the ass?” When I’m told that, naturally I feel like I’m being paid to write tests, but I never feel like I am circumscribed in how I approach it, so I have no qualms about digging into the code and looking for suspected pain points (white box). One time it was a UART block. I checked in a directory called “uart_dma_torture_tests” and started writing and checking in tests. The developer’s first reaction was “man, that’s an accurate directory name!” After fixing several bugs, his second reaction on one of the issues I filed was “Could that really happen in a real customer system?” to which my answer was “I don’t know, which means you have two choices. Either do the work to prove it can’t, or fix it.” Sure enough, after thinking about it and realizing how complex any possible proof would be, he fixed it.

    1. >IMO (both from my own reasoning and from carefully parsing the utterances of testing experts), there isn’t or shouldn’t really be a hard and fast distinction between various kinds of testing, other than whether the tests are what I term “regressionable,” i.e., capable of being run inside a regression framework.

      Agreed. In GPSD this is the only distinction that matters. Most device drivers we can test just by simulating an input test load. A few must be live-tested because in normal operation there are handshakes between the driver and the device. Those are the problem children.

      >People who think in terms of “unit”, “integration”, and “system” tests often needlessly lock themselves into a three-level hierarchy that depends less on the actual testing needs of the system and more on external factors.

      Also agreed.

  188. Patrick,

    I guess what I was trying to get across is that your domain of expertise has a lot of special circumstances under which a normal testing regime does not apply. But for the vast bulk of software development — people like Jessica producing software applications on commodity hardware to serve business needs — test-driven development is an important part of making sure the software gets delivered on time with a minimum of bugs.

  189. I’ll also point out that, as complex as your software may be, 200k lines of assembler is at best medium-sized. 200k of C++ is also only medium-sized.

    This is the disconnect that low level devs make when extrapolating to actual huge software projects. The distinction between unit, integration and system tests is born from decades of experience in the development of huge DoD and NASA systems, many of which were still buggy or failed to complete despite testing by different organizations at different levels.

    As context, your portion of the total system is probably just one CSC of a complex spacecraft design with custom chips, another couple hundred thousand lines of embedded real-time C/C++ flight and payload software on top of them, and around a million lines of code ground-side for spacecraft command and control and the science data pipeline.

    Unit tests are done by the contractor. Integration tests are the responsibility of the prime. System tests are generally done by the sponsor. This structure is a function of the large number of different teams with different expertise, generally working for different companies, required to deliver the final product.

    Your multiplier test is nothing special. For critical functions those are the same kinds of exhaustive tests we must write. Whether it is called a unit test or part of the IV&V test suite is immaterial. Generally it will be part of the IV&V suite, because most projects do want unit tests to run quickly so they can be part of an automated build cycle that completes in a reasonable time, as Jeff points out.

  190. @nigel
    > Test-driven design/test-driven coding

    I would not be an advocate of test-driven design. I think it neglects important parts of the software design process. However, TDD != Agile development; it is merely one instance of it.

    > but it is still hard to apply for some types of software (notably ones that are mostly UI) to assert universal applicability.

    This is mainly a matter of tools, and specifically the ability to perform IoC with mocking etc. to isolate unit tests. If you haven’t had the opportunity, Nigel, I recommend you take a look at angularjs from Google. It is a Javascript framework that is both very effective and declarative in principle, and it has unit testing at its very core. It makes unit testing GUIs, or Web GUIs anyway, really rather straightforward. I have become a big fan over the past year.

    IoC has become easier and easier over time in C#. It is a nightmare in many languages, though I think it is pretty easy in Python too, but I am a bit out of date on Python, having spent six months trying to love it and finding I hate it.
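
    As a minimal sketch of the Python side of that claim, here is IoC-style isolation done with constructor injection and the standard unittest.mock module (the class and method names are illustrative):

        import unittest
        from unittest import mock

        class ReportService:
            # The collaborator is injected rather than constructed inside,
            # so a test can substitute a mock for the real mailer.
            def __init__(self, mailer):
                self.mailer = mailer

            def send_summary(self, user):
                self.mailer.send(user, subject="summary")

        class TestReportService(unittest.TestCase):
            def test_send_summary_uses_mailer(self):
                mailer = mock.Mock()
                ReportService(mailer).send_summary("jessica")
                mailer.send.assert_called_once_with("jessica", subject="summary")

        if __name__ == "__main__":
            unittest.main()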

  191. @Jeff Read:

    I guess what I was trying to get across is that your domain of expertise has a lot of special circumstances under which a normal testing regime does not apply.

    I don’t think that’s true. See, e.g. Nigel’s comment below yours. In other words, it’s perfectly “normal” in many contexts.

    But for the vast bulk of software development — people like Jessica producing software applications on commodity hardware to serve business needs — test-driven development is an important part of making sure the software gets delivered on time with a minimum of bugs.

    The level of testing required per component is variable. I quoted Kent Beck to back this up. The required coverage can vary down to zero, or up to over 100%. (I say “over” in taking exception to Jessica’s comment that tests are always self-testing.) This is all true whether you are doing TDD or not. If I worked in a place that I imagine Jessica’s place to be like (not that it’s really like that, but…), I’d probably be looking for another place pronto.

    @Nigel:

    I’ll also point out that, as complex as your software may be, 200k lines of assembler is at best medium-sized. 200k of C++ is also only medium-sized.

    I agree that in the context of an entire system, this is true. However, it’s actually reasonably large for a single program that doesn’t use any separate libraries (in any relatively sane world), and the interoperability testing requirements are huge.

    This is the disconnect that low level devs make when extrapolating to actual huge software projects. The distinction between unit, integration and system tests is born from decades of experience in the development of huge DoD and NASA systems, many of which were still buggy or failed to complete despite testing by different organizations at different levels.

    Sure, but I don’t actually have that disconnect. The disconnect seems to be taking jargon that is designed for a full aircraft and applying it to smaller, software-only, homogeneous systems written by a single company.

    As context, your portion of the total system is probably just one CSC of a complex spacecraft design with custom chips, another couple hundred thousand lines of embedded real-time C/C++ flight and payload software on top of them, and around a million lines of code ground-side for spacecraft command and control and the science data pipeline.

    Agreed. But the spacecraft design is, though a much bigger effort, entirely analogous to the scenario where we are building and testing a chip using tools from dozens of vendors.

    Unit tests are done by the contractor. Integration tests are the responsibility of the prime. System tests are generally done by the sponsor. This structure is a function of the large number of different teams with different expertise, generally working for different companies, required to deliver the final product.

    And in that context, the distinction makes much more sense. But I would argue the real roots of the distinction are more technical than political (based more on expertise differences than employer differences). The problems you describe are those of integrating completely heterogeneous systems.

    Your multiplier test is nothing special.

    I agree completely. Or at least it shouldn’t be. I only brought it up to counter the arguments that “unit tests” are only a few LOC and are always much simpler than the unit being tested.

    For critical functions those are the same kinds of exhaustive tests we must write. Whether it is called a unit test or part of the IV&V test suite is immaterial. Generally it will be part of the IV&V suite, because most projects do want unit tests to run quickly so they can be part of an automated build cycle that completes in a reasonable time, as Jeff points out.

    And that’s where we make a distinction where I work between the full regression suite and the confidence test subset of the regression. If you check something in without running the confidence tests and it breaks things, you are in trouble. If you check something in after running the confidence tests and it breaks something unrelated, chances are the confidence suite will get an upgrade. The confidence test subset is designed to ensure that you don’t screw up things so badly that you cause a work stoppage for others, but it runs in minutes, while the full test suite might take days or weeks, even when distributed across dozens of machines.

    But in any case, the directed test that tests the multiplier is a low-level test that tests the smallest possible code unit that could be black-box tested, and that particular test can only possibly run fast enough if I only simulate the multiplier and not the full system. No matter which suite you place it in, you would be hard pressed to describe it functionally as testing integration or testing the system. No, to the extent that words mean what the average guy would expect, it’s testing the multiplier unit.

    Which gets back to my earlier point to Jeff and Jessica about jargon. I use unit test to mean “all the code that targets and stresses a single unit”, and Jeff and possibly Jessica and possibly you seem to use unit test to mean “what I run before I check stuff in.” Of course, one of the major reasons to test stuff before you check it in is to avoid impinging on other peoples’ work, so shouldn’t that be called an integration test? But no, integration test means something completely different.

    1. >Which gets back to my earlier point to Jeff and Jessica about jargon. I use unit test to mean “all the code that targets and stresses a single unit”, and Jeff and possibly Jessica and possibly you seem to use unit test to mean “what I run before I check stuff in.”

      I think Jeff and Jessica are speaking the language of JUnit, the Java test framework that seems to have popularized “unit test” as a term of art. JUnit’s angle on testing is suited to an environment in which lots of well-serialized transactions modify data representations that are relatively simple in structure, like relational databases. Because of this, the invariants you want to check are captured effectively by point tests at various positions in the data pipelines. Web dev and most business-process programming fit this model very well.

      Systems programming and the kind of bare-metal design you do, Patrick, do not fit this model. At all. Not only are the invariants difficult to check, they’re difficult to even describe. This is why you (and I) have to rely so much more heavily on end-to-end and regression testing. In our context, JUnit-style testing (Jeff’s sequences of asserts) has very limited utility and is only good for testing small components with functions that are easily isolated from the main flows of logic. The real action is elsewhere.

      I invest heavily in test code, but I don’t use the term “unit test” much at all. When I do, it’s usually because I’m trying to explain a test regime built on very different assumptions from JUnit’s in terms I think the listener may have an easier time understanding. Most of my test loads are built from problem reports. The typical sequence of events is: someone reports a bug, I capture or generate input that produces the bug, I fix the bug, then I build a test that checks that the problem input now produces correct output.
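
      Mechanically, that sequence boils down to a golden-file loop. A sketch in Python, under an assumed layout where each captured problem input tests/regress/NNNN.in sits beside the output NNNN.check now known to be correct (the tool name and the layout are illustrative, not GPSD’s actual scheme):

          import pathlib
          import subprocess

          def run_regressions(tool="./mytool"):
              failures = []
              for case in sorted(pathlib.Path("tests/regress").glob("*.in")):
                  expected = case.with_suffix(".check").read_text()
                  # Replay the captured problem input through the tool.
                  got = subprocess.run([tool], stdin=case.open(),
                                       capture_output=True, text=True).stdout
                  if got != expected:
                      failures.append(case.name)
              return failures

          if __name__ == "__main__":
              bad = run_regressions()
              print("FAIL: " + ", ".join(bad) if bad else "all regressions pass")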

      That is, what I rely on most heavily is regressionable end-to-end testing; this is conspicuously true, for example, in both GPSD and reposurgeon. But I sometimes call it “unit testing”, using a broader definition of “unit” rather like yours, because that’s the terminology people from a Java/web-dev/business-programming environment understand.

  192. @Patrick Maupin
    > try this magic elixir and it will fix all your problems!

    Funny, I don’t remember saying that. What I did say was that unit testing is half the battle in software development. However, perhaps I should explicitly qualify that. Unit testing matters only when robustness matters, or when scale is an issue. Of course you can write one-off data analysis programs without that methodology. I write a couple of little programs a week that go through and do some sort of data analysis. These programs neither scale nor are they robust. But they only run a few times on my machine, often in the debugger.

    As Jeff pointed out, I am talking about development of serious software. Small software isn’t hard to write. Big software is.
    > The problem isn’t tests that report failure. It’s tests that falsely report success.

    If you don’t know what your function is supposed to do you have a specification error, not a code error. Unit tests are for discovering code errors, they can’t read the mind of the designer. Other types of testing are for these sorts of things (and in particular, agile software with frequent deliveries to actual users is one of the best ways of finding specification errors).

    > Wow. Jaw drops. We live in completely different worlds. In my world, the test code is often 20 times the size of the code being tested,

    It doesn’t surprise me that you say that. I don’t know your situation, and maybe you are a special case, but most of the time when I see that it is because it is not being tested correctly. I mentioned this before — if you push data in and test everything at once you get an exponential, combinatorial expansion of test infrastructure complexity. However, if you isolate and test individual parts and then test the composition of these parts, your test infrastructure is additive in size rather than multiplicative.

    This is rarely done because many environments and tools make the isolation extremely difficult, which is why I have been banging on about the importance of support for inversion of control.

    But, perhaps your situation is unique or radically different than anything I have seen.

    > constraints for randomized vectors.

    FWIW, randomized data is usually a terrible way to test. Regression tests should be reproducible, randomization screws that up. Intelligent selection of boundary cases is what is needed. Similarly, whoever suggested exhaustive testing of a hardware multiplier — I think that is also pretty crazy. Perhaps you can do it for 16×24 bits, but how do you test a 128×128 bit multiplier? The best unit tests are the most minimal that provide complete coverage. (Which is of course a goal to aspire to, though it is largely unreachable.)

    Let me give you a simple example. One methodology I use sometimes, while not perfect, works fairly well.

    I pull up the module to be tested, and I scroll through. I put a breakpoint at every control point within the software: at the beginning of the functions, at all the if cases and loops, and so forth.

    Then I run my unit tests, and whenever I hit a breakpoint, I remove it. Once I am done, I scan through the code again, find all the breakpoints that were not hit, and add unit tests to exercise them. This misses some things (specifically unspecified else branches and complex short-circuited conditionals), but it is a good ghetto approach in my experience.

    After that you need a tool to perform real code coverage analysis.
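
    That breakpoint trick is essentially hand-rolled statement coverage. Purely as a sketch, here is a rough Python analogue using sys.settrace (real tools such as coverage.py do this properly; run_unit_tests and the module name are hypothetical):

        import sys

        TARGET = "module_under_test.py"  # hypothetical module being tested
        hit = set()

        def tracer(frame, event, arg):
            # Each executed line is, in effect, one of the breakpoints above.
            if event == "line" and frame.f_code.co_filename.endswith(TARGET):
                hit.add(frame.f_lineno)
            return tracer

        sys.settrace(tracer)
        run_unit_tests()  # hypothetical entry point for the test suite
        sys.settrace(None)

        # Crudely treat every physical line as a potential control point;
        # lines never hit still need a test (or are blanks/comments).
        total = len(open(TARGET).readlines())
        print("never hit:", sorted(set(range(1, total + 1)) - hit))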

  193. @esr:

    In our context, JUnit-style testing (Jeff’s sequences of asserts) has very limited utility and is only good for testing small components with functions that are easily isolated from the main flows of logic. The real action is elsewhere.

    Yeah, the first time I tried to employ JUnit-style testing, I wasted a bunch of time. I wanted to believe:


    As you read, pay attention to the interplay of the code and the tests. The style here is to write a few lines of code, then a test that should run, or even better, to write a test that won’t run, then write the code that will make it run.

    You will be able to refactor much more aggressively once you have the tests.

    Unfortunately, my preconceived mental concept of a unit test was already a much more comprehensive piece of code than that webpage contemplates (and than Jessica and Jeff are discussing). So while a suite of unit tests makes it easier to refactor at and below the level of the code the test is accessing, if you have a large suite of comprehensive unit tests, it actually makes it much more difficult to contemplate refactoring, never mind “aggressively”, at a higher level, for the simple reason that once you decide that your lower level primitives are not a good match for the overall program, now you have to throw out a whole lot of unit tests that you spent a lot of effort on.

    All the documentation I have seen for xUnit stuff is like this — it basically tries to take you from doing zero testing to doing a small amount of testing. It’s not written for the reader who already takes testing very seriously, and so it doesn’t have the tools to put that reader in the proper mindset to just do a small amount of testing down at lower levels. Basically, all the xUnit stuff says to the average programmer is “hey, you know how you manually fed some data into your new function to make sure it wasn’t total crap? Just add that same manual stuff to a repository so it gets run over and over.” So it’s unfortunate that they use the term “unit test” to discuss this very minimal testing.

    Since that time I have actually participated in a few situations where such TDD unit testing works (mainly programming sprints), so I have a glimmer of understanding of some of the value, and I have to say that IMO most of the value depends more on the size of the team than the size of the program. Unit tests can be a great way to get most of the benefits of design-by-contract in a more informal setting. The test becomes the contract, but it’s between friendly parties, so the fact that the contract is silent on a particular issue is irrelevant until it becomes clear by testing at a higher level that the developers using the unit and the developers writing the unit have a disagreement on how the unit should handle that issue. At which point somebody updates the unit test and the fun continues.

    But with only a few developers and a functioning program, the reward for reaching down into the bowels of the program to create a new test at the level of an internal interface, rather than just adding a new case to the full system test suite, can be somewhat limited, or even negative when you consider that, if you had spent that time on adding a system-level test, that test should, in fact, aid in making sure that even serious refactoring doesn’t cause problems.

    The typical sequence of events is: someone reports a bug, I capture or generate input that produces the bug, I fix the bug, then I build a test that checks that the problem input now produces correct output.

    Which, if they are being honest enough to admit that the simple sequence of asserts doesn’t always catch all bugs, is exactly what the TDD people do, as well. And is also exactly what Kent Beck says he rediscovered — have some input, and modify the code until the output is correct.

    The only question is — at what level of the system should the new test be written? My answer is usually “the highest level that is reasonable” because that makes the test actually test more stuff, and also makes it less likely that refactoring will break the test. As the multiplier example shows, though, sometimes the highest reasonable level is not very high at all. But I can understand that, if you’re working with a large team, or your code often needs debugging, or you’re not very good at debugging, testing in bite-sized chunks becomes more palatable.

    Personally for programs (even large ones) where I’m the only or one of a very few developers, I far prefer the occasional head-scratching debugging session over having to maintain a lot more lower-level tests.

  194. @Jessica Boxer:

    Unit testing matters only when robustness matters, or when scale is an issue.

    As a general guiding principle, I think we can agree on this.

    As Jeff pointed out, I am talking about development of serious software. Small software isn’t hard to write. Big software is.

    Yes, and chips are even harder.

    The problem isn’t tests that report failure. It’s tests that falsely report success.

    If you don’t know what your function is supposed to do you have a specification error, not a code error.

    This completely misses the point. I’m talking about a test that is complicated enough that it could easily have an error in it that the Mark I eyeball won’t spot very easily.

    Unit tests are for discovering code errors, they can’t read the mind of the designer.

    Yeah, but discovering code errors can be unimaginably difficult, especially when, for example, the code in question is a complicated state machine written in Verilog.

    Other types of testing are for these sorts of things (and in particular, agile software with frequent deliveries to actual users is one of the best ways of finding specification errors).

    From the time we decide that we actually have a chip design ready (which is a process that involves perhaps 10 distinct disciplines) until we can hold the chip in our own hands is on the order of $700K and 6 weeks. (Said chip at that point might not even work even if there are no design problems — it hasn’t been through the tester yet to ensure there are no fabrication or packaging problems.) If it is a brand new chip, it’s probably at least another two months until we could hand it to a customer. For a respin, getting it to a customer might be only an extra two weeks. This means that (a) we do a practically indescribable amount of testing (because the cost of error is extremely high), and (b) no, we’re not delivering new chips to customers every week.

    Wow. Jaw drops. We live in completely different worlds. In my world, the test code is often 20 times the size of the code being tested,

    It doesn’t surprise me that you say that. I don’t know your situation, and maybe you are a special case, but most of the time when I see that it is because it is not being tested correctly.

    Ah, yes, the old “you must be doing it wrong.” As I explained in my post to ESR, there is a disconnect between what agile people call a unit test, and an actual comprehensive unit test, and I learned this the hard way when I first encountered xUnit. In that instance, I obviously was doing it wrong. But in my normal work environment, the cost of failure is high, so the tests are more comprehensive than the ones the agile people typically write.

    I mentioned this before — if you push data in and test everything at once you get an exponential, combinatorial expansion of test infrastructure complexity. However, if you isolate and test individual parts and then test the composition of these parts, your test infrastructure is additive in size rather than multiplicative.

    True. But when you put the chip together, you get an exponential, combinatorial expansion of possible bugs, although I was actually talking about the unit tests themselves. Those are typically much larger than the units (and some of the units are quite small by any measure.)

    This is rarely done because many environments and tools make the isolation extremely difficult, which is why I have been banging on about the importance of support for inversion of control.

    Verilog was actually designed as a test language, and later became an implementation language. Synthesizable Verilog (that goes into a chip) is strictly hierarchical, with well-defined interfaces. You could almost think of it as each module being a class, and the only thing that a class constructor can do is instantiate sub-modules and define the relationship (wires) between the internal module code and the sub-module code. As far as “control” goes, for the synthesizable stuff, the best analogy I can give you is event-driven programming. If you consider a transition on the CPU clock to be an event, then there are a gazillion listeners on that event that read the current state and produce a new state. That’s just how it works, and no wishing it to be different is going to help get a chip out. But the non-synthesizable testbench code can inspect and modify any value in the system at will, and it is extremely easy to write white box monitors that can generate errors on any arbitrary set of conditions. Testing a module is not typically as easy as with software, however, because you don’t just call the module. You instantiate the module, and then clock source data in and destination data out.

    But, perhaps your situation is unique or radically different than anything I have seen.

    If you had a catastrophic failure that meant nothing worked, and you had already done the work to figure out the problem and had already recompiled and tested your program with the fix, would it cost you $700K and take 8 weeks before you could get it into your customers’ hands? That’s a worst case number. If we only need to change a couple of top layers and we have had the foresight to stockpile partially fabbed chips that didn’t have those layers etched, it might be more like $100K and 4 weeks.

    Yes, in my world, we do the equivalent of patching a program using a hex editor and then handing it out to the customer, rather than recompiling it, because that could save over half a million dollars and a month of time.

    FWIW, randomized data is usually a terrible way to test.

    No, it’s not. It’s an extremely useful adjunct to the directed testing we do. But, as I described, even the randomized data is constrained (directed). But don’t take my word for it:


    UVM test benches are complete verification environments composed of reusable verification components, and used as part of an overarching methodology of constrained random, coverage-driven, verification.

    Regression tests should be reproducible; randomization screws that up.

    No, it doesn’t. Since all “randomized” tests are really pseudo-random, all you have to do is save the seed with the test results and you can reproduce at will.
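
    For instance, a minimal sketch in Python (the unit under test here is a stand-in): log the seed with the results, and any failing run can be replayed bit-for-bit.

        import random, time

        def multiply_under_test(a, b):      # stand-in for the real unit
            return a * b

        def run_random_test(seed=None):
            # Use a supplied seed to reproduce an old run, or pick a fresh one.
            seed = int(time.time()) if seed is None else seed
            print("test seed:", seed)       # archive this with the test results
            rng = random.Random(seed)
            for _ in range(1000):
                a = rng.randrange(1 << 16)
                b = rng.randrange(1 << 24)
                assert multiply_under_test(a, b) == a * b, (seed, a, b)

        run_random_test()   # rerun as run_random_test(seed=...) to reproduce a failure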

    Intelligent selection of boundary cases is what is needed.

    We do a lot of that.

    Similarly, whoever suggested exhaustive testing of a hardware multiplier — I think that is also pretty crazy.

    I didn’t test the multiplier exhaustively. That would have been 4.4 trillion operations, which probably would have taken two months in the simulator. Instead I used a combination of randomized data and what you suggested — intelligent selection of boundary cases. However, to do that case selection properly required a program. For example, let’s try all zeros, except a 1 in each bit place in succession. Now let’s try combinations of two ones. Then three. Then do the same with all ones and combinations of zeros. Don’t forget all zeros and all ones. Calculate how big the test vector will be. OK, that shouldn’t take too long; let’s do all combinations of 4. That looks like it should only take a few hours, so let’s feed it to the unit. That’s my unit test, which was complicated enough that I felt compelled to check that it was doing the right thing (testing the test). Combined with the custom test harness to use verilator rather than Cadence’s simulator (because the multiplier was pure combinatorial logic and verilator is much faster than the sign-off simulator), that made it much bigger than the unit it was testing.
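
    That kind of generator is easy to sketch. Something like this in Python (the widths and the k=4 cutoff follow the description above; everything else is illustrative):

        from itertools import combinations

        def walking_patterns(width, max_ones=4):
            # All-zeros, all-ones, and every pattern with up to max_ones
            # bits flipped against each background.
            yield 0
            yield (1 << width) - 1
            for k in range(1, max_ones + 1):
                for bits in combinations(range(width), k):
                    p = sum(1 << b for b in bits)
                    yield p                        # k ones in a field of zeros
                    yield ((1 << width) - 1) ^ p   # k zeros in a field of ones

        # Calculate how big the test vector will be before committing to the run.
        n_a = sum(1 for _ in walking_patterns(16))
        n_b = sum(1 for _ in walking_patterns(24))
        print("operand patterns: %d x %d = %d cases" % (n_a, n_b, n_a * n_b))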

    You say that if the test code’s bigger, I’m probably doing it wrong. I say that this is but a single example where I caught a major bug and prevented a costly respin, a catch that wouldn’t have happened with a smaller test.

    Perhaps you can do it for 16×24 bits, but how do you test a 128×128 bit multiplier?

    You’re catching on. You wouldn’t even do it for 16×24 bits unless you had a lot of time. But you’d be a fool to test even a 16×24 bit multiplier with manually coded cases. (More to the point, we’d have a broken chip if I had done that.)

    The best unit tests are the most minimal that provide complete coverage. (Which is of course a goal to aspire to, though it is largely unreachable.)

    On that we’re largely in agreement, although I will say that I seriously suspect that, on the actual chip itself, we both try for, and achieve, coverage that is a lot closer to 100% than you do on your program.

    I pull up the module to be tested, and I scroll through. I put a break point at every control point within the software. At the beginning of the functions, at all the if cases and loops and so forth.

    I don’t think I’ve used a debugger in 8 years. Waveform viewers, yes. Debuggers, no. (Although I’m about to start, because I’m going to have to write some C code that runs on the fancy new chip we just built.)

    Minor correction: Two years ago, I interfaced with GDB and used it to show some of our software guys how to use the JTAG remote debug interface that I wrote for the Leon core. They use GDB through my interface quite regularly now, because Gaisler Research got too big for their britches and started charging way too much money for their debug interface. Interestingly, my Python code works a heck of a lot faster than Gaisler’s C code for the JTAG interface.

    Then I run my unit tests, and whenever I hit a breakpoint, I remove it. Once I am done, I scan through the code again, and find all the breakpoints that were not hit, and add unit tests to exercise them. This misses some things — specifically unspecified else branches, and complex short-circuited conditionals — but is a good ghetto approach in my experience.

    We have coverage tools that will tell us exactly which lines were executed in simulation.

    After that you need a tool to perform real code coverage analysis.

    Yeah, I’m not sure why you would start in the debugger, but even when I was a full time software developer, I seldom used a debugger. Except when I was writing Windows 3.1 display drivers, where the debugger made up for the paucity of the documentation.

    Back in 1983, I was writing assembly code for Z80-based protocol converters (which allowed ASCII terminals such as VT100s to pretend they were IBM 3270 or 5250 terminals). I did a major refactoring of the 3270 code in the first 6 months I was there, and after that it was all adding new features and working on new products, etc. Anyway, after I’d been there about a year and a half, I had a particularly pernicious bug. I asked my boss if he had time to show me how to use one of the Z80 emulators. He was shocked. But everything had been fine up until then…

  195. Symbolics LISP (now Zeta LISP) let you program right down to the bare metal. The Common Lisp implementation came later. The essential parts of the Genera system are still written in Zeta Lisp. Common Lisp isn’t up to the job.

    Then how do you account for the fact that modern CL implementations routinely contain in-memory compilers and assemblers — written in themselves? CL is just a language spec that delineates what all CL implementations must have in common; if you commit to developing in SBCL, you can leverage the superset of CL that the implementation gives you. newLISP is not special in this regard; it is a single implementation of an ad-hoc Lisp dialect. And it can’t even compile itself, nor is it even compilable in the general case.

    I think Jeff and Jessica are speaking the language of JUnit, the Java test framework that seems to have popularized “unit test” as a term of art.

    Extreme Programming is what popularized “unit test” as a term of art. JUnit was developed to facilitate XP in Java. It and its siblings NUnit and CppUnit all came after XP hit the mainstream and people started recognizing unit tests as important.

    XP is the primordial Agile software methodology; the others were derived by removing things from XP and/or incorporating agile or JIT processes from other industries (e.g., kanban).

    Systems programming and the kind of bare-metal design you do, Patrick, do not fit this model. At all. Not only are the invariants difficult to check, they’re difficult to even describe. This is why you (and I) have to rely so much more heavily on end-to-end and regression testing. In our context, JUnit-style testing (Jeff’s sequences of asserts) has very limited utility and is only good for testing small components with functions that are easily isolated from the main flows of logic. The real action is elsewhere.

    I’ve noticed this as well. NetBSD’s filesystem code tests, for instance, work mainly by checking that filesystem images mount and unmount cleanly and that file operations on the mounted fs succeed. Some, but not all, of the fs modules have more fine-grained tests.

    The funny thing is that XP does not account for this — at all. It’s got the same cult-like whiff I described as attending the newLISP community. According to XP, you are either doing all of XP — pair programming, TDD, the planning game — or you are putting your software project at risk of failure. Any code that is written outside this framework is, by definition, broken. Some account is made for “spike solutions” that are created outside the XP methodological framework, but the rules of XP are very strict about discarding these and never, ever letting them into production. This does not account for the vast quantities of code that has lasted for decades, that wasn’t written against a set of unit tests with two people to a terminal, etc. For that reason I have always viewed XP with a fair bit of suspicion, though it has noble goals.

    A while back you wrote of how Agile methodologies are just codifications of hacker best practice. It may be that a cult-like mentality is what was necessary to reprogram legions of non-hacker professional programmers stuck in a local maximum of keeping crufty enterprise systems held together with duct-tape and chewing gum to adopt the practices that have long been subconsciously ingrained in the hacker mindset. There may be some deep lessons here that can be applied to the structure and purpose of actual cults including Christianity.

  196. @Jeff Read:

    > Extreme Programming is what popularized “unit test” as a term of art.

    Yeah, I get it. Meanings change when a particular use becomes widely adopted, e.g. “hacker”. But “unit test” had a well-known, rational, and accepted meaning well over a decade before the existence of XP. Here’s but one example of that.

  197. @Jeff Read
    > Systems programming and the kind of bare-metal design you do, Patrick, do not fit this model.

    Let’s not confuse the territory and the map here. The only reason that might be true is because you don’t have the right tools to do it. If you are building something low level like a file system, or a task scheduler, for example, you can certainly unit test it. You just need to have the mechanisms to isolate the pieces to make them testable. You can’t entirely test it by unit testing; eventually you do need an end to end test in the right execution context. However, you can certainly isolate the parts that, for example, lay out files on a disk, or determine which task will run next, by hiding the disk or task infrastructure in a mocked substitute.

    Now that isn’t to say that you necessarily have the tools to do that. But those tools can certainly be made. So it is not that system programming has some irreducible complexity that can’t be unit tested. It is just a tools issue.

    I forgot Patrick does chip layout, and I don’t know enough about that discipline to be overly aggressive, but I will say I am extremely dubious that the same techniques can’t be effective there. Complexity can be reduced by testing separate parts of it separately. And Patrick telling me that his unit tests are super complicated just tells me that the tests need to be broken down into smaller parts. But again, I may be entirely wrong about that particular area; I don’t know much about it.

    Of course, you can’t unit test EVERYTHING, unit testing is not sufficient. That is why you have things like system tests, and integration tests. They are different beasts. But most bugs can be found in unit tests, and they are crucial for the ability to make software that is refactorable.

    The big buzz I hear in soft dev is the concept of “testability”, that is, writing your software in such a way that it is readily testable. I think it is a good buzz. Bugs are a pernicious thing, and when you are down at the bare metal they are even more nasty, even harder to find, and very subject to caprice. All the more reason to use every tool software development has to eliminate them as soon in the dev cycle as possible.

    Which is to say, I think the idea that unit testing is not suitable for bare metal code is exactly the opposite of true. It is needed there more than anywhere else.

    I’m sure this isn’t a popular opinion with this particular crowd. But it is what I see, and I have written code from the bare metal embedded system level all the way up to fluffy web sites.

    1. >So it is not that system programming has some irreducible complexity that can’t be unit tested.

      Actually, I think it is exactly that, for definitions of “unit test” that center on JUnit style assertions. As you broaden the definition of “unit test” the irreducibility reduces. If you broaden the definition enough, your implied claim will become true, but only in a trivial sense.

      GPSD is a good case study for this. It’s pretty typical outside-the-kernel systems work. So let me tell you where I do use unit tests and where I don’t. And why.

      I have a baby JSON parser, written in C and using only static memory, that’s used two places in the toolkit. Once in the client library to parse JSON reports from the daemon; once in the daemon to parse requests from clients. I built some classically JUnit-style unit tests, in C, to verify it. They work well. Occasionally, when I made incautious changes, they’ve caught a bug.

      Another place I unit-test is the packet-getter – the state machine that recognizes GPS packets in an incoming byte stream. Again, this works well.

      A third is for my macros for doing endianness-independent bitfield extraction from incoming binary packets. That works well too.

      What these cases all have in common is that their interface to the rest of the toolset is very narrow and very serialized. On the other hand, there are other subsystems that I can’t unit test. The dispatcher layer, for example – the upper level of the daemon that handles accepting socket connections, managing session-specific storage for each client, and activating/deactivating sensor devices as required.

      This code can’t be tested with JUnit-style assertions because it can’t be isolated. It’s entangled with the packet getter, the drivers, and the main select loop. If I tried to mock the layers beneath it, the mockup would have so much internal complexity that I could never be sure that it was occupying a region of state space anywhere near the code it’s supposed to be mocking. “Better tools” cannot solve this kind of problem. The complexity here really is irreducible; the Halting Problem comes around and bites you on the ass.

      That doesn’t mean this code is untestable, just that you can’t identify anywhere near a complete set of invariants and then express them as a set of point assertions a la JUnit. No. This is where you, necessarily, go to end-to-end testing. That is, known inputs checked against known-good outputs.
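
      In outline, that kind of end-to-end check is just this (the command line and file layout here are invented for illustration, not GPSD’s actual regression machinery):

          import subprocess, sys

          def regress(logfile, goldfile):
              # Replay a captured sensor log through the daemon and diff the
              # output against a known-good capture.
              result = subprocess.run(["./daemon-under-test", "--replay", logfile],
                                      capture_output=True, text=True, timeout=120)
              expected = open(goldfile).read()
              if result.stdout != expected:
                  sys.stderr.write("regression failure: %s vs %s\n" % (logfile, goldfile))
                  return False
              return True

          if __name__ == "__main__":
              ok = all(regress(log, log + ".chk") for log in sys.argv[1:])
              sys.exit(0 if ok else 1)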

  198. @Jessica Boxer:

    @Jeff Read
    Systems programming and the kind of bare-metal design you do, Patrick, do not fit this model.

    Let’s not confuse the territory and the map here.

    Let’s not. Let’s also not forget that it was esr who said this, and that he wasn’t railing against unit tests in this context.

    The only reason that might be true is because you don’t have the right tools to do it. If you are building something low level like a file system, or a task scheduler, for example, you can certainly unit test it. You just need to have the mechanisms to isolate the pieces to make them testable. You can’t entirely test it by unit testing; eventually you do need an end to end test in the right execution context. However, you can certainly isolate the parts that, for example, lay out files on a disk, or determine which task will run next, by hiding the disk or task infrastructure in a mocked substitute.

    Sure, you can write some unit tests, and in fact nobody was arguing that you can’t.

    Now that isn’t to say that you necessarily have the tools to do that. But those tools can certainly be made. So it is not that system programming has some irreducible complexity that can’t be unit tested. It is just a tools issue.

    I have better tools for this than any of you software guys do. I can cause an interrupt to appear at the exact CPU cycle I want it to.

    I forgot Patrick does chip layout

    Actually, I don’t. “Layout” is the term used for actually getting the geometrical shapes in the right place. Like PCB layout, only for something much smaller.

    I will say I am extremely dubious that the same techniques can’t be effective there.

    Which techniques? I already explained we do a lot of testing, including unit testing, and no, we can’t do the XP thing of releasing a chip a week.

    Complexity can be reduced by testing separate parts of it separately.

    And we do that all the time.

    And Patrick telling me that his unit tests are super complicated just tells me that the tests need to be broken down into smaller parts.

    I didn’t say they were super complicated, just that the test code for a unit is typically larger than the unit. And I gave a reasonable example of why that is, including data that I’m sure you could turn into pseudocode (for the multiplier test), should you care.

    Of course, you can’t unit test EVERYTHING, unit testing is not sufficient. That is why you have things like system tests, and integration tests. They are different beasts.

    Now this is where we differ. You see different beasts, and I see a continuum. (Actually, I do see different beasts when the system includes more than just my chip, as per my discussion with Nigel. But I digress.) The things that constrain my tests are which platform they run on (simulator, emulator and/or real silicon, emulator and/or real silicon including firmware, real silicon including analog portion, etc.), and whether they are regressionable (shorthand for “able to be run in an automated test regression suite”).

    But most bugs can be found in unit tests, and they are crucial for the ability to make software that is refactorable.

    I agree that most bugs can be found in unit tests. But for chips, this is only true if we write what you consider to be over-complicated, insane unit tests. Which we do, because otherwise the simulation time goes through the roof. This gets to the crux of the definitional issue. What I consider to be a “unit test” is something you consider to be too complicated.

    And if I write one of MY unit tests, it actually makes refactoring ABOVE the unit a lot more difficult, because I have to spend the time and energy to rewrite the test. The multiplier is a good case in point. There already was a simplistic unit test, that passed. I looked at the code and decided it was far too ugly to accept a test with only a few dozen cases, so I wrote a program that automated the generation of cases. But in either case, that unit test turns out to be worthless if we realize we can be clever and do it without a multiplier. And I’ve wasted more time than you, because I’ve written a more comprehensive test.

    The big buzz I hear in soft dev is the concept of “testability”, that is, writing your software in such a way that it is readily testable. I think it is a good buzz. Bugs are a pernicious thing, and when you are down at the bare metal they are even more nasty, even harder to find, and very subject to caprice. All the more reason to use every tool software development has to eliminate them as soon in the dev cycle as possible.

    Mom and apple pie. And the chip companies were there way ahead of most non-military software folk. In fact, we have two kinds of testability — the kind you think about, and the kind that says that when we are churning the chips out, the time it takes to scan for manufacturing (as opposed to design) defects is measured in milliseconds.

    Which is to say, I think the idea that unit testing is not suitable for bare metal code is exactly the opposite of true. It is needed there more than anywhere else.

    Which is why we do it. A lot. On everything going into the chip.

    Where I don’t necessarily do it is in some of the test system components. Now, I know you think that test code should be easy, so it should go without saying that this is true, but to us, sometimes it means writing scripts to control several different instruments with an aggregate cost over a million dollars, to test different analog performance characteristics. So sometimes the test code does get its own tests. But that’s decided on a case-by-case basis.

    I’m sure this isn’t a popular opinion with this particular crowd. But it is what I see, and I have written code from the bare metal embedded system level all the way up to fluffy web sites.

    I think you’ve completely missed the point and mischaracterized my writings.

  199. This code can’t be tested with JUnit-style assertions because it can’t be isolated. It’s entangled with the packet getter, the drivers, and the main select loop. If I tried to mock the layers beneath it, the mockup would have so much internal complexity that I could never be sure that it was occupying a region of state space anywhere near the code it’s supposed to be mocking. “Better tools” cannot solve this kind of problem. The complexity here really is irreducible; the Halting Problem comes around and bites you on the ass.

    And this sort of complexity is where constrained randomization comes in handy in simulation. When you have two bus masters on the CPU (one for data and one for code), and 20 bus masters for peripheral DMA, and a bus arbiter that is designed to simultaneously set priorities yet not let any peripheral starve, it’s not useful to run exactly the same 30 minute DMA/arbiter test on every regression, but it is extremely useful to make subtle changes in the order things happen so that if there is a bug in the arbiter or one of the DMA engines it stands a better chance of being caught.

    The next step, of course, is getting the simulation to update the tests itself.
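
    Reduced to a toy Python sketch (the traffic model here is invented, nothing like our actual testbench), “constrained random” just means: only legal transactions, but a different legal interleaving per seed.

        import random

        def constrained_random_traffic(seed, n=10000):
            rng = random.Random(seed)   # save the seed; any failure replays exactly
            masters = ["cpu_code", "cpu_data"] + ["dma%02d" % i for i in range(20)]
            for _ in range(n):
                m = rng.choice(masters)
                # Constraints: CPU masters never burst; DMA bursts are legal sizes only.
                burst = 1 if m.startswith("cpu") else rng.choice([1, 4, 8, 16])
                yield (m, burst)

        for txn in constrained_random_traffic(seed=42):
            pass   # drive the simulated bus arbiter with txn here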

  200. IMHO TDD / write-tests-first gets the human creative process completely wrong. We typically begin projects with open-ended experimentation, proofs of concept, and snippets, to see whether our ideas are doable and, if so, whether they solve the original problem. In this phase it does not pay to get too precise and try to nail everything down exactly; it is a phase for being in the “flow” and flying high, fast and inaccurate, and focusing on the big picture. Later on, when we have seen what is doable and how, and begin working out and nailing down the details and thinking about the corner cases, is the time for tests.

    Maybe your experience differs. My experience of software development projects is that they begin as flying free and high, mapping uncharted terrain and enjoying the rush of discovery, and end as rather boring repetitive work: making sure that everything that was achieved, and kinda works in concept but is still fragile, actually always works. Tests seem to belong to this later phase.

    I think the main utility of automated tests is to prevent changes from breaking something that worked before. If you flagged an idea as a potential problem source and put it into a test, you probably wrote code that passes it. It’s usually later changes that break it, when the flag is forgotten or the number of flags is too big to manage.

    1. >IMHO TDD / write-tests-first gets the human creative process completely wrong.

      Agreed. The disconnect is large enough that I doubt any significant percentage of development groups actually does “tests first” – I think it’s mostly a spoken piety, ignored in practice.

      That said, having a good test suite beginning at a slightly later phase in the process is, in combination with decent VCS, creatively liberating. Good regression tests reduce the risk and cost of experiments hugely, so we can do more of them.

  201. IMHO TDD / write-tests-first gets the human creative process completely wrong.

    TDD is not designed as a framework for the creative process of individual humans. Remember, the goal is to get you functioning as a member of a team — a single node in a distributed cluster which is engaged in some creative activity as a whole. :)

  202. @esr
    > What these cases all have in common is that their interface to the rest of the toolset is very narrow and very serialized.

    Indeed these are the easiest types of things to test, and they are all excellent candidates.

    > This code can’t be tested with JUnit-style assertions because it can’t be isolated.

    And that is precisely the problem: your tools make this isolation difficult. (It is also a design thing — writing testable code does require code to be written a little differently than code you were not planning to test.)

    I don’t know your code but I imagine you have a socket select in an infinite loop waiting for connects which you then dispatch. You would refactor out the dispatch part, that is to say the infinite loop would simply call a wait_for_connect function. You might do this already, but for the sake of argument, let’s refactor it out. You can then put in place something to test that the loop keeps going between connection return values.

    So the code is at three levels:

    for(;;) -> wait_for_connect -> dispatch_connect

    We test each of these levels with separate unit tests, and mock the surrounding levels to facilitate that.

    You then would mock the select to return the different values that you need to handle, and mock the part where you actually do the dispatch. This allows you to test the different types of return from the select are processed correctly. Finally you would break down and test the dispatch itself. This gives you a highly isolated set of tests that test function level components without entangling everything else.

    Of course this is REALLY hard to do in C. But that is my point. Mocking frameworks in my world work like this: during test setup you say “mock this interface; when you get called with the following values, return these corresponding values.” This is very easy to do with various mocking frameworks that use reflection to read the underlying types. That means you don’t actually model the mocked objects; you simply have them as a map from inputs to outputs. Which is all you need, because you aren’t testing the mocked part, you are just testing that it is called correctly.
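
    In a language with such a framework, the whole thing is a few lines. For example, with Python’s unittest.mock (the function names are the hypothetical ones from above):

        from unittest import mock

        def serve_forever(wait_for_connect, dispatch_connect):
            # The refactored loop: all real I/O hides behind the two injected functions.
            while True:
                conn = wait_for_connect()
                if conn is None:                 # shutdown sentinel
                    return
                dispatch_connect(conn)

        def test_loop_dispatches_each_connection():
            fake_wait = mock.Mock(side_effect=["c1", "c2", None])   # two clients, then stop
            fake_dispatch = mock.Mock()
            serve_forever(fake_wait, fake_dispatch)
            assert fake_dispatch.call_args_list == [mock.call("c1"), mock.call("c2")]

        test_loop_dispatches_each_connection()
        print("loop dispatch test passed")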

  203. @Shenpen
    >IMHO TDD / write-tests-first gets the human creative process completely wrong.

    I agree with this too, as I think I have said above. However, there are places where it does work, specifically in mature products like GPSD where there really aren’t any new features, just tweaking to add other devices. And that is why I am a fan of the approach Eric says he has, namely add a test to his suite that breaks it, then fix it. This is a pretty pure form of TDD. However, if you need to make significant architectural changes, that does not work well at all.

  204. @Shenpen:

    > IMHO TDD / write-tests-first gets the human creative process completely wrong.

    Yes. That’s not what it’s about.

    @Jeff Read:

    > TDD is not designed as a framework for the creative process of individual humans.

    This.

    @Jessica Boxer:

    > However, there are places where it does work, specifically in mature products like GPSD where there really aren’t any new features, just tweaking to add other devices.

    Yes. But it’s a chicken and egg problem. It’s all well and good to write a test that doesn’t work, and then fix the code to make it work. But if you don’t already have a comprehensive test suite, your change still has an excellent chance of screwing something else up.

    TDD is designed to ensure that you have that comprehensive test suite, at any point in the timeline of the project where it might be needed. As I pointed out in a comment farther up, and as Jeff Read put much more succinctly, it’s not really designed for the solo practitioner. When a single programmer is allocating his time, it’s often very effective to write a little and test a little, but those tests are often not the kind that wind up in the repository, and it’s write a little, test a little, not write a little test, then write a little code. Most dynamic languages make this development pattern easy, and the big tests come later (or, in some cases, of course, you can just Run The Damn Program(TM) because the only real test is the data that you need to process).

    When a large team is programming, TDD is extremely useful as a concise recorded communication channel between the programmers. Most of the important design decisions wind up documented (and maintained!) in a self-executing form.

    But this only works well if the entire team follows it, hence the whiff of religion noticed by Jeff.

  205. Ted Walther wrote “bare metal […] Common Lisp isn’t up to the job.”

    Jeff Read wrote “Then how do you account for the fact that modern CL implementations routinely contain in-memory compilers and assemblers — written in themselves?”

    You two may be talking past each other to some extent.

    (Caution: What follows is more than some readers will want to know about the lowest levels of high-level software system implementation. :-)

    It’s a common situation in bootstrapping system software that there’s some little central nugget of code that implements an abstraction and necessarily has to get by without depending on that abstraction itself. E.g., in assembler, there may be routines that implement the very basics of reading stuff from filesystems in mass storage, and even if the architecture of the rest of the system assumes that stuff can be read as files, those routines need to be careful about that assumption (and generally can’t themselves live on disk, but e.g. in a ROM somewhere).

    In C, your ordinary code not only in apps but in much of the kernel makes assumptions about memory flatness and access permissions and so forth, but your code that actually twiddles the bits on the MMU to cause those assumptions to work for everyone else needs to be careful about making those assumptions itself.

    Ordinary CL code is permeated by assumptions about GC and indirection and relocation on the heap (in ways that can be tricky to absolutely eliminate through e.g. lack of guarantee that your control structure isn’t in some corner case magically implemented as a higher order function calling a lexical closure). Then since CL has less of a portable-assembler character than C, it’s not always so easy when implementing an ultra-low-level feature to be sure that the compiler isn’t going to do something charmingly high-level-Lispy for you that will cause the world to explode when the implementation of a feature that it uses ends up depending on the existence of that feature.

    This is often nearly a nonproblem for Jeff Read’s examples of CL compilers and assemblers: they are generating code which will run at some point in the future, and the separation in time tends to break this loop. However, it is nonetheless a real problem in other subsystems of CL systems, and it’s a good part of the reason that the garbage collector in particular in SBCL contains little 4KLOC snippets of C code like https://github.com/sbcl/sbcl/blob/master/src/runtime/gencgc.c .

    That said, it is not an insoluble or even terribly difficult problem by any means, and “CL isn’t up to the job” is sort of misleading: the zealous support for C in modern OSes makes it not very tempting, but it would be straightforward to write such things in a variant dialect of CL, and the variant would be basically about guaranteeing that tricky stuff never leaks into the compiled implementation, not about adding any new features or abstractions. (In practice it might be done with a reduced vocabulary and a single new compiler directive.) It’s analogous to arranging (in typical C or assembler OS code) that certain core memory management code will itself absolutely never pagefault: it’s less informative to say that “C/assembler isn’t up to the job” than it is to say that “you will probably have to use some special directives/pragmas/self-imposed restrictions” (to arrange placement/alignment/loadorder guarantees needed only by such subbasement code) that are so arcane and seldom used that they’re likely to be clumsy afterthoughts in the language/library/toolchain system.

  206. Agreed. The disconnect is large enough that I doubt any significant percentage of development groups actually does “tests first” – I think it’s mostly a spoken piety, ignored in practice.

    What value there is in “test first” comes from its approximation of “automate the verification of your usage”. You might begin with a script that runs whatever *cough* unit you’re testing and prints the output. Then you vgrep stdout for verification. This is error prone, but really useful for exploration.

    Throw simple assertions into the mix, and you get an exit code representing overall success or failure. Evolve it further into structured tests, or BDD, or whatever. The benefit at the end is automated usage, automated verification. “Test first” is the autist’s dogma.
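
    Reduced to Python (the unit under test here is a toy), the whole evolution fits in a dozen lines: the asserts replace the vgrep, and the exit code is the verdict.

        import sys

        def parse_pair(s):                      # the toy unit under test
            a, b = s.split(",")
            return (int(a), int(b))

        def main():
            failures = 0
            for raw, expected in [("1,2", (1, 2)), (" 3 , 4 ", (3, 4))]:
                got = parse_pair(raw)
                if got != expected:
                    print("FAIL: %r -> %r, wanted %r" % (raw, got, expected))
                    failures += 1
            sys.exit(1 if failures else 0)      # automated verification, no eyeballs needed

        if __name__ == "__main__":
            main()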

  207. There is one case where I think “test first” follows the creative process, namely bug fixing. First, write test for bug and check that it fails (perhaps marking it as expected failure, if test framework allows it) – if it doesn’t, it is wrong test. Then write bugfix, and check that it passes test (marking it as expected success). Keep test.
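
    Stock frameworks support exactly this. For example, in Python’s unittest (the function and the bug here are invented for illustration):

        import unittest

        def apply_offset(n):        # stand-in with the bug: negative offsets are accepted
            return n

        class Bug1234(unittest.TestCase):
            @unittest.expectedFailure           # remove once the fix lands; keep the test
            def test_negative_offset_rejected(self):
                with self.assertRaises(ValueError):
                    apply_offset(-1)

        if __name__ == "__main__":
            unittest.main()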

  208. @Jakub Narebski:

    I don’t think anybody here would disagree with that approach. But people were doing that decades before TDD was a thing. TDD says you do that for all new features, not just fixes.

    At work, the Verilog designers won’t fix a bug until they have recreated it in a failing simulation, and that’s how it’s been forever. Most of them have probably never even heard of TDD.

  209. @Jeff Read:

    > http://thedailywtf.com/Articles/But-the-Tests-Prove-it-Works-Correctly!.aspx

    Yeah, those people were insane for not testing against the database when they could. But for some reason I wasn’t able to communicate my position to Jessica, and this example may be a good vehicle for trying again: what if you physically couldn’t connect, not just to the production database, but to any database, until an additional 6 weeks and $700K of NRE after absolutely all the programmers were finished? (And if any bugs found meant you got to repeat the cycle?) What would the mocked code in your test bench look like then? Would it maybe be a bit bigger? Is it possible that it might get complicated enough to deserve its own tests? That, even though it’s not your job to test the database, you might have to write a test of your simulation of the database?

    And what level should you test at? “That should be a system-level test” paraphrases some of what I read. Which is often true. But what if your tests ran five or more orders of magnitude slower than the real world? Would that affect the level you tested at?

    To a first approximation, Jessica’s comment that this is a tools problem is absolutely correct. But, at least until we have quantum computing, there is a metaphysical problem that a simulation of a Pentium running on a Pentium is not going to run as fast as a real Pentium, and is not going to physically connect to the real world in the same way as the real Pentium.

  210. @Matthew
    > What value there is in “test first” comes from its approximation of “automate the verification of your usage”.

    My understanding of the motivation of TDD is that of minimalism. The premise is that programmers write huge architectural nightmares based on less than obvious criteria. TDD insists that any change you make to the program is traceable back to an actual feature request. This is done in a very aggressive way — test the feature — oh look it doesn’t work, now put in the minimum amount of changes necessary to make it work.

    There is a lot of value in that thought, but I don’t think it translates well, primarily because the transaction costs associated with significant architectural refactoring are pretty high, even with fantastic tools like Visual Studio. Consequently, a little forward planning is valuable, something that is discouraged in TDD.

    Again, I am not an advocate of it, but I think there are things to learn from the methodology, for example too much code is a bad thing, and if you think you should make complex architectures to support future unknown features, you will probably be proved wrong.

  211. @John D. Bell, @esr:

    >why didn’t you just exhaustively use and promote Lisp, instead of Python?

    >>Because Lisp doesn’t have implementations that are truly production quality for the kind of work I need to do. The range of library support in Lisps is very weak – batteries are, as Python people like to say, not included.

    I just wanted to point out that there is a project called Hy / Hylang (see http://docs.hylang.org/en/latest/). Apparently, Hy is a Lisp dialect that is based on Python and thus allows one to use all of Python’s libraries. While I have a basic understanding of Lisp, I am not experienced enough to say whether Hy feels like a ‘real’ Lisp to an experienced Lisp hacker.
    The general idea of Hy seems interesting to me though.

    1. >The general idea of Hy seems interesting to me though.

      That it does. I was unaware of this. Crude and early-stage, but promising.

  212. Jessica Boxer said: My understanding of the motivation of TDD is that of minimalism. The premise is that programmers write huge architectural nightmares based on less than obvious criteria. TDD insists that any change you make to the program is traceable back to an actual feature request

    TDD would have the additional side effect of preventing those feature-bloating “I know we’ll need X so I’ll just implement that…” bits of code, too. Which is good.

    (I know I’ve fallen into that, and at very least it’s wasted effort; at worst it’s a bug-pump.

    Having to write tests for those cases makes them More Expensive and thus less likely to be implemented without a specific user story.

    Which is nice.)

    (I still don’t do test-first, but I’m also in such a UI-heavy legacy-code situation that realistically it’d be a several-months pause and giant refactor to even make it possible for most of our feature requests.

    So we cope with it, test as needed, and suck up the [arguable] technical debt.

    As one does, in the real world of a large application that needs to actually ship periodic builds to paying customers.)

  213. When I first started reading this series of articles about writing a Sudoku solver using test driven design (http://xprogramming.com/xpmag/OkSudoku http://xprogramming.com/xpmag/Sudoku2 http://xprogramming.com/xpmag/SudokuMusings http://xprogramming.com/xpmag/Sudoku4 http://xprogramming.com/xpmag/sudoku5) I thought it was satire about the whole Agile programming movement. Only afterwards did I realize it was written by Ron Jeffries, one of the prophets of the Agile movement, and wasn’t meant as satire.

    And if this is how test driven design is sold, I’ll stick with a more traditional approach to coding.

  214. @Sean Conner:

    That’s simultaneously hilarious and sad. The concepts of TDD are sometimes useful; OTOH if it enables people like that to write programs like that, perhaps we just need to shoot the whole thing in the head.

  215. @Sigivald
    > As one does, in the real world of a large application that needs to actually ship periodic builds to paying customers.)

    I’ve dealt with the situation a lot — where you have a large existing code base untested — and you have deadlines to meet. The solution is not a big bang approach — that is anathema to agile. The solution is to gradually evolve the software to a testable structure.

    Which is to say, in the first release you put a test infrastructure in place that is largely hollow of tests, and you only add tests for the new stuff you add. Then as you add functionality to a module you add a more complete set of tests for that module, and so forth. You never reach testing nirvana, but you get closer every release, and perfect is always the enemy of good enough.

  216. Absolutely, Jessica.

    We’re being lazy as hell about adding testing, but we have, and we do.

    (Eventually there will be a technical-debt-paying rearchitecture of the UI layer, and we’re going to dive into testable, mockable UI MVVM layers then…)

  217. @Sigivald
    > Eventually there will be a technical-debt-paying rearchitecture of the UI layer,

    In my experience, that “eventually” never comes.

  218. That’s simultaneously hilarious and sad. The concepts of TDD are sometimes useful; OTOH if it enables people like that to write programs like that, perhaps we just need to shoot the whole thing in the head.

    It took him from “durr, what the fuck is a sudoku?” to a workable solution — by following a process of identifying the major constraints to the problem, writing tests for those constraints, and then writing code which passes the tests. Towards the end he attempts a major architectural change — a pretty serious refactoring — and uses the tests to assure himself that it all will work. It was a toy example — and toy examples are often a little ridiculous.

    Fabrice Bellard is an amazing programmer who routinely pulls amazing code out of his ass. But the world is sadly short on Fabrice Bellards and long on journeyman programmers operating nowhere near that level of holy-shit brilliance. TDD provides them a reliable way to produce workable, maintainable, refactorable solutions.

  219. Whatevas good people. Microsoft Access is the ONLY IDE you will ever need. You can do SO MUCH with VBA it’s insane. And you’ve got your database with the most robust feature set, I don’t know why anyone even uses anything else. If you can’t get VBA to work for you then just get some .bat files going, pop out some PowerShell, and you should be good. And the add-ins are so awesome, there is a whole robust economy there. And when you reach the data cap, you can ghost DB instances and write communications protocols to keep everything synced. This is true programming at its highest level, for reals. So much capability at my fingertips!

    And you can’t even imagine the power and efficiencies to be gained when you start integrating SharePoint and InfoPath. Using MS Project and Outlook, you can even do a whole CRM/ERP system. And web extensibility too! Check out Access, you won’t be disappointed.

    What? Why no invite to Penguicon? I like penguins…

  220. Just thought I’d inject some insanity into your day…

    For those with a low-watt sarcasm detector…sorry.

    So behind this rant is a lot of visceral hatred (clearly, right?). I’ve finally made the full shift out of one set of career options and into my true love – software engineering (apparently I’m supposed to call it that). Albeit STILL for Defense, but this is temporary.

    Anyway, one “project” I had (I am seriously not kidding about this) was to write an Access (look, my input on HOW to do this was…above my pay grade…seriously, they said “what” and “how”, and I was to “implement”) “application” that parses a rather complex XML data set (in VBA…cause why not, right? – Shenpen, I’m not ragging on you, but I was thinking about this with your post, which you already addressed with Jeff Read & Jessica Boxer), pulls out individual tags, separates the values, captures full node paths, imports the results into Access tables, and then runs comparisons on different versions of the data set (tag structure and values) delivered each month. We had to detect discrepancies in the data itself, and changes in the XML structure and tagging. XML that was written to a standard. By an industry powerhouse…renowned…for their technology…sigh…So you know, that shouldn’t happen often. But…well a 35% compliance-to-standard rate is awwwwrite…amirite? So the data AND the XML had to be rewritten every month. And as of the time I said “well, I guess I do have a choice” and said “see ya”, they had finally achieved a 39% compliance rate. To the standard. Which hadn’t changed. Because they made the decision to use the previous version of the standard forevers….

    And this is why the facility was carpeted with the luscious fibers of our tax dollars…

    By the way, I have since rewritten the XML handler in Python. And did it again in Perl. Cause hitting yourself in the head with a hammer is bad…and I just couldn’t let this disaster go unavenged…

  221. WCC,

    Hearing that gave me a serious case of the jibblies. Let’s just say… I know that feel, bro. Fuck Access. Fuck it so hard.

  222. @WCC your story is about people making decisions who think they understand technology, who think they care about technology (that’s why they care which tools to use), but don’t actually understand and don’t actually care.

    I tend to have horror stories revolving around people who absolutely don’t care and are proud of it. Although the main difference is that I always managed to reject these projects.

    – One company, who was apparently aggravated by the slowness of the current system, put it into a published tender that no query or report may take more than 30 seconds to run. Unconditionally. Doesn’t matter what it does. We laughed our butts off and didn’t apply.

    – One company selling cars had a small ERP system and had tons of customizations added to it during about 10 years. The end result was awesomely efficient: you press one button and the 143 different kinds of legal paperwork needed for selling a car in that country come out of the printer. But through these large amounts of sloppy customizations the system became inconsistent, like inventory value not matching the related accounts in the general ledger and so on. The solution they planned was to implement one of the larger ERP packages, Navision or SAP or maybe Oracle Financials, but 1) every customization they had in the old system was to be rewritten. I asked them what the heck would prevent the new system from having the same inconsistency? But 2) was better: they did not want to retrain their 20 or 30 users, so every form, every screen was to be customized to look and work like in the old software, so people could just use it without learning. We laughed our butts off and didn’t apply.

    – One thing we actually did just for the hell of it: they said their idiot employees, when they didn’t know the price of an item, invoiced it at a price of 0. So the software should not allow that. I said it was a classic futile case of trying to treat a people and process problem with technology. They insisted. I predicted that from then on they would sell such items for the local equivalent of 1 cent, and did the customization just for the hell of it. From what I heard later, I was mostly right.

    One story where the stupidity was actually kind of cute, and surprisingly well manageable with technology, was the company whose customers were the local equivalent of the utter retarded hillbilly type, who sent their acquaintances – whoever happened to own a truck – to pick up the goods they ordered, and these guys could not really answer questions like “What was the order number? What is the account number? Heck, what is at least your bro’s full name? Why the fuck could you not just bring the order confirmation paper we sent with you?” All they could say was something like “Jimmy, ya know, the one living in Baker Street, the one-eyed Jimmy, sent me for his stuff,” so we ended up making a form that searched sales orders by the street part of the customer address, and to my surprise this actually worked out fairly well. Later on I realized the reason: there are many levels of stupid possible, but you can’t be so stupid as to forget street names, or else you get lost, so it makes sense that we can count on at least that one piece of info. This was before satellite navigation was cheap.

  223. @WCC –

    Welcome back. +1 to Jeff Read’s commiseration.

    Yes, you are invited to Penguicon. :-) And to GwG. Please send an email to guns@penguicon.org so that I can contact you.

    Everyone else (who has never sent or received an email to/from me before about the con) please also use that link. Thank you!

  224. @WCC
    You have two options available to you:

    1. A team of a dozen highly trained Delta force operatives, armed with various small arms, a helicopter for infiltration and exfiltration, and night vision goggles.
    2. A massive naval force, hundreds of battleships, tens of thousands of troops, massive amounts of air cover.

    Which team do you choose with these two instructions from the President:

    1. Raid the beaches of Normandy and retake Europe from the Nazis.
    2. Break into Osama Bin Laden’s compound and kill or capture him.

    Which is to say, Access is excellent for the jobs that it is excellent for.

  225. @Jeff Read: I’ll drink to that.

    @ John D. Bell: Thank you, sounds awesome. First round is on me – guns and grog.

    @Shenpen, RE: para 1: I like the way you put that. Agree.

    @Jessica Boxer: See, you’re making us retake the beaches of Normandy. Just give us the mission, we’ll take it from there (I was never “regular forces” material…) If I HAD Delta at the time, I wouldn’t need to do that. I’d take the inner circle and provide Rommel & the others the opportunity. You can do that when things are driven by a cult of personality, even with some formidable minds behind it.

    Yeah, okay, I’ll stop being an ass. I’ll bite – what exactly are those jobs? Here I am being ass-less, I truly don’t know what those are – there are several other technologies I would use long before that one. Given the choice. Which I’m not always. So it’s a legit question.

    You make quite a few valid points about VS et al. Most of the U.S. military industrial complex is heavily invested in MS and so I get access to some of the better MS stuff, such as it is. You get to be UNIX-centric often when you’re working on the cool stuff (of course…Assembly too, so…) with some of the alphabet soup agencies, and there is a slow but steady thaw toward open source (just examine carefully any modified code that is contributed back)…. But, security, security, security. So MS it usually is on the military side of things (I’ll wait until everyone stops laughing – hey, I’m not the decision maker here – see Shenpen’s response to me a bit upthread).

    I definitely do not prefer MS over most of the alternatives that I have used, but I’m not a rabid hater. Ideologically and experientially I’m in the open source camp. I’m not religious though – I usually try to see the world as it actually exists (well, as far as any interpretive reality allows) – good, bad, and ugly. So perhaps this comes from my previous life, but I’m ultimately a pragmatist – give me the best tools for the job, and I’ll get it done, hell or high water. What I hate is being given the “mission parameters” and then having those parameters tell me what loadout I need. It’s my job to know and decide that.

  226. @WCC: “Yeah, okay, I’ll stop being an ass. I’ll bite – what exactly are those jobs? Here I am being ass-less, I truly don’t know what those are – there are several other technologies I would use long before that one. Given the choice. Which I’m not always. So it’s a legit question.”

    You are running Windows. You need a *single-user* relational database. It needs to inter-operate with other MS apps, like Excel, and be (relatively) easy for the end-user to deal with.

    What do you use?

    There are Windows ports of various open source database tools, like MySQL and PostgreSQL. You might also look at Open Office/Libre Office Base, but that’s built in Java and requires a current Oracle/Sun JRE to function. And for any of them, you would need the requisite dependencies installed, and there would be an arguably more substantial development effort than Access would require. In the stated case, the user already has Access as part of MS Office Pro, and doesn’t need to install other things.

    I’ve used, and would use Access in that scenario. For anything else, I’d look elsewhere.

    (The key is “single-user” above, and most issues I can recall hearing of stemmed from trying to run Access as multiuser across a network.)

  227. Which team do you choose with these two instructions from the President:

    1. Raid the beaches of Normandy and retake Europe from the Nazis.
    2. Break into Osama Bin Laden’s compound and kill or capture him.

    Which is to say, Access is excellent for the jobs that it is excellent for.

    Indeed.

    The problem is, Access was designed for problems at the “get the ball back from the big kids’ playground” scale. It’s a development tool for non-developers. It’s fine for small business owners who want to slap together some basic forms and reports and enforce a few constraints with bits of VBA code. But if you try to apply any sort of programming discipline to it, you lose!

    Putting Access in the hands of someone with decent programming experience is like putting a video game controller into my sister’s hands when we were kids: eventually, they will get frustrated and try to break it.

  228. @Jeff Read
    > It’s a development tool for non-developers. It’s fine for small business owners who want to slap together some basic forms and reports and enforce a few constraints with bits of VBA code.

    Indeed, and there are probably millions, literally millions of access databases that fit in that category, and that help glue small businesses all over the world together. They aren’t pristine pieces of software, but they get the job done, just like our own friend WCC, whatever the hell secret shit he used to do.

    They say that if you have a hammer every problem looks like a nail. But that doesn’t change the fact that hammers are damn good if you actually do happen to have a nail.

    BTW, I should say to WCC — welcome to the big bad world of full time soft dev.

  229. > I’m also talking with RMS about the possibility that it’s time to shoot Texinfo through the head and go with a more modern, Web-friendly master format.

    Is there a tool that can help with converting Texinfo (which by the way is somewhat HTML friendly: HTML is one of its possible output formats) to said “master format” lightweight markup language, be it AsciiDoc, Markdown (GitHub flavored) or reStructuredText?

  230. I was in community outreach, Jessica.

    Just, in my own little way, helping to provide the terrorist assholes of the world with the maximum opportunity to die for what they believe in. Service with a smile, that’s my motto.

    1. >I was in community outreach, Jessica.

      >Just, in my own little way, helping to provide the terrorist assholes of the world with the maximum opportunity to die for what they believe in. Service with a smile, that’s my motto.

      Heh. You were either an operator or you learned how to talk remarkably like one. I get it about the security issues so I won’t request you confirm.

  231. @ DMcCunney;

    Thanks for the scenario, and I see the sense of what you say. That seems pragmatic. I’ve never done anything like that scenario, so my tools in the Windows environment have been SQL Server, Oracle, and Sybase (the latter was extremely painful too – the program they used it for made it a bad choice).

    WRT the open source ports, I’ve used the ones you mention and several others quite a bit.

    @Jessica, thanks (I think?) for the welcome, but I hope not to be in this type of environment for long. I really like doing it as a practice, but my true love is AI, robotics, and mass data synth (I guess they are calling it “Big Data” now). The plan now is to finish up school to get my doctorate so that I can get into heavy research for AI and robotics. My dream is to play a part in making cyberization a reality. And I don’t want to do any of that for the defense industry.

  232. I said “so that I can get into heavy research for AI and robotics” – I meant “research and development”

  233. @WCC: “@ DMcCunney;
    Thanks for the scenario, and I see the sense of what you say. That seems pragmatic. I’ve never done anything like that scenario, so my tools in the Windows environment have been SQL Server, Oracle, and Sybase (the latter was extremely painful too – the program they used it for made it a bad choice).

    WRT to the open source ports, I’ve used the ones you mention and several others quite a bit.”

    I try to be a pragmatist.

    The first computer system I dealt with of any kind was an IBM mainframe at a bank in the early 80s. Those were the days when the original IBM PC with 640K of RAM, dual 360K floppies, a 4.77MHz 8088 CPU and CGA graphics running MS-DOS 2 was just starting to show up on corporate desktops as an engine to run Lotus 1-2-3, sometimes displacing Apple IIs with 80-column cards running VisiCalc. The bank ran one of everything ever made as far as I could tell, so I logged time on DEC PDP-11s running RSTS/E and RSX-11M+, VAXen running VMS, and various other things.

    Next stop was a small systems house selling AT&T systems when they were still in the computer business, and getting up close and personal with AT&T UNIX System V Release 2.

    There have been other stops along the way. At employer -1, I variously dealt with Windows 3.1, 95, 98, 2K and XP, various versions of MS Office, OS/2, Novell Netware, SCO Unix, Red Hat Linux and Solaris, plus network and telecom admin duties and some facilities management.

    I prefer open source solutions when one is applicable, but am not dogmatic about it. At the mentioned employer -1, for instance, I was able to shift them to PuTTY as the standard telnet client and FileZilla for FTP. But the company had grown by acquisition. One of my charges was an acquired Sun SPARC system running a version of Oracle that had been bought by an end-user department to run a custom database application. Neither the developer of the database nor the DBA was still with the company, and I made it clear that I adminned the box Oracle ran on, but knew just enough about Oracle to be dangerous.

    The roof fell in when the system suffered massive HW failures on both sides of the RAID array. (It should have been upgraded/replaced years back.) It cost them $10K for the services of an Oracle consultant to (mostly) recover the data.

    We were a mostly MS shop, with some Solaris and Linux as my world, and had SQL Server deployed for other purposes. I couldn’t see anything about the custom DB that required Oracle to handle it, and pushed to get the database migrated to SQL Server, where we still *had* developers and DBAs. I wanted to *reduce* the complexity of our environment.

    I’ve worked in a number of environments, and am firmly in favor of the right tool for the job. Sometimes that tool is something from MS. Sometimes the best tool is *no* tool, but rather a refactor of the client’s process. I remember a bit in Robert Townsend’s “Up the Organization”, about his days as CEO at Avis when they were a distant number two to Hertz. He said “Make sure your current manual system is clean and effective before you automate. Otherwise, you just speed up the mess.” He was writing in the mainframe days, but it’s still true.

  234. @WCC
    > I was in community outreach, Jessica.

    I am sure you made a lot of dark eyed virgins very happy to meet their man…

    Thanks for your service to our country — now go make some freaking money dude.

  235. > I am sure you made a lot of dark eyed virgins very happy to meet their man…

    That or somebody in the afterlife just made a mint in raisin futures.

  236. @DMcCunney;

    Thanks for the response and the history – that was an informative and interesting read. I agree with a number of your points, but mainly this one:

    “Sometimes the best tool is *no* tool, but rather a refactor of the client’s process.”

    Yes indeed – I encounter this one quite a bit.

    I really enjoyed your history – I got into computers as a kid of about 12 or so, through some class in my school called PACES or something. They took you out of regular classes and you did things like solving problems, playing with tech, building things. One of my projects was to assemble a computer from a kit, and I was hooked then and there. My first computer was a C64, followed by an Apple IIc (yeah, I know, right), and I think my first language was BASIC.

  237. @Jessica;

    Y’know…I never got the whole virgins thing. I mean I get that the concept has some primal attraction on an animal level I guess.

    But 72 virgins…that doesn’t seem like much of a reward. In fact that seems like a whole lot of work for very little return. I mean, can you imagine how much training you’d have to invest in that to make things even mildly interesting…

  238. Regarding the talk about open source and proprietary, on my way in to work this morning, a gathering of angels appeared above my head
    They sang to me a song of hope, and this is what they said:

    All this machinery, making modern software
    Can still be open hearted
    Not so coldly charted
    ….
    One likes to believe in the freedom of software
    But glittering prizes and endless compromises
    Shatter the illusion of integrity

    See, what had happened is I thought that they were angels, but to my surprise
    They climbed aboard their starship, and headed for the skies

    Some of you might recognize the modified words above. And the more gracious among you might even forgive me for the cheesiness…

    Sorry guys. It’s been a long day’s night and the radio has been on entirely too long…and I was just showing Eric’s Pandora post to a colleague…mea culpa

  239. @WCC: You’re quite welcome. That history was the short form, mostly to illustrate that I worked in a variety of environments, and concluded that there was no one single solution. All OSes, languages and environments have strengths and weaknesses. You can’t solve a problem unless you understand what the problem *is*, and a preference for an OS, language, development method or other technology can impede that understanding. Once understood, the problem may dictate the appropriate solution, and it may not be the tools you prefer to use.

    As for process refactoring, I saw a charming example recently. An NYC charitable food bank had an issue, and Toyota loaned a couple of engineers to assist. The food bank manager couldn’t see what auto engineers might do to help him.

    His problem was simple: there was a 90-minute wait for needy people wanting a meal to be served. The Toyota engineers looked at his process and saw the bottleneck: he was feeding people ten at a time, so ten seats had to be free before new folks could come in to get a meal. The engineers changed the process to one at a time: the food bank devoted a staffer to watching for empty seats and bringing in a new person to be fed as soon as a seat was available. The wait time dropped to 10 minutes.
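
    The effect is easy to reproduce with a toy queueing simulation. A minimal sketch, with made-up arrival and meal times, and the batch rule simplified to “the room must empty, then a full group enters together” (none of these numbers come from the actual story):

        import heapq
        import random

        def make_arrivals(n, gap, seed=42):
            """Generate n arrival times with exponential gaps (made-up rate)."""
            rng, t, out = random.Random(seed), 0.0, []
            for _ in range(n):
                t += rng.expovariate(1.0 / gap)
                out.append(t)
            return out

        def one_at_a_time(arrivals, seats=10, meal=20.0):
            """Seat each person as soon as any seat frees up."""
            free = [0.0] * seats          # heap: when each seat is next free
            heapq.heapify(free)
            waits = []
            for arr in arrivals:
                start = max(arr, heapq.heappop(free))
                waits.append(start - arr)
                heapq.heappush(free, start + meal)
            return sum(waits) / len(waits)

        def batch_of_ten(arrivals, seats=10, meal=20.0):
            """Admit a full group only after every seat has emptied."""
            room_free, waits = 0.0, []
            for i in range(0, len(arrivals), seats):
                group = arrivals[i:i + seats]
                start = max(room_free, group[-1])   # group enters together
                waits.extend(start - a for a in group)
                room_free = start + meal
            return sum(waits) / len(waits)

        arrivals = make_arrivals(200, gap=2.5)
        print("batch of ten :", round(batch_of_ten(arrivals), 1))
        print("one at a time:", round(one_at_a_time(arrivals), 1))

    With these made-up numbers, the batched policy’s average wait comes out several times higher than the seat-by-seat policy’s, for the same seats and the same meal length. The batching itself is the bottleneck.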

    Examining and refining processes was what the Toyota engineers *did*, and their skills were applicable beyond auto making.

    I did a variant at the bank. I was expert in the financial modelling system my area used to generate reports for senior management. Our reports compared things in terms of budget, forecast, and actual expenditures. Budget and forecast were input by the financial analysts for the areas reporting. Actual expenses came from the General Ledger. Every month, a GL first post was run, and the analysts all got a foot-high printout of theirs, to be manually re-input into the financial modelling system.

    I said “Wait a minute. The GL and the modelling system run on the same mainframe. Why isn’t there an interface between the two, to automatically plug the actuals numbers into the modelling system? Why are financial analysts spending two weeks of each month manually re-keying data that already exists elsewhere? Why are you paying MBAs $30K/year to be data entry clerks?” Put that way, the light bulb went off over my superiors’ heads, and I got to go to Applications Programming to propose that an interface be created. The AP VP was all in favor – his folks mostly supported marketing, which could never make up its mind what it wanted, and this was an opportunity to get an actually useful product out the door and into production.

    There was a little technology glue in the form of the interface, but it was essentially a process refactor.
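
    In modern terms, that glue would be a small extract-and-load job. A hypothetical sketch (every file and field name here is invented, and the real thing was mainframe-era, not Python):

        import csv

        def load_actuals(gl_export_path):
            """Read the GL first-post extract, summing actuals per key."""
            actuals = {}
            with open(gl_export_path, newline="") as f:
                for row in csv.DictReader(f):
                    key = (row["cost_center"], row["account"])
                    actuals[key] = actuals.get(key, 0.0) + float(row["amount"])
            return actuals

        def write_model_import(actuals, model_path):
            """Emit the numbers in the modelling system's import layout."""
            with open(model_path, "w", newline="") as f:
                w = csv.writer(f)
                w.writerow(["cost_center", "account", "actual"])
                for (cc, acct), amt in sorted(actuals.items()):
                    w.writerow([cc, acct, amt])

        write_model_import(load_actuals("gl_first_post.csv"), "model_import.csv")

    The point isn’t the code; it’s that two weeks of skilled labor per month collapsed into a job that runs unattended.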

    I logged time on C64s, too, as well as Apple IIs. There was a lot of remarkably creative stuff on the 64 side designed to get maximum benefit out of limited hardware. I had fun later doing stuff like that with my original MS-DOS PC, and the Linux system to my right is a similar attempt with an ancient (2005) Fujitsu notebook. For that matter, I was a designer and print production guy before winding up in computers, and still do the odd hobbyist DTP project for fun. I enjoy seeing how creative I can be within a (very) limited budget.

  240. The day Emacs died for me was the day I discovered Sublime Text.

    It made me wonder why, in 30 years, nobody has managed to come up with anything as awesome as that. Yes, I’m aware it lacks atomic save, and in some situations that might suck; but as far as the text editor’s usability and performance go, it’s superior to anything I have used. And it can be scripted in Python, the language I use every day, not making me learn some specific single-use gobbledygook (vim BASIC? Emacs Lisp?) that has no significance outside the text editor it was made for.
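
    For the unfamiliar: a Sublime plugin is just a Python class dropped into the Packages directory. A toy sketch (the command name and inserted text are placeholders, but TextCommand and view.insert are the real API):

        import sublime_plugin

        class HelloWorldCommand(sublime_plugin.TextCommand):
            # Sublime derives the command name from the class name, so this
            # runs via view.run_command("hello_world") in the console, or
            # from a key binding.
            def run(self, edit):
                # Insert a line at the top of the active buffer.
                self.view.insert(edit, 0, "Hello from a Python plugin\n")

    Compare that with what it takes to get the equivalent going in Emacs Lisp, and you can see why it wins converts.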

    Not to mention that its navigation in large projects is so superior that I gladly paid 70 bucks for it and haven’t had any regrets, not even once.

    I could not believe myself, but let’s face it: I. Paid. For. A. Damn. Text. Editor.

    And its advantages are good enough for me to not give a damn if it’s closed source or not. Or that it doesn’t ship with an email client or an adventure game.

    Mind you, it’s being developed by just one guy, not a large team of greybeards. Which kinda makes me wonder if a large team of greybeards is actually any good…

    As Ryan Dahl said, the only thing that matters in software is the experience of the user. This should be tattooed across the forehead of any programmer who does not yet get it. The proof: an increasing number of hackers are willing to pay for a proprietary editor with a comparatively limited feature set but a great experience, when they could have an open-source editor with an extensive feature set and a shitty experience for nothing.

    How many more times must Apple win before the open-source crowd understands this?

    1. >How many more times must Apple win before the open-source crowd understands this?

      Not only do Android phones outsell iOS phones, Android tablets now outsell iPads.

      I do not think the word “win” means what you think it means.

  242. @esr

    > Not only do Android phones outsell iOS phones, Android tablets now outsell iPads.

    Not at similar price points they don’t. At the $500+ price point Apple has about 70% market share and growing. In terms of tablets, at Apple’s price point the Android market share is negligible. Moreover, unlike the smartphone case, it appears the vast majority of Android tablets are being used in place of televisions, rarely doing the sorts of internet-shopping things that iPads do. 81% of all Android phones sold in 2013 were $215 or less, with the ASP dropping below the $300 mark by 2Q2013. Conversely, Apple ended the year with an ASP of $635.

    Bicycles outsell airplanes and always will. But customers who can comfortably afford the Apple experience are not only choosing Apple over similarly priced Androids, they are choosing it over lower-priced Androids. The USA is a perfect case in point.

    Finally, of course, the point of running a business is margin, not sales. So even if we were to ignore price points and treat it as one giant market, unlike what’s done with just about any other consumer good:
    Unit sales: Android is far in the lead
    Dollar sales: Apple does well, but Android has a lead
    Total profits: iOS does about double Android

  243. @CD-Host
    ” The USA being a perfect case in point which has proven that. ”

    In the USA, iPhone users mostly do not pay the price difference with the Android phones due to (cross-) subsidies and overpriced data plans. That price distortion drives up the market share of the iPhone.

    Nowhere else are the subsidies that big.

  244. @Winter

    > In the USA, iPhone users mostly do not pay the price difference with the Android phones due to (cross-) subsidies and overpriced data plans. That price distortion drives up the market share of the iPhone.

    Yes, exactly. Because of its pricing structure, the USA has a negligible $225-375 phone market, which lets you separate two populations: very-low-cost phones (prepay customers only) and Android vs. iOS (expensive-phone customers only). For USA postpay you get a pure view of what $400+ customers prefer, as if there were no smartphones below the $400 cutoff. That is exactly what you want when you want to test customer preference excluding price. And it turns out that market is around 73/22/5% iOS/Android/Other.

    Jeff’s point was that Apple’s interfaces were better. ESR responded regarding Android. And my point above was that, excluding price, there isn’t a preference for Android; rather, quite the opposite, which supports Jeff’s contention.

  245. My point was that when it comes to professional creative work including development, source availability is a negligible concern. What matters is the ease of getting from your current state to a state where you have deliverables in hand ready to turn over to the client. That’s why most professional creative work is done on the Macintosh, and virtually all of it outside certain software development activities is done with proprietary tools. And it will ever be so, until the open source crowd starts regarding its users as customers. That means money has to be involved.

    Mobile devices barely figure into creative work; Android devices effectively not at all. And Android is a cathedral, not a bazaar; and if you count Google Play services it’s effectively as closed as iOS.

  246. To resuscitate the topic of markdown: A group of Markdown-using websites (GitHub, reddit, Stack Exchange) has been trying to create a unified unambiguous spec. Markdown’s original author is not on board with this, so instead of calling it “Standard Markdown” or some such they’ve settled on the name CommonMark.

  247. Joel, that’s really encouraging. It’s a good path to follow: introduction, flurry of branching and competing implementations, followed by a concerted industry effort to standardise.

  248. @Joel: I’ve been following this effort on the Pandoc list. Markdown’s original author is taking what my late English mother called a “dog in the manger” approach. He doesn’t seem to be opposed to the creation of a unified unambiguous spec, but insists it not have “markdown” in the name. The people trying to *create* the spec are being respectful and acceding to his demand.

  249. FWIW, it’s looking like the AsciiDoctor superset of AsciiDoc is winning. I’m seeing massive migrations to it, even overhauling existing DocBook and Markdown documentation bases.
