I was a historian before I was an activist, and I’ve been reminded recently that a lot of younger hackers have a simplified and somewhat mythologized view of how our culture evolved, one which tends to back-project today’s conditions onto the past.
In particular, many of us never knew – or are in the process of forgetting – how dependent we used to be on proprietary software. I think that by failing to remember that past we risk misunderstanding the present and mispredicting the future, so I’m going to do what I can to set the record straight.
Some blurriness about how things were back then is understandable; it can sometimes take a bit of effort even for those of us who were there in elder days to remember what it was like before PCs, before the Internet, before pixel-addressable color displays, before ubiquitous version-control systems. And there were so few of us back then – when I first found the Jargon File around 1978 you could fit every hacker in the U.S. in a medium-sized auditorium, and if you were willing to pack the aisles probably every hacker in the world.
A larger and subtler change, the one easiest to forget, is how dependent we were on proprietary technology and closed-source software in those days. Today’s hacker culture is very strongly identified with open-source development by both insiders and outsiders (and, of course, I bear some of the responsibility for that). But it wasn’t always like that. Before the rise of Linux and the *BSD systems around 1990 we were tied to a lot of software we usually didn’t have the source code for.
Part of the reason many of us tend to forget this is mythmaking by the Free Software Foundation. They would have it that there was a lost Eden of free software sharing that was crushed by commercialization in the late 1970s and early 1980s. This narrative projects Richard Stallman’s history at the MIT AI Lab onto the rest of the world. But, almost everywhere else, it wasn’t like that either.
One of the few other places it was almost like that was early Unix development from 1976 to 1984. They really did have something recognizably like today’s open-source culture, though much smaller in scale and with communications links that were very slow and expensive by today’s standards. I was there during the end of that beginning, the last few years before AT&T’s failed attempt to lock down and commercialize Unix in 1984.
But the truth is, before the early to mid-1980s, the technological and cultural base to support anything like what we now call “open source” largely didn’t exist at all outside of those two settings. The reason is brutally simple: software wasn’t portable!
You couldn’t do what you can do today, which is write a program in C or Perl or Ruby or Python with the confident expectation that it will run on multiple architectures. My first full-time job writing code, in 1980, was representative of the time: writing communications software on a TRS-80 in Z-80 assembler. Assembler, people! We wrote a lot of it. Until the early 1980s, programming in high-level languages was the exception rather than the rule. In general, you couldn’t port that stuff!
Not only was portability across architectures a near-impossible dream, you often couldn’t port between instances of the same machine without serious effort. Especially on larger machines, code tended to be intertwined with the details of individual site configuration to an extent that would shock people today (IBM JCL was notoriously the worst offender, but by no means the only one).
In that kind of environment, arguing about whether code should be redistributable in general was next to pointless, because unless the new machine was specifically designed to be binary-compatible with the old, ports amounted to re-implementations anyway.
This is why the earliest social experiments in what we would now call “open source” – at SHARE and DECUS – were restricted to individual vendors’ product lines and (often) to individual machine types. And it’s why the cancellation of the PDP-10 follow-on in 1983 was such a disaster for the MIT AI Lab and SAIL and other early hacker groups. There they were, stuck, having folded huge amounts of time and genius into a mountain of PDP-10 assembler code with no real possibility that it would ever be useful again. And this was normal.
The Unix guys showed us the way out, by (a) inventing the first non-assembler language really suitable for systems programming, and (b) proving it by writing an operating system in it. But they did something even more fundamental — they created the modern idea of software systems that are cleanly layered and built from replaceable parts, and of retargetable development tools.
Tellingly, Richard Stallman had to co-opt Unix technology in order to realize his vision for the Free Software Foundation. The MIT AI Lab itself never found its way to that new world. There’s a reason the Emacs text editor is the only software artifact of that culture that survives to us, and it had to be rewritten from the ground up on the way. (Correction: A symbolic-math package called MACSYMA also survives, though in relative obscurity.)
Without the Unix-spawned framework of concepts and technologies, having source code simply didn’t help very much. This is hard for younger hackers to realize, because they have no experience of the software world before retargetable compilers and code portability became relatively common. It’s hard for a lot of older hackers to remember because we mostly cut our teeth on Unix environments that were a few crucial years ahead of the curve.
But we shouldn’t forget. One very good reason is that believing a myth of the fall obscures the remarkable rise that we actually accomplished, bootstrapping ourselves up through a series of technological and social inventions to the point where open source – on everyone’s desk, in everyone’s phone, and ubiquitous in the Internet’s infrastructure – is now taken for granted.
We didn’t get here because we failed in our duty to protect a prelapsarian software commons, but because we succeeded in creating one. That is worth remembering.
Not only did RMS have to co-opt Unix technology, he bathed about it, as well.
And then he tried to reinvent it, running headlong into Henry Spencer’s aphorism.
And the Hurd still isn’t ready for prime time, what, 30 years after being launched?
No wonder he’s so eager to glom any credit for the rise of Linux he can.
“bathed”??!?!! Damned auto-correct. That was supposed to be “bitched”.
Yeah, we all knew RMS didn’t do that…
We must not forget John Lions’ contribution to Unix open-source history: http://en.wikipedia.org/wiki/John_Lions
Very interesting history:
“When AT&T announced Unix Version 7 at USENIX in June 1979, the academic/research license no longer automatically permitted classroom use. Thus, licensees were no longer able to use the Lions notes for classes on operating systems.
However, thousands of computer science students around the world spread photocopies. As they could not study it legally in class, they would sometimes meet after hours to discuss the book.[citation needed] Many pioneers of Unix and open source had a treasured multiple-generation photocopy.”
And it happened before RMS.
There was also the Cain and Hendrix Small C compiler, which was both public domain and (somewhat) portable, long before gcc was a gleam in rms’s eye.
It wasn’t a full C, but it was pretty useful nonetheless. I used to use variants (Deep Blue C and Ace C) on the Atari 8-bit machines. IIRC they worked by emulating the 8080 instruction set on the 6502 (yeah, language-interpreting VMs were around long before Java, too — UCSD Pascal was another example).
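For anyone who has never seen one, the trick is just a fetch-and-dispatch loop. Here is a minimal C sketch of the idea – the toy opcode set is invented for illustration, but a real 8080 emulator decodes the full instruction set the same basic way:

    #include <stdint.h>

    enum { OP_HALT, OP_LOAD_IMM, OP_ADD };     /* toy opcode set */

    uint8_t run(const uint8_t *code)
    {
        uint8_t a = 0;                         /* emulated accumulator */
        for (;;) {
            switch (*code++) {                 /* fetch and decode */
            case OP_LOAD_IMM: a = *code++;  break;
            case OP_ADD:      a += *code++; break;
            case OP_HALT:     return a;
            }
        }
    }

Every emulated instruction costs several real ones in dispatch overhead, which is where the speed penalty comes from.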
Though not as fast as pure asm, they were much faster than most of the alternatives – fast enough to write reasonable games, for example.
I’m not aware of this myth myself and I am old enough to remember the days of DOS and monochrome displays and 5.25″ floppy disks and being almost unaware of such a thing called the “internet”. I am also somewhat aware of the technological advances that have made Free Software and Open Source much easier to adopt.
From all the writings and historical pieces I’ve read of Free Software and Open Source, I didn’t get the impression that Free Software ideas preceded Proprietary Software or that it was common to develop free software alternatives to proprietary software way back then. In my mind I’ve always believed that Free Software was born as a reaction against the increasing hold of proprietary software and technology in the growing computerization of the 1980s and 90s, rather than being born as an original idea. So it is natural that I would understand that proprietary software was the only choice of software back then. And I am also aware of how much effort and time it took for Open Source software to reach the present state of quality and adoption.
If there is supposed to be an organized campaign by the FSF to distort this reality, it’s very obscure and almost imperceptible. I seriously doubt if any student of the FSF & OSS movements would buy into this myth.
>If there is supposed to be an organized campaign by the FSF to distort this reality
It’s not very organized; the people doing the “distorting” generally don’t know any better. The myth of the fall comes from overinterpreting RMS’s personal history as a paradigm of what was happening everywhere. RMS himself doesn’t push this (or, anyway, I haven’t heard him do it) but he doesn’t discourage it either.
“The Unix guys showed us the way out, by (a) inventing the first non-assembler language really suitable for systems programming…”
Though it doesn’t detract from your main point, I think the people behind the Burroughs B5000 and its descendants would be surprised by that.
>Though it doesn’t detract from your main point, I think the people behind the Burroughs B5000 and its descendants would be surprised by that.
You know, I actually knew about BALGOL at one time, because it was still recent memory when I worked at Burroughs. But that was a loong time ago, and I had forgotten it. Did they really write the OS kernel in it, or just userland? Granted the distinction was less clear, pre-Unix…
I was gonna say, Tron Guy might be interested in the pivotal role of the MCP (the actual one) in establishing a world where portable systems software was the norm. Near as I can tell from Wikipedia, the entire MCP was written from the ground up in ESPOL, a language that extended ALGOL by offering primitives for things like CPU interrupts and other machine-level concerns. Indeed, Burroughs large machines were akin to proto-LispMs, containing advanced typing features in hardware and intended to be programmed in an HLL, not assembly; I don’t think Burroughs even shipped an assembler. But they did include the source to MCP, and Burroughs user groups routinely swapped patches and extensions to the OS — creating a sort of proto-open-source culture around MCP.
>the pivotal role of the MCP (the actual one) in establishing a world where portable systems software was the norm.
Role? It didn’t have one. The B5000 was brilliant for its time, but it was even more of a technological dead end than LispMs turned out to be. At least we salvaged some software-design ideas and folk memories from the LispMs – ESPOL left basically no legacy at all and had already been obsolete for a decade when I was at Burroughs around 1980.
>But they did include the source to MCP, and Burroughs user groups routinely swapped patches and extensions to the OS — creating a sort of proto-open-source culture around MCP.
I think this claim has an error of categorization in it that is separate from the fact that ESPOL and its successor NEWP sank without leaving a design legacy to later computing.
I was going to explain this in a long comment, but I now think it’s worth a blog post in itself.
Dear Armed and Dangerous:
You used the word “prelaparian” in your remarks about the misconception or mythology of youth (God Bless their tiny minds) concerning the Fall rather than the Rise of Open Source architecture free coding. Did you not mean prelapsarian, as in the state of Man before the “Fall”, à la John Milton?
Which is, by the way, a very appropriate usage in the context of your post. Alas, as an engineer I as well am challenged with spelling, and very much embrace the aphorism that “It is a very dull man that can only spell a word One Way”.
>Did you not mean prelapsarian as in the state of Man before the “Fall” a’ la John Milton?
I did, and I knew the correct spelling, too. That was a typo. :-)
Wasn’t BLISS invented a few years before C? And BCPL came before B, which was a forerunner of C. Couldn’t ALGOL-68 have (at least theoretically) done the trick? Or even FORTH (or its predecessors)?
>Wasn’t BLISS invented a few years before C?
Yes, and used to write a bunch of VMS utility programs – but not the OS core. Another key difference, and the reason it became a dead end, was that it was not at all retargetable; there were in fact no data types other than full native machine words, and program semantics depended heavily on the word’s size and format.
>Couldn’t ALGOL-68 have (at least theoretically) done the trick?
Yes :-)
>Or even FORTH (or its predecessors)?
Forth almost did. But it post-dated Unix’s retargetable compilers by a few years.
Well, in the mid-to-late 80’s, there was source you could download off the net, but it was always weird, obscure or useless, and you’d wonder “why would I need that?” For example, I had a friend who explained the pre-proto-DNS code to me: you’d run that instead of having to manually download new hosts files every few weeks. Which, indeed, conceptually sounded pretty cool, but really, what would I need that for? I wasn’t a sysadmin…
You could find free software if you looked, but the idea that it was actually useful for something lagged behind by a decade.
>You could find free software if you looked, but the idea that it was actually useful for something lagged behind by a decade.
That is true. Most of what traveled in comp.sources.unix, even, was pretty thin stuff by modern standards. Some of the best bits of it have survived, however.
Now I’m just a young whippersnapper who was born after most of what you’re talking about happened, and has only been paying a lot of attention to FOSS for the past five years or so, but based on what I’ve read, my thoughts are as follows:
None of what I’ve read suggests that there was any kind of codified set of principles resembling those that appear in today’s FOSS culture in the early days of computing.
A fair bit of what I’ve read suggests (and this is my interpretation of the data I’ve seen, not anything explicitly stated in what I’ve read) that at times when new machine form factors were introduced (e.g. mainframes, early hobbyist machines such as the Altair), prior to the consumerization of computing, it was fairly typical for there to be a non-codified culture of sharing for a few years, even though, granted, sharing may not have been as valuable because of portability issues. This was in part because hardware was expensive and programmers were cheap, the hardware manufacturers were concentrating on making money off of hardware, and most software not written by hardware manufacturers was written by the users of a given machine for use on-site, rather than for sale.
For example, I know that for a fair amount of time IBM didn’t bother copyrighting its software (and the US wasn’t yet a signatory to the Berne convention, so copyright wasn’t automatic), with the result that things like OS/360 are in the public domain.
And the brouhaha surrounding Gates’ Open Letter to Hobbyists strikes me as a conflict between a community built around coding for the sake of coding and a company trying to monetize that hobby.
Now, a lot of what came out of these early communities wasn’t close to what we now call open source, and a lot of what was shared seems to have been what we now call binary blobs, but in a lot of these early communities there seems to have been an ethic that sharing wasn’t wrong, even if it wasn’t actively practiced or encouraged.
Based on the above paragraph, I will now make an assertion: If there had never been any attempt to use intellectual property law to commercialize software, proprietary software as we know it would not exist, *but neither would FOSS*. FOSS came about as the result of a codification of the way hackers/geeks tend to think naturally, and that codification came about in response to the existence of proprietary software.
So we have a “knowledge of good and evil” issue here. If software had not been commercialized, we wouldn’t know the difference between proprietary and open-source, but the climate would be much more like open-source.
So we do have a fall, of sorts. Not from a lost Eden of free software, but from a lost Eden of proprietariness and freeness not being a concern. A lost Eden of not having mounds of philosophies and philosophical debates about whether software should be FOSS, what the proper license for FOSS is, and how strict we should be about running only FOSS. Had it continued, this Eden may have had its fair share of binary blobs and unshared and/or unportable code, but it would nonetheless have been a much less stressful place.
>None of what I’ve read suggests that there was any kind of codified set of principles resembling those that appear in today’s FOSS culture in the early days of computing.
You’re right. There wasn’t. You can see some of that kind of thinking in the writing of the early MULTICS and Unix guys. Elsewhere, pretty much absent.
>And the brouhaha surrounding Gates’ Open Letter to Hobbyists
There was very little brouhaha at the time. It appears to me that that broadside was later deemed retrospectively important by people who weren’t there. Otherwise I think your take on early conditions is largely correct, or matches my memories and research anyway.
>FOSS came about as the result of a codification of the way hackers/geeks tend to think naturally, and that codification came about in response to the existence of proprietary software
The first half of that is clearly true. I’m doubtful about the second.
I believe that even without an extension of IP law to software, something like an open-source ideology would have evolved in a quieter way, with hackers and geeks exerting pressure against secrecy and binary blobs as they came to understand that these things are simply bad engineering practice.
> There was very little brouhaha at the time. It appears to me that that broadside was later deemed retrospectively important by people who weren’t there.
My impression is that there was a fair bit of brouhaha within the Altair community, but that the Altair community wasn’t exceptionally important except in being an early stomping ground for people who were major players later, and that the brouhaha would now be forgotten if Microsoft had failed early on (maybe meriting a stub Wikipedia article with Gates and “Micro Soft” being red links).
I brought it up because it was (a) important or not, an example of the type of dynamic I’m talking about, and (b) important or not, well known enough to be an example that I could expect people to recognize.
>I believe that even without an extension of IP law to software, something like an open-source ideology would have evolved in a quieter way, with hackers and geeks exerting pressure against secrecy and binary blobs as they came to understand that these things are simply bad engineering practice.
But if hackers had been left to their natural way of thought, it wouldn’t have been a matter of explicit secrecy, it would have been a matter of people distributing binaries for convenience and only offering source if explicitly asked for it, with software then becoming de-facto closed source when an author lost an un-backed-up hard drive without having shared his source with anyone else, or as the result of some similar accident. Sure, standards of professionalism would have, over time, increased the incidence of proactive sharing, but releasing only binary blobs would merely be seen as sloppy or eccentric, rather than potentially malicious.
>Sure, standards of professionalism would have, over time, increased the incidence of proactive sharing, but releasing only binary blobs would merely be seen as sloppy or eccentric, rather than potentially malicious.
I disagree. I think that situation would be unstable in the direction of a full-bore open source ethos, only without the explicit tribalism and quite the same sort of ideological battles. Possibly a better outcome, but we can’t run the experiment to see.
I coded in Fortran in the 1980’s (whole decade, I still hate Fortran). Only userland/application code.
There was code “sharing” of sorts. We had no Internet access in the places I worked, but some network access at the end of the 1980s. I worked on some obscure 4K minicomputer originally, then migrated to Data General and DEC VMS. The Fortran code was “portable” in that we did not have to rewrite everything from scratch.
Copyright understanding was extremely weak. Most code was in some sense proprietary, but we were allowed to play with it anyway. During most of the 1980’s we had to use proprietary compilers that put their copyrights on our code. OS API documentation could be difficult to obtain (as in, it cost a lot of money).
The breakthrough was getting “real” internet access and migrating to SGI Irix machines.
>The breakthrough was getting “real” internet access and migrating to SGI Irix machines.
Yes. That was the point at which you hooked into the technologies that made a true open-source culture possible.
People often didn’t notice how important that transition was at the time they made it.
Is it possible to provide for an interested newcomer a brief overview of the MIT AI lab (or a link to one already existing)?
ESR,
Another thing that gets forgotten today is that today’s computing culture does not entirely come from the multi-user university computer environments and the hackers working on them. There were also the “bitty boxes” – Commodore, Sinclair – and there was a bit of a similar creative, passionate, non-profit computing subculture based on them, called the Demoscene.
Unfortunately, as people used TVs for monitors, the NTSC/PAL standards meant there could not be such close cooperation between US and European ‘scener teams. Still, it is interesting that e.g. Finnish ‘sceners always communicated in American slang, always “dudez”, never “lads”, so there must have been a connection. I also suspect a linguistic influence from US graffiti culture and lingo. The insult “lamer” was tossed around pretty much the same way as “toy” in the graffiti lingo. Calling the ‘scener culture digital graffiti is not too far off.
‘sceners were in many senses the complete opposite of hackers. A bit of a crazy-artist mentality: flashy nicknames, no sense of professional communication – rather, rolling texts in demos full of insults (“greetinx to team X, suckerz to the lamerz in team Y”) – and little interest in useful projects, more a sense of showing off their programming and artistic skills. A good more contemporary example of the ‘scener mentality is http://en.wikipedia.org/wiki/.kkrieger – squeezing a 3D shooter into 96 kilobytes. Useless, almost a crime by the hacker way of thinking, which never really liked artificial limitations, but as a tour de force simply awesome. Not much utility, but lots of bragging value.
‘sceners’ contribution to modern computing is naturally much more limited than hackers’:
– they contributed a lot to gaming, as hackers used to have one weak spot: graphics and the visual arts (see ugly projects like Freeciv). Squeezing visually stunning stuff into 4, 64 or 96 kbytes is basically what ‘sceners shone at.
– They contributed a lot to the idea that it is normal to have computers at home and to program them as a hobby
– (not so much to computing, but to computer music-making – this was also fairly closely centered on the ‘scene: http://en.wikipedia.org/wiki/Tracker_%28music_software%29 )
Unfortunately, ‘sceners never had any sense of open source. Basically their view of programming was that of a competition. Why give away your tricks when your point is to beat the other teams in a demo compo?
Anyway, conclusion: back in the 1980’s the “bitty boxes” didn’t really have much of an open source culture either.
What we had instead, by the mid-eighties at least, was already a more or less world-spanning “pirate network” where “0-day warez” (games and applications with their copy protection cracked, uploaded to BBS sites the same day they were released) were normal.
I guess at this level the culture growing out from the “bitty boxes” contributed to the idea that selling software copies for money in shrinkwrap is not really a fully waterproof business idea…
>there was a bit of a similar creative, passionate, non-profit computing subculture based on them, called the Demoscene.
Similar, in some ways, very different (as you point out) in others.
Near as I can tell the demoscene was a separate development with no influence on early hacker culture in the U.S. I never heard a word about those guys between 1976 and 1990. To this day I haven’t seen any of their software, ever.
Also, I notice esr is very Western-centric when it comes to computer culture and adoption. I would be interested to know if there is any historian working on the aspects of this early computing culture in countries like India, particularly in its institutions and premier universities like the IITs and IISc. Consumer adoption of computers in India came at least a decade late, thanks to high prices of imported hardware and a general sense of mystification about computers in general.
The IT revolution in India wasn’t a boom out of nowhere. There has existed a geek culture in India as well for a long time. Unfortunately I am not sure how well it is documented.
ESR> You couldn’t do what you can do today, which is write a program in C or Perl or Perl or Python with the confident expectation that it will run on multiple architectures.
I know that Perl 6 is almost a whole new language, but I have to believe this was a slip, and you meant to name some other language. But I’m not sure what it would be.
” You couldn’t do what you can do today, which is write a program in C or Perl or Perl or Python with the confident expectation that it will run on multiple architectures.”
You still can’t do that without recompiling, except for VM-based languages like Java. When I started coding in 1975, there were BASIC, FORTRAN, and COBOL (everyone shouted in those days), so your presentation is not very accurate – especially the “all fit in an auditorium” nonsense. I was a high school student and there were instructional tips for learning BASIC in the margins of my Algebra textbook! No, we didn’t have microprocessors, so no “PCs”, but that’s not what software is about. And lots of code was traded over ARPAnet and between universities by magnetic tape and, gasp, card decks.
ESR, intentional or not, may have had something to do with it as well. The first 3-5 chapters of A Brief History of Hackerdom, in the context of CatB, suggest that “Open Source” naturally evolved from the olden-day hacker culture, with “The Proprietary Unix Era” (emphasis on ‘proprietary’) as The Fall. A reader would then assume some sort of “proto open source” before the fall…
>A reader would then assume some sort of “proto open source” before the fall …
Well, there was. But almost completely confined to the Unix world.
Ok, let me throw in my own experience from the other end of the story RMS tells (and apologies for the long post).
I started using Linux in one of its very early distributions, in 1994 (I’m not even sure which distro it was!), just out of curiosity, in the labs of the CS department at the University of Padua. There I met all the guys who introduced Linux to Italy (Alessandro Rubini was maybe the most well-known figure), and got deeply into the GNU/free-software philosophy from the start.
I installed my first linux system at home on a dos partition, then on its own partition. At the time my pc was so limited I could not run any version of X. Since the start though, I tried to follow the Right Path and use linux for my everyday tasks, including writing my undergrad thesis in tex.
As I recall, all these efforts were not at all appreciated by the “real” hackers at the CS dept: the emphasis there was very much on hardware, and on making Linux take control of every aspect of the hardware. Writing device drivers was what mattered, not running applications (Rubini went on to write a fairly successful book on the topic). The very fact of wanting to run Linux on a home computer struck those people as odd – and boring.
I vaguely remember a very hardware-oriented mentality, with extensive discussion of fine points of the GNU assembly language, and a matching attitude to newcomers: if you don’t know the assembly language of your CPU, you’re not worth talking to.
For the masses (that would be me) the envisaged solution was to produce finely tuned systems with Linux pre-installed. And in truth, after leaving university the early Linux guys started just that kind of company (which in turn rapidly folded).
I don’t recall cross-platform portability as being a major goal either. I remember Rubini worked (lucky bastard) on a Sparc workstation, and his code was heavily dependent on that architecture. I clearly remember him musing on how the Sparc bootstrap procedure was much superior to that of ordinary PCs: the accepted idea was that Linux had to develop independently on each major platform.
Inevitably, that left us PC users lost in a limbo. For the couple of people like me who could not use a workstation daily, and were more interested in running useful applications than making old sound sequencers (say) work with linux, the only solution was to download stuff off the net (we lucky few could do that, because until circa 1996 we “semi-nerds” were the only users of our comp lab at the faculty of arts).
So I started to download — and compile. The gnu auto-tools compile procedure became almost second nature to me. I was a Slackware user by then, but their extensive collections of software did not cover the needs of a linguistics undergrad: I did not have a human interface to TeX (remember, no X, no LyX), no easy way to print syntactic trees, no way of reading the doc files my prof gave me, no IPA fonts, etc. In those years I must have downloaded and compiled the most obscure pieces of software in the history of open source. I certainly made a conscious effort to try ALL available editors and fonts.
As you can well imagine, I became so experienced in compiler error messages that I sometimes could help the hackers at the faculty of engineering in parsing them. Auto-tools were not always used, and when they were, it was normal to have to go through makefiles and correct them extensively. Basically all the text-parsing tools I found in our lab (which I needed for the text analysis job I got at the English dept) were written in a combination of bash, sed and C.
You have by now guessed my point: in my experience, cross-version and cross-platform portability was not really obtained until Red Hat came up with their package manager. And despite that, we had to go through a long dependency hell before the system worked flawlessly.
More deeply, I’d say early Linux grew on a tribal mentality according to which, say, an x86 system and an Alpha were completely different entities, with the people working on Alphas one heaven above all the rest. I’d say it took the definite increase of clock speed in ordinary x86 systems, and the demise of PowerPC, to overcome that mentality.
Just my 2 cents … thanks for the post!
>I don’t recall cross-platform portability as being a major goal either.
Oh innocence of youth (sorry if I sound too snarky, I’m not really laughing at you). Post-1994 you were already operating in a milieu that had been shaped by retargetable compilers for two decades. It didn’t seem like a big deal to you because those gains had mostly already been collected.
>there was a bit of a similar creative, passionate, non-profit computing subculture based on them, called the Demoscene.
they were all the rage when I got into linux in the early 90’s but I don’t remember interacting with any of them anywhere on the net. I also recall they were very much an Amiga tribe, at least in Italy.
>I don’t recall cross-platform portability as being a major goal either.
>> Oh innocence of youth (sorry if I sound too snarky, I’m not really laughing at you). Post-1994 you were already operating in a milieu that had been shaped by retargetable compilers for two decades. It didn’t seem like a big deal to you because those gains had mostly already been collected.
no offence taken, I take your point – but I do have a vague recollection that for those guys re-targeted compilation was a bit of a “last resort” option and that only native compilation was the “proper” way to go.
That does not detract from your point that retargetable compilation was an option, though.
> There were also the “bitty boxes”, Commodore, Sinclair
The “bitty boxes”/home computers were heavily focused on audio/video capabilities, so it makes sense that the demoscene would develop in an “artsier” direction, with little or no attention to hacker concerns such as source code releases or large-scale cooperation. (Even something as basic as data storage was often an afterthought on early home computers, so there was little utility/productivity software and few file-format standards!) Of course the multimedia chipsets were so machine-specific that cross-platform portability was also infeasible.
Now the PC and Apple Macintosh were quite different, and a hacker scene *could* theoretically arise on them, but these were also less accessible to hobbyists.
“The “bitty boxes”/home computers were heavily focused on audio/video capabilities, so it makes sense that the demoscene would develop in an “artsier” direction, with little or no attention to hacker concerns such as source code releases or large-scale cooperation.”
Except that I spent an inordinate amount of time typing in code from various magazines for these bitty boxes. There was also a decent amount of BASIC code floating around the BBS’s, in addition to things like Forth source floating about.
In any case, outside of a small subset of the hacker community, everybody is still dependent on proprietary technology and closed source software.
The introduction of dBase & dialects in the very early 80’s for CP/M (& later MP/M), NorthStarDOS, and Unix (Revelation) did allow cross-platform use of the same code. Its existence spawned a relatively large programmer and software base, esp. applications, even if it was considered proprietary. The language & file format are still in use today.
@esr
I read your second blog post and now understand what you were trying to tell me: that the different platform-specific Linux “tribes” I found when I got hooked on Linux in 1994 (alpha, sparc, powerpc etc) were there because of massive cross-compilation of the kernel, is that right?
I now realize I had assumed all this time that all those platform-specific linuxes had been (partially) rewritten from scratch!
>I read your second blog post and now understand what you were trying to tell me: that the different platform-specific Linux “tribes” I found when I got hooked on Linux in 1994 (alpha, sparc, powerpc etc) were there because of massive cross-compilation of the kernel, is that right?
Yes, that’s right.
@IWasThere: “”You couldn’t do what you can do today, which is write a program in C or Perl or Perl or Python with the confident expectation that it will run on multiple architectures.”
You still can’t do that without recompiling except for VM-based languages like Java.”
You’re still recompiling Java. The difference is that the compiled code is portable. Java bytecode is always the same, regardless of what architecture it’s built on. The Java runtime handles the architectural differences. (I run IBM’s Eclipse IDE here under Linux and Windows, and it’s the same binaries.) Similar comments apply to Python, and I have Python code that runs on Linux, Windows, and Mac OS/X. The programs are the same – only the installer for the intended target is different.
You’re right that type-in code listings were a thing, but there is an inherent complexity limit to these – WUMPUS is what, 300 lines of BASIC code and program data? The BASIC program culture may have been fairly open as long as one stayed away from machine-specific tricks, but it could not fill much more than an introductory/educational role for new hackers – serious developments would quickly become non-portable, and quite often shift to assembly code.
Regardless, the popularity of type-in listings would seem to highlight my earlier point about those early home computers. ISTM that without easy ways to store user data, these were largely restricted to acting as glorified gaming consoles (though of course with a significant market advantage due to increased openness) and terminals for connecting to BBS’s. That is, notwithstanding the initial hype about what the “home computer revolution” would bring to the average home, productivity- and general-utility software could _not_ be a major factor on these platforms until these issues were solved, by which time it was too late to matter. Overall, a _very_ different milieu compared to the academically-focused free software/open source community.
@Jay Maynard: I do not believe that auto-correct put “bathed” in place of “bitched”. Not. For. One. Second. Freudian slip!!
In the early transition to source-available code (open source happened later), I personally think that things like Netnews and TCP/IP had a lot more impact than UNIX – at least at first. They turned the world from a very large place where you scrounged money for interstate (never mind international) phone calls into a small-and-shrinking place where global conversations happened. And what’s interesting about those particular bits of software is that they were consciously developed to run on multiple platforms. In the mid-1980s it was essentially inconceivable to work actively with international colleagues. Now it’s inconceivable *not* to. That’s a *huge* change to see in the world over a mere 30 years.
I agree with ESR that the 16-bit machines were hopelessly incompatible at the source level – even, in many cases, within a single vendor. But VAX-C, and C compilers for the IBM mainframes, PCs, and Macs, changed that in important ways. Not compatible, really, but “compatible enough”. And I really do believe that the convergence on a portable source-level software world was aided, abetted, and co-incentivized by the arrival of global, cross-platform messaging in its earliest forms. Mainly because before that very few people *cared* about compatibility – until machines connected, there wasn’t much interest outside of the world of software developers.
And perversely, I think AT&T did the world a favor by hanging on stubbornly to the UNIX code. The AT&T version of UNIX was hardly a magical work of art. By teasing it through universities and then refusing to let people run it in any reasonable way, they forced others to rewrite it. Ultimately, that has turned out to be a very good thing.
>In the early transition to source-available code (open source happened later), I personally think that things like Netnews and TCP/IP had a lot more impact than UNIX – at least at first.
That is certainly a defensible position. I don’t, on the other hand, think I’d want to try to defend the proposition that netnews and TCP/IP had more impact than retargetable compilers. As you correctly note, these were consciously designed for multi-platform operation; netnews in particular utterly relied on retargetable C.
So some of this comes down to whether you interpret “Unix” conjunctively as the entire technology package (including the Unix API) or disjunctively as a set of technical and conceptual innovations some of which had huge impact well outside of anywhere people were thinking seriously about the Unix API. Foremost among those innovations were (a) retargetable compilers, and (b) systematic pursuit of modularity in software architecture, doing in practice what Parnas and others had been advocating in theory since the early 1970s. I think we might want to add (c) the C language, which for all its flaws was a sharp enough tool to obsolesce every other compiled language in about five years flat after 1982.
Eventually, around 1990, the Unix API itself caught up with the expanding shock front from the disruption created by its enabling technologies. Thus, Linux and *BSD, and today Android.
Just thinking about this in my own personal situation. When I was 14, I first began writing code on a kit computer called the ETI-660 that was based on the RCA CDP1802. My best friends opted for another kit computer from “Talking Electronics” based on the Z80. We wrote all of our code in assembler. I had the benefit of a CHIP-8 implementation. Clearly, there was no portability there. It is interesting to think about what our learning curve would have looked like today. Your typical 14-year-old hackers have, among the other myriad improvements available to them, the simple fact that almost anything one student develops or does is going to be able to be used by literally ANY student that the former chooses to share the work with. The importance of being able to collaborate – easily – cannot be overestimated.
@Eric There was also SPL and SPL-II for the HP3000 stack computing machines. They *had* no assembly language at all. Because everything had to go through the stack there was no more efficient way to code on that machine than to use their Algol-derived higher level language. The code wasn’t portable only because there was only one compiler and it ran only on the HP3000 machines. I wouldn’t be surprised, though, to find out that somebody has since written an SPL compiler to run on something else.
One of my favorite constructs in SPL-II was this:
do {code} while (boolean) {code} until (boolean);
… to let you do an early escape after computing something.
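In C you would have to fake it with an infinite loop and explicit breaks. A rough equivalent, assuming the obvious reading of the SPL-II semantics (the condition names are placeholders):

    for (;;) {
        /* ...first code block... */
        if (!while_condition)
            break;                 /* the "while" test: continue only if true */
        /* ...second code block... */
        if (until_condition)
            break;                 /* the "until" test: stop once true */
    }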
@guest: Yes, type-in listings were popular, and tools even got written to assist. Compute Gazette, aimed at Commodore 64 users, published a utility written in BASIC to assist in correctly entering machine language code they also published.
But portability was not a concern, nor was it possible. Every maker of those earlier machines had their own version of BASIC, extended to support what their hardware offered, and assembler is inherently non-portable. Want to get your BASIC program for the Atari working on a Commodore machine? Expect to do a substantial rewrite, assuming it’s doable at all. (And since those early 8-bit machines competed in part on graphics and sound capabilities for games, it might not be doable, because something you did on one machine simply might not be possible on another.)
Portability wasn’t even a gleam in those folks’ eyes. Different communities arose around different platforms, and while code sharing was common, it was platform specific. In addition, the various communities all looked down on each other, each believing the platform it used was the best, and pitying those unfortunate enough to use something else.
The ones I recall did have ways to store user data, though they might not be described as “easy”. Cassette tape was first, followed by floppy drives. Since part of the marketing was that they were things you could program yourself, some form of external storage was a requirement, or what you had was a pure gaming console. The manufacturers couldn’t assume that users would only ever get software on third party cartridges plugged into an expansion port.
Russ, what I suspect happened was that I typoed “bithed” and the autocorrect changed it to “bathed”. That’s my story and I’m sticking to it.
As for SPL/SPL-II, ISTR HP provided a cross-compiler for their PA-RISC-based 3000s, but that’s a pretty vague recollection. I do think MPE was one of the nicer multiuser OSes out there, at least for its time and at the user (if maybe not programmer) level.
I’ve got a 3000. Wonder if I can remember what the password for MANAGER.SYS is…?
The Enlightenment collection of software is, I maintain, still demoscene software.
Demoscene was mostly centered around Amiga, because you could do SO MUCH more with those systems.
The same was true of Business BASIC dialects dating back to at least the seventies. I’m aware of at least one commercially sold accounting system that was developed for minicomputers in the 70s and still exists today on Windows PCs — much of its UI rewritten in C++ or C# but the core databasey bits still in Business BASIC.
> I’m aware of at least one commercially sold accounting system that was developed for minicomputers in the 70s and still exists today on Windows PCs — much of its UI rewritten in C++ or C# but the core databasey bits still in Business BASIC.
Beware, you’re back-projecting again.
Systems like this, that started single-platform in a non-retargetable language and were later migrated to portable implementations based on retargetable compilers, can appear in retrospect to be more innovative and modern than they actually were at the time they were written.
The right way to think about cases like this is that they exceeded their natural lifespan and platform range because they got a booster shot of Unix-derived technology. The usual giveaway that this is what has occurred is a reimplementation in C.
Yeah, MPE was not so nice at the programmer level. It was one of those operating systems that dealt in record-based I/O, like it was stuck in the punched-card world. I’m sure that the Unix inventors had MPE in their sights as one of the “this must die” ideas. Still, we managed to write a visual text editor that had infinite-level undo, so it couldn’t have been THAT horrible.
DMcCunney, you’re right about portability. ISTM that tape was clearly _not_ friendly enough, especially as implemented on those home computers – AIUI it was even quite easy to overwrite data by mistake when writing a file. It was just about usable enough for loading pre-existing software. Even floppy-disk drives were quite expensive, so it was uncommon to have more than one drive connected – which meant that reasonable operations such as copying data across disks could become complicated affairs, possibly involving several disk swaps.
I think one of those enabling technologies was the philosophy of *design* of the Unix API. Have you seen the Windows API? I think I’m being charitable when I describe its design as “bureaucratic”. To open a file you have to fill out a SECURITY_ATTRIBUTES structure in triplicate and get it signed by your system administrator. I’m exaggerating, but the Windows model seems to encourage a lot of structure filling, authorization token passing, and extraneous parameters.
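To make the contrast concrete, here is the same operation – opening an existing file for reading – as two fragments, assuming the usual headers (<fcntl.h> on Unix, <windows.h> on Win32); both calls are real, error handling omitted:

    /* Unix: */
    int fd = open("data.txt", O_RDONLY);

    /* Win32: */
    HANDLE h = CreateFileA(
        "data.txt",
        GENERIC_READ,              /* desired access */
        FILE_SHARE_READ,           /* share mode */
        NULL,                      /* SECURITY_ATTRIBUTES pointer, optional */
        OPEN_EXISTING,             /* creation disposition */
        FILE_ATTRIBUTE_NORMAL,     /* flags and attributes */
        NULL);                     /* template file handle */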
Unix did a lot of things right in the early days, like doing enough for developers that they could build large, sophisticated applications relatively easily using nothing but “simple hand tools” (a text editor, compiler, and sundry standard command-line utilities). Having a relatively simple and easy-to-remember API certainly helped in that regard, as did the system’s cleanup of forcibly terminated applications (freeing memory, closing all open files, etc.), which wasn’t necessarily a given, particularly on micros. It’s telling that Ken Thompson once stated that his only regret in the API design was not spelling ‘creat’ with an ‘e’.
Of course the Unix API was deficient in many ways, for example, lacking extensive support for asynchronous I/O and event- or interrupt-driven programming. This has since been largely corrected, but many Unix programs still sit in select loops polling for events rather than just installing a series of callbacks and letting interrupts drive when they’re invoked. This is one of the reasons why X11 is so nasty and full of hacks, and due to be replaced in the not-too-distant future.
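By “sitting in a select loop” I mean the classic pattern sketched below; sock and handle_input() are placeholders for whatever the program actually watches and does:

    #include <sys/select.h>

    void handle_input(int fd);               /* defined elsewhere */

    void event_loop(int sock)
    {
        fd_set readfds;
        for (;;) {                           /* block, wake, dispatch by hand */
            FD_ZERO(&readfds);
            FD_SET(sock, &readfds);
            if (select(sock + 1, &readfds, NULL, NULL, NULL) > 0 &&
                FD_ISSET(sock, &readfds))
                handle_input(sock);          /* no callbacks, no interrupts */
        }
    }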
But the aesthetic polish was considerable, and gave the brilliant hackers of the day even further reach. Of course, these days, we’re living in the age of autocomplete, so a small and easy-to-remember API is less important than one that covers all the common cases and makes lives easier for the less-brilliant developers who are shaping software and the network today.
ESR, looking back, it seems to me that the co-introduction of standardized and widely used operating systems (e.g. MS-DOS) coupled with a common, more or less open, PC Intel architecture (Microsoft+IBM) had more impact on the development of open source / hacker culture than the UNIX/LINUX substrate it ultimately centered on. After all, that is what caused the explosion in personal computing more than anything else, IMO.
>ESR, looking back, it seems to me that the co-introduction of standardized and widely used operating systems (e.g. MS-DOS) coupled with a common, more or less open, PC Intel architecture (Microsoft+IBM) had more impact on the development of open source / hacker culture than the UNIX/LINUX substrate it ultimately centered on.
I might buy this, but for two things:
(a) The hacker culture well predates MS-DOS and the PC.
(b) The timing of the open-source explosion was about eight years too late to support your thesis. If you were correct, we should have seen the emergence of a viral Linux-like open-source platform around 1982, resembling DOS on steroids and perhaps launching off the BBS culture’s communications links.
That didn’t happen, which tells us that what you are describing were at best necessary conditions but not sufficient.
It looks to me like the real trigger was the release of the 386 – that is, first ship of “PC” systems with the capability for a flat 32-bit address space and virtual memory, around 1987. That is, the commoditized hardware had to reach the right capability level to host Unix before the interesting things could happen, and they began to happen with dizzying rapidity immediately thereafter.
One of my earliest claims to fame and prescience was that I saw where this was going and said so before almost anybody else. If you can find a copy of Tricks of the Unix Masters from 1985 (published ’86), read the last paper in which I commit firmly to the proposition that 386-based commodity micros are the future of Unix and will nuke the hell out of the whole variegated landscape of workstations and minicomputers as they then existed.
This doesn’t seem like a brave projection now, but that’s because we live after it happened. At the time, predicting that the likes of DEC and Sun Microsystems were on a hiding to nothing unless they cannibalized their own businesses to ride the Intel flood-tide was…not a popular position. Eugene (“Nobody will survive the attack of the killer micros!”) Miya got it right independently of me at around the same time.
I didn’t get everything right. I didn’t foresee Linux. If I had, I might have written it myself.
>That is certainly a defensible position. I don’t, on the other hand, think I’d want to try to defend the proposition that netnews and TCP/IP had more impact than retargetable compilers.
I disagree, inasmuch as standardized languages from the 60s and 70s were about as cross-platform as retargetable-compiler code written in C in the 80s. Something written in C for gcc under SunOS wasn’t certain to work in VxWorks or HP-UX or whatever without a lot of conditional compile blocks, even in gcc on those platforms. The fact that it was gcc across the board vs Green Hills on VxWorks vs Sun C on SunOS made little difference to the developer. The key to successful cross-platform application code was successful cross-platform libraries.
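A contrived example of the kind of conditional-compile glue I mean – the feature macros here are invented for illustration, not taken from any real project:

    /* Per-platform glue that "portable" C of the era accumulated. */
    #if defined(SUNOS)
    #  include <sys/time.h>          /* struct timeval lives here */
    #elif defined(VXWORKS)
    #  include <sockLib.h>           /* sockets come from a vendor header */
    #elif defined(HPUX)
    #  include <sys/ioctl.h>         /* yet another arrangement */
    #endif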
The fact that gcc was retargetable was useful for folks maintaining a compiler across multiple platforms, but if all the compilers implement the same language spec you have about as much rewriting and tweaking as using the same retargetable compiler across platforms – so long as you have standard libraries that handle things like sockets, file access, threading, etc. independent of the OS.
You can argue that retargetable compilers reduced the cost of developing on marginal systems, but all the major operating systems had folks who would write native compilers. And what you write – “Until the early 1980s, programming in high-level languages was the exception rather than the rule.” – is completely false. COBOL was a major language and was standardized in the 60s (for example, COBOL 68). The reason a lot of these old languages are known with a year (like ALGOL 60 and FORTRAN 66) is that they were standardized precisely so that software WAS portable across different computer systems.
It is astonishing that you’d dismiss the huge body of code written in FORTRAN and COBOL as the exception rather than the rule. It’s like you’re pretending that IBM never existed and useful software development only started happening when Unix appeared. That these are not languages you build operating systems with is immaterial – in fact it makes them more important, as these were the languages used to solve actual problems, with more need to be cross-platform and more likelihood of being shared.
Given the widespread use of FORTRAN and the existence of a spec-compliant FORTRAN compiler on every significant platform, I would say it is obvious that, had retargetable compilers never existed, so long as you had ARPANET and netnews you’d STILL likely have had open source sharing in the 80s.
If you’re going to pinpoint a fundamental compiler technology that made the computing world possible beyond assembly code it would be the optimizing compiler that made hand written assembly a niche for specific needs. And that happened in 1957. And it was a FORTRAN compiler.
>It is astonishing that you’d ignore the huge body of code written in FORTRAN and COBOL as the exception rather than the rule.
Wait, now you seem to be changing the subject. Why are you talking about applications rather than systems programming?
Yeah, FORTRAN and COBOL almost sort of worked portably in their problem domains. But it was only “almost” and “sort of”. This is why LINPACK was exceptional for its time, and why large COBOL ports were so brutal that they almost never happened until they were forced by hardware EOL.
I certainly don’t agree that these compilers were “about as portable” as C, and I say that from painful experience in the late 1970s working on very late mainframe FORTRAN when that technology was about as mature as it was ever going to get.
>The timing of the open-source explosion was about eight years too late to support your thesis. If you were correct, we should have seen the emergence of a viral Linux-like open-source platform around 1982, resembling DOS on steroids and perhaps launching off the BBS culture’s communications links.
There was a hacker explosion, just not an open source hacker explosion. Tons of freeware and shareware was developed during this period. Very similar to the current explosion in mobile app developers.
And frankly, the explosion COULD have happened without Linux and been much healthier. It would have been delayed until 1994, when the BSD legal troubles were resolved. Then open source would have moved more along BSD lines than FSF lines, since the FSF would have been left with nothing but HURD, and Linus would have been a BSD hacker.
>There was a hacker explosion, just not an open source hacker explosion. Tons of freeware and shareware was developed during this period. Very similar to the current explosion in mobile app developers.
Yes. Now ask yourself why so little of it survived. Me, I know why – those were formative years for me.
>And frankly, the explosion COULD have happened without Linux and been much healthier. It would have been delayed until 1994, when the BSD legal troubles were resolved. Then open source would have moved more along BSD lines than FSF lines, since the FSF would have been left with nothing but HURD, and Linus would have been a BSD hacker.
You won’t get any argument from me that that would have been better. Among other advantages, “ESR” might never have been necessary – big win for me, if not for anyone else.
>Wait, now you seem to be changing the subject. Why are you talking about applications rather than systems programming?
Where am I changing the subject?
>But the truth is, before the early to mid-1980s, the technological and cultural base to support anything like what we now call “open source” largely didn’t exist at all outside of those two settings. The reason is brutally simple: software wasn’t portable!
>You couldn’t do what you can do today, which is write a program in C or Perl or Ruby or Python with the confident expectation that it will run on multiple architectures.
Is Open Source only related to systems programming? And most of the stuff on DECUS (from vague recollection) was application stuff. Perhaps because the only things I remember from DECUS were the games…
That C is a good systems language is great. That Linux is open source is great. But Open Source can easily happen on any OS, open or closed, and isn’t intrinsically dependent on open-source system tooling.
And I worked on both cross-platform FORTRAN and C code in the ’80s, and I can tell you that both were painful. We had a mixture of C and FORTRAN code moving from VMS to AIX.
Core language features are largely a non-issue across any ANSI-standardized language. It’s all the system dependencies and library calls that a retargetable compiler doesn’t help with.
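To make that concrete, here’s a minimal sketch of the pattern (mine, not something from this thread; home_directory is a hypothetical helper, and the preprocessor symbols and the VMS logical name are the conventional ones, so treat the details as illustrative rather than authoritative). The ANSI core compiles anywhere; the “where does the user live?” question needs a branch per system, which is exactly the part no retargetable compiler can fix:

    /* Illustrative sketch only.  The portable part is the ANSI core;
     * the system-dependent part is everything else.  The macros
     * tested here (__unix__, _AIX, VMS) are conventional, but any
     * real port would verify them against each compiler's docs. */
    #include <stdio.h>
    #include <stdlib.h>

    static const char *home_directory(void) {
    #if defined(__unix__) || defined(_AIX)
        const char *h = getenv("HOME");   /* Unix convention */
        return h ? h : "/";
    #elif defined(VMS)
        return "SYS$LOGIN:";              /* VMS logical name */
    #else
        return ".";                       /* unknown system: punt */
    #endif
    }

    int main(void) {
        printf("home is %s\n", home_directory());  /* ANSI core: portable */
        return 0;
    }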
>But Open Source can easily happen on any OS, open or closed, and isn’t intrinsically dependent on open-source system tooling.
The historical evidence is against you. The code-sharing communities gated by proprietary tools and OSes never scaled, neither in volume nor in quality.
Relatedly, there’s a good reason you only remember the games from the DECUS tapes. Everything else bit-rotted to unusability long ago. You might want to think about why.
@guest: “Even floppy-disk drives were quite expensive, so it was uncommon to have more than one drive connected – which meant that reasonable operations such as copying data across disks could become complicated affairs, possibly involving several disk swaps.”
Yep. I logged time on C64s. They had some interesting features. The C64’s VIC and SID chips did good graphics and sound for the day. I saw a write-up not long back on a female engineer who built an electric bass with a C64 as the body, using the SID chip for output. She had to get clever with the code, because the SID could handle three inputs and the bass had four strings, so she played games with round-robin sampling to get it to work.
The 64 also had 64K of RAM and 16K of ROM, with 8K for the OS and 8K for BASIC. The 6510 CPU had a built-in I/O port that let you specify whether or not the two ROMs were mapped into the 64K address space, and one programmer wrote an extension that used that. He stashed code and data in the RAM normally not mapped in, and used the Restore key to toggle things. Restore generated a non-maskable interrupt. In normal use, when you pressed it, the OS checked whether Run/Stop was also pressed, and if so did a warm start. His code grabbed the interrupt and used it to display a menu of things he had stashed away. He had to make sure his code wasn’t trying to call something in one of the ROMs when they were mapped out, but it worked nicely.
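(For the curious: reconstructed in C for the modern cc65 cross-compiler, the banking part of a trick like that looks roughly like the sketch below. This is my illustration, not the original hack; the $0001 port and the $35 value come from the C64 memory map, but stash_byte and the rest are a sketch only.)

    /* Illustrative reconstruction for the cc65 C64 target.
     * The 6510's on-chip I/O port at $0001 selects whether ROM or
     * the RAM "underneath" it appears at $A000-$BFFF and $E000-$FFFF.
     * Writing $35 banks out both BASIC and KERNAL (I/O stays mapped). */
    #include <stdint.h>

    #define CPU_PORT   (*(volatile uint8_t *)0x0001)
    #define HIDDEN_RAM ((volatile uint8_t *)0xA000)

    void stash_byte(uint8_t value) {
        uint8_t saved = CPU_PORT;
        __asm__("sei");          /* no interrupts while the KERNAL is unmapped */
        CPU_PORT = 0x35;         /* all RAM except the I/O block */
        HIDDEN_RAM[0] = value;   /* touch RAM normally shadowed by BASIC ROM */
        CPU_PORT = saved;        /* restore the ROM mapping */
        __asm__("cli");
    }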
The Commodore 1541 drive was relatively expensive, and hobbled by a dead-slow serial interface. A popular package on the C64 was the GEOS environment, which provided a bit-mapped screen with GUI, icons, and the like, and programs meant to run in GEOS. It was remarkable for what it managed in 64K on a 1 MHz machine, but loading it was a “tell it to load, go have a cup of coffee, and see if it was up when you came back” operation.
But the 1541 had its own 6502 controller and 2K of RAM. It was possible to write assembler code that downloaded to and ran asynchronously on the drive, and I believe a couple of DBMS offerings for the 64 used that ability.
The C64 had neat features hobbled by design-to-cost, and the same features in a faster and more powerful machine would have been quite something.
I’ve had the odd thought about getting a C-128, which had 128K of RAM and a Z-80 CPU as well as the 6510-compatible 8502, and could run CP/M 3.0. Hackers have developed code and hardware to let it use things like hard drives and add more RAM. Make a RAM disk and run GEOS from that, and you would have a neat little toy.
I’ll agree that UNIX made a big difference, even without code leakage. I was working on DEC-10s and proprietary things in assembler, and just _reading_ the UNIX/C stuff in 1980 made the old ways instantly obsolete in my eyes.
FWIW, I don’t think Cain and Hendrix’s Small C was that influential, and it didn’t predate GCC by all that much (a few years), because 8080s just weren’t good enough. More important were the early 8086 C compilers, which were decent enough, and those overlapped GCC development. I say this as someone who used Small C, BDS C, and Lattice C circa ’82-’84.
-dB
“Among other advantages, “ESR” might never have been necessary – big win for me, if not for anyone else.”
That’s an interesting thing for you to be saying, Eric…I know you didn’t set out to become famous, but it hasn’t been terrible, either…
What-if question #2: “I didn’t foresee Linux. If I had, I might have written it myself.”
Would you have GPLd it?
>That’s an interesting thing for you to be saying, Eric…I know you didn’t set out to become famous, but it hasn’t been terrible, either…
Being Mr. Famous Guy has had large personal costs that I will not discuss. Parts of it have in fact been more painful than I would wish on anybody; I don’t talk about this much because I despise whiners and self-martyrs, but it is a fact. I don’t regret the choice I made to try to change the world, but…knowing what I do now, if I could reach back in time and change things so I could achieve that without being Mr. Famous Guy, I would do it in a heartbeat.
>Would you have GPLd it?
No. MIT License, most likely, or something equivalent.
@esr:
I’d put it a few years later. For example, Modula-2 was getting some pretty serious traction in some quarters until ANSI C gained enough features to be good enough. C ate its lunch from the bottom and the CPUs got fast enough for memory-managed languages to eat its lunch from the top.
> It looks to me like the real trigger was the release of the 386 – that is, first ship of “PC” systems with the capability for a flat 32-bit address space and a VM around 1987. That is, the commoditized hardware had to reach the right capability level to host Unix before the interesting things could happen, and they began to happen with dizzying rapidity immediately thereafter.
Aren’t you back-projecting a bit, to think a flat 32-bit address space and a VM are necessary conditions to host Unix? I’ve played around with simh; the PDP-11 didn’t have any of that. SCO Xenix for the PC-XT was released in 1984.
>Aren’t you back-projecting a bit, to think a flat 32-bit address space and a VM are necessary conditions to host Unix?
No, though that is a very reasonable thing for you to guess based on limited information.
When the 386s arrived, I was running Microport System V on a 286 machine. I knew exactly how badly Unix on a segmented architecture without VM sucked – I was painfully expert on the multitudinous varieties of lossage appurtenant thereto.
PDP-11 Unix had worked in its day as well as it did because application scale in 1975 was ten years (that is, about six Moore’s-Law doublings) behind that of 1985. By 1984 Xenix XT was a toy, a joke. Microport was a sort of lungfish, just barely adequate as a demonstration system but only a placeholder for what would be possible when Intel shipped hardware that didn’t suck. I was very, very clear about all this at the time.
A 32-bit virtual address space and virtual memory hardware were really the sine qua non of serious computing around that time. Try programming a 16-bit machine to do anything serious; have you ever heard of “overlays”? If not, be thankful.
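(For readers who never had the pleasure: a real-mode “far pointer” was a segment:offset pair, and the physical address was segment * 16 + offset, so nothing you could name through one 16-bit offset spanned more than 64K. Below is a tiny sketch of just the arithmetic – my illustration, runnable on any modern host; the segment values are the classic PC video-buffer example.)

    /* Real-mode 8086 address arithmetic -- just the math.  Many
     * different segment:offset pairs alias one physical byte, and
     * no single 16-bit offset spans more than 64K, which is what
     * drove "far" pointers and overlays on those machines. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t linear(uint16_t segment, uint16_t offset) {
        return ((uint32_t)segment << 4) + offset;   /* segment * 16 + offset */
    }

    int main(void) {
        /* Two distinct far pointers naming the same physical byte. */
        printf("B800:0000 -> %05lX\n", (unsigned long)linear(0xB800, 0x0000));
        printf("B000:8000 -> %05lX\n", (unsigned long)linear(0xB000, 0x8000));
        return 0;
    }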
But a line of development from MC68000 machines (010s or 020s, at least) and 4BSD would have been preferable (IMHO) to what we have now (Intel processors and Linux). It just didn’t work out that way.
“When the 386s arrived, I was running Microport System V on a 286 machine.”
As was I. I didn’t know you had this, though.
And yes, Random832, it sucked rather badly. That Unix could be made to run on a 286 was a tribute to the folks at Microport, but $DEITY, was it ever painful. There were changes made to C news because I had to make them to get it running on that system – my System V/AT machine was the main Usenet feed into Houston for a while, in the days when you could reasonably do it on a USR 2400-bps modem – that were simply horrific.
I don’t regret doing it, but I don’t miss it for a moment, either.
>As was I. I didn’t know you had this, though.
I have it on good authority that there were corporate all-hands meetings to discuss the bug lists I turned in.
I’ll third what Eric and Jay have said. Say the words “far pointer” to someone old enough to remember what that means and the implications thereof, and watch the resulting shivers and convulsions. And without an MMU, it was not really possible to enforce the memory-protection constraints that large-scale systems need in order to keep running reliably. I know that the remaining Amiga aficionados still believe that virtual memory is just a passing fad, and an OS that doesn’t let user apps walk the kernel’s internal data structures and rewrite the CPU’s interrupt vectors is a toy that isn’t up to the job of handling real work. But I don’t think history is on their side on this one. :)
Oh, and you ran PDP-11 Unix? Good — now you know why DEC invented the VAX.
System V/AT was my first Unix system.
Picture a Unix newbie fixing filesystem lossage with fsdb. Then turn to something sane.
By comparison, the $500 NCR Tower XP I replaced that box with, even given the sustained weirdness that was the TCP/IP implementation, was a major improvement.
@ESR:
> It looks to me like the real trigger was the release of the 386 – that is, first ship of
> “PC” systems with the capability for a flat 32-bit address space and a VM around
> 1987. That is, the commoditized hardware had to reach the right capability level
> to host Unix before the interesting things could happen, and they began to happen
> with dizzying rapidity immediately thereafter.
Didn’t Minix run on some truly small hardware? I’m thinking a 286, but I could be mistaken.
At one point I had it running under System 7 on a Mac (i.e., in user space), but I don’t recall if that was an SE or an SE/30.
MINIX 1.0 was designed to run on the 8088-based PC or above. Quite remarkable, getting a Unix to run on one of those — though there exists a tiny, ASM-implemented Unixoid OS for the C64 called LUnix (similar in some ways to OS-9).
You won’t get any argument that the rapid scaling-up of Intel processor capabilities represented by the 8080, Z80, 286, 386 sequence gave more horsepower to software. But my point was that prior to the escape of the Intel architecture from IBM’s proprietary walled garden, all computer suppliers enforced proprietary gardens of their own. The fact that IBM attempted but failed to put the genie back into the bottle with the PS/2 in 1987, and that Apple continually engineered its line to ensure a walled garden, demonstrates that the impulse was still present but that the breadth of the market by that point made the effort moot. Without the escape and massive success of the MS-DOS/PC/Intel architecture (and the vast increase in the breadth of compatible hardware accessories), the broad platform that hackers operated on might well never have been created, and would have been much more expensive.
@esr: Regarding RMS, we are all the center of our own worlds, and the hero of our own narratives. What gets told about the hacker culture at MIT’s AI Lab seems largely filtered through the perceptions of RMS and what he thought it was. Given his evangelical nature and subsequent attempts to impose a particular vision of what software development ought to be on the world, I wonder how much the real MIT AI Lab culture resembles Stallman’s recollections, and what you might hear if you could get other old-timers who were involved there to spin tales.
I’d love to get Guy Steele’s take on it, for example. When Stallman was expressing the first version of Emacs in ITS TECO, he did so by collecting and regularizing independent sets of macros TECO users had written to make their lives easier. Steele was one of them, and briefly worked with RMS on the attempt, but seems to have stepped back and let RMS run with it. Given RMS, I can understand why he might.
>I wonder how much the real MIT AI Lab culture resembles Stallman’s recollections, and what you might hear if you could get other old-timers who were involved there to spin tales.
We’re not completely in the dark about that. I have talked to some of RMS’s contemporaries at the Lab (including Guy Steele) and seen their tracks in the Jargon File. My impression is that RMS’s account is factually accurate as to his experience; neither Steele nor others have even hinted otherwise to me. I do not, however, take that as a guarantee there’s no sentimental patina on his recollections – it would not surprise me if his account were incomplete and somewhat sentimentalized.
> I know that the remaining Amiga aficionados still believe that virtual memory is just a passing fad…
The Amiga approach was in fact quite appropriate given the quasi-real-time constraints of multimedia, games and the like – and this is where the aficionados have a point. But yes, there are some real tradeoffs here: the lack of memory protection might be seen as a problem for general-purpose applications. Now, for quite some time MS-DOG and the Apple Mac did not have memory protection either, yet they were used for real work – still, the 80386 (with a flat address space, and making virtual memory feasible, not just memory protection) was a critical development.
I remember Microport System V/286, but was never forced to use it. For that matter, I believe AT&T had a System V port on a 286 at one point.
I got a Unix machine before I got a PC. I got (and still have) an AT&T 3B1, the larger and more powerful sibling of the UNIX PC. It was a single-user workstation designed by Convergent Technologies (now part of Unisys) and produced for AT&T in an OEM deal. It used a Motorola MC68010 CPU (the first model that could support a hardware MMU, I believe) running at 10 MHz. It had a monochrome monitor with a bit-mapped GUI and mouse, and used MFM hard drives of various sizes. It could boot and run a Convergent port of AT&T System V Release 2 in *one* megabyte of RAM and perform useful work. It could have up to 4MB of RAM – mine has 3.5. A client back then used a 3B1 with 3MB RAM to support four users on terminals plus one on the console to run a specialized distribution-management database package. It worked fine.
A PC clone joined me later running MS-DOS 2.11, and in turn got replaced by a 386 running Windows 3.1. Win 3.X said it would run in 4MB, but really wanted more. Mine had 8MB. I looked at my 3B1 running System V R2 in as little as 1MB RAM, and my 386 struggling to run a single user OS with a multi-tasking shell in 8MB RAM, looked in the direction of Redmond, Washington, and said “What are you *doing?*” (I still say that.)
The 386 certainly made various things possible. There was controversy when it was released. Before the 386, the custom was “second-sourcing”: companies that bought chips asked “What do we do if your factory burns down?” and insisted on more than one supplier for a part, so Intel wasn’t the only source of 80x86 CPUs. With the 386, Intel declined to license it to other chip makers, betting they had the production capacity to meet the demand. Customers weren’t thrilled, but they didn’t have a lot of choice.
Another OS that was hobbled by the 286 was IBM’s OS/2. When the 286 was released, supporting protected mode and more RAM, everybody was waiting for the OS that would follow MS-DOS and use the capabilities of the new hardware. What we eventually got was OS/2, but by the time it arrived, the 386 was appearing. 286 machines were largely used as fast DOS workstations. IBM partnered with Microsoft to develop OS/2, and I heard stories that Microsoft wanted to skip the 286 and develop for the 386 instead, but IBM said “No, we promised to support the 286, and you’ll develop for it.” Microsoft eventually chose to abandon OS/2 and concentrate on Windows, and the nail in OS/2’s coffin came when no support was offered for 32-bit Windows applications. Had OS/2 been developed for the 386 in the first place and the IBM/Microsoft partnership remained in effect, we might be running a version of OS/2 now.
@esr: I never thought RMS’s recollections were factually *inaccurate*. Whatever his flaws, “liar” was never a term I heard applied to him. But we all remember what we want to remember, and it’s precisely that sentimental patina I was thinking of.
Very good essay. An aside re: DECUS – it wasn’t all application stuff. It came with Conroy’s amazing free/open-source C compiler, and helped us bootstrap an immense amount from Unix over to the DEC platform.
>The historical evidence is against you. The code-sharing communities gated by proprietary tools and OSes never scaled, neither in volume nor in quality.
CodePlex has a good number of projects and there are a number of C# projects on GitHub. In the total Open Source universe C# is only around 1% but that’s still a large number.
Code sharing occurs quite a bit in things like MATLAB, Google Earth, ESRI and other closed tools.
/shrug
Many devs will share regardless of platform. This can be seen on sites like stackoverflow, code ranch, dev blogs, etc.
>Relatedly, there’s a good reason you only remember the games from the DECUS tapes. Everything else bit-rotted to unusability long ago. You might want to think about why.
I remember the games because we played them. Nothing more to read from that. It’s not like those games have anything but historical interest anymore except for a very few who like playing retro ascii games.
me – There was a hacker explosion, just not an open source hacker explosion. Tons of freeware and shareware was developed during this period. Very similar to the current explosion in mobile app developers.
esr – Yes. Now ask yourself why so little of it survived. Me, I know why – those were formative years for me.
I was there too. Around the time you were coding Z-80 assembly comm software on TRS-80s I was learning Z-80 assembly on TRS-80s in college.
You bring this up a lot, but application software has a shorter shelf life than system software. It simply is the nature of the beast, since UI and feature sets must adapt rapidly.
Big shareware titles that we remember from the 90s, like Doom, didn’t die but transitioned to regular products. Many shareware utilities were made obsolete when Apple and Microsoft incorporated those features into the OS. Utilities like PC-Talk (80s) slowly became less relevant as we moved away from dial-up. Word processors, spreadsheets and databases from the 80s faded in the 90s as Office, which came bundled with many new PCs, dominated.
MS Office has been around 23 years. Photoshop the same. Both of those made a lot of shareware obsolete on the PC and Mac platforms.
Whether the code was open or not wouldn’t matter to the longevity of these programs. The ecological niches they filled either disappeared or ended up dominated by another, more competitive product.
What you see as an advantage of open source, I see more as a result of a scarcity of good commercial software competing in the same domains.
> prior to the escape of the Intel architecture from IBM’s proprietary walled garden, all computer suppliers enforced proprietary gardens of their own.
Well, open platforms were not at all unknown in the 8-bit days. So, if the IBM PC had not escaped its proprietary status, some other platform might have spread among hobbyists in a “commoditized” way, allowing for an open-source community. In fact, since the virtual-memory-capable MC68010 was released as early as 1982, we might have seen Unix-like OSes on commodity microcomputers even earlier than we did.
@nht: “I remember the games because we played them. Nothing more to read from that. It’s not like those games have anything but historical interest anymore except for a very few who like playing retro ascii games.”
/me raises hand, with versions of Adventure, Dungeon, Empire, Larn and Nethack for a couple of different platforms. (There is a Palm OS port of Nethack using SDL that runs on my Tapwave Zodiac 2. esr was amused when I showed it to him.)
“Big shareware titles that we remember from the 90s, like Doom, didn’t die but transitioned to regular products. Many shareware utilities were made obsolete when Apple and Microsoft incorporated those features into the OS. Utilities like PC-Talk (80s) slowly became less relevant as we moved away from dial-up. Word processors, spreadsheets and databases from the 80s faded in the 90s as Office, which came bundled with many new PCs, dominated.”
Shareware != open source. I still have some shareware code in use. It’s still here for the same reason most of what I run is open source if possible. I run commercial software only if a shareware/freeware/open source product is not available to do the job. In about 95% of the cases, I can find something that does the job as well or better than commercial products.
And some of those shareware products became full-time jobs for their creators. On MS-DOS, I ran a third-party COMMAND.COM replacement called 4DOS, which added command-line recall and editing, aliases, environment variables evaluated on the command line, a built-in file viewer à la Unix less, and a vastly expanded batch language with power equivalent to a Unix shell script. The author, Rex Conn, developed it for his own use, and released it as shareware when it achieved a sufficient state of development. He thought it would sell a few thousand copies, and maybe fund a new development machine or some gadgets for his boat. It became his full-time job, and still is – he now offers a graphical tabbed console for Windows called Take Command, based on the old DOS offering. I asked him back then whether he ever wished it *had* stayed the limited success he anticipated, and he said “Yes. Every time I wake up in the morning to 500 4DOS messages in my Inbox!”
Another such case is Sammy Mitchell, proprietor of Semware. Sammy became popular in the DOS days with a shareware editor called Qedit that was very widely used. Semware now offers a programmer’s editor called The Semware Editor, and it’s Sammy’s main business. He never expected that to be the case either.
Ultimately, a Darwinian survival of the fittest takes place. Products occupy niches in a computing environment. If the environment changes, the niche may disappear, and so will the products that evolved to populate it. This applies to commercial software as well as anything else. Once upon a time, for example, WordPerfect was the most popular word processor, and sold enough copies at $500 each to fund toll free tech support and still make money. WP missed the transition to Windows, and by the time it caught up, MS Word was firmly established as market leader. WP still survives as part of Corel’s offerings, but it’s a relatively tiny slice of the word processor market.
There was a primitive open-source software culture in the mainframe world in the early to mid-’70s in universities. We had games available that we could access off-hours using TSO. In particular, there was a FORTRAN version of Adventure that was mailed around from place to place, with improvements being made and sent out. (It usually required someone on the inside, as access to the 9-track tape drives and card punches was only available to paying customers.) There was the “Trek” game, also FORTRAN, IIRC. A friend and I wrote a random treasure generator for the original D&D in APL that got sent to a few other places. I cut tapes of it for at least 3 friends, some of whom went on to grad school elsewhere. I have a 9-track tape of a bunch of them in a box with my flowcharting template, write rings, 360/370 assembler cards, and other computing antiques.
At one Fortune 500 mainframe shop I worked in summers, the systems programmer had a network of friends at other mainframe places that would share useful patches and utilities, also via 9-track tapes snailmailed around.
Damn. I need to get back to writing code again; it’s been over 10 years since I did more than Word macros or Excel programming…
ech, how old is that 9-track, and what density is it written at? It may well be readable even now.
“I didn’t foresee Linux. If I had, I might have written it myself.”
So… “Erix”? Instead of a penguin, what would the mascot be? A Samurai?
>So… “Erix”? Instead of a penguin, what would the mascot be? A Samurai?
Nah, I’m not quite that Japanophilic. I dunno, but I did rather like the seagull that was proposed back before Linus settled on the penguin.
I remember the 60’s and 70’s for code, and in those days software was not capitalized; it was an expense, and in many cases it was free. IBM used to provide free software with a mainframe, and would provide a staff programmer to support a client.
The SHARE user community provided code to the attendees, and the long-lived HASP program was written by the University of Houston.
That era died when companies started capitalizing software and putting it on their books. Once it was an Asset, it was property and not to be shared but sold.
patb209: No, HASP wasn’t written by the University of Houston. It was written by IBM Federal Systems Division at the NASA Manned Spacecraft Center (now the Johnson Space Center). However, since IBM FSD was an IBM user, not an OS/360 development shop, it still counted as user code, not part of the OS.
I always thought that it was a historical travesty that NASA/JSC was a JES3 user…
HASP = Houston Automated SPOOLing Program.
If HASP was written at the Houston Manned Space Flight Center, I’ll stand corrected.
I was always told the “Houston” stood for UH.
I think the larger point is that as long as software was an expense, companies took one view of it; once it became a “Capital Asset” or “Intellectual Property”, it became restricted.
“….we should have seen the emergence of a viral Linux-like open-source platform around 1982, resembling DOS on steroids and perhaps launching off the BBS culture’s communications links.”
A bit later (MS-DOS 2.0), IBM was developing their follow-on, which would have greatly improved things for everyone. (Of course, this was NOT open source.) Bill Gates faced them down, and refused to give them the MS-DOS source code they needed for their project. We got MS-DOS 3 instead.
UH wasn’t an OS/360 shop at the time. When I was there in 1974 for an engineering seminar, they were just starting up with a 370, but most work was done on a 7094.
Yes, it’s Houston Automatic Spooling Program, from the Manned Spacecraft Center being in Houston.
Don’t take IBM’s position on systems software as typical, or driving the industry. They had a large sword hanging over their head in the form of an antitrust case. They did not copyright or sell their software as one result of that case; other vendors were under no such constraints, and did copyright it.
>Don’t take IBM’s position on systems software as typical, or driving the industry. They had a large sword hanging over their head in the form of an antitrust case.
During the early Unix years AT&T was in a similar position as a result of the 1956 consent decree that had settled their antitrust troubles. Their colossally bungled attempt to commercialize Unix followed the divestiture of the Bell Operating Companies on 1 Jan 1984 almost immediately.
The early Usenix tapes (which were collections of mostly source code) as well as the Berkeley 4BSD tapes were in my recollection the way we distributed software in the late 70’s and early 80’s. At every Usenix conference we would make copies (or request copies) of the tapes. In my mind that was the first public group distribution of various sources.
Later, if you had a net connection, you would also FTP to different companies and universities to get whatever was deemed handy. I kept maps of FTP destinations to the software available there. The software would be in source form, and it was up to you to port it. This was before Usenet was widespread. Yes, when Usenet got rolling and useful (around 1981) there was comp.sources.unix.
To the good old days!
AT&T did have a 286 System V Unix, which I believe ran on Olivetti machines. Not a true multiuser Unix system, since a user-space program could take down the machine, but still useful.
@DMcCunney
As a user of QEdit and 4DOS back in the days of running my BBS in the 1980s, I’d have to say that I’m very surprised to hear that either Sammy Mitchell or Rex Conn is still making money off that stuff. 4DOS and QEdit had their place, but there is literally nothing either one of those tools could do for me today; bash is so much better than 4DOS/4OS2/4NT/TakeCommand as a scripting tool, it’s not even funny, and Vim and Emacs are so much more powerful than the relatively limited QEdit/TSE. Even if I were forced to work on the Windows platform, bash, Vim and Emacs all run there.
Why do the old hackers love kicking RMS? I know he has his flaws, but the guy has made seriously important contributions to the Open-Source/FOSS world and to the Linux ecosystem. He’s like the weird kid that everyone picks on among a bunch of weird kids that everyone picks on.
Aside from the personal eccentricities befitting his mad genius, RMS legitimately led a number of GNU projects to victory, including GCC and Emacs, and created an ecosystem for things like R, SAGE, etc. – http://www.gnu.org/manual/blurbs.html. If any one architect had ever produced the vast pile that GNU has, they would be considered one of the top software people in history.
It’s an oddity that Linux took off when GNU Mach is living on within MACOS.
>It’s an oddity that Linux took off when GNU Mach is living on within MACOS.
I don’t believe there is any GNU code in the OS X kernel. The XNU kernel is from NeXTSTEP and the Mach 2.5 kernel from CMU.
“Why do the old hackers love kicking RMS?”
I doubt I qualify as “old hacker” in this company (perhaps more “alter kacker”), but to me, RMS’s code contributions are more than offset by the damage he did to the idea that software should be made freely available by talking about it in dogmatic, ideological terms. RMS set the cause of open source software back at least a decade.
@DMcCunney “Shareware != open source.”
Yes, but hackers != open source either, unless you want to restrict it as such. As an inclusive term, then, many ’80s shareware programmers would be part of that sub-culture.
>Yes, but hackers != open source either, unless you want to restrict it as such. As an inclusive term, then, many ’80s shareware programmers would be part of that sub-culture.
Hackers != open source culture then. Nowadays trying to make that distinction is difficult and rather pointless – an unintended consequence of my culture-hacking, but I’m OK with it.
“Why do the old hackers love kicking RMS?”
He’s not the weird kid that other weird kids pick on because they want someone to pick on. He’s the weird kid that other weird kids pick on because he obnoxiously tells them they should do things HIS way and they are evil if they don’t. All in the name of “freedom”.
The dude spent a lot of his time in the early days not trying to make a better world but trying to shaft another set of developers (at Symbolics) because they had the audacity to try to make a living from something they loved to do.
> RMS’s code contributions are more than offset by the damage he did to the idea that software should be made freely available by talking about it in dogmatic, ideological terms.
RMS’s codification of the FOSS ethic was by no means perfect, but it was arguably a positive development. Many hackers from the microcomputer/BBS community were introduced to FOSS through RMS’s writings, GNU software and the GPL ‘legal code’. Linus Torvalds might plausibly be among them; the very first versions of Linux were released under a vague non-commercial license, as was typical in the IBM PC community at the time, and he only switched to the GPL upon advice from others.
I’m in the camp that believes RMS has done far more good than harm. Those that take such serious issue with his dogmatism, to the point of rejecting Open Source, FOSS or the FSF, are as polarized as RMS himself. There was bound to be a battle of ideals on the issue of closed vs. open source regardless of his involvement. To put the brunt of the blame for the religious war on RMS is shortsighted, IMHO.
>To put the brunt of the blame for the religious war on RMS is shortsighted, IMHO.
Maybe I shouldn’t comment on this, since I’m often identified as RMS’s rival in this “war”, but I’m going to wear my anthropologist hat here…
The situation is not symmetrical and the polarization not equal. The “religious war” is much, much more important to one side (RMS’s) than to the other side (putatively mine). And one side embraces a much wider range of particular issue stances than the other. I could tell you what I think this means, but I think you’ll absorb the implicit lesson better if you figure it out yourself.
The problem is that RMS cast the whole idea of having access to the source code in terms calculated to satisfy his ideology and fanaticism, and never mind what others who don’t share that fanaticism think. He preached to the choir and ran off a lot of parishioners.
Many of us who rejected so-called “free software” and the soi-disant “Free Software Foundation” are nonetheless ardent proponents of open source.
Is RMS the Sheldon Cooper of open source?
You can just imagine him correcting this comment in true Sheldon Cooper style “…first of all, it’s free software not open source…”
HA!
@Joseph Coffland:
No there wasn’t. Cost/benefit analysis, yes. Religion, no.
Only if you’re interested in rewriting history (which a lot of his supporters seem to be).
I was once convinced that Sheldon Cooper was supposed to be a parody of Stephen Wolfram.
Stallman is a unique special blend of Asperger’s, self-imposed hermitude, and ideological extremism. Each corner of this deadly triangle tends to magnify the effects of the others.
I think of RMS as the Bobby Fischer of software. Genius at first, then slow decay into weirdness, surrounded by acolytes who refuse to see the descent into decrepitude. Come to think of it, Nikola Tesla was pretty much the same thing, though I don’t think that RMS spends any of his time feeding pigeons…
>Hackers != open source culture then. Nowadays trying to make that distinction is difficult and rather pointless – an unintended consequence of my culture-hacking, but I’m OK with it.
It’s not difficult to make that distinction. As I said, many mobile app developers have that same mentality and most of that is closed source. Hackers love creating elegant code that does useful stuff.
I find “open source culture” to be remarkably exclusive rather than inclusive. Take for example the responses on the OSI mailing lists for anyone that wants to do a CC-BY-NC equivalent license. “That’s not open source…get lost” is the usual response.
The Creative Commons philosophy seems much more inclusive allowing more or less the full spectrum of open to closed. There’s also no religious war between CC-BY-SA and CC-BY that I’ve noticed. No “battle of ideals” appears to exist in that community and was unnecessary in ours.
@Joseph Coffland
I think it is not just a religious war or differing ideologies that differentiate RMS. It is a matter of different cultures too.
For example, (and correct me if I’m wrong), I don’t think ESR would have an ideological problem with singing the Free Software song, or its Open Source equivalent. Even so, it’s not something I think he would choose as a method of disseminating information or inspiring people in a group setting. (Please correct me if I’m wrong about this too. Have I missed out on ESR crooning the open source message to the world? I would pay money to see that video.)
>The problem is that RMS cast the whole idea of having access to the source code in terms calculated to satisfy his ideology and fanaticism, and never mind what others who don’t share that fanaticism think. He preached to the choir and ran off a lot of parishioners.
Here’s an outsider’s view:
I think that what RMS thinks he did is fix the BSD-style licenses (i.e. closing the door for open source code to become not open). The ideology aspects around it serve as a rationale, a justification, an explanation.
Something like : “we hold these truths to be self-evident: that all users are endowed with certain unalienable rights; namely the right to use a program for any purpose, the right to … (etc, 4 freedoms)”
I suspect the priest/preacher-like attitude is something that came later (and/or that a lot of it is perception by people who disagree with his leftist politics). Or maybe he needed to be adamant about it to help it gain acceptance. I wasn’t there, so I wouldn’t know.
I also think ‘guest’ has a point in that it was especially the GPL’s guarantee that ‘Free software will remain Free’ – i.e. RMS’s fix to the BSD-style licenses – that was attractive to the “free for non-commercial use”-inclined code-sharing crowd, including Linus (“making Linux GPL’d was definitely the best thing I ever did”). I’ve always understood that to mean something like “having Linux GPL’d made a lot of people comfortable enough to cause mass collaboration (and a different license would probably not have had the same effect)”.
@kn:
> I think that what RMS thinks he did is fix the BSD-style licenses
And a lot of us think he broke them.
> (i.e. closing the door for open source code to become not open)
Except, that’s not true at all. Open source code remains open forever (AIUI there are some obscure loopholes in copyright law that let a copyright holder or their heirs revoke a license in certain circumstances, but the GPL doesn’t and can’t fix them). The original version doesn’t stop being open just because a closed derivative version exists.
> Except, that’s not true at all. Open source code remains open forever
Yes of course. And I know that.
I was thinking two things at once and got confused.
New attempt :
1- To RMS, the fact that BSD-style licenses allow for “free” code to end up in proprietary licensed programs was a flaw that needed fixing
(e.g. if proprietary programs can use free code, but fixes and improvements on that code don’t return to the free version, “Free software” would not only assist in the creation of proprietary programs, but Free software would also almost automatically be inferior to the corresponding proprietary program as it would lack said fixes or improvements)
2- the resulting GPL, ensuring that GPL’d code was very unlikely to be reused in proprietary / commercial programs or that these programs would become “Free software” (code accessible,reusable, shareable, …) themselves if they incorporated GPL’d code, resonated well with a large group of (hobbyist?) programmers that tended to use “free for non-commercial use”-type licenses (perhaps they felt they would be taken advantage of if the fruits of their hobby/work contributed to the profits of a company, especially if that company doesn’t give any software or code back)
Does that make sense?
>And a lot of us think he broke them.
Honest question: do you (plural, if you want, re. ‘lot of us’) think that now, in retrospect, or did you think so the moment the GPL was first published?
I, for one, have been arguing against the GPL from the outset. I’m the one who coined the term “General Public Virus”.
@esr: ” I dunno, but I did rather like the seagull that was proposed back before Linus settled on the penguin.”
A shitehawk? No. You don’t want to go there. Really.
@Morgan Greywolf: “As a user of QEdit and 4DOS back in the days of running my BBS in the 1980s, I’d have to say that I’m very surprised to hear that either Sammy Mitchell or Rex Conn is still making money off that stuff.”
Both built user bases sufficient to be profitable. Rex went into partnership with Tom Rawson to form JP Software. 4DOS got licensed by Symantec and released by them as NDOS. When Windows came around, it got ported to 32-bit Windows as 4NT, and later enhanced to become Take Command, which these days is a tabbed GUI console program. Tom Rawson has subsequently left JP Software, but Rex is still doing it for a living. (He released the old 16-bit 4DOS code as open source, and a Bulgarian developer picked it up and enhanced it for a bit, so there’s a 4DOS 8.00 floating around.)
Yes, bash is arguably more powerful, and there are versions of bash, tcsh, and zsh for Windows. (Ksh, too, if you install AT&T’s U/WIN environment.) But the folks who form Rex’s market are Windows users and developers. Take Command is a lineal descendant of a product they first encountered under MS-DOS, and works pretty much the same way. And it’s integrated into Windows in a way that things like bash *can’t* be. They aren’t interested in running bash or the like, and Take Command meets their needs. (I run the freeware TCC-LE version. It’s an updated version of 4NT, with 4DOS features in a console window. I don’t need full Take Command.)
Sammy now develops The Semware Editor. It isn’t Vim or Emacs, but it’s quite powerful, thanks. The last time I communicated with him, he was developing a new language he intended to implement TSE in, which could also be used standalone to develop other programs. I pointed him at Bram Moolenaar’s Zimbu (http://www.zimbu.org/) because Bram seems to be doing something similar. The old Qedit is still available as TSE, Jr. (It turned out there was an editor for HP minis called QEDIT that Sammy didn’t know about when he was writing his program for DOS, so a rename was required.)
I have Vim and Emacs here, but on the Windows side, I mostly use Florian Balmer’s Notepad2. It’s based on the Scintilla Edit Control, and can replace Windows Notepad. It isn’t anywhere near as powerful as Vim or Emacs, but most of what I do doesn’t *need* Vim or Emacs, and Notepad2 meets my requirements.
There are an assortment of outfits like that, which fill gaps in larger companies’ product lineups. They take the risk that the larger company may at some point fill that gap itself. At one point, Microsoft was interested in acquiring 4DOS, but Tom and Rex didn’t like the terms. The MS rep threatened that they would enhance COMMAND.COM and kill 4DOS. Tom and Rex said “Yeah, right,” and were correct. If you’re one of those companies and have acquired a large enough market base, you can make a living serving that market. There may not be tremendous *growth* potential, but you may not care. You have sufficient revenue to stay in business.
@nht: “@DMcCunney ‘Shareware != open source.’
Yes, but hackers != open source either, unless you want to restrict it as such. As an inclusive term, then, many ’80s shareware programmers would be part of that sub-culture.”
There are programmers who are hackers and there is code that is open source. Not all hackers write open source code, and arguably, not all open source code is written by hackers. (If you are a programmer who wants to make a *living* writing open source code, you go to work for someone like IBM or Google who uses and develops open source software, and you work on what your manager assigns you.) I never equated hackers with open source.
@Jay Maynard
“The problem is that RMS cast the whole idea of having access to the source code in terms calculated to satisfy his ideology and fanaticism, and never mind what others who don’t share that fanaticism think.”
These ideas of RMS are shared by many others in and outside of the USA.
I suspect that in Europe and South America, the whole “pro-BSD, anti-GPL” camp tends to be seen as a “freedom to screw over your users” ideology. At least, that is my own personal experience; YMMV. Also, “More Free than Thou” does not resonate much in Europe. It reminds me of the old “Washes whiter than white” ads. That is, as a way to fool us.
I could imagine the BSD license resonates more in Russia and East Asia. But maybe not so much because of their love of Freedom (but I am easy to persuade on this).
In short, the GPL expresses the political and moral feelings of a large number of people. A very large number. The way you try to frame this as the result of a single sick mind in no way improves your powers of persuasion.
“Also “More Free than Thou” does not resonate much in Europe.”
Of course it doesn’t. Europeans don’t truly grok freedom, and never have.
@Jay Maynard
“Of course it doesn’t. Europeans don’t truly grok freedom, and never have.”
Another endearing soundbite to win over the hearts and minds of Europeans.
>I, for one, have been arguing against the GPL from the outset. I’m the one who coined the term “General Public Virus”.
GPL is a wonderful license for rent-seekers of all stripes.
>I suspect that in Europe and South America, the whole “pro-BSD, anti-GPL” camp tends to be seen as a “freedom to screw over your users” ideology.
When has BSD code EVER screwed over their users?
> Another endearing soundbite to win over the hearts and minds of Europeans.
At this point, I don’t give a damn about winning over your hearts and minds. You’ve made your choices, and soon will get to live with the consequences. I’m sure you’ll find some way to blame the US for your dependence on Russian energy and imported Muslim labor (which will outbreed you and adopt Shari’a, at which point they can complete Hitler’s project of getting rid of those pesky Jews, gays, and other “undesirable” influences).
@The Monster
Yawn. The end of the “Abendland” was already predicted in 1918. Better written I suppose.
For one thing, both the UK and the Netherlands have ruled a hundred million Muslims each for a century or so. I think we can manage without the advice of some parochial minds from the USA.
We also know these Sharia fanatics are paid by the Saudis under the protection of the USA.
“When has BSD code EVER screwed over their users?”
Apple?
> The end of the “Abendland” was already predicted in 1918.
Sorry, I don’t know much about IBM 360 systems.
@winter “Apple?”
In what way did Apple screw over FreeBSD or any other bsd licensed code users or developers?
>For one thing, both the UK and the Netherlands have ruled a hundred million Muslims each for a century or so. I think we can manage without the advice of some parochial minds from the USA.
Experience as a colonial empire doesn’t have much value in figuring out how to manage assimilation of a large sub-culture. America has far greater experience as a melting pot than your largely homogeneous racial mix. You’re 79.3% Dutch monoculture, with another 5.7% other EU ethnicity.
We’re not the parochial ones. In fact, your statement is about as dumb as someone in the US saying that we could manage without Dutch advice on flood control. Well, yeah, if you want to ignore the country that has the most experience in an area, go right ahead.
EU dependence on Russian energy essentially means that if anyone is going to do anything about Russia in Ukraine, it’s not going to be the EU; it’s going to have to be us. We might not do anything because, after a decade of war, we’re tired – and if the US doesn’t do anything, then nuclear non-proliferation effectively just died.
Crimea is lost because of the obvious Russian threat that if Ukraine doesn’t just cave, Russia might take the whole country over. Had Ukraine kept its nukes, that wouldn’t be much of a threat. As is, given the large Russian ethnic population anyway, there’s no risk to Russia in simply annexing Crimea. It’s not as if the EU is going to do anything if the US doesn’t. There should be German and French tanks sitting in Ukraine right now, and there should be an EU/Ukraine defense treaty in the works to safeguard Ukraine proper in response to the effective Russian annexation of Crimea.
But that isn’t going to happen, so pretty much you guys are freeloading off US taxpayers and feeling all superior about it.
Why not rename this blog to “rms bashers”? I mean, it’s pathetic to see someone like Mr. Raymond take subtle digs at rms’ credibility in practically _every_ post of his. And then his drooling fans join in. It’s very pathetic. The funniest thing is that rms doesn’t even read any of this.
What you people don’t realize is that he’s considered a hacker god by everyone who counts and is starting out. Him and (to a lesser extent) Linus Torvalds. I’m 17. I’m a nobody (for now). But even I know what sort of sacrifices rms has made, and how he has revolutionized the whole open source culture. Yes, Linux is the backbone, but GNU/FSF is the seed, the “Bible” or church, if you will. So all this snapping at him does him no harm. Anyone starting out in the field of programming and even remotely interested in open source knows what a great man he is.
I’m 17, and going to start college this year, and me and many of my friends (admittedly all nerds) know this. _Nobody_ considers Mr. Raymond to be anything but a mildly entertaining but eloquent writer. Please stop positioning yourself as something of a “guardian” of the hacker culture. We know who it is.
@Sam Shipman:
>> The end of the “Abendland” was already predicted in 1918.
>Sorry, I don’t know much about IBM 360 systems.
Groannnnnnnnnn…
Very punny.
> “Of course it doesn’t. Europeans don’t truly grok freedom, and never have.”
> Another endearing soundbite to win over the hearts and minds of Europeans.
I admit that I felt a short moment of resentment until I remembered that I actually agree with Jay Maynard on this one.
@Aditya:
So are these avant-garde programmers using google code or sourceforge? Because they sure as shit aren’t the majority on github. Or are they all just using savannah and running hurd? Where do all the cool kids hang out?
You would know a bit about rms without Torvalds. But the license wouldn’t be anywhere nearly as important without Torvalds.
Yes, yes, yes. Him and Che Guevara. You just admitted that you didn’t live through it and that you’re not yet old enough to make good decisions.
And most of us aren’t religious. Some of us actively despise religions of all stripes. So as far as some of us are concerned, this merely furthers the perception that you have a lot of growing up to do.
We’re not trying to harm. We’re trying to mitigate the damage from harm already done. Obviously we still have a lot of work to do.
>>> “Of course it doesn’t. Europeans don’t truly grok freedom, and never have.”
>> Another endearing soundbite to win over the hearts and minds of Europeans.
>I admit that I felt a short moment of resentment until I remembered that I actually agree with Jay Maynard on this one.
There have always been Europeans who understood freedom but the trend over the past couple of centuries has been for them to leave. (To the benefit of the US, Australia, etc. Economies of many South American countries only function as well as they do because of such people and their descendants.) Unfortunately it seems the quality of what’s left behind suffers somewhat, on average.
@nht:
The problem with CC-BY-NC is that there’s a LOT of legal hair-splitting over what is and isn’t commercial use (I’ve seen people who are worried that posting something on a blog with AdSense ads counts as “commercial use” and I’ve yet to see a strong rebuttal of that view), so people like me tend to avoid non-commercial use clauses as not worth the compliance hassle.
> so people like me tend to avoid non-commercial use clauses as not worth the compliance hassle.
But many of the people using CC-BY-NC see this as a feature, not a bug. Sure, “NC” is horrible from the point of view of an open source license – but the people using it don’t want an “open-source” license. Instead they want a “reformed fair-use” that isn’t as uncertain and restrictive as the official government fair-use provisions.
The bigger problem I see with “NC” is that it’s used in two different ways by people with two different attitudes. (1) “If you want to use this commercially, contact me about licensing terms for that; I can be very reasonable.” and (2) “If you want to profane my art by using it commercially, then choke on your filthy lucre and die, you vile capitalist pig!”
Sure, “NC” is horrible from the point of view of an open source license – but the people using it don’t want an “open-source” license.
Why? There’s no real reason to exclude academic licenses and accept GPL as “open source” and have yet another silly term such as “shared source” to describe it.
As for whether or not having a blog with AdSense might be considered commercial, I’d just say yeah, it’s annoying that it might be treated as such, but a bright dividing line is better than a blurry one. At what level of revenue does a personal blog become commercial?
I don’t have any issue with folks that avoid using any NC stuff due to compliance concerns. I feel the same way about GPL and LGPL. It’s still “commons” stuff though.
The OSD excludes licenses with field-of-use restrictions. This is only sane (and is, in fact, one of the saner ideas popularized by the GPL). If I can’t be sure I can generally use it without asking permission, it is not part of the commons.
Personally, I think the OSD should also exclude licenses with mandated source distribution, but that would not be politically expedient.