I’ve written software for a lot of different reasons besides pure utility in the past. Sometimes I’ve been making an aesthetic statement, sometimes I’ve hacked to perpetuate a tribal in-joke, and at least once I have written a substantial piece of code exactly because the domain experts solemnly swore that job was impossible to automate (wrong, bwahahaha).
Here’s a new one. Today I released a program that is ugly and only marginally useful, but specifically designed to shame other hackers into doing the right thing.
Those of you who have been following the saga of reposurgeon on this blog will be aware that it relies on the rise of git fast-import streams as a universal history-interchange format for VCSes (version-control systems).
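For readers who haven't seen one, here's roughly what a minimal stream looks like. This is a toy Python emitter I'm sketching purely for illustration, not reposurgeon code, and the sample commit (author, file, message) is made up:

```python
#!/usr/bin/env python
# Toy emitter for a minimal git fast-import stream -- illustration only.
import sys

def data(text):
    "A fast-import data section: byte count, newline, then the raw bytes."
    return "data %d\n%s" % (len(text), text)

content = "hello, world\n"
message = "Initial revision\n"

stream = ""
stream += "blob\nmark :1\n" + data(content)
stream += "commit refs/heads/master\n"
stream += "mark :2\n"
stream += "committer Example Hacker <hacker@example.com> 1290000000 +0000\n"
stream += data(message)
stream += "M 100644 :1 hello.txt\n\n"

sys.stdout.write(stream)
```

Pipe the output into git fast-import inside an empty repository and you get a one-commit history; every exporter and importer I'm about to complain about is, at bottom, just producing or consuming records like these.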
Alas, support for this is still spotty. On git, where the format was invented, it’s effectively perfect. bzr comes closest elsewhere, with official import and export plugins, though there’s a weird asymmetry between the two. Mercurial has a fairly solid third-party exporter, but its third-party importer is crap. From there the situation only gets worse; Subversion is typical in that it has a proof-of-concept third-party exporter that loses some metadata, and no importer at all.
And this is ridiculous, actually. It’s already generally understood that writing exporters to the stream format is dead easy – the problem there seems political, in that VCS devteams are perhaps a bit reluctant to support tools that make migration off their systems easier. But having written one myself for reposurgeon, I know that a high-quality importer (which encourages migration towards your VCS) is not all that difficult either. Thus, there’s no excuse, either technical or political, for any self-respecting VCS not to have an importer.
I decided to prove this point with code. So I dusted off the oldest, cruftiest version-control system still in anything resembling general use – Walter Tichy’s Revision Control System (RCS). And I wrote a lossless importer for it. Took me less than a week, mostly repurposing code I’d written for reposurgeon. The hardest part was mapping import-stream branches to RCS’s screwy branch-numbering system.
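To give a flavor of the numbering problem: in RCS, a branch rooted at trunk revision 1.3 is numbered 1.3.1 (the next branch at the same point is 1.3.2), and revisions on it are 1.3.1.1, 1.3.1.2, and so on; since every master file keeps its own revision numbers, the branch point has to be worked out per file. The allocation rule itself looks something like the sketch below, which is just an illustration, not rcs-fast-import's actual code:

```python
# Illustrative only: allocate RCS branch numbers for import-stream branches.
class BranchNumberer:
    def __init__(self):
        self.branches_at = {}   # branch-point revision -> branches already rooted there

    def allocate(self, branch_point):
        "Return the next free RCS branch number under the given revision."
        n = self.branches_at.get(branch_point, 0) + 1
        self.branches_at[branch_point] = n
        return "%s.%d" % (branch_point, n)

numberer = BranchNumberer()
print(numberer.allocate("1.3"))   # 1.3.1 -- first branch rooted at trunk revision 1.3
print(numberer.allocate("1.3"))   # 1.3.2 -- second branch at the same revision
```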
To appreciate how silly this was on any practical level, you need to know that RCS doesn’t have changesets (the ability to associate the same comment and metadata with a change to multiple files). I cheat by embedding changeset-oriented data as RFC822 headers in RCS comment fields. An exporter could be written to invert this and completely recover the contents of the import stream, and I’ve been communicating with the author of rcs-fast-export.rb; it may actually do this soon.
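Roughly, the header trick looks like this. It's a minimal sketch; the header names are mine for illustration and not necessarily the ones rcs-fast-import actually writes:

```python
# Pack changeset-level metadata into a per-file RCS comment as RFC822-style
# headers, so an exporter can later reassemble the changeset.  Illustrative only.

def encode_comment(changeset_id, author, branch, message):
    headers = ["Changeset-Id: %s" % changeset_id,
               "Author: %s" % author,
               "Branch: %s" % branch]
    return "\n".join(headers) + "\n\n" + message

def decode_comment(comment):
    "Invert encode_comment(): split the headers back out of an RCS log message."
    header_block, _, message = comment.partition("\n\n")
    fields = dict(line.split(": ", 1) for line in header_block.splitlines())
    return fields, message

comment = encode_comment("42", "esr <esr@thyrsus.com>", "master", "Fix the frobnicator.\n")
assert decode_comment(comment) == ({"Changeset-Id": "42",
                                    "Author": "esr <esr@thyrsus.com>",
                                    "Branch": "master"},
                                   "Fix the frobnicator.\n")
```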
There is one circumstance under which rcs-fast-import might be useful; if you wanted to break a project repo into multiple pieces, blowing it apart into constituent RCS files and re-exporting separately from the cliques might be a way to do it. But mainly I wrote this as a proof of principle. If crufty old RCS can bear the semantic weight of an import stream, there is simply no excuse left for VCSes that claim to be modern production-quality tools to be behindhand on this. None.
At this point there is an inevitable question burning in the minds of those of you who are moderately clued in about ancient VCSes. And that is: “What about SCCS?”
Ah, yes. The only VCS even cruftier and more ancient than RCS. Those of you really clued in about ancient version-control systems will have guessed the answer; they’re so similar that making rcs-fast-import speak SCCS if anyone ever wants that would be pretty trivial (in particular they have the same semantics of branching, which is the hard part). Actually the code is already factored to support this; out of 841 lines only 36 are the plugin class that exercises the RCS command set, and an SCCS plugin wouldn’t be more than a few lines longer.
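To give a feel for what that plugin layer is, here's roughly its shape. The class and method names are made up for this sketch, not lifted from rcs-fast-import; an SCCS version would swap in admin/get/delta invocations behind the same interface:

```python
import subprocess

class RCSPlugin:
    "Drive the stock RCS command-line tools for one master file.  Sketch only."

    def checkin(self, filename, revision, message):
        # ci -f forces a checkin even if unchanged, -r sets the revision,
        # -m supplies the log message, -t- supplies the initial description.
        subprocess.check_call(["ci", "-f", "-r" + revision,
                               "-m" + message, "-t-imported", filename])

    def checkout(self, filename, revision):
        # co -f overwrites the working file, -r selects the revision.
        subprocess.check_call(["co", "-f", "-r" + revision, filename])
```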
But I targeted RCS partly because it’s still in actual use; some wiki engines employ it as a page-versioning backend because it’s fast, lightweight, and they neither want nor need changesets. In truth, if you have a directory full of documents each one of which you want to treat as an atomic unit, RCS still has utility (I use it that way). SCCS, on the other hand, survives if at all in a handful of creakingly ancient legacy installations.
(What made the difference? RCS was open-source from the get-go. SCCS wasn’t. We all know how that dance goes.)
Yes, 841 lines. 574 of them, 65%, stripped out of reposurgeon. Less than a week of work, crammed in around corners while I was busy with other things. It’s not complicated or tricky code; the trick is in having the insight that it’s possible. And it stands as a living rebuke to every modern VCS that hasn’t gotten its fast-import act together yet.
One entertaining side-effect of this project is that I figured out, in detail, how CVS could have been written to not suck.
Those of you into VCS archeology will know that CVS was a layer over an RCS file store, a layer that tried to provide changesets. It was notoriously failure-prone in some important corner cases. This is what eventually motivated the development of Subversion by a group of former CVS maintainers.
Well…here I am, writing rcs-fast-import to make RCS hold the data for losslessly reconstructing import-stream changesets…and at some point I found myself doing a double-take because I had realized I had solved CVS’s problems. Here’s how I explained it to Giuseppe Bilotta, the author of rcs-fast-export:
Incidentally, a side effect of writing the importer was that I figured
out how CVS should have been written so it wouldn’t have sucked :-) It
had a tendency to break horribly near deletes, renames and copies;
this is because the naive way to implement these (which they used)
involved deleting, copying, and renaming RCS master files.

In fact, I figured out last night, while building my importer so it
would round-trip, that you can have reliable behavior from changesets
layered over RCS only if you *never delete a master*, not even to
rename it. I know what the right other rules are, but it’s nearly
twenty years too late for that to matter.

Sigh. If I had looked at this problem in 1985 I could have saved
the world a lot of grief.
Giuseppe said:
> I’m very curious to hear about your solution about tracking these
> operations. I mean, git doesn’t track them because it only cares about
> contents and trees, but how would you do that in a file-based vcs?
Here’s how I explained it:
On delete, don’t delete the master. Enter a commit that’s an empty file
and set the state to “Deleted”. (The second part isn’t strictly necessary,
but will be handy if you need to do forensics because other parts of
your repo metadata have been corrupted.)

On rename, don’t delete the master. Copy it to the new name, so the
renamed file has the history from before the rename, but also *leave
that history in the original*. Give the original a new empty commit
as you did for delete, with a state of ‘Renamed’. If your RCS has
a properties extension, give that commit a rename-target property
naming what it was renamed to. Give the same commit on the master copy
a renamed-from property referencing the source.

On copy, check out the file to be copied and start a new master with
no history. If your RCS has a properties extension, give that commit
a copied-target property naming what it was copied to, and give the
initial commit of the copy a copied-from property referencing the
source.

On every commit, write a record to a journal file that looks like a
git-fast-import commit record, except that the <ref> parts are RCS
revision numbers.

You’re done. It may take you a bit to think through all the
retrieval cases, but they’re all covered by one indirection through
the journal file.

Don’t like the journal file? No problem, you just write a
sequence-numbered tag to all files for each commit. This would be
slower, though.

There are optimizations possible. Strictly speaking, if you have
an intact chain of rename properties you can get away with not
copying history to the target of a rename.

The key point is that once a revision has been appended to a specific
master, you *never delete it*. Ever. That simple rule and some pointer
chasing gives you a revision store that is robust in the presence of D,
R, and C operations. Nothing else does.

Not by coincidence, modern DVCSes use the same strategy.
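To make those rules concrete, here's a minimal sketch of the bookkeeping they imply. Everything in it is illustrative — hypothetical function names, an in-memory list standing in for the journal file — it's not code from CVS, rcs-fast-import, or anything else:

```python
import shutil

journal = []   # stands in for the journal file of fast-import-like records

def commit(master, content=None, state="Active", properties=None):
    "Stub: append a revision to MASTER and record it in the journal."
    journal.append({"master": master, "content": content,
                    "state": state, "properties": properties or {}})

def delete_file(master):
    # Never delete the master: record an empty revision marked Deleted.
    commit(master, content="", state="Deleted")

def rename_file(old_master, new_master):
    # Copy history to the new name, but leave the old master in place
    # with a tombstone revision pointing at the rename target.
    shutil.copyfile(old_master, new_master)
    commit(new_master, properties={"renamed-from": old_master})
    commit(old_master, content="", state="Renamed",
           properties={"rename-target": new_master})

def copy_file(src_master, dst_master, checked_out_content):
    # A copy starts a fresh master with no inherited history.
    commit(dst_master, content=checked_out_content,
           properties={"copied-from": src_master})
    commit(src_master, properties={"copied-target": dst_master})
```

A real implementation would, of course, write actual git-fast-import-style commit records with RCS revision numbers in the <ref> slots rather than appending dictionaries to a list.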
He did raise one interesting point:
> Considering the debates still going on the “proper” way to handle
> renames and copies, I’m not sure it would have been accepted ;-)
But it turns out that has an answer, too:
The above scheme gives you git-like semantics. For bzr-like semantics,
add one more wrinkle; a unique identity cookie assigned at master-creation
time that follows it through renames. Your storage management doesn’t
care about this cookie; only the merge algorithm cares.

All this is so simple [and actually implemented in rcs-fast-import] that
I’m now quite surprised the CVS people got it wrong. I could ship a
daemon that implemented these rules in a week, and could have done
so in 1985 if I’d known it was important.
Sigh…sometimes the hardest part is knowing what to spend your thinking time on. I console myself with the thought that, after all, I have gotten it right at some of the times it mattered.
Eric,
Might the CVS problem have been an example of “Premature optimization is the root of all evil”?
I remember from the old CVS documentation that they cared about storage sizes. Your solution at first looks like it will keep redundant copies. Could it be that that direction might have been closed off by “premature optimization”?
Modern VCS, eg, git, were designed with the idea of never, ever deleting anything. Git started with keeping complete archives of all historical versions of files. Add a space, copy the complete file (or so I remember the early discussions). Only (somewhat) later did they add compression and delta-packing to store only the differences. That is, doing the optimization after they had the semantics clear.
If you do not care about keeping redundant copies around, your ideas seem logical. If you try very hard to keep as little in storage as you can get away with, they might seem, literally, unthinkable.
>Your solution at first looks like it will keep redundant copies.
See my remark about optimization. There would be “redundant copies” only in the presence of renames, and even those can be eliminated with some cleverness about including rename pointers in the metadata.
But I’m sure your more general point is correct, especially with regard to file deletes. “Never delete a master” must have looked daunting to the CVS designers, even if in practice it’s a relatively rare case.
I’ll second Winter’s comment. Mercurial and git are profligate users of disk space; CVS was not. This doesn’t matter much in these days of terabyte drives, but when a big disk was 100 MB and the cost per megabyte was over US$1, it mattered a lot.
In 1985 storage was a way different beast than it is today, so it makes sense that they wanted to delete unneeded stuff. In 1985 I was using floppies. 8-inch floppies.
I guess you should never judge the past by today’s standards.
There is also a problem with the lack of merge tracking information in CVS (and it is merge tracking, i.e. tracking the parents of a merge, not merged-in tracking, i.e. tracking which revisions have already been merged in, like Subversion 1.5+ does with the ‘svn:mergeinfo’ property – there lies the madness). Are you representing such info in the “log”?
>Are you representing such info in the “log”?
Yes, that’s quite easy. In general it just means keeping a list of the merge parents in the commit. In rcs-fast-import I cheat and embed that info in RFC822-style headers in the comments. If I were building a real CVS-equivalent I’d put it in a journal file.
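Concretely, it amounts to no more than something like this (the header name is illustrative, not necessarily what rcs-fast-import emits):

```python
# Record merge parents the same way as the other changeset metadata: as an
# RFC822-style header prepended to the RCS comment.  Illustrative header name.

def add_merge_parents(comment, parent_revisions):
    return "Merge-Parents: " + ", ".join(parent_revisions) + "\n" + comment

print(add_merge_parents("Merge the frobnicate branch.\n", ["1.4", "1.2.1.3"]))
# -> Merge-Parents: 1.4, 1.2.1.3
#    Merge the frobnicate branch.
```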
Sidenote: it looks like the new WordPress does something wrong with the encoding of names in comments; it should be “Narębski”, but the accented character comes through mangled. Eh, PHP.
But then again, CVS didn’t quite get away with it, did it? There’d have been no issues if they’d gotten that right. To work properly, RCS needs that space… if they’d wanted to get away with saving as much space as possible, they should’ve started from a different design – one that didn’t rely so much on the files being kept.
Some of my friends and I have kind of a running joke that if you really want one of us to do something, just tell us it’s “impossible” and stand back. There are some jobs that I’ll call very difficult to automate, such that the cost of getting the job done isn’t justified. But I don’t think I ever say those jobs are impossible, most likely out of fear that someone like me will take it as a challenge to prove me wrong.
This suggests that those 574 lines in reposurgeon should be reorganized into (a) module(s) or a library, to facilitate such reuse in the future. One of my friends (who was, and may yet be, the maintainer for some obscure hardware drivers in Linux) writes nearly all of his programs by building a library and a small front end to it, so that it’s easy to build on his work for other applications.
>This suggests that those 574 lines in reposurgeon should be reorganized into (a) module(s) or a library, to facilitate such reuse in the future.
Of course I gave this serious thought. I concluded that the gain from being able to distribute each program in one piece won the argument in this case. It’s an unusual situation in that well over half of the putative library code was actually removed in the rcs-fast-import case, leaving little more than the import-stream parser. In particular, about half the LOC of the Repository class was machinery for a primitive to delete commits that rcs-fast-import didn’t need to carry.
@eric:
““Never delete a master” must have looked daunting to the CVS designers, even if in practice it’s a relatively rare case.”
But then, they could not have predicted that rareness. We, after more than a decade of use, can see what is rare and what is not.
But I will accept your judgment on these matters unquestioned.
@The Monster:
“just tell us it’s “impossible” and stand back.”
I want an automatic scan for programs that hang or perform nefarious acts. They say it is impossible. ;-)
Ah, but has anyone written a reposurgeon importer/exporter for Visual SourceSafe or whatever MS calls it these days?
I’m pretty sure it’s impossible.
:-)
Is there any Ubuntu/Debian package available for reposurgeon?
>Is there any Ubuntu/Debian package available for reposurgeon?
Not that I know of. But it’s a single self-contained script with no dependencies other than Python; just grab it and go.
I doubt you’ll be able to motivate anybody to prove you wrong simply by placing the “impossible” label on this particular task.
Anybody who is dumb enough to trust MS VSS with their code won’t be smart enough to write an exporter, and anybody who is smart enough to write an exporter will immediately realize that there isn’t any crapware in any MS VSS repository on the planet that’s worth liberating.
Well, of course it is; since the fast-import format was invented for git, it matches git’s capabilities and features best.
BTW, the fast-import stream is used (by way of svn-rdump and svn-fe / svn-fast-export) in ongoing efforts to provide native interaction with SVN repositories in Git (so you would be able to fetch from a Subversion repository into a Git repository).
Sidenote: there is Zit, which is an SCM interface on top of Git allowing for easy versioning of single files, like RCS. Though I don’t know how widely it is used…
This looks remarkably similar to how Mercurial is implemented (from what I understand of its architecture), with per-file storage of deltas/changes, and changelog/revlog; what is missing is the manifest (the equivalent of tree objects in Git) that Mercurial has, and which I guess would be missing in CVS-ng.
Well, as they say hindsight is 20/20.
>Well, of course it is; since the fast-import format was invented for git, it matches git’s capabilities and features best.
Actually, I think the true genius of the format is how non-git-specific it is. Or maybe that should be interpreted differently, and the true genius of git is that its model can be expressed with an elegant and minimal set of abstractions.
>This looks remarkably similar to how Mercurial is implemented (from what I understand of its architecture), with per-file storage of deltas/changes, and changelog/revlog;
That’s true, and the reason for the similarity is twofold. (1) There’s really only one correct design under these constraints, and (2) Supposing there weren’t, I am informed by a friend who knows Matt Mackall, the designer of Mercurial, that Matt and I have (in the friend’s judgment) very similar styles of thinking and designing. I believe this, because absolutely nothing I’ve read about Mercurial has surprised me; it seemed very much like an assembly of natural choices for me to make pursuing that problem.
>Well, as they say hindsight is 20/20.
You know, that’s exactly what Giuseppe replied! :-)
Yeah, but the missed opportunity stings a bit extra because I was waaaay ahead of the curve on VCSes. Early adopter in the early 1980s, author of some pioneering front ends under Emacs in the early 1990s, and I described a scheme for distributed VCS in a paper in 1995, a couple years before arch and a decade before hg/git. I know that I had a slight influence on the design of BitKeeper (because Larry McVoy said so at the time) and it is possible from timing and architectural similarities that my 1995 paper heavily influenced arch, though sadly Tom Lord is sufficiently crazy that neither I nor anyone else is ever going to get a reliable confirmation or disconfirmation out of him.
So it’s not just 20/20 hindsight. I was thinking the right kinds of things back then, it was just (dammit) that I didn’t know the problem was important enough for me to concentrate on.
What about writing a fast-export tool for MS Visual Studio Team System?
> Not that I know of. But it’s a single self-contained script with no dependencies other than Python; just grab it and go.
That’s fine, thanks. I prefer applications with minimal dependencies. I have a small project with some bad commits and wanted to edit/remove them.
> (What made the difference? RCS was open-source from the get-go. SCCS wasn’t. We all know how that dance goes.)
Unix wasn’t open source, either. Didn’t seem to matter then.
>Unix wasn’t open source, either. Didn’t seem to matter then.
That’s…complicated. Technically, no, it wasn’t. But until AT&T sort-of-productized Unix in 1984 (just after the divestiture) it was widely treated as something like an open-source commons. Even tacitly by AT&T itself, which was operating under a consent decree dating from 1956 that got it out of antitrust trouble at the price of locking it out of the computer business. It was during that early period that Unix attracted the cadre of people (including, for example, Richard Stallman, the early BSD hackers, and myself) who would later shape the explicit open-source movement.
Dude, I had the 4.0 BSD tapes and a licensed copy of System 32V. Did you? (Maybe you did, at Rabbit, but your CV shows you weren’t there until May 1983, and 4.2BSD was released in August that year. I think it likely that 4.2BSD was your earliest exposure to the internals of Unix.)
AT&T’s SYSVR1 shipped in 1983, not 1984.
The 1982 AT&T consent decree (aka the Modification of Final Judgement) lifted the restrictions of the 1956 consent decree. The 1982 MFJ allowed AT&T to enter the computer business (they failed by any measure.)
The USL v. BSDi lawsuit kept 386BSD sequestered, and Linux filled the resulting vacuum. Linus has stated that if 386BSD had been available, he would never have started on the Linux kernel.
Oh yes, BTW. In reference to your CV. You state:
I acted as system administrator, support person, toolsmith and resident UNIX expert for up to 20 programmers on BSD 4.1, System III, System V, XENIX and FOS environments running over a VAX-11/750, several AT&T 3B series machines, a handful of 68000-based UNIX boxes and the IBM PC/AT
The only way to run Unix on a 68000 was by running a pair of them. You could run Unix on a (single) 68010. Which 68000 machines did you have that ran Unix? (There were a couple.)
As for the AT&T 3B machines you had to administer, well, you have my deepest sympathy. I note it was your last “real” job, and can only imagine this as the proximate, if not primary cause.
>Dude, I had the 4.0 BSD tapes and a licensed copy of System 32V.
No, I had bootleg Berkeley tapes in a couple versions and eventually SV4r1.
>AT&T’s SYSVR1 shipped in 1983, not 1984.
Yes. ’83-84 was a strange period for those of us following the System V side of things. AT&T was making noises about productizing but hadn’t got serious about it yet, not even to the point where they asserted copyright in legally correct form on the source tree. They took about three years to get their fingers out after the MFJ and then, as you note, fucked it up royally.
>The USL .v BSDi lawsuit kept 386BSD sequestered, and linux resulted in the vacuum.
Yeah, I know all about that as it was going on; I was friendly with Rob Kolstad.
>You could run Unix on a (single) 68010. Which 68000 machines did you have that ran Unix?
I think they were actually 68010s, and I was using 68000 as a generic for the whole line – but I might be wrong about that; it was early enough that they might actually have been 68000s without proper MMUs as you’re thinking of. I don’t remember the maker’s corporate name; something starting with F, I think. They had white cases, if that helps. :-)
BTW, I checked with McVoy about your involvement in BitKeeper.
He said that he had been noodling over some problem and you suggested a line of thinking that bore fruit.
He also had some rather unkind things to say about you, which I won’t repeat here.
If McVoy is the guy I think he is (a company I worked for used bitkeeper for a while) then I’d take what he says with a grain of salt.
@Yeshua Lizzard
Survey says: BZZZZZZT! Sun 1. Thanks for playing, though.
> Survey says: BZZZZZZT! Sun 1. Thanks for playing, though.
BZZZZZZT! The Sun-1 used an external MMU.
ftp://reports.stanford.edu/pub/cstr/reports/csl/tr/82/229/CSL-TR-82-229.pdf
Thanks for being ignorant, though.
I’ll make it easy for you and just quote page 6:
Full virtual memory capability is possible with the 68010 processor, the virtual-memory version of the 68000. The original 68000 processor cannot fully recover from page faults because it does not save sufficient state information to continue an aborted instruction. However, for a limited set of operations, such as load and store, recovery from aborted instructions is straightforward. With additional software assistance, recovery from more complex addressing modes appears feasible.
They never made it work quite right.
>They never made it work quite right.
Jezus Lizzard is quite right about this. Some very early workstation-class machines used kluges like the Sun 1’s external MMU, but they were kluges and did not long survive.
Jezus: Your original comment was:
> The only way to run Unix on a 68000 was by running a pair of them.
The Sun-1 didn’t run a pair of them, so the Sun-1 is still a counterexample.
Me? I wouldn’t know directly; my 68K box was an NCR Tower XP.
OK, the only way to run a BSD4-style (VM) Unix on a 68000 was with a pair of them, or you could “limp” along if you liked, but the 68000 could not re-start a faulted instruction, no matter what you had for an MMU.
Your NCR Tower XP ran a 68010 with a custom MMU.
The problem, esr, is that if you could have fixed CVS twenty years ago, you probably would have. And yet you didn’t, so I have to guess that some piece of information available now didn’t exist then.
Russell: 20/20 hindsight. Also, nullable references are a bad idea, threads should be as lightweight as your hardware can stand, buffers shouldn’t be in executable space, and yes, it is in fact worth it to build a concept of length into your most basic array library even if it is slightly less performant than skipping the length check.
It occurs to me the true ending of the “Worse is Better” story hasn’t been written yet; the MIT approach doesn’t win in months or years but over decades it can. CVS couldn’t be fixed by a culture caught in Worse is Better; the right choice would have been rejected even had it been proposed.
> Yeah, I know all about that as it was going on; I was friendly with Rob Kolstad.
The Jesus Lizard worked with both Kolstad and Brian Berliner (of CVS fame), and still keeps in touch.
You will have to guess if this was at Convex or Prisma. TJL will state that, when Prisma went down, he got many of its people new jobs at Sun, with the Rocky Mountain Technical Center being formed in order to do so. Kolstad didn’t do so well at Sun. ’nuff said.
TJL knew many people at BSDi, including Kolstad, Jeff Polk, Donn Sealey, and Trent Hein. The Jesus Lizard was an actual *customer* of BSDi, for no reason other than to attempt to help a real Unix-for-the-masses come into being. TJL is also on a first-name basis with Rick Adams, and Mike O’Dell, who started it all.
So yeah, TJL knows all about what was going on then, too.
> The problem, esr, is that if you could have fixed CVS twenty years ago, you probably would have.
Seems to me that esr (and the rest of us) failed to do even what Berliner did with CVS.
>TJL knew many people at BSDi, including Kolstad, Jeff Polk, Donn Sealey, and Trent Hein. The Jesus Lizard was an actual *customer* of BSDi, for no reason other than to attempt to help a real Unix-for-the-masses come into being.
Me, too. Same reasoning. I used BSDI to bring up the free community ISP I built in ’93. Linux wasn’t mature enough yet; furthermore I didn’t yet know it was destined to be.
I think that it might be better to use the term “technological capability” than “information.”
Multi-gigahertz CPUs, multi-gigabyte main memories, and multi-terabyte backing store are, in theory, just simple quantitative improvements. However, we often reach the point where the world changes qualitatively because of these improvements.
Not having to worry about effective utilization of every bit and every CPU cycle lets you concentrate on the problem. Not having to wait an interminable time for results (while changing out floppies every minute during the wait) lets you concentrate on the problem.
Solutions which take advantage of these qualitative improvements beget other solutions. Memory-managed, dynamic languages such as Python really come into their own in a resource-rich environment. This is exactly what Eric was referring to a few weeks ago when he was contrasting programming then and now.
It may be that the answer was self-evident then, yet it would have taken weeks and weeks of time to get it right and prove it was right.
On the other hand, you may be perfectly right that he was just missing some information, such as, for example, how different people used CVS. This is, of course, a problem that the internet fixes quite handily.
@esr:
These were most assuredly Fortune 32:16 XP machines. I’m told they were quirky little buggers.
>These were most assuredly Fortune 32:16 XP machines. I’m told they were quirky little buggers.
Almost bingo. Suffix doesn’t ring a bell; I don’t think they were XPs, just plain 32:16s.
As for quirky, I liked ’em. The terminals that came with the machine had quite nice keyboards, I remember that.
Of course they were quirky. The Fortune 32:16 was really a Thomson-built Micromega 32, Thomson being a French company. C’est bizarre!
And yes, they were true 68000 machines, at least in the early instances.
Note that Eric has it slightly wrong: you could put a MMU on the 68000 (several machines were based off the Sun-1 design, which had one), but you still couldn’t deal with re-starting the faulted instruction on a 68000 without running a pair of them. The 68010 fixed that design flaw.
I also think that you’re being too hard on yourself. You might have thought about these problems 20 years ago, but 20 years ago the problems with CVS that are seen as huge now were viewed as relatively minor. Back then almost everyone thought CVS was good enough — this probably had more to do with its widespread use and the fact that it was lightyears better than what came before it (RCS, SCCS).
IOW, it’s more about perspective than about the facts. In the 1980s, PC hard disk drives were loud by today’s standards. But back then, we didn’t care; they certainly were a lot quieter than floppies or tapes or pretty much anything else used for mass storage before then. These days, we wouldn’t even think of putting something so loud and obnoxious in any machine.
Thread-jacking, to be sure, but:
http://esr.ibiblio.org/?p=2561:
and today we note that Symbian is shutting the doors:
http://developer.symbian.org/wiki/Symbian_Foundation_web_sites_to_shut_down
Notes from the last meeting: http://developer.symbian.org/wiki/2010-11-24_All_councils_wrap-up_call
@Yeshua Lounge Lizzard:
Do you have some sort of reading deficiency? If not, then what part of “[t]he Symbian platform remains business critical to Nokia and their estimate of selling >50m S^3-based devices still holds” from your second link did you not understand? Or are you simply linking to things you didn’t read and then drawing erroneous conclusions as a result?
Morbund Greyhair,
Ever hear of corporate PR? Are you really that big a sucker for the CEO-speak?
Of *course* it’s still business critical. The Windows Mobile 7 port isn’t ready yet.
Duh!
@Facebook:
Yes, I’ve heard of corporate PR. What it actually sounds like to me — reading between the lines — is that Nokia is taking Symbian to a more closed development model and that they may be using the open source development base that exists and building more closed source stuff on top of it. (Think Apple.) They will likely release further code for that open source base, but there will be no community involvement (think Apple again). That’s why they spend so long focusing on the licensing. I think Elop, as a former Softie, is more comfortable with a model along those lines than he is with letting the community take Nokia’s flagship OS in whatever direction it wants.
If you still don’t get it, I’m saying flat out that you and Lizzard are jumping to conclusions.
>If you still don’t get it, I’m saying flat out that you and Lizzard are jumping to conclusions.
I agree. But I think you are, too :-)
I guess I should blog about this.
@Moronic Greyhound,
I see Elop serving the interest of Microsoft here.
Step one is to take Symbian back to closed source. Everyone understands that without an effective leader, any open source project will die. Symbian, as a platform just died (except for the currently shipping version.)
Step two is to give Microsoft some serious market share, by flipping Nokia to WM7. With market share, the developers will come.
Step three is to kill Android (the only Open Source choice that still matters, meego is a massive failure. It’s a tablet OS for the Atom, and even Microsoft wouldn’t support Atom in Windows Mobile 7.) Oracle’s lawsuit will taint Android enough that Nokia won’t dare to adopt it, even without Elop’s loyalty to Redmond. Without Nokia, Android must stand on the backs of Taiwan and Motorola.
We will know that Google is frightened when they start pushing Chrome OS.
$EMPLOYER still has plenty of SCCS around, probably because it was the only SCM system shipped with Solaris until very recently. The current plan will make it obsolete in 2011Q1, but I’m not optimistic.
Fortunately, there is GNU CSSC (Compatibly Stupid Source Control).
Hindsight can be a beautiful thing sometimes!
If you promise not to laugh, I’ll mention that I keep my .zshrc under RCS, only because it became so unwieldy to manage otherwise (mostly aliases, with a few functions thrown in to keep things interesting). If nothing else it’s a good way for a newbie to learn how version control works before graduating to Subversion or Git.
>If you promise not to laugh, I’ll mention that I keep my .zshrc under RCS,
Laugh? Heck, I RCS many of my dotfiles. Completely appropriate; sometimes you want file granularity rather than changesets.
>In fact, I figured out last night, while building my importer so it would round-trip, that you can have reliable behavior from changesets layered over RCS only if you *never delete a master*, not even to rename it. I know what the right other rules are, but it’s nearly twenty years too late for that to matter.
I remember being given the instruction never to delete or rename a file the first time I got commit access to a CVS repository, back in 1994, for the reason that it could break the repository, though in a way that could be fixed by hand. I had the impression then that it was folk wisdom.
Why is rcs-fast-import not listed on http://www.catb.org/~esr/software.html?
ESR says: Bug fixed.
The git repository at http://thyrsus.com/gitweb/?p=rcs-fast-import.git is gone.
>The git repository at http://thyrsus.com/gitweb/?p=rcs-fast-import.git is gone.
Arggh. Yes, it is. Must push project to GitLab…
Also: why not put commitids in the RCS files?
It’s taking you a while, isn’t it?