I’ve been mostly blog-silent for the last week because I’ve been working my tail off on a new project. It’s reposurgeon, a tool for performing surgery on repository histories, and there are several interesting things to note about it.
One of my regular guests on the blog, apparently a younger programmer, recently left a comment regretting the old days he wasn’t there for. He reports feeling stifled by the experience of Googling every time he has a nifty programming idea and discovering that someone else has already done it.
But with new days come new challenges, and new opportunities. Reposurgeon exploits a possibility that didn’t exist until quite recently; there has never been anything quite like it before, though there were very partial precedents in svndumpfilter and my own svnsquash/svncutter tool.
reposurgeon is a repository-history editor. With it, you can edit old comments and metadata, remove junk commits (of the sort frequently generated by repository conversion tools such as cvs2svn), and perform various other operations that version-control systems (VCSes) don’t want to let you do.
I wrote it because I’ve been doing a bunch of repository conversions recently and I wanted a way to deal with the crufty artifacts those tend to create. But there are other obvious uses; one would be expunging repo content that has some intellectual-property issue from the history, so you’re not re-infringing every time somebody clones the repository…
And when I say “editor”, I mean a tool general enough to have uses I’m not anticipating. It has a rather powerful little minilanguage in it for specifying selection sets. It lets you dump the metadata from the VCS history (committer, commit date, and comment text) in a textual form that you can edit and feed back into it to modify the history.
Something like this, though limited to one specific VCS, could have been written before now. But the astute reader will note that I haven’t mentioned a specific VCS. In fact, reposurgeon can operate on histories created by RCS, CVS, Subversion, git, Mercurial, bzr, and probably several other VCSes about which I haven’t the faintest clue.
Yes, you may boggle now. If you are wondering where in the fleeping frack I get off claiming to support version-control systems I admit I don’t know anything about, this shows you are paying attention. It has been rumored that I am a clever fellow, but what, what, *what*? Right. You deserve an explanation and shall have one.
The trick that enables reposurgeon to do its magic is that it only fakes editing repositories. What it actually edits is git-fast-import command streams.
Aha! Some of you are already nodding knowingly. For the rest of the audience, a “git-fast-import command stream” is a format Linus Torvalds and friends invented to flatten a git repository history into one big file that can be used to reconstitute the repository by another instance of git.
This is meant to enable writing import tools; the moment you have an exporter that can generate this format from (say) Mercurial or bzr, you can transcode repositories from the other system to git as easily as you pour water from one cup to another. (Well. Some older systems that use only local usernames as committer IDs rather than full email addresses have an issue, but it’s easily worked around in a variety of ways.) And exporters are easy to write; if your special VCS has a command-line interface at all, odds are building an exporter is about a day’s work in the scripting language of your choice.
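To give a feel for how small that job is, here is a sketch of such an exporter in Python. The frobvcs subcommands it shells out to are invented purely for illustration, the byte counts assume ASCII, and a real exporter has to sweat timestamps and encodings; but the overall shape is about this:

import subprocess

def vcs(cmd):
    # Run a (hypothetical) VCS subcommand and capture its output as text.
    return subprocess.check_output(cmd, shell=True).decode("utf-8", "replace")

mark = 0
def emit_blob(data):
    # Each file version becomes a numbered blob that a commit can refer to.
    global mark
    mark += 1
    print("blob\nmark :%d\ndata %d\n%s" % (mark, len(data), data))
    return mark

last = None
for rev in vcs("frobvcs log --ids").split():    # oldest first, hypothetically
    author, when, comment = vcs("frobvcs metadata %s" % rev).split("|", 2)
    files = [(path, emit_blob(vcs("frobvcs cat %s %s" % (rev, path))))
             for path in vcs("frobvcs files %s" % rev).split()]
    mark += 1
    print("commit refs/heads/master")
    print("mark :%d" % mark)
    # fast-import wants "<name> <email> <epoch-seconds> <tz>"; assume the
    # hypothetical VCS hands us epoch seconds and fake up an email address.
    print("committer %s <%s@localhost> %s +0000" % (author, author, when))
    print("data %d\n%s" % (len(comment), comment))
    if last:
        print("from :%d" % last)
    for path, m in files:
        print("M 100644 :%d %s" % (m, path))
    last = mark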
This stream format has useful properties. It’s easy to parse and self-describing. But the really important property is that it expresses an ontology or data model that is both very simple and general enough to capture the state of repositories made not just with git but with other VCSes. It has to, or it wouldn’t be good enough to support importers – too much state would get lost in transition.
So, lots of git fans have written exporters for a huge range of VCSes. Not to be outdone, fans of other VCSes have written importers for their systems. They don’t want all the migration to be one-way, you see. Lossless import capability makes a VCS a potential destination, giving it a competitive advantage over systems that can only export projects away from themselves.
Look what’s happening here! Without necessarily intending it, the git crew have created a de-facto-standard interchange format for passing around version-control histories. This is huge. Because what it actually does is decouple the whole I’ve-got-a-project-history thing from any individual version-control system.
Watch for second-order consequences of this in the future. In particular, I predict that VCSes will increasingly converge on supporting exactly the set of abstractions in a fast-import stream. They’re a good enough set, and being interoperable will prove a powerful lure.
While the rumor that I’m a clever fellow isn’t entirely false, the most important knack I have is for seeing the stupid-obvious possibilities that have been sitting under peoples’ noses all along – in this case, the possibility for a VCS-independent history-editing tool. What reposurgeon actually does is take a fast-import stream in one end, allow you to hack it in various interesting ways, then ship the modified repo out its other end as a modified fast-import stream.
It can look like you’re editing a repository, sure. But that’s because reposurgeon has a method table in it that’s indexed by VCS type and contains a small handful of command-line templates for each of them, including an importer command and an exporter command. And that, basically, is it; that handful of commands is all reposurgeon knows about any specific VCS, and probably all it will ever need to know. Adding reposurgeon support for new VCSes is easy and doesn’t require changing any executable code at all.
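To make that concrete, the table is conceptually something like the sketch below. This is not reposurgeon’s actual code; the command spellings for bzr and Mercurial in particular are guesses from memory and may need adjustment. The point is that this is the only per-VCS knowledge the tool carries:

import subprocess

# Conceptual sketch only, not reposurgeon's real table.
VCS_TABLE = {
    "git": {"init": "git init",
            "export": "git fast-export --all",
            "import": "git fast-import"},
    "bzr": {"init": "bzr init",
            "export": "bzr fast-export .",      # bzr-fastimport plugin
            "import": "bzr fast-import -"},
    "hg":  {"init": "hg init",
            "export": "hg-fast-export",         # guess at an exporter front end
            "import": "hg fastimport -"},       # hg fastimport extension
}

def capture_stream(vcstype):
    # Everything upstream of this call is VCS-independent.
    return subprocess.check_output(VCS_TABLE[vcstype]["export"], shell=True)

def rebuild_repo(vcstype, stream):
    subprocess.run(VCS_TABLE[vcstype]["init"], shell=True, check=True)
    subprocess.run(VCS_TABLE[vcstype]["import"], shell=True, input=stream, check=True)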
If you want to work with a VCS that isn’t in the list, use the exporter of your choice to dump to a stream file, then tell reposurgeon to load from that. When you’re done, tell reposurgeon to write the stream to another file, then use whatever importer you like to rebuild a repo from it.
There is one significant drawback to operating this way. In a system like git or Mercurial that uses hashes of a commit’s content to identify it, the IDs of anything that’s downstream of a commit you alter or delete will change. This will tend to hose people trying to sync from the modified repo. You do not, repeat not, want to use reposurgeon on a publicly-visible repo – not unless you can get everyone to re-clone it in a clean directory afterwards.
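Here is a toy model of that ripple effect; it assumes nothing about git’s actual object format beyond the chaining idea, which is the part that matters. Each commit’s ID hashes its own content plus its parent’s ID, so changing one commit forces new IDs on everything after it:

import hashlib

def commit_id(parent_id, metadata):
    # Toy ID: hash of the parent's ID plus this commit's own content.
    return hashlib.sha1((parent_id + metadata).encode()).hexdigest()

c1 = commit_id("", "initial commit")
c2 = commit_id(c1, "second commit")

# Edit the first commit's comment and its ID changes...
c1_edited = commit_id("", "initial commit, comment fixed")
# ...so the second commit, though untouched, gets a new ID too.
assert commit_id(c1_edited, "second commit") != c2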
But this isn’t reposurgeon’s fault; any surgery tool would have the same issue. In a way, that’s liberating; metadata that no surgery tool could possibly preserve even in principle is metadata that reposurgeon doesn’t have to worry about preserving.
There are a couple other fun things about reposurgeon. I wrote and documented the whole thing in eight days from a standing start, and if after seeing the manual page you think that’s a lot of work for eight days you’re not wrong. I was able to be that productive because (a) I didn’t pause to reinvent any wheels, and (b) stuck to a brutally simple, minimalist design.
An example of not reinventing wheels is how I support metadata editing. Look at the following session transcript; I’ve added some whitespace and comments (beginning with ;;) for clarity:
esr@snark:~/WWW/reposurgeon$ reposurgeon
reposurgeon% read
reposurgeon: from git repo at '.'......(0.20 sec) done.
;; That's the repo being grabbed
reposurgeon% list 426
426 2010-11-01T00:47:57 Documentation improvement.
;; That's a summary listing of event 426, a commit
reposurgeon% write 426
commit refs/heads/master
mark :425
author Eric S. Raymond 1288572477 -0400
committer Eric S. Raymond 1288572477 -0400
data 27
Documentation improvement.
from :423
M 100755 :424 reposurgeon
;; That's how it looks as a fragment of a git-import stream
reposurgeon% mailbox_out 426
------------------------------------------------------------------------------
Event-Number: 426
Author: Eric S. Raymond
Author-Date: Mon 01 Nov 2010 00:47:57 -0400
Committer: Eric S. Raymond
Committer-Date: Mon 01 Nov 2010 00:47:57 -0400

Documentation improvement.
;; And that's how it looks when it's been mailboxized.
That ‘mailboxized’ version is the form you get to edit. It contains all the metadata you can safely modify and nothing that you can’t, with the (unavoidable) exception of the event number. In fact, reposurgeon has an “edit” command you can use that grabs as much of the repository’s metadata as you want, launches an editor session, then de-mailboxizes what you leave when you exit your editor and applies the changed bits.
The point here is that I didn’t invent anything I didn’t have to. Reposurgeon isn’t some glossy idiotic GUI thing where you have to edit commit metadata via a form full of clicky-boxes; you get to use your own editor and deal with a data format as simple as an email message.
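Part of what makes that workable is that “as simple as an email message” is close to literally true: the standard library’s email parser can read a mailboxized entry back in. A minimal sketch, using the field names shown in the transcript above; this is illustrative, not reposurgeon’s own code:

import email

def parse_mailboxized(text):
    # Headers plus body, exactly like a mail message.
    msg = email.message_from_string(text)
    return {
        "event": int(msg["Event-Number"]),
        "author": msg["Author"],
        "author_date": msg["Author-Date"],
        "committer": msg["Committer"],
        "committer_date": msg["Committer-Date"],
        "comment": msg.get_payload(),
    }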
Reposurgeon is a command-line tool in the classic Unix style. (Yeah, I wrote a book about that once.) Part of the reason I wrote it that way is that it meant I got to use the Python cmd.Cmd class as my interpreter framework – once again, not inventing anything I didn’t have to. But I would have done it anyway, because command-line tools can be scripted. And that was a goal.
(This project is, by the way, reason #2317 why I heart Python. The ready-to-use convenience of the cmd.Cmd class, the email parser, and shlex were absolutely essential to getting this done without bogging me down in low-level implementation details.)
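For anyone who hasn’t played with it, cmd.Cmd hands you the whole read-eval loop for the price of one method per command. A stub sketch, with command names echoing the transcript above but none of the real logic:

import cmd, shlex

class Surgeon(cmd.Cmd):
    prompt = "reposurgeon% "

    def do_read(self, line):
        "Read a repository or fast-import stream (stub)."
        args = shlex.split(line)
        print("would read %s" % (args or ["."]))

    def do_list(self, line):
        "Summarize the selected events (stub)."
        print("would list %s" % line)

    def do_EOF(self, line):
        "Exit on end-of-file."
        return True

if __name__ == "__main__":
    Surgeon().cmdloop()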
Back to scripting. Here’s a command I could have used to generate that transcript:
reposurgeon 'verbose 1' read 'list 426' 'write 426' 'mailbox_out 426'
See that? Reposurgeon actually improves on the classic style by having no command-line options. None. Instead…the command-line arguments are interpreted exactly the same way user input typed to the prompt would be. There’s only one specially-interpreted command-line cookie: '-', which means “run the interactive interpreter”.
So, for example, I can say “reposurgeon read list -” to my shell prompt; reposurgeon will cheerfully read the repo in the current directory, list its commits and tags, then hand me a prompt. This is what I mean by brutally (and effectively) simple.
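Wiring up that no-options convention takes only a few lines. This is a guess at the shape of the entry point, reusing the Surgeon stub sketched a few paragraphs up rather than reposurgeon’s real class:

import sys

def main():
    interp = Surgeon()             # the cmd.Cmd stub sketched earlier
    for arg in sys.argv[1:]:
        if arg == "-":
            interp.cmdloop()       # the lone special cookie: go interactive
        else:
            interp.onecmd(arg)     # every other argument is just a command

if __name__ == "__main__":
    main()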
It’s not perfect, of course. I’ve only tested it on small repos with linear histories, which is why I’m calling the initial release 0.1; it needs to be torture-tested using large repos with tricky topologies, and it needs to be tested on things that aren’t git. There are some operations I have planned but haven’t implemented yet, like doing a topological cut of a repo into two repos that keeps the rest of the branch structure intact.
But it’s a good start. It’s already quite useful for my original goal, cleaning up cruft from repo conversions. The core classes are solid. The expression language works. The code is properly factored. It should make a good platform for implementing more complex surgical operations like history merges.
Best of all, every time I add a capability to the tool, it will support every single VCS, now and in the future, that speaks the import-stream format. And the fact that this is even conceivable is a pretty good reason not to pine for the old days.
UPDATE: Thanks to Russ Nelson’s suggestion, the project now travels under the sign of the blue sturgeon. Heh.
I have the gut feeling that this story has something to teach about the whole culture or philosophy of Unix programming, and even about the strengths and weaknesses of the general Open Source world at large, but I cannot really define clearly what. Something along the lines of sacrificing usability for maximum programmability, automatability, scriptability, extensibility, i.e. for the sake of possible synergies, one of which is making it possible and fairly easy for other programmers to increase usability by orders of magnitude by putting a “clicky” interface over it all. Something along the lines of a non-linear trajectory of satisfaction from the viewpoint of the less-technical user: this whole thing will for a long, long, long time be a confusing thing for the average “Mort” VB.NET programmer using Visual SourceSafe, and at the point when someone puts a “clicky” interface over it, it will turn very, very, very quickly into “holy crap, it is awesome”. Basically a _________I usability graph for the less-technical user. I think this might be the typical way Open Source, especially in the Unix culture, works.
>and at the point when someone puts a “clicky” interface over it, it will turn very, very, very quickly into “holy crap, it is awesome”.
Yes. That could be done relatively easily. The front-end GUI would run reposurgeon as a child process, issue it commands, and parse the responses.
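A minimal sketch of that front end using Python’s subprocess module; the prompt handling is hand-waved (a real GUI would read output up to the next prompt), and the command string is just the one from the transcript in the post:

import subprocess

class SurgeonBackend:
    def __init__(self):
        # "-" puts reposurgeon into its interactive interpreter.
        self.proc = subprocess.Popen(["reposurgeon", "-"],
                                     stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE)

    def command(self, line):
        self.proc.stdin.write((line + "\n").encode())
        self.proc.stdin.flush()
        # Simplification: grab one line of response; a real front end
        # would accumulate output until it sees the next prompt.
        return self.proc.stdout.readline().decode()

# e.g. from a button handler:
#   backend = SurgeonBackend(); summary = backend.command("list 426")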
With a properly-designed command-line tool you can do that. On the other hand, if your code has a huge pile of GUI on the front there is usually no hope of finding a layer behind it that can be scripted. (The GIMP is an honorable exception.)
Wow, how old is your knowledge of GUI environments? Apple Events and OLE Automation have not only been around for more than 15 years now, they’re a quantum leap beyond traditional Unix environments in terms of scriptability: applications become live systems whose individual objects can be talked to using a common scriptability framework, in any language with bindings to that framework. Traditional Unix didn’t gain this until years later, and even then it’s still not quite right. (D-Bus does not give you the in-process component capability that ActiveX does, for instance.)
Before you dismiss this as a klugey hack you might want to consider how much of our nation’s financial infrastructure literally runs on Excel spreadsheets. :)
>Apple Events and OLE Automation
Proprietary and single-vendor-controlled. No reason for me to care about them or notice them; they’re nothing but a trap I refuse to blunder into.
And it only is such because it goes to great heroic lengths to be one.

True, but they are still significant counterexamples to the thesis. The fault of those is more that they’re dense and difficult in ways that a GUI-front-ended tool is not, than any inherent problem in being proprietary.
At my job I have to deal with code almost never intended to be run any way other than to support an interactive session between a human and the underlying system. The idea that another program should be able to automate the process is an afterthought.
The tools to do that are brittle; basically you record a sequence of operations in a format that knows what “screen” you were in, what field on that screen you entered what input, and a couple of other details about the state of the screen when you entered it. If the “script” references a prompt but something else appears at that point in processing, the process aborts with an error message and a “Press Enter to continue” prompt that no keyboard is connected to if it’s being run from a command line. (This isn’t a horrible deal if the script is run from a menu within the app itself, which of course is the mentality for how it’s implemented.) There are no conditionals in this “scripting language” at all.
The Unix tradition for jobs too complicated to do as a pipeline is that you write a library to do the actual work and then create a front end that invokes the library, and/or write a minilanguage in which to do the scripting. Either you grok why that’s important or you don’t; I’ve had little success educating those (like my company’s devs) who don’t.
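For what it’s worth, a toy illustration of that split in Python; the library half knows nothing about how it is invoked, and the front end is a few replaceable lines that a GUI, a test harness, or another script could stand in for:

# worklib: the actual work, invocation-agnostic
def summarize(lines):
    "Return (line count, longest line length) for an iterable of lines."
    count = longest = 0
    for line in lines:
        count += 1
        longest = max(longest, len(line.rstrip("\n")))
    return count, longest

# front end: trivially thin, easily replaced
if __name__ == "__main__":
    import sys
    print("%d lines, longest %d chars" % summarize(sys.stdin))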
That’s pretty nifty. I am wondering what the advantage of using reposurgeon is, as opposed to using plain old Git commands.
What I mean is, why not just do the following:
– git-fast-import your repository into Git
– edit the repository using git-rebase (or whatever else)
– git-fast-export the repository back to a command stream
Am I missing something, or wouldn’t that accomplish the same thing? Or is your goal just to provide a convenient interface for re-writing repository history?
>That’s pretty nifty. I am wondering what the advantage of using reposurgeon is, as opposed to using plain old Git commands.
Well, one advantage is that it doesn’t have to be a git repo you’re editing :-). I haven’t tested the bzr and Mercurial support, but there’s good reason to expect it will just work, requiring at most some minor fixes to command options.
I’ve tried doing surgical things with git native commands and found it to be a huge pain in the ass. git really, really doesn’t want you to do things like editing comments on non-tip commits or deleting commits. The interface of filter-branch, which is the main native git tool for that kind of repo surgery, is as bad as Unix find(1) for baroqueness and peculiar, unobvious restrictions. I have actually beaten it into doing much of what I wanted, but it was not a pleasant or rewarding experience.
Also, there was a very specific operation I needed to support – coalescing cliques of adjacent commits with identical comments into one – that I think even filter-branch can’t hack. I think it would take custom programming reaching deep into the git plumbing for that, and that’d be a lousy use of my time when I can write a vastly more general tool that serves a dozen different VCSes with less effort.
Monster:
Try using magic words like “Enterprise Integration” on them. I’m working on a multi-million dollar project at work (that’s late for all the usual reasons) and one of the screaming pains in my future is that there’s apparently no API for extracting work product status as it flows through the (multiple) systems.
I can hack up something if I really have to, but my limited experience so far suggests that it’s going to be brutally painful. (For example, ripping the log files exposes me to a bug in their log rollover-logic that apparently assumes nobody will ever actually *read* the logs unless the system is halted.)
>> Apple Events and OLE Automation
> Proprietary and single-vendor-controlled. No reason for me to care about them or notice them; they’re nothing but a trap I refuse to blunder into.
But of course, if anybody else here _bragged_ about speaking outside of experience, they’d be met with scorn. Of course, the licensing and branding of a tool automatically reveal if there’s fuck all valuable in it. No need or reason to even check to see anything, because the Unix approach is the end-all. I understand that it’s your blog, not a democracy, but still, if you’re going to fill your mouth with ‘engineering’ and ‘professionalism’, you could at least check what the rest of the world is doing.
>Of course, the licensing and branding of a tool automatically reveal if there’s fuck all valuable in it.
You think you’re being sarcastic, I see. OK, let me put it plainly: proprietary tools don’t merely have zero value to me, they have high negative value. That isn’t about the Unix approach being the end-all, which is a nearly orthogonal issue; it’s about proprietary tools being intrinsically dangerous traps that jail your data and put you at the wrong end of a power relationship. See, for example, my recent post on the lost world of Krythar. I went through that sort of thing so many times that I reached the point of “never again!”.
So, when somebody like Jeff Read says “OLE! Active X!”, what I hear is “Hi Eric, why don’t you ram some bamboo splinters under your fingernails? It’ll be ever so much fun!”
Worst
Idea
Ever
Really? Really, Eric? Rewriting history?
…This illustrates an interesting point about the hacker culture. It consciously distrusts and despises egotism and ego-based motivations; self-promotion tends to be mercilessly criticized, even when the community might appear to have something to gain from it. So much so, in fact, that the culture’s ‘big men’ and tribal elders are required to talk softly and humorously deprecate themselves at every turn in order to maintain their status. How this attitude meshes with an incentive structure that apparently runs almost entirely on ego cries out for explanation.
“Before you dismiss this as a klugey hack you might want to consider how much of our nation’s financial infrastructure literally runs on Excel spreadsheets. :)”
Yeah…just think…if the banks all ran on unix, there would’ve been no need to hire that army of high school dropouts to robosign all those thousands of foreclosure documents. One little script and the problem is solved…
Excel spreadsheets.
Hate ’em.
Hate the mindset that produces them — wanna-be programmers doing wanna-be programming, without understanding the first thing about the rudimentary discipline of reproducibility.
However, it’s taken several years, but I’ve come to terms with the fact that a spreadsheet is a reasonable input mechanism for a lot of people for a lot of tasks.
You just can’t let them do any business logic in them, or if they insist, you need to make sure that everything gets double-checked 9 ways from Sunday outside the spreadsheet environment.
In this, Python and xlrd are your friends. You can transform their spreadsheet into something that can be archived, diffed, and double-checked. And if you’re working with serious money, some day, somebody will thank you for saving their ass(ets).
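A minimal sketch of that xlrd workflow, flattening a workbook into tab-separated text that can be archived and diffed; it assumes the classic .xls format that xlrd reads:

import sys
import xlrd

def dump(path):
    # One row per line, tab-separated, so ordinary diff tools work on it.
    book = xlrd.open_workbook(path)
    for sheet in book.sheets():
        for r in range(sheet.nrows):
            print("\t".join(str(sheet.cell_value(r, c))
                            for c in range(sheet.ncols)))

if __name__ == "__main__":
    dump(sys.argv[1])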
> You think you’re being sarcastic, I see. OK, let me put it plainly: proprietary tools don’t merely have zero value to me, they have high negative value. That isn’t about the Unix approach being the end-all, which is a nearly orthogonal issue; it’s about proprietary tools being intrinsically dangerous traps that jail your data and put you at the wrong end of a power relationship. See, for example, my recent post on the lost world of Krythar. I went through that sort of thing so many times that I reached the point of “never again!”.
I dunno, Eric. Microsoft is just about as paranoid about backward compatibility as the Perl 5 maintenance team – both organizations will do so much to protect your existing data / code that they won’t make otherwise reasonable improvements. I’ve never lost data due to a Microsoft change.
My impression of Apple, OTOH, is that it periodically tells its developers to relearn everything from scratch. I’m probably ignorantly incorrect, but if so, they at least have a problem marketing to me.
In general, your complaint falls a little flat due to other factors. Microsoft and Apple are big enough not to leave you in the lurch. But small companies – like small open source projects – are much more likely to fail and leave you with a mess. This might discourage people from depending on small open source projects which embody highly domain specific knowledge. Sure, I might be able to work on the source, but if I don’t have a solid understanding of the domain that isn’t much help.
Yours,
Tom
>But small companies – like small open source projects – are much more likely to fail and leave you with a mess.
There’s an important difference. If an open-source project fails, my data isn’t stranded. Probably they didn’t use an opaque, poorly-documented format; we tend not to do that. But supposing they did, I have the code.
>I’ve never lost data due to a Microsoft change
But it’s not uncommon for that to happen. We had a commenter here recently speak of being advised to keep all his data in textual formats by a senior professor when he was a student, with the consequence (he reported) that he now has better access to his data from 30 years ago than he does to Microsoft Word files from 6 years back.
Why do I keep reading that as “reposturgeon” ?
>Why do I keep reading that as “reposturgeon”?
By Goddess, I think you’ve just told me what the project’s logo is.
Your personal feelings about OLE and ActiveX have nothing to do with the fact that their existence negates your strong statement about GUI and scriptability being mutually exclusive. As… fucking dreadful as working on a Windows platform can be, their devs have a leg up on (traditional) Unix devs when it comes to shit like this — which is, in 2010, really fundamental for a desktop environment.
Thankfully, there was this guy Miguel de Icaza who saw what good ideas were being developed into amazing tech in the Microsoft realm, and decided those great ideas warranted free software implementations for the Unix community to use. Thus, GNOME and Bonobo, which provide much of the OLE automation functionality as open source software on Linux and other free Unix systems. Similar projects exist for KDE and GNUstep. (I’m partial to the GNUstep implementation; StepTalk is awesome sauce, and Objective-C comes with an introspective MOP baked right in.)
@Tom: Open-source projects don’t generally get anywhere at all unless they accomplish something important enough to make them worthwhile. So while your scenario of an open-source project petering out and leaving you in the lurch is theoretically possible, it probably doesn’t happen that often. I also don’t buy the idea that if you don’t understand the domain of some dead codebase you’re totally screwed. It’s one thing to understand enough to maintain the code, but quite another to hack together some kind of exporter/dumper to jailbreak your data. If the software was well-designed *AT ALL,* then this should require no more domain-specific knowledge than it takes to grok the UI.
And open-source implementations can even act as the liberators for the data files of closed-source projects. For example, I once needed (okay, wanted) a Flash decompiler, so I could figure out how to cheat at a Flash game. All existing ones seemed to be commercial, Windows-only products. I looked at Adobe’s SWF specification, but got bored pretty quickly. So I realized I could just get the source code of the open-source Lightspark Flash player, slap some printf statements into its ActionScript bytecode parser, and instantly have the ActionScript equivalent of assembly code. That’s when I learned that Lightspark doesn’t even compile with GCC 4.4 (the developer uses 4.5). But that’s beside the point. If it wasn’t just some silly Flash game, I would have pursued it further and gotten it working, all without even reading the specification for the file format.
>your strong statement about GUI and scriptability being mutually exclusive.
I did point out GIMP as an exception. The problem isn’t that they intrinsically have to be mutually exclusive, it’s more cultural. Microsoft has forgotten how to do scriptability right, if it ever knew; by all accounts I’ve heard, OLE is deeply horrible to work with, which is, er, why they had to try to replace it with .NET.
I don’t know that the same thing is true of Apple’s scripting interface, and I’d expect somewhat better architecture from them on the whole…but the way to bet is still that Apple scripting is a black art relying on interfaces that are dusty and indifferently maintained. Again, this is a prediction from culture.
Microsoft backwards compatibility? ‘Tis to laugh. What about Word version hell? Microsoft’s version compatibility runs only one way, where people are forced to shell out the big bucks for unnecessary upgrades just so they can collaborate. Which was Eric’s point, in part. And their closed ecosystem is confining in all sorts of ways. Ball and chain.
BTW, Eric. I have been playing around with a new (to me, at least) SCM called fossil: http://www.fossil-scm.org/
It’s a distributed-repository, integrated SCM, ticket tracker, wiki and blog for a project, providing a command-line and web interface in a single executable, with HTTP transport for the wire protocol, and it uses a SQLite db for storage. I am considering it the perfect upgrade/superset from the SVN + SVNTrac combination (which the web interface is very similar to) I have been using recently.
Anyway, have you heard of it, do you have any critique?
>Anyway, have you heard of it, do you have any critique?
There are indeed some clever ideas there. Also some design choices that bothered me, but I’d have to look at it again to remind myself what those were.
The way it usually worked was, Apple respected backwards compatibility ruthlessly — to a point. Then they say “okay, time to retire that old Quadra and upgrade to the new shit. It really is better, we promise.” The way they do this is the right way: backward compatibility shims, up to and including CPU emulation or virtualization to keep the old shit humming as if nothing had changed. No retarded Ex functions, no keeping your doddering old DOS kernel around for the sake of the crufty shops who still have to paste stuff in from old Lotus spreadsheets. No compromises. Backward compatibility and forward thinking.
But lately they’ve become a lot more fond of the upgrade treadmill. And that’s disturbing.
Much as I seem like an Apple booster hereabouts — even a fanboy — it is simply a matter of acknowledging that only Apple has the talent, the team cohesion, and the relentless dedication to make end users happy that are necessary to drive real innovation and change in the personal computing hardware field. Time and time again, they have literally defined the terms under which discourse about computing and usability take place. Consider: virtually all modern GUIs are clones of the 1984 Macintosh UI. Virtually all modern laptops are clones of the early 90s Powerbook. The market for competing music players to the iPod (except for rinky-dink Chinese knockoffs) virtually dried up. All PDAs (and accordingly, smartphones including the iPhone) — directly descended from the Newton (the term “PDA” is an Apple coinage). And of course virtually all smartphones and tablets follow the ergonomic and industrial design patterns set forth by the iPhone and iPod. It really is Steve Jobs’s digital world; we just live in it.
But as a developer, Apple has shat on my head beyond redemption. As part of a mandatory security update for Panther, they pushed out a dynamic version of libstdc++ that came as part of Tiger. It completely broke C++ compilation on Panther, and no Panther compiler made available by Apple would properly link against libstdc++ anymore. XCode could still compile apps for Panther, but in order for the compilation to work right you had to be building on a Tiger box. The message from Apple was subtle but reeked no less of condescension: “If you were serious about developing for our platform, you would have upgraded a long time ago.” I’m just a guy who programs to put food in his belly and also as a hobby. Not one of these fucking hipsters who line up outside the Apple Store in the freezing cold to be the first to have whatever shiny just dropped in from Cupertino. Historically a lot of people have loved Macs because they just keep working in the face of planned obsolescence and needless upgrade cycles, and Apple is showing it no longer cares about this important segment.
I’m not asking you or anyone to like them, I’m asking you to at least test them, or even know about them, rifle through any good insights in them, and make better programs. Just like De Icaza did. That’s what upsets me. I know about your ‘closed software is torture’ stance.
>I’m not asking you or anyone to like them, I’m asking you to at least test them, or even know about them, rifle through any good insights in them, and make better programs. Just like De Icaza did.
No, it really isn’t worth my time. There are several reasons for this.
One is that people like De Icaza exist. If .NET and the flotilla of Microsoft technologies around it failed to suck rocks, I would know that it failed to suck rocks because various of my peers that I trust would have been telling me long since “You know, a lot of people have flung shit at Miguel but he’s actually on to something.” Then I would have looked, and by now I’d grok .NET as well as I grok, say, XML.
Another is that I in fact used to spend a fair amount of time digging into proprietary software technology for insights. Decades ago, now, but the lesson I learned was pretty clear; it just doesn’t pay off well. You end up with a lot of minor “insights” that can’t be pulled loose from technological and legal entanglements you want no part of.
A third is that, three decades into this, I’ve learned enough history of successful and failed approaches that additional time I spend looking at the record of what others have done is perilously likely to be a net loss over where I’d be if I just thought up my own solution. I couldn’t have said this as a young programmer – then, my creative talent was present but not the experiential wisdom that comes from having sifted through bad ideas (and the occasional good one) for thousands of hours.
I guess I can summarize as “Been there. Done that. Wasn’t impressed.”
(Eric E. Coe)
> What about Word version hell? Microsoft’s version compatibility runs only one way, where people are forced to shell out the big bucks for unnecessary upgrades just so they can collaborate.
That treadmill wasn’t so much because of incompatibility with previous versions, and more because of incompatibility with newer versions: morons who bought the new Office started saving documents in the new default document format, and other morons who wanted to play ball but had older versions had to kindly ask to resubmit all those docs or buy new versions.
I’ve heard too that OpenOffice sometimes works, or has worked, better with old .docs than Word does, but that’s a separate issue.
Eric, thanks for this on two levels:
1 – I am just getting over conjunctivitis (again). I am quite elated that I now pose a greater danger to Mercurial repositories than I do to humans, if the lasting effect of contact is any measure. I’m shut up in my home office, could not hug my kid on her 5th birthday, and have been generally miserable; this will last a few more days.
2 – I explored the same possibilities when HelenOS was looking to get away from SVN to a DVCS. The first import to HG went pretty well, the problem was, the trunk in the SVN repo moved several times. For each trunk move, an unresolved head emerged. I worked on it for about a week and kept thinking, “If I could make those go away in git, I’ve nailed it”. My day job called as usual, and HelenOS went to BZR. Ironically, I was able to move CryoPID to HG , which had basically the same circumstances, without a major issue when the community decided to bring the project back to life.
What I did not see on the project page was a mailing list? I don’t *think* I’ll be able to fix HG related bugs, but I sure as hell can report them :)
>What I did not see on the project page was a mailing list? I don’t *think* I’ll be able to fix HG related bugs, but I sure as hell can report them :)
There isn’t one yet. I’m looking into options.
>With a properly-designed command-line tool you can do that. On the other hand, if your code has a huge pile of GUI on the front there is usually no hope of finding a layer behind it that can be scripted. (The GIMP is an honorable exception.)
I have to call baloney on this one. You are mistaking the interface (GUI, Command-line) for the functionality. The core functionality in reside in a library which is called by an EXE that implements a GUI or a different EXE that is designed to run on the command line.
In your own post you cited the use of various python libs. You didn’t communicate with them through the command line? No you use the common interface defined by python to communicate with those libraries (email parser, shlex, etc).
The problem is that you had to use them in Python. If you use Perl you have to use something else; with C, something else still, etc., etc. Diss on Microsoft all you like, but one thing .NET does well is allow me to write one library that works across all .NET-supported languages http://en.wikipedia.org/wiki/List_of_CLI_languages.
Heck, somebody is even trying to build a scripting language that accesses .NET http://en.wikipedia.org/wiki/Script.NET
What would be nice is if we could move beyond CLIs and pipes to where an arbitrary scripting or programming language can access X library. Not only access it, but actively discover its contents.
In fact, what .NET shows is that it would be possible to develop a setup where the computing environment is not only based on open source but is forced to be open source. The metadata and the IL bytecode that Microsoft uses mean that .NET binaries are trivially easy to decompile. http://www.red-gate.com/products/reflector/ This includes Microsoft’s core .NET libraries. Using Reflector you can drill down to the point where you see where the calls to the Windows API are made.
At one time we used assembly to program our computers. We moved on to C, and now a shift is moving to python, perl, and others as baseline. Why should we continue to remain on CLIs and ascii pipes as the primary means of interoperating?
>I have to call baloney on this one. You are mistaking the interface (GUI, Command-line) for the functionality. The core functionality in reside in a library which is called by an EXE that implements a GUI or a different EXE that is designed to run on the command line.
I think you meant “core functionality can reside in a library”, yes? You’re right; the fact is, however, that this is rarely done. The reasons are, as I noted in a previous comment, cultural. Places where GUI programming is worshiped as an icon are, with rare exceptions, places that have forgotten the importance of scriptability or never understood it to begin with. You can point out that this could be otherwise any number of times without making it less true.
>In your own post you cited the use of various python libs. You didn’t communicate with them through the command line?
No. I had structures to pass them that it would have been pointless to re-serialize. You should google for “alternating hard and soft layers” to get some feel for how system architects think about such things.
>Why should we continue to remain on CLIs and ascii pipes as the primary means of interoperating?
Because they’re more discoverable than anything else.
Re: OLE Automation and scripting Windows Applications
Unlike Eric, I’ve spent my time equally between the Unix and Windows worlds, so I can actually speak from knowledge here. Yes, OLE Automation (and the .NET equivalent), *IF IT IS USED* can provide interfaces for automating Windows applications. The likelihood that this will be available outside of Microsoft’s Office applications is approximately NIL. The tools are there, but the culture isn’t. The typical Windows developer uses Visual Studio’s visual editor to create their GUI, which provides nice generic control names (button1, button2, etc). In order to make that useful, the developer has to go around and rename all the fields and controls to something reasonable. Since Visual Studio only supports the crudest refactoring tools, this is difficult, time-consuming and error-prone. There is very little reward for the effort, and besides, the time could be better spent shoe-horning in that crappy feature that Andrew in accounting wants. Yes, Unix programmers can end up doing the same thing, but the culture and tools lend themselves to separating UI from functionality, which leads to better potential for automation.
On the other hand, Windows DOES have a creditable scripting tool: PowerShell (formerly Monad). The PowerShell developers actually took the time to study and understand Unix scripting languages before they set out to develop a .NET scripting language, and the end product is well-designed and works. This is the first language I’ve seen that does a good job of making it easy to build and operate on streams of objects, instead of just streams of characters. Even better, it has built-in mechanisms for dealing with all of the existing de-facto standards for application automation that have grown up over the years in the Windows environment. If you have to do systems administration or systems automation in Windows environments, you will want to learn PowerShell. Unfortunately, PowerShell suffers from the same problem as every other tool that Microsoft builds: it is intended strictly for use with Windows and has no vision beyond that.
Ah well, time to go back to writing automation scripts for Windows applications so I can screen-scrape their outputs and stuff the results into a database. Could someone please pass me some extra bamboo splinters, I’ve used up all of mine…
>Unfortunately, PowerShell suffers from the same problem as every other tool that Microsoft builds: it is intended strictly for use with Windows and has no vision beyond that.
>See my previous comment about ‘a lot of minor “insights” that can’t be pulled loose from technological and legal entanglements you want no part of.’ I’ve seen this over, and over, and over again. Which is how I’ve learned that people like Adriano who think I ought to run around sniffing at the latest proprietary whizbang are at best hopelessly naive.
Craig says “The tools are there, but the culture isn’t.” That’s exactly right; I believe I recall saying as much upthread. And it’s the culture that’s really generative of ideas, not the tools. I already live in a programming culture that’s healthy that way; I therefore haven’t got any need to grub around in cultures that are unhealthy.
>A third is that, three decades into this, I’ve learned enough history of successful and failed approaches that additional time I spend looking at the record of what others have done is perilously likely to be a net loss over where I’d be if I just thought up my own solution. I couldn’t have said this as a young programmer – then, my creative talent was present but not the experiential wisdom that comes from having sifted through bad ideas (and the occasional good one) for thousands of hours.
While I’ve only been in this for 25 years, this is sounding like the criticism that assembly programmers gave the C programmers, and later still the C programmers gave the python/perl/ruby/etc folks. The same criticism was leveled at structured programming and later still at object-oriented programming. CLI advocates vs GUI advocates, and the list goes on.
Is the criticism of the technology or the company that originated it?
My ideal would be to have an open source environment where libraries have metadata that fully describes their interfaces, including rich data structures like types, enumerations, and classes. They would use a binary format that allows the source code to be reconstructed from the binary in any language (or at least object-oriented languages) and includes full variable names.
That the environment has a scripting language that is able to call said libraries as easily as it can use pipes.
I want this because my application (CAD/CAM: creating parts and running a metal-cutting machine) needs a richer interface to deal with my component libraries and other libraries I choose to use.
Don’t take this as a criticism of how you develop your applications. One difference between me today and me 25 years ago is the realization that the world of development is vast. So vast that even the smartest and most knowledgeable of us only see slices of the overall picture.
For example, the whole development of the internet and the web is largely irrelevant to the development of our core application. Which takes dimensional data, creates 3d shapes, unfolds them into 2d patterns, automatically arranges them on a sheet of metal, and then cuts them with a plasma torch, all while being used by a computer-illiterate operator. Where the internet has touched our software is in the import and export of data to and from other sources.
I’ve been the primary maintainer of this software since 1989, and was involved with its initial development in the early 1980s. What COM and .NET gave me was readability and maintainability over their UNIX/C counterparts (where the original version originated).
I wasn’t keen on using Microsoft at first; even circa 1990 they were the “bad” guys (only back then IBM was worse), but there weren’t really any alternatives for ease of development. For the most part it worked great. It hasn’t been all peaches, as the transition to .NET nailed us hard.
If open source had a true alternative to Microsoft’s technologies then I would be working towards using that. The closest it has is Mono but … there are the doubts.
In the end what I want is what Microsoft has, but open sourced, so the next 20 years of maintaining this application are not subjected to the vagaries of one company.
>See my previous comment about ‘a lot of minor “insights” that can’t be pulled loose from technological and legal entanglements you want no part of.’ I’ve seen this over, and over, and over again. Which is how I’ve learned that people like Adriano who think I ought to run around sniffing at the latest proprietary whizbang are at best hopelessly naive.
I didn’t see this while typing my reply.
Doesn’t sound very hackerish to me. A good idea is a good idea regardless of the source and should be debated on its technical merits, not whether it was developed by a proprietary source.
Just because Microsoft developed a scripting language first that passed around streams of objects doesn’t mean it isn’t worthwhile to look at in the linux world or any other open source platform.
>A good idea is a good idea regardless of the source and should be debated on its technical merits, not whether it was developed by a proprietary source.
You’re quite right. But I have limited bandwidth that I have to allocate. Thus, I have to have heuristics to filter which ideas I will pay close enough attention to “debate on the technical merits”.
I’ve found the following heuristic to be effective: ignore anything from the proprietary world until an open-source programmer whose judgment you trust tells you he’s looked at it and found it good.
By “effective”, I mean I’ve been applying it for over a decade and have no evidence that I have missed anything actually interesting by doing so.
> “Which is how I’ve learned that people like Adriano who think I ought to run around sniffing at the latest proprietary whizbang are at best hopelessly naive. ”
Yes, yes, the image of me suggesting something absurd (you as a dog, curiously sniffing up MS legs? Are you a bulldog or an Akita?) is really good for deflecting my point.
When you talk about how good the latest open-source whizbang is, you could at least consider if there’s any alternative other than what open-source offers. I remember you dismissing MSDN with nary a sentence, in the middle of a discussion on documentation alternatives, and it stuck.
Your later clarification into how you rely on peers to keep you updated about this stuff is a bit more enlightened (in my opinion, of course, and you don’t care about it), but it’s really the first time I’ve ever read it here, and the previous statements I remember are much closer to ‘I shan’t touch proprietary software lest it gives me cancer’. Perhaps I’m wrong, but with that kind of attitude, who would actually take the time to point out that you’re missing on something?
>Perhaps I’m wrong, but with that kind of attitude, who would actually take the time to point out that you’re missing on something?
That would be someone like, say, Craig Trader, who weighed in earlier in the thread. He’s known me for many years, is literate in both open-source and Microsoft stuff, and has – trust me on this – little hesitation when he thinks I’m full of shit. (Yeah, that’s Craig’s picture next to “snarky bastard” in the dictionary.) Such friends are exceedingly valuable. I have several.
Beyond that, client code can be profoundly stupid, and still extract the bits of a character stream that it’s interested in. This shortens your time to a working implementation.
If you constrain yourself to something like Smalltalk (or .NET), where all client code exists within the same object environment, shipping data around as objects makes sense. Otherwise the client code must be made to know about the entire object model before it can read the objects passed to it. (This even goes for things like XML.) Contrariwise, doddering COBOL mainframe programs, Unix tools, Perl scripts, Emacs Lisp modes, .NET enterprisey front ends, Haskell programs from the boys in R&D, and even the Squeak image of that weird guy who does all his coding (even Perl) through Squeak can be made to understand text files with very little effort.
The Unix approach has quite a few drawbacks, but one of its huge advantages is it’s profoundly loosely coupled to a point of ruthlessness. That’s probably a big part of why the Unix Way has outlasted just about every other approach to computing in terms of influence. (I’m not counting mainframes here because though they will be around long after you and I are gone, no one really wants to do things the mainframe way. No one says “hey, let’s build a new cellphone OS around z/OS!” Mainframes stayed afloat by assimilating Linux.)
I don’t think even Morts use VSS anymore. Microsoft has basically admitted the product sucks, stopped updating it, and is trying to encourage its developer base to take off the training pants and migrate to something more grown-up.
Did anyone try to actually look at the project? I keep getting a 404 error.
>Did anyone try to actually look at the project? I keep getting a 404 error.
That’s odd. What URL are you chasing, and from where?
(My wife can see it from work.)
>Hate the mindset that produces them — wanna-be programmers doing wanna-be programming, without understanding the first thing about the rudimentary discipline of reproducibility.
Excel is the new Basic, the native tongue of a whole generation of (badly) self-taught programmers.
It’s a pity (if understandable) that the open-source spreadsheets try so hard to replicate Excel – the idea of a declarative programming language for casual programmers tightly integrated with a spatial IDE seems like it ought to have been a lot cooler, rather than a leading source of programmer brain damage.
The URL is right from the project page at http://www.catb.org/~esr/reposurgeon/
The link is http://www.catb.org/~esr/reposurgeon/reposurgeon-0.1~dev.tar.gz
I’ve tried some obvious substitutions, like . or - for the ~ to see if it was just a typo…
that was it, just removed the ~dev altogether and got it with wget.
The link on the project page just has a typo.
J. Jay, your complaint about history rewriting is a non-starter. Importing from one VCS into another is intrinsically an act of rewriting history. You might as well clean up the impedance-mismatch-artifacts while you’re at it.
(Not to mention I don’t really fall into the “history is sacred” crowd; I prefer highly meaningful history to highly accurate history. YMMV. But still, the very act of changing semantic contexts means you’ve already crossed that bridge.)
>Importing from one VCS into another is intrinsically an act of rewriting history. You might as well clean up the impedance-mismatch-artifacts while you’re at it.
And that is exactly the use reposurgeon is intended for.
Nevertheless, I actually have considerable sympathy for the history-is-sacred position. It proceeds from important values of transparency and accountability. There’s also a feeling behind it that project contributors own a reputational stake in the project and have a right not to be made unpersons by somebody diddling with the history.
From this perspective, reposurgeon might look like an invitation to abuse. But this is exactly the same sort of muddy thinking that leads to attempts at firearms bans. Violent deviants will commit equivalent crimes with knives and baseball bats and half-bricks when they can’t get guns; villains will rewrite repos with git filter-branch or equivalent tools if they don’t have reposurgeon.
In both cases, misidentifying the tool as the problem actually drains energy from social means of enforcement that could work better.
Actually, I do agree that public history shouldn’t be changed for the social reasons in addition to the technical reasons; I’m more sensitive about the people who insist that one should never use “git rebase -i” prior to pushing up to the public repo because this is somehow lying. Or that rebasing is intrinsically a bad idea at all times.
Possibly my conscience is simply seared insensate because my primary git usage involves using “git svn”, where rebasing is pretty much necessary if you’re working with others.
BTW .. am I the only one who first saw “reposurgeon” and thought of that dystopian SF flick where the repo guys would come take your artificial organs?
>BTW .. am I the only one who first saw “reposurgeon†and thought of that dystopian SF flick where the repo guys would come take your artificial organs?
Fear the reposturgeon!
@Jeremy:
The only reason these people could possibly care is because they want to be able to exclaim “It took you five tries to get that simple fix right??!? ”
No possible good can come from pandering to people like that. In fact, the only correct response is to find something that they made a couple of passes over and ask “What the fuck are you doing polluting the public repository with your multiple lame-ass attempts at fixing this problem? Why should I have to read through all this drivel before I find the right answer? What makes you think it should be hard to read just because it was hard to write?”
I will stridently commence twiddling my repos such that every commit I made was a fucking Thor’s Hammer of Awesomeness.
Looks like a great tool. Where is the git repository for the reposurgeon source? ;)
>Looks like a great tool. Where is the git repository for the reposurgeon source? ;)
I’ll probably throw it on gitorious in a couple days. Until then, you know where to get the tarball.
Looks awesome! I’ve been wanting an interactive version of git filter-branch for a long time, and this seems to fill that need perfectly.
On a related note, is there an issue tracker anywhere? I just tested by navigating to my clone of git.git and issuing a `reposurgeon read`, and after waiting a bit, it insists that the git fast-export command “failed”. But it then proceeds to look like it’s doing a lot of work – I presume it’s now ingesting the file that git fast-export emitted even though it thinks the command failed. Even now, a number of minutes later, Python is still churning away at 100% CPU. Incidentally, the file that git fast-export emitted ended up being 1.2GB in size before it “failed”.
And now as I was typing that, it threw up an exception. You can find it at https://gist.github.com/660887.
It also appears to have left the .rs directory behind.
Now that it’s stopped, I’ve re-run the git fast-export command myself, and it turns out the failure was because of signed tags.
fatal: Encountered signed tag 0ebac1b0c737ef67f2e5c3348fa4a9153273a8d5; use --signed-tag= to handle it.
In any case, this seems very cool and I’ll be keeping an eye on it. As one final request, any chance this could support a mode that only edits a portion of the history? I’d love to be able to say something like “read the last 100 commits reachable from this branch and let me edit that”.
>On a related note, is there an issue tracker anywhere?
Not yet. I want to experiment with avoiding a monolithic forge site. Do you know of anyone who offers standalone bugtrackers and (important!) the ability to download the bugtracker state on request?
>It also appears to have left the .rs directory behind.
Yeah, it’ll do that when it crashes. 0.1 release, sorry. It’s still pretty fragile.
>it turns out the failure was because of signed tags.
Aha. Of course. I’ll have to add an option to the exporter to handle this. Work for 0.2.
>As one final request, any chance this could support a mode that only edits a portion of the history?
Restricting the read would be difficult, because the selection set parser needs to have a repo in core already. But restricting the write to a specified selection set is already in place. So you can get the effect you want; the design just means you have to hold the entire metadata in core first (but not the blobs).
> Do you know of anyone who offers standalone bugtrackers and (important!) the ability to download the bugtracker state on request?
I’m not particularly familiar with their offering, but Lighthouse (http://lighthouseapp.com/) is purely issue tracking, and they’re free for open-source apps. I don’t know if they have a way to download a dump of the issue state, but they do have an API, so if nobody’s written a tool to do so yet you could certainly write one to suck down all the issue data. And in fact I just tested, you can change the extension of any page to be .xml or .json to get a dump of that page’s data in that format.
> Yeah, it’ll do that when it crashes. 0.1 release, sorry. It’s still pretty fragile.
Perfectly understandable. I just mentioned it in case it wasn’t a known issue (e.g. only an exception in a particular place could cause that to be left behind).
> Aha. Of course. I’ll have to add an option to the exporter to handle this. Work for 0.2.
I also notice another flag --tag-of-filtered-object which specifies how to handle tags whose tagged object is filtered out. I’m not really familiar with fast-export/import, so I don’t know under what circumstances the tool considers a tag to have been filtered out, but the default is abort.
On another note, it may be worth supporting a mode that omits the blobs (e.g. the --no-data flag to git fast-export). If I only want to edit commit metadata, it would be much faster (and much easier on my disk) to skip the blob data entirely.
>I’m not particularly familiar with their offering, but Lighthouse (http://lighthouseapp.com/) is purely issue tracking, and they’re free for open-source apps.
Looks like their code isn’t open-source, which worries me. I’ll keep looking.
>I also notice another flag --tag-of-filtered-object which specifies how to handle tags whose tagged object is filtered out.
Only relevant when you tell the exporter to export only a file subset of the repo. reposurgeon won’t do that; it thinks subset filtering is its job, grrrr. Fear the reposturgeon!
>On another note, it may be worth supporting a mode that omits the blobs
You can’t rebuild the repo without knowing where the data is, though. How would reposurgeon know in that case?
Some (incomplete) support for the hunch that GUIs are worse at scripting than command lines.
First, you should read “In the beginning was the command line” by Neal Stephenson
http://artlung.com/smorgasborg/C_R_Y_P_T_O_N_O_M_I_C_O_N.shtml
The Unix pipeline works on linear text objects. At least, that is what it expects. Using each command as a filter allows you to make a work-flow filter from input to output. A relational database table, too, is just a long linear string, and SQL is a query language on linear strings.
Unix STDIN/STDOUT commands are perfect for this workflow.
A GUI is more powerful than a command line for only ONE reason: It is possible to express tree based data structures and operate on them.
The data “stream” designed for GUIs is XML. It is nearly impossible to perform complex tree manipulations using a pipeline. You can apply individual commands on a data-structure, by moving around the complete structure, but that will eventually break the chain somewhere due to unanticipated side effects of filters teaming up with complicated order effects. This is a well known horror scenario in systems using pipelines of rewriting tools.
The correct way to manipulate tree structures in GUIs is to keep them in central storage and let each command or operation work only on subsets of the structure. People can manipulate tree structures visually and intuitively; they are even very good at it. So nobody has much trouble moving files and subdirectories around in a file system. Nor do people have problems navigating decision trees to set parameters.
The language class that can operate on linear strings is regular expressions. Regular expressions are (reasonably) simple and it is easy to write string queries in regular expressions. This makes it easy to generate and debug complex work-flows on linear strings.
The language class that operates on tree structures is Context Free Grammars (CFGs). The only way to generate *full* queries on tree structures, that is, queries taking into account *all* (in-law) family relations in a tree, is to write a CFG. The best examples of such tools are Lisp and Prolog. Both are languages designed to manipulate tree structures as programs. Writing a tree-based query is like writing a Lisp or Prolog program. There are people who can write good Lisp and Prolog programs. But there is a good reason both languages are not wildly popular. People are horribly bad at writing even simple context free grammars. What they can do easily by hand and eye, they cannot do with words.
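A toy illustration of the gap, in Python (the XML snippet and the queries are made up, purely to make the point concrete):

    import re
    import xml.etree.ElementTree as ET

    doc = "<dir><name>src</name><dir><name>lib</name></dir></dir>"

    # A flat, "linear" query is easy with a regular expression.
    print(re.findall(r"<name>([^<]*)</name>", doc))        # ['src', 'lib']

    # A structural query ("names of directories nested inside another
    # directory") needs the tree itself; a regex cannot track nesting depth.
    root = ET.fromstring(doc)
    print([d.find("name").text for d in root.iter("dir") if d is not root])  # ['lib']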
Hence the reason why “scripts” using XML are mostly based on running regular expressions on linearizations of the trees.
Any GUI that sticks to a linear, stream-oriented data flow model can easily be scripted as a pipeline of commands and filters. Any GUI that operates on a tree-based data model (or more complex) cannot be fully scripted as a pipeline without descending into madness.
So either a GUI is no more powerful than a pipeline, or it cannot be scripted well.
There is a reason that Unix pipelines remain so popular.
Pure Python issue trackers include Trac and Roundup.
Then you can use reposurgeon on it.
I don’t know why things like compiling a compiler on itself, using an editor to edit its own source code, or using reposurgeon on a reposurgeon git repository make me so damn giddy, but they do.
>Then you can use reposurgeon on it.
I already have. How do you suppose I did my initial testing?
But no, editing the repo after I push it to gitorious would be a bad idea. Very bad idea.
Today’s features:
& followed by branch name resolves to everything on the branch.
The promised ‘split’ operation (topological cut) is now working.
There is a new ‘drop’ command to drop repositories from the load list.
There is a new ‘history’ command to display your session history.
Fear the reposturgeon!
This sounds like an interesting project, and I agree that the git-fast-import format is becoming the lingua franca of the DVCS world. But unfortunately the translation tools between DVCSes and non-distributed VCSes are not good enough to make your tool usable for, say, Subversion or CVS. In fact, the underlying history models of CVS and Subversion are so fundamentally different from those of the DVCSes that a perfect-fidelity conversion from CVS or Subversion to a DVCS is impossible. For example, git’s branching and merging model is not, even theoretically, able to represent typical Subversion branching and merging histories; see http://softwareswirl.blogspot.com/2009/08/git-mercurial-and-bazaarsimplicity.html for details. (Whether the flexibility is a good thing is debatable…)
>For example, git’s branching and merging model is not, even theoretically, able to represent typical Subversion branching and merging histories
I think you’re pushing the article’s argument harder than the author would approve of. It’s not typical for Subversion and CVS projects to have merge histories so complicated that a DAG-based DVCS can’t capture them; it’s merely possible, in the presence of cherry-picking and some ugly CVS mixed-tag tactics.
I’ve worked on dozens of Subversion- and CVS-hosted projects, some (like Battle for Wesnoth) extremely large, and though I was aware of the possibility I have never actually seen a pathological situation of this kind. Wise project teams avoid them, because they create ancestry tangles that are difficult to reason about.
Most of the time – often enough to make reposurgeon useful – Subversion projects have close-to-linear histories. At least, that’s my experience.
The repetition of this phrase suggests that a robed, hooded sturgeon, armed with a scythe, cannot be too far behind.
So, with reposurgeon blowing up the (only theoretical anyway) write-only nature of version repositories, is it time to look at some kind of cryptographic signing of changesets or repositories and/or deltas? Or is this a feature that’s already out there and I don’t know about it… or have I missed the point on that issue entirely?
>Or is this a feature that’s already out there and I don’t know about it
Most DVCSes have it built in to various degrees.
One of the reasons I’m not real worried about abuse of the tool is that in the DVCS world you can’t edit the history of a public repo without somebody noticing really quickly. Because the next time they pull, the failure of the commit hashes to match causes havoc.
> The repetition of this phrase suggests that a robed, hooded sturgeon, armed with a scythe, cannot be too far behind.
I was thinking of a sturgeon, surgical mask covering his unnerving grin, brandishing a scalpel and staring at you with an insane gleam in his mad-scientist veiny eyes.
Sleep tight?
>I was thinking of a sturgeon, surgical mask covering his unnerving grin, brandishing a scalpel and staring at you with an insane gleam in his mad-scientist veiny eyes.
See, that’s the image I had in mind. You have learned to fear the reposturgeon. You are wise!
Shit like this happens all the time in our perforce repo. Yes, it’s a mess, but having an accurate historical record of the ancestry is often far more important than being able to reason about it, which is why it occurs. There’s a reason why big-boy shops with millions of lines of code and — often as not — huge amounts of other assets as well tend to rely on p4. Hell, AAA games simply wouldn’t exist without it.
Mike Earl: The hash of a git repository is a hash of the entire repository. Any change whatsoever changes the hash of the head. It isn’t signed, but it’s cryptographically strong already. (Googling around for signing git commits produces a series of interesting discussions outlining why it is harder than it initially looks.)
That’s why public history already should never be rewritten; from git’s point of view it actually changes the repository into a different repository and it doesn’t know how to switch between them because there is no way to resolve two separate repositories like that.
reposurgeon changes nothing fundamental in this regard; it was already possible to screw with published repos, after which you suffer the pain of telling everyone who pulled to switch over to the now-new repository. In fact one must be a bit careful to avoid this outcome occurring accidentally (though it doesn’t take a lot of care).
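A much-simplified sketch of the chaining, just to illustrate why any edit ripples forward (this is emphatically not git’s real object format):

    import hashlib

    def commit_id(tree, parent, metadata):
        # Each commit's id covers its content *and* its parent's id,
        # so changing any ancestor changes every descendant's id.
        return hashlib.sha1(f"{tree}\n{parent}\n{metadata}".encode()).hexdigest()

    c1 = commit_id("tree-A", "", "initial commit")
    c2 = commit_id("tree-B", c1, "second commit")
    c1_reworded = commit_id("tree-A", "", "reworded initial commit")
    assert commit_id("tree-B", c1_reworded, "second commit") != c2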
Michael Haggerty is the author of the article.
@Michael Haggerty: The `svn:mergeinfo` property does not represent merge history, and, contrary to what is described in various documentation, is not about merge tracking (at least as I understand it). It is about tracking merged-in revisions, from which you can derive merge history (i.e. which revisions were the result of merging which revisions).
I would consider complicated svn:mergeinfo to be a case of mishandling a Subversion repository, most probably caused by the fact that Subversion doesn’t have branches as first-class objects (a consequence of Subversion’s model of branching: “branching is copying”). Similarly, one would consider it mishandling to create a single revision that spans multiple branches, or to create a revision on a tag.
I think that the ‘svn-fe’ (‘svn-fast-export’) tool, which uses svnrdump (remote svn dump) to convert to a fast-import stream, is a good solution for converting Subversion repositories to any DVCS with fast-import capabilities (Git, Mercurial, Bazaar).
@Michael, @esr: IIRC there was a tool presented on the git mailing list which imported history from Subversion to Git by dumping the Subversion history not divided into branches, i.e. from the top of the Subversion repository root hierarchy. I think that ‘reposurgeon’ could be used to cut such a dump into branches, fixing any mishandlings and mis-svn:mergeinfos.
Here is the thread: http://thread.gmane.org/gmane.comp.version-control.git/158940/focus=159438
Can ‘reposurgeon’ handle issues of encoding (changing the encoding of a commit, or marking a commit as using a specified charset, or changing the encoding or normalization of filenames if somebody is insane enough to use Unicode filenames, or converting blobs from one encoding to another)?
BTW, Eric, will you be continuing work on your “Understanding Version Control”?
>Can ‘reposurgeon’ handle issues of encoding
No. I don’t understand those issues, so I’m not trying to tackle them.
>BTW, Eric, will you be continuing work on your “Understanding Version Control”?
Stalled. But it’s in the back of my mind as I work on this stuff; I’m accumulating experience for when I go back to it.
>>I’m not particularly familiar with their offering, but Lighthouse (http://lighthouseapp.com/) is purely issue tracking, and they’re free for open-source apps.
>Looks like their code isn’t open-source, which worries me. I’ll keep looking.
It’s not, but it’s rather trivial to dump your data out of the tool. As long as you can extract your data easily, I see no problem with the tool itself being closed-source. After all, it only exists because they can make money off of selling their service to other closed-source projects.
>>I also notice another flag –tag-of-filtered-object which specifies how to handle tags whose tagged object is filtered out.
>Only relevant when you tell the exporter to export only for a file subset of the repo. reposurgeon won’t do that; it thinks subset filtering is its job, grrrr. Fear the reposturgeon!
As I mentioned in an email, trying to fast-export the history of git.git actually triggers this error. git.git contains an annotated tag that points to another annotated tag – this is something that fast-export cannot possibly export, so it considers this a filtered object. --tag-of-filtered-object=drop is required to squelch the failure in this case.
>On another note, it may be worth supporting a mode that omits the blobs
You can’t rebuild the repo without knowing where the data is, though. How would reposurgeon know in that case?
If you’re importing back into the repository that was read in the first place, the blobs still exist there and you can reference them without including them in the stream.
>If you’re importing back into the repository that was read in the first place, the blobs still exist there and you can reference them without including them in the stream.
I am very reluctant to do that. The reposurgeon philosophy is to never, ever, ever try to modify a repo in place, in order to avoid the possibility of corrupting it. Better to spend disk space and a bit of processing power, as reposurgeon does, to guarantee that you always rebuild in a clean directory and back up a pristine version of the original.
Quite a bit of MacOS X works like this, too. Other parts are scriptable via AppleScript (or anything that implements that API.)
Linux has a long way to go to catch-up.
J. Jay:
Old argument is old.
TL;DR: Eric refuses to countenance solutions based on or inspired by proprietary technologies that get rejected by the heuristic filter he and his close friends employ. Given that most proprietary technologies are bletcherous, bind you to particular platforms and tools, and/or may be obsolesced out from under you, I’d say Eric’s strategy is not wrong, though his heuristic function might be buggy enough to cause him to miss a few valuable innovations.
Besides which, command line utilities that manipulate text streams win because they are trivial to implement, debug, test, and integrate. Objecty tools with RPC hooks are nice, but assume the pre-existence of a heavyweight object fabric just to run and talk to one another; the minimum required fabric for text-based CLI tools to communicate is basically stdio.
@esr:
Regarding an issue-tracker, I set up Roundup (based on your earlier recommendation on the blog) on a web server I had access to a while ago, and it worked well. I did not find the format in which stuff is stored to be too opaque, and in any event you can always edit anything through the web interface. If you don’t mind running it yourself, that might be a good choice.
And my strategy, if I ever used anything like reposturgeon, would be to script checkouts from the original and the new copy, with a diff to ensure that they correctly meet up at all the expected points. But that’s just because I’m the kind of paranoid person who always runs at least a year behind the latest Subversion release…
@Jakub:
Commit (and tag) message encoding must be UTF-8. This is a requirement imposed by the fast-import stream format. I imagine filename encoding could be changed simply by moving the file. Similarly, blob encoding could be done by piping the blob through iconv. I’m not quite sure what the current operations supported by reposurgeon are, though, as the manpage appears to be damaged.
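(In pure Python, the equivalent of that iconv pass would be something like the following; the function name and the charsets are only examples:)

    def reencode_blob(data, from_charset="iso-8859-2", to_charset="utf-8"):
        # Decode with the old charset, re-encode with the new one.
        return data.decode(from_charset).encode(to_charset)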
@esr:
> I am very reluctant to do that. The reposurgeon philosophy is to never, ever, ever try to modify a repo in place, in order to avoid the possibility of corrupting it. Better to spend disk space and a bit of processing power, as reposurgeon does, to guarantee that you always rebuild in a clean directory and back up a pristine version of the original.
A laudable goal, though I would like to point out that in the case of git it doesn’t really matter. git-fast-import generates entirely new packfiles, and the only modification of the existing data that’s done is moving the branch pointers. Assuming it adds reflog entries (and I would be extremely surprised if it didn’t), the entire old repo can be recovered simply by resetting the branch pointers back to their previous value in the reflog. The only issue at that point is the disk space used by the new packfiles, but this data will eventually be garbage collected.
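(Concretely, recovery would be on the order of running `git reflog show master` to find the old tip and then `git reset --hard master@{1}`, with master standing in for whichever branch fast-import moved.)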
In any case, the suggestion for omitting blobs and writing back to the original repository is meant to be an optimization, so that it can work reasonably well with large repos like git.git. A quick test shows that `git fast-export $flags >/dev/null` in git.git takes about 27 seconds with blobs, and a mere 2.5 seconds without blobs. I suspect the performance difference for fast-import will be even more significant. And of course if the repo contains a number of large binary files, omitting them would be a huge win.
If it makes you feel better, you could try building a new repo that simply points to the old as an alternate and then fast-import there. It would still be able to resolve the blob data, but it would not modify the original repo. The downside is of course that a new repo that points to the old with alternates is most likely not what the user will want to use, especially if they wish to throw away the old repo. Of course, you could also just `git clone` the old repo and work in your clone.
>A quick test shows that `git fast-export $flags >/dev/null` in git.git takes about 27 seconds with blobs, and a mere 2.5 seconds without blobs
If export is that fast on a project the size of git, I’m going to flat refuse to worry about performance at this point in the game. Given the expected usage pattern of the tool, I think the lag would have to be pushing five times that before it would be more than a trivial annoyance.
>the manpage appears to be damaged.
Oh? How?
@Jakub: Clearly you hail from foreign parts, so you tell me: have we not yet reached the point where designers of new software can just say “Hand me UTF-8. We’re done here.”?
>If export is that fast on a project the size of git, I’m going to flat refuse to worry about performance at this point in the game. Given the expected usage pattern of the tool, I think the lag would have to be pushing five times that before it would be more than a trivial annoyance.
27 seconds was with piping the output to /dev/null. It goes up to 41 seconds when I pipe it to a file. And it takes reposurgeon significantly longer to ingest this file – a simple test took over 4 minutes before reposurgeon aborted, and the line it aborted on is only 29% of the way into the file. Unfortunately I can’t test the time it takes reposurgeon to import a --no-data version because it aborts on the very first filemodify (as it can’t handle sha1 datarefs).
>>the manpage appears to be damaged.
>Oh? How?
The first thing isn’t damage per se, but the “delete =t & 1..:97” example line is documented as deleting tags to mark 15. That should say 97. However, it really does appear damaged below the line “Here are the available and planned surgical commands:”. What follows that is a series of indented paragraphs, which are all written as if they expect to be prefixed with the name of a command, but the actual command name is completely missing. And in fact if I compare it to the HTML documentation at http://www.catb.org/esr/reposurgeon/reposurgeon.html, I can see that there are indeed supposed to be command names here. I don’t know if there’s any other damage because I stopped reading the manpage when I got here.
>What follows that is a series of indented paragraphs, which are all written as if they expect to be prefixed with the name of a command, but the actual command name is completely missing
Aha. I see what’s going on here. The manual-page master is in DocBook, so I can easily generate both roff and HTML from it, but the man-page stylesheets are failing to render the <cmdsynopsis> markup I’m using inside <term>.
Bother. None of the workarounds for this make me happy. Oh well, I’ll come up with something.
Looks like my previous comment got eaten. I’ll try again without the unicode.
@esr:
Going through the HTML documentation, the reductions table appears to contain a couple of errors:
M a + C a b -> C a b + M b
This will leave file a unchanged, which is incorrect. It should be:
M a + C a b -> C a b + M a + M b
Similarly
R a b + C b c -> C a c
will result in files a & c, when b & c is correct. The only correct alternate form for this reduction is
R a b + C b c -> C a c + R a b
which isn’t really a reduction at all.
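To make sure I’m reading the table’s notation correctly, here’s how I’d express the two corrected reductions; this is illustrative Python only, obviously not your implementation, with ops written as tuples like ("M", path), ("C", src, dst), ("R", src, dst):

    def reduce_pair(op1, op2):
        # M a + C a b  ->  C a b + M a + M b
        if op1[0] == "M" and op2[0] == "C" and op2[1] == op1[1]:
            a, b = op2[1], op2[2]
            return [("C", a, b), ("M", a), ("M", b)]
        # R a b + C b c  ->  C a c + R a b   (not really a reduction)
        if op1[0] == "R" and op2[0] == "C" and op2[1] == op1[2]:
            a, b, c = op1[1], op1[2], op2[2]
            return [("C", a, c), ("R", a, b)]
        return [op1, op2]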
>Going through the HTML documentation, the reductions table appears to contain a couple of errors:
And that’s why I published the table. I figured it was not unlikely I’d gotten some rules wrong, and hoped that if I had someone would spot it before I shipped 1.0.
@Kevin Ballard, @esr: The situation is a bit hypothetical wrt. using fast-import for that, but I had an old CVS repository where commit messages were written using the 8-bit ISO-8859-2 encoding. To have them displayed correctly by git-log, I had to either recode the commit messages or mark them with an ‘encoding iso-8859-2’ header – I did the latter with the help of ‘git filter-branch’ (set `i18n.commitencoding`, run git-filter-branch without filters).
It is strange that fast-import imposes UTF-8 as a restriction…
@Kevin Ballard:
“27 seconds was with piping the output to /dev/null. It goes up to 41 seconds when I pipe it to a file. And it takes reposurgeon significantly longer to ingest this file – a simple test took over 4 minutes before reposurgeon aborted, and the line it aborted on is only 29% of the way into the file. Unfortunately I can’t test the time it takes reposurgeon to import a --no-data version because it aborts on the very first filemodify (as it can’t handle sha1 datarefs).”
I respect Donald Knuth
“Premature optimization is the root of all evil”
First, I would like to point out that a repository is meant to be a history archive. And in archives, history is sacred.
For me personally, I see only two uses for reposurgeon (probably a lack of imagination):
– Cleaning up conversions of one version system to another
The intent here is to correct conversion errors and recreate the original history.
- Cleaning up a complex commit history in a private branch before pushing it to a public repository
Not to brush up your L337 powers, but to remove circular dead-ends and blatant commit errors, like adding temporary files by accident or cleaning up incorrect comments. Reposurgeon might be easier than redoing a new commit series on a clean cloned repository.
For me, both these functions are only incidental and would make a few minutes wait on a project the size of git irrelevant. It would not be worth the agony of fear that I would hose a public repository.
You might have a point if you tend to work privately in one system, e.g. git, while you contribute to a different public repository system, e.g. Subversion. But I fear I would hardly sleep at night if I regularly converted large project repositories between version systems.
Why is it that when I read this, I faintly hear Blue Öyster Cult in the distance?? :-)
>Why is it that when I read this, I faintly hear Blue Öyster Cult in the distance?? :-)
Because you’re supposed to. Duh!
>I have to call baloney on this one. You are mistaking the interface…
I will back ESR on this. I have done Windows programming for years, and it was the Unix tradition that taught me to separate the functionality from the interface. I can’t tell you how much code I’ve seen where the business logic was right there in the OnButton1() function. Microsoft’s attempts at scripting via OLE et al. simply stink. Even back in the command line DOS days, compare the kludgyness that is batch to sh. MS is focused on clueless users who don’t even think about automation and it shows in the software their tradition has produced.
That’s why I figured a scythe had to be involved.
>I can’t tell you how much code I’ve seen where the business logic was right there in the OnButton1() function.
And I agree. Where I disagree is with the idea that the functionality’s interface has to be exposed through a linear text stream. In my own CAD/CAM application the UI forms are a thin shell over a series of presenter classes. The presenter classes are fully scriptable, and you can even simulate the UI by using mock objects implementing the form interface.
There is no reason why we should remain at a point where the only medium for two binaries to communicate is through linear strings of ASCII text. The problem is that anything more sophisticated is either proprietary or shattered into a dozen competing standards.
One of the attempts to bring graphical pipeline-like programming to the problem of data visualization and analysis was/is AVS/Express, where you compose your application from blocks (such as e.g. a data source, a widget returning a number, an isosurface generator, and a visualization sink), connecting them with “pipes”. Unfortunately it is proprietary; IIRC there was a similar OSS program (not as good), but I have since forgotten its name.
Reposturgeon: because 90% of git commits are crap.
@Robert Conley:
“There is no reason why we should remain at a point where the only medium for two binaries to communicate is through linear strings of ASCII text. The problem is that anything more sophisticated is either proprietary or shattered into a dozen competing standards.”
Do you have a general query class that can handle anything beyond a linear string of text (ASCII/UTF8)?
Regexps can handle linear text strings. One level higher in the Chomsky hierarchy you have Context Free Grammars on tree structures (XML). You do not want to go higher.
Querying a tree structured data object is comparable to writing Lisp or Prolog programs (see my comment above). A pipeline of operations on tree based structures is *very* difficult to handle.
This is like databases beyond relational databases (a linear format): It can be done, maybe, but is not easy.
@Robert Conley:
As Eric mentioned, linear text streams are discoverable. They are also very general purpose. You can use linear streams to easily connect processes running on disparate machines halfway around the world.
The only two drawbacks to linear streams are performance and development time. While it is theoretically possible to marshal any interface to a linear stream, this sort of marshalling is seldom done inside a single tightly coupled program, because it would increase development time and decrease performance with no payback. But between programs, linear streams are much more robust than any other IPC that has ever been invented. Even junior web programmers can often get it right.
I would go so far as to submit that (with the exception of a very few real-time interrupt handling / IPC control problems), if you can’t figure out how you could map your design to a linear stream, you don’t have a very good design. You yourself admit as much when you mention that your classes are fully scriptable.
There is no question that when you have a CPU-intensive problem that you are solving over and over, it can be worthwhile to bring other IPC mechanisms to bear on the task. But the flip side of “premature optimization is the root of all evil” is that most real optimizations are, by definition, custom. If it were a generic optimization, the compiler/interpreter/original programmer/whatever might have already done it for you. So, if you have a CPU-intensive bottleneck, either you’re going to throw off-the-shelf hardware at the problem (in which case you’re probably going to be very grateful that linear streams were used to connect processes), or you’re going to bite the bullet and do some custom programming. At the point you’re willing to invest in a customized solution, it’s nice to have some generic IPC tools in your bag of tricks, but I don’t think we’ve gotten to the point where the solutions for distributing CPU-intensive tasks across processes and across processors are completely fungible.
I just shipped 0.2.
Filenames with embedded whitespace are handled.
The ‘expunge’ operation to remove files from the history is working.
The ‘split’ operation (topological cut) is now working.
There is a new ‘drop’ command to drop repositories from the load list.
There is a new ‘history’ command to display your session history.
The ‘view’ command was a bad idea and has been removed.
& followed by branch name resolves to everything on the branch.
A bug that caused spurious date modifications when editing events with a non-local timezone has been fixed.
http://www.catb.org/esr/reposurgeon/
Fear the reposturgeon!
A JSON stream would be a neat compromise.
The problem is that what people “really” mean is that they want to be able to communicate with objects with richer semantics than “plain text”. And that’s a great and noble goal. But it’s not just the inability to agree on “what the objects” are, it’s an inability to agree on the desired semantics. Do you have 32-bit integers, 64-bit integers, arbitrarily large integers? Repeat this several hundred times.
What pure text allows you to do is not to actually come to an agreement; it actually just papers over the issues and we all squint at each other and think we’re getting somewhere. No agreement at all. And, oddly, that mostly works.
JSON’s a neat compromise partially because it still leaves a lot of those issues pleasantly fuzzy. A careful reading (which won’t take you long) of the JSON standard reveals it doesn’t actually have an opinion about number size, for instance.
I recently created a “protocol” at work which is the following: 4 network-order bytes to indicate the size of the next JSON message in bytes, followed by that number of bytes in UTF-8 encoded JSON. This turns out to be surprisingly powerful. I’ve heard from other people that they independently invented the exact same protocol, which I find interesting. (How often does that happen?)
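In Python it’s only a few lines each way. A minimal sketch (not the code from work; the function names are placeholders):

    import json, struct

    def send_msg(sock, obj):
        payload = json.dumps(obj).encode("utf-8")
        sock.sendall(struct.pack(">I", len(payload)) + payload)   # 4-byte network-order length prefix

    def recv_exactly(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise EOFError("connection closed mid-message")
            buf += chunk
        return buf

    def recv_msg(sock):
        (length,) = struct.unpack(">I", recv_exactly(sock, 4))
        return json.loads(recv_exactly(sock, length).decode("utf-8"))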
@Winter:
>- Cleaning up a complex commit history in a private branch before pushing it to a public repository
>Not to brush up your L337 powers, but to remove circular dead-ends and blatant commit errors, like adding temporary files by accident or cleaning up incorrect comments. Reposurgeon might be easier than redoing a new commit series on a clean cloned repository.
>
>For me, both these functions are only incidental and would make a few minutes wait on a project the size of git irrelevant. It would not be worth the agony of fear that I would hose a public repository.
Cleaning up unpublished branches is a big reason I’d want to use this tool. In fact, I just got done modifying a branch I already had that was difficult to work with using `git rebase -i`, as it contained a merge (git-rebase has issues with merges). This sort of action is something I can imagine doing with relative frequency, and having to wait 20+ minutes (a guess based on how long reposurgeon took to process 29% of git.git) before I can even begin to use reposurgeon is a rather large roadblock.
>having to wait 20+ minutes (a guess based on how long reposurgeon took to process 29% of git.git) before I can even begin to use reposurgeon is a rather large roadblock.
I have bad news. On my machine, it took 49 minutes for 0.2 to load git. That’s too slow.
If you think you can speed that up, I’ll take patches. Someone besides me, not sharing all my design assumptions, should look at this.
@esr:
>I have bad news. On my machine, it took 49 minutes for 0.2 to load git. That’s too slow.
I’m in the process of testing this myself. I forgot to wrap reposurgeon with the `time` tool, but it’s already used over 26 minutes of CPU time.
In any case, I suspect that teaching reposurgeon how to handle a --no-data mode will speed it up a reasonable amount, though it’s hard to say how much. Such a mode would obviously not be able to edit blobs, but it could still construct a new repo simply by preserving all the original objects/packfiles from the old repo. Sure, that’s a bit of a disk bloat, but for a lot of operations it may make sense to run in --no-data mode and then just repack the repository afterwards.
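(The bloat is easy to reclaim afterwards; a `git gc` or `git repack -a -d` in the rebuilt repo would fold everything back down into a single packfile.)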
>I’m in the process of testing this myself. I forgot to wrap reposurgeon with the `time` tool, but it’s already used over 26 minutes of CPU time.
Optimization would no longer be premature. 0.2 completes the feature list I was aiming at, and I’ve got enough of a regression-test suite to be reasonably sure of the code. Actually, the only area I’m still dubious about is the file op composition logic for after a commit delete; my intention is to hand-craft a repo with as many of the weird cases in it as I can contrive and use that to add to the regression-test suite.
If you’re willing to work on speed optimization while I push demonstrating correctness, I think that would be a good division of labor. I don’t think I understand git internals well enough to do the speedup you want.
@esr:
I can try to devote some time to it, but unfortunately I’m not well-versed in Python. Add to that my general lack of time these days, and I’m not optimistic. I’ll let you know if I figure anything out though.
Total runtime on my machine was 55 minutes real time, 53 minutes CPU time.
Another point of interest, running “EOF” after reading git.git took a significant amount of time. I presume it was deleting the .rs directory. It might be worth printing a message saying as much while it’s performing that task.
ESR says: Noted. I can do better than that; I can spin a baton prompt.
Correcting two pieces of misinformation from above:
Adriano wrote:
That treadmill wasn’t so much because of incompatibility with previous versions, and more because of incompatibility with newer versions: morons who bought the new Office started saving documents in the new default document format, and other morons who wanted to play ball but had older versions had to kindly ask to resubmit all those docs or buy new versions.
Microsoft Office Compatibility Pack
This was released six months before Office 2010. The corresponding version for Office 2007 was released eight months before Office 2007 shipped.
If auto-update is configured, Windows downloads this and lets you know it’s ready to install in the background.
Hate the mindset that produces them — wanna-be programmers doing wanna-be programming, without understanding the first thing about the rudimentary discipline of reproducibility.
Excel is the new Basic, the native tongue of a whole generation of (badly) self-taught programmers.
Excel works as a quasi IDE/programming environment because it allows someone to experiment, see results, and to build off of things that are very easy, and get useful work done. It is HORRIBLE at teaching structured programming, because it doesn’t require it. (You can still benefit significantly from taking a structural programming approach to writing Excel formulas, and organizing your Boolean operators to provide kill switches. You can significantly boost performance by using table lookups rather than calculations, etc.)
You can also comment Excel formulas, both within the formula and with floating text notes in the cells they’re in.
For people who can hack structured programming, it allows scriptability and full programming with VBA. I don’t use VBA, as a conscious choice: while it would probably improve performance on a lot of what I do, it also means that my working products may not work in Excel for Mac or in environments where scripting is turned off for security reasons. Not using it also means that my code stays maintainable.
Excel is rarely the best tool for the job. It (or a work-alike) is often the tool you can count on having in a Windows or Mac environment.
Excel is also proof that a team within Microsoft, back in the early 1990s, knew low-level performance optimization cold. When Excel was ported from the Mac version to Windows, Microsoft was gunning hard to outcompete Lotus 1-2-3 and Quattro Pro, and there was a real incentive to get as much performance as possible out of 1993-vintage hardware; the near-miraculous thing is that that performance optimization is still paying dividends.
I don’t use OpenOffice or LibreOffice (though I install both), because architecturally, they suck flaming elephant turds for performance. I have customers who have submitted patches for these performance issues. The general response to said performance enhancements is “We can’t do that, we’d have to rewrite 70% of {insert list of functionality in the program}.”
It’s a pity (if understandable) that the open-source spreadsheets try so hard to replicate Excel – the idea of a declarative programming language for casual programmers tightly integrated with a spatial IDE seems like it ought to have been a lot cooler, rather than a leading source of programmer brain damage.
One of the reasons why I try and show Eric the atrocities I commit in Excel is because I keep hoping that he’ll see what the hell it is that makes Excel ‘click’ for the way my brain works and we can then find a better set of tools for what I do that maps to what I already know. I’ve had a cognitive ‘thud’ result in trying to learn a better programming language for going on 20 years now.
So, yes, I’d like to see something that does what I do in Excel as an open source project.
reposurgeon 0.3 is released
‘split’ operation renamed to ‘cut’.
New ‘inspect’ command for looking at commits in raw form.
‘list’ command adapts to current width of terminal window.
Issue a baton prompt during repo cleanup, which can be a long process.
Multiple instances can now run in the same directory.
Some speedup on import and export.
http://www.catb.org/esr/reposurgeon/
Fear the reposturgeon!
Ken Burnside Says:
> Excel is rarely the best tool for the job. It (or a work-alike) is often the tool
> you can count on having in a Windows or Mac environment.
That seems a little backward to me. How can a tool be useful if you don’t have it? Surely usefulness is, by some measure, predicated on availability and the obstacles that must be overcome to get it should it be absent. I’m sure there is a tool for getting the lid off a paint can, but screwdrivers are rather handier, and are probably, consequently, the best tool, all things considered. Specialized tools are always better because they are specialized; however, they are also harder to come by. That is, after all, why we have general tools.
However, I will offer up a defense of Excel. Building complex prediction models is difficult. Excel provides lots of facilities to make it possible, but, because it is difficult, you can easily get in the weeds. Expecting a CPA to be a programmer is the problem, not the tool itself. I’m not aware of any better tools, though that is more likely due to my ignorance rather than a deficit in the market.
However, the fact is that the vast majority of uses of Excel are simply for keeping lists of things, and perhaps summing and averaging a few columns. I know this because Microsoft themselves did a very large study of the users of Excel to find out what they did. They expected lots of complex math spreadsheets; what they got was lots of lists of stuff along with graphs and summary tables of that stuff. That is why Excel 5.0 added a massive number of features targeted at lists of stuff (and the related matter of pivot tables) rather than additional complex financial functions.
An interesting fact about Excel (according to Joel Spolsky, who worked on the Excel team) is that they maintained their own C compiler, separate from the Microsoft tool. I am not sure what that means, but it is surely an interesting fact.
I honestly don’t think that’s the issue.
I think most CPAs can program effectively enough for their own needs. The main problems with using Excel as a program development tool are a lack of easy, transparent software versioning, and the encouragement of storing not just your business logic and your presentation logic together, but your business logic, your presentation logic, and your data together.
The thing is, if Excel were a power tool rather than a spreadsheet, there would long ago have been sufficient successful lawsuits to force Microsoft to shorten the power cord, to make sure the saw stops immediately when it comes into contact with flesh, to make sure that the power is cut off when the thing is immersed into water, and to plaster warnings all over the thing about how you need to wear safety goggles and double-check your output whenever you use it.
> reposurgeon 0.3 is released
Did you get the patch against 0.2 I sent you on Friday?
ESR says: No. Please resend.
> ESR says: No. Please resend.
Done. In case it miscarries, you can find the patch at http://pastebin.com/bC8M0mnw
>Done. In case it miscarries, you can find the patch at http://pastebin.com/bC8M0mnw
0.4 shipped:
Handling of inline data, previously extremely buggy, has been fixed.
Can now handle streams produced by bzr-fast-export, which uses inline.
Unfortunately, bzr-fast-import is buggy enough to make rebuilds fail.
First cut at hg support, by Phil Roberts.
Your fixes for inlines and missing author fields appear to parallel work I’ve been doing on 0.4. Additionally, the VCS information is now a list of objects, and each has a “styleflags” method describing the format variation the exporter emits. Presently the styleflags are:
“nl-after-commit” = inserts an extra NL after each commit (on for git, off for bzr)
“nl-after-comment” = inserts an extra NL after each comment (on for bzr, off for git)
“export-progress” = exporter generates its own progress messages, no need for baton prompt.
The purpose of these flags is so that reposurgeon can round-trip dumps *exactly*, even down to whitespace. This is useful for regression testing.
Please fill in the hg styleflags, if any, and check that 0.4 works with your hg.
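In other words, each entry is now shaped roughly like this (illustrative sketch only, not the actual class):

    class VCS:
        "One supported VCS, plus the format quirks of its exporter."
        def __init__(self, name, flags):
            self.name = name
            self.flags = flags
        def styleflags(self):
            # Used so reposurgeon can round-trip dumps byte-for-byte.
            return self.flags

    vcstypes = [
        VCS("git", ("nl-after-commit",)),
        VCS("bzr", ("nl-after-comment",)),
    ]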
OK, style flags should be (“nl-after-commit”, “export-progress”). Reads work, hg’s fastimport extension is too flaky to test write operations. If you’re going for perfect round-trips, shouldn’t you have an “nl-after-data” flag as well? Actually, now I look at it, the problem is hairier than that: hg-fast-export puts an extra newline after the data command for commit messages, but not after any others.
One thing I’ve discovered is that usernames in hg are freeform: conforming them to email address format is only best practice. Of course, that’s a headache for the authors of hg-fast-export.py, not for you.
>If you’re going for perfect round-trips, shouldn’t you have an “nl-after-data” flag as well?
Only if it’s actually needed to capture what some exporter does.
I don’t completely understand your report. How is “extra newline after the data command for commit messages, but not after any others” distinguishable from nl-after-comment?
If you could send me a representative dump, I could round-trip it and diff.
Oh, that kind of comment. I thought “nl-after-comment” meant after comments in the fast-import syntax itself, not comments on commits. Pardon my thinko.
In that case, add “nl-after-commit” to the options for hg, obviously.
> In that case, add “nl-after-commit” to the options for hg, obviously.
“Commit”! I meant “comment”! Dammit!
I’ve had my head in Etruscan all day. 2500 years is a long time to bring your attention forward!
Patrick Maupin Says:
> The main problems with using Excel as a program development
> tool are a lack of easy, transparent, software versioning
Last I checked, a C compiler didn’t come with version control either. However, Excel does come with a complete, built-in revision control system. So I really don’t know what your point here is. AFAIK you can still open Excel 1.0 spreadsheets in Excel 2010.
> the encouragement of storing…
> your business logic, your presentation logic, and your data together.
This doesn’t make sense either. It is not a general purpose programming tool, but rather a tool for storing and displaying data in particular ways. If you are using programming logic such as VB or C# you must separate it into modules. For sure there are disadvantages to having everything in one file, but there are also very important and significant advantages.
I’m not suggesting that Excel is perfect for every job, but the fact is that it is very good for 99.9% of the uses to which it is put (even if the execution isn’t always so great.) FWIW, the same is true of most spreadsheet programs, not just Excel, though I personally think it is one of the best. (For the record let me say, as I have often before, I hate the whole Office 2007 GUI thing.)
Frankly, architecture astronauts who say things like “A spreadsheet is a visual, spatial programming environment” make me wonder how they get out of bed in the morning. Most people use Excel to keep lists of stuff. It is pretty good at that.
That’s not the point at all. The input to the C compiler can be easily version controlled and diffed.
Sorry, if it’s not diffable, readable (e.g. not XML) ASCII or Unicode, it’s not a real language or a real RCS.
It’s Turing-complete, and a lot of people who don’t even know what that means manage to do a hell of a lot of general purpose programming in it.
I don’t really mind having everything in one file for small jobs. I do mind it not being text.
Sure, but I have seen abominations that give me a visceral reaction against spreadsheets. Also, Excel can very seductively coax you into enhancing your list. It’s a very slippery slope.
I’ve always been impressed by the list of programming projects you have maintained, ESR. And I enjoy the explanatory articles which accompany them, though I don’t always follow them 100%.
I have always been wondering how best to maximize the potential of a version control system for single user projects and smaller projects without creating enormous overhead. Most of the time, all I do is commit code and sometimes use branches to separate my experimental code and so on, but nothing much beyond that. By creating such power tools, you interest me in looking deeper at these systems and finding better and more advanced uses of a version control software.
>I have always been wondering how best to maximize the potential of a version control system for single user projects and smaller projects without creating enormous overhead.
What enormous overhead? Disk space is cheap.
Patrick, Jessica:
Spreadsheets have a number of advantages:
1) They are nearly ubiquitous.
2) Circa 2006 and later, most of them use an XML-derived schema and format for the data.
3) They can refer to, import and export external data sources; Excel 2003 and onwards does this remarkably seamlessly.
They let someone who would look at a mess of nested braces and curly brackets and declared variables in a text editor and throw up their hands instead perform the following steps:
1) “What is it I need to accomplish?”
2) “OK, that’s kind of like…this. Let’s see what this function does. Hmm. Not quite right. Let’s see…a-ha!”
3) “OK, that’s throwing an error, let’s figure out…hm. OK, let’s put an IF statement in there to trap the error.”
4) “OK, now let’s make sure the user inputs are limited so that Joe down in shipping can’t screw this up…”
Jessica, have you seen the shockwave ‘if you did this in 2003, do this in 2007’ tutorial? I find that the 2007 interface, after a period of adjustment, is much better than 2003’s – aside from screen refresh time on my underpowered laptop. It is much easier to FIND new and interesting things in Excel than it was before. Excel outgrew its old menu hierarchy with Excel 2000….
Ken:
About the ease of use, I get it. I really do. But your step 4 hardly ever happens, because the developer and the user are one and the same, and a lot of people don’t think about stuff like that.
About the advantages, I think you only alluded to the most fundamental one in your “steps” section. People almost instantly grok spreadsheets for lists, and the additional understanding required to think about cells that sum and that sort of thing is minuscule. This is what makes them ubiquitous, not the other way around. It’s no accident that VisiCalc sold millions of Apple IIs.
The usefulness of the XML-derived schema is not as good as it could be — minor changes in the spreadsheet can lead to huge changes in the XML, in my experience. Nonetheless, you are right, in that it ameliorates some of the worst attributes of binary formats.
> What enormous overhead? Disk space is cheap.
Overhead I defined purely in relation to the size of the project itself and especially when I don’t use it efficiently (too many commits too frequently for instance). I agree that disk space is cheap, but I still like to minimize utilization when possible.
I agree that the benefits of a VCS outweigh the disk utilization though. I just wanted to utilize it better than I am in my current and previous projects.
If Excel had existed in its present form back in 1993, Befunge would never have needed to be invented.
reposurgeon 0.5:
We can round-trip bzr dumps with commit properties.
New ‘split’ operation, opposite of coalesce/delete.
Multiple author headers per commit are handled (helps with bzr).
Fear the reposturgeon!
Disk space may be cheap, but using more rather than less will bound your performance on “large” datasets. That’s the real reason to be efficient with disk space nowadays.
>Disk space may be cheap, but using more rather than less will bound your performance on “large” datasets. That’s the real reason to be efficient with disk space nowadays.
Agreed, but…not my problem. I’m doing software engineering, not anything that has really large datasets like (say) oilfield geology. I can remember when a version-control history of 3000 revisions would have been ‘large’, but it’s 2010 and terabyte disk drives are cheaper than a meal at a four-star restaurant; those days are gone.
Thus, if I can trade a use of disk space that would have seemed extravagant in 1995 for better leverage on my code-management problems, I’ll do it, you betcha.
@esr:
Do you have any current plans to support submodules? Reposurgeon currently can’t read any repos that use submodules. There are two reasons for this. The first is that reposurgeon can’t handle blobs that are referenced by sha1, and the second is that reposurgeon doesn’t realize that files with a mode of 160000 are actually gitlinks instead of blobs.
>Do you have any current plans to support submodules?
I think the most informative answer I can give to this is “What the fuck is a submodule?”
I had no idea git had any such facility, from which it follows as the night the day that I haven’t made any plans to support them. Er. Documentation pointer, please?
>I think the most informative answer I can give to this is “What the fuck is a submodule?”
Submodules are the git equivalent of SVN externals. In short, they allow you to embed another git repository within yours. What’s actually happening is that your repository keeps a file called .gitmodules which documents where to find the submodule repo and the path it should be cloned into. And then the containing tree gets a file entry with a mode of 160000 whose sha1 (which would normally name a blob) is actually the commit that should be checked out in the submodule (this is called a gitlink). The idea here is that the submodule always has the exact same commit checked out when you go to a commit in your parent repo. If you equate this with SVN externals, it’s like providing a revision as part of the external.
You can find more information in the Git Community Book chapter on submodules.
I guess I should clarify what it looks like to reposurgeon. A submodule is basically just a filemodify entry that has a mode of 160000 and the data is a sha1 ref. Reposurgeon just needs to be able to handle sha1 refs as data entries, as well as be smart enough to not try and let you edit it as if it were a blob.
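Concretely, a submodule shows up in the stream as a filemodify along the lines of “M 160000 <commit-sha1> path/to/submodule” (the path is made up), so telling gitlinks apart from ordinary blob entries is nearly a one-liner:

    def is_gitlink(fileop_line):
        # e.g. "M 160000 0123abcd... path/to/submodule"
        parts = fileop_line.split(None, 3)
        return len(parts) >= 3 and parts[0] == "M" and parts[1] == "160000"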
>Reposurgeon just needs to be able to handle sha1 refs as data entries, as well as be smart enough to not try and let you edit it as if it were a blob.
What do you mean by “handle”? I’m guessing from your description that the SHA1 ref is normally inline data associated with the M op, but whether it’s inline or a blob reposurgeon will just normally pass it through without messing with it.
As for not letting you edit it…that would violate reposurgeon’s design philosophy, which is to let you perform any kind of object edit that wouldn’t throw the repo into a fatally inconsistent state. It’s up to you to use the power wisely.
Ken Burnside Says:
> Jessica, have you seen the shockwave ‘if you did this in 2003, do this in 2007′ tutorial?
No, I’d appreciate a link.
Patrick Maupin Says:
> That’s not the point at all. The input to the C compiler can be easily version controlled and diffed. [etc…]
Ah, so it’s not a true Scotsman then?
> Sure, but I have seen abominations that give me a visceral
> reaction against spreadsheets. Also, Excel can very seductively
> coax you into enhancing your list. It’s a very slippery slope.
Are you talking about Excel or C++?
Jeremy Bowers Says:
> Disk space may be cheap
For those of you who are interested, there is a great article about the time/space trade-offs in designing a version control system here. It was written by Eric Sink, who is the president of a company that makes a pretty popular Windows version control system.
>For those of you who are interested there is a great article about the time/space trade offs in designing a version control system here.
That article is really good. Well-reasoned, concrete, and even with tasteful and effective use of data visualization. I must send Eric Sink a note of appreciation.
Jessica:
You may not view this as a real issue. I dealt with Lotus Notes in its early days. You could dump your program out in ASCII, but the only way to create the same thing was to do it with the GUI, then do another dump, and see if you generated the same program. Same thing with LabVIEW — lots of clicks to create a “program”, but no way to do a real diff or anything. Same thing with early versions of Excel, Visual Basic, etc.
I don’t give a damn if later versions work better. The whole fucking mentality of “I’m going to take care of all your data for you” is anathema to being able to manipulate your own data properly, and I have an admittedly visceral reaction to any group of programmers who thinks that way.
I write programs.
I write programs that write programs.
Sometimes, I even write programs that write programs that write programs.
YOU may not view it as a problem to not be able to batch manipulate your own data using whatever tools you want, but I certainly do.
And diffing and revision control are really important, and are also very good proxies for most of the other things I might want to do with stuff I’ve written — if it’s easy to diff and version control in a textual fashion, then the rest just falls out easily.
It’s fine with me if you think that’s a “no true Scotsman” argument, but that probably shows both an inability to explain on my part, and a lack of relevant experience on your part.
Well, I don’t generally use either. I use lots of Python and Verilog. I used to write lots of C and assembler, but much less of that these days.
The problem is not the seduction, per se. It’s human nature to invest a bit more time to polish things that are working that can be made a bit better. The problem is the seduction coupled with the limitations. A lot of GUIs make it very easy to get started, but more difficult to be a productive power user. Excel is the same way. For a lot of problems, it takes you 100%, no problem. For a lot of other problems, you might struggle with the last 10%, but for larger problems it can quickly get unwieldy. For a good description of the opposite of this, see Eric’s “Why Python?” article of yore. I still recommend that to fence-sitters who might want to learn the language.
>YOU may not view it as a problem to not be able to batch manipulate your own data using whatever tools you want, but I certainly do.
Clearest indication yet that Jessica speaks truth when she says she’s not part of our tradition. This visceral reaction is how any hacker would feel in those circumstances. And Patrick has nailed the reason, too – when you can’t get the data out of jail, you can’t bring the full force of your tools and your intelligence to bear on solving problems with it. That’s intolerable.
Jessica:
http://office.microsoft.com/en-us/excel-help/interactive-excel-2003-to-excel-2007-command-reference-guide-HA010149151.aspx
Patrick:
As I stated earlier – Excel is very rarely the RIGHT tool for the job. It is usually the tool you have on hand, and it’s a tool that’s easier to get into than a formal programming language.
Give me a good IDE that lets me think spatially rather than in list mode for Python, and I’ll switch. Make it easy enough to use, with a glide path from Excel, and it might manage to turn a lot of ‘self-taught’ Excel programmers into Python coders.
@esr:
>What do you mean by “handle”? I’m guessing from your description that the SHA1 ref is normally inline data associated with the M op, but whether it’s inline or a blob, reposurgeon will normally just pass it through without messing with it.
The sha1 ref is used as a dataref, where reposurgeon currently understands marks and inline data. reposurgeon appears to write blobs out to disk, but a submodule entry doesn’t have associated data to write out to disk.
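To make the terminology concrete: in a fast-import stream a filemodify (“M”) line carries a dataref that can be the literal word inline, a mark (of the form :N), or a raw 40-hex sha1. A rough sketch of the distinction – my own illustration, not reposurgeon’s actual parser:

import re

# Illustration only, not reposurgeon's parser: the three dataref forms a
# fast-import filemodify ("M") line can carry.
#   M 100644 inline some/path        -- blob content follows in a "data" section
#   M 100644 :5 some/path            -- mark pointing back at an earlier "blob" command
#   M 100644 <40-hex-sha1> some/path -- raw object name; no data appears in the stream
M_LINE = re.compile(r"^M (\d+) (\S+) (.+)$")

def dataref_kind(line):
    mode, dataref, path = M_LINE.match(line).groups()
    if dataref == "inline":
        return "inline"
    if dataref.startswith(":"):
        return "mark"
    return "sha1"   # nothing to read from the stream for this entry

print(dataref_kind("M 100644 :5 some/path"))   # -> "mark"

The submodule entry mentioned above falls in the last bucket: its dataref is a commit sha1, so there is no data section for reposurgeon to spool to disk.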
In any case, I thought reposurgeon was blowing up on my submodule, but it’s actually blowing up earlier. Trying to read my repo gives
fatal: Could not write blob ‘9ebdf43b1e817484d03ca0bc1d3b9b442e64d85a’: Broken pipe
which is curious, as that hash is just a regular blob. Is reposurgeon actually hashing it to figure out the sha1 directly? Because that sha1 isn’t referenced like that anywhere in the fast-export stream.
>As for not letting you edit it…that would violate reposurgeon’s design philosophy, which is to let you perform any kind of object edit that wouldn’t throw the repo into a fatally inconsistent state. It’s up to you to use the power wisely.
By not editing it I mean I can’t pop open an editor to edit the contents of the blob. After all, not only does reposurgeon not have the contents, but in this particular case the idea makes no sense, as this is actually the sha1 of a commit from the submodule, not a blob at all. However, it would still make perfect sense to let me replace that sha1 with a different sha1.
>Is reposurgeon actually hashing it to figure out the sha1 directly?
No. reposurgeon never computes hashes.
That “Broken pipe” looks like reposurgeon died while the exporter was trying to write to it. I’ve never seen that happen. Is there some way you can give me a .fi of your repo so I can reproduce it here?
>No. reposurgeon never computes hashes.
That’s what I thought, which is why I was so surprised to see it mentioning a sha1 that literally never appears in the fast-export stream. The only way to get that hash to appear is by adding --no-data, but reposurgeon doesn’t do that. Inside of the regular fast-export stream this same blob shows up simply as mark :277.
>That “Broken pipe” looks like reposurgeon died while the exporter was trying to write to it. I’ve never seen that happen. Is there some way you can give me a .fi of your repo so I can reproduce it here?
Unfortunately no, this repo is of my project at $JOB.
>That “Broken pipe” looks like reposurgeon died while the exporter was trying to write to it. I’ve never seen that happen. Is there some way you can give me a .fi of your repo so I can reproduce it here?
Oh hrm, I didn’t think about this the right way. The “fatal: Could not write blob ‘9ebdf43b1e817484d03ca0bc1d3b9b442e64d85a’” message is clearly coming from fast-export, as it tries to write the blob, and the pipe has broken. But I don’t think reposurgeon has actually died at this point, as that line is followed up with
reposurgeon: git fast-export -M -C --signed-tags=verbatim --tag-of-filtered-object=drop --all repository export returned error.
which is clearly coming from reposurgeon itself. Is it possible that reposurgeon simply can’t handle large blobs? This particular blob has a length of 46143.
>Is it possible that reposurgeon simply can’t handle large blobs? This particular blob has a length of 46143.
Unlikely. I just loaded the reposurgeon repo in reposurgeon, and it contains much larger blobs – I’m looking at one with a length of 115448. If there are any magic thresholds they’re buried in the Python implementation layer somewhere.
That sequence of events is strange. While I agree it looks like your exporter is reporting SIGPIPE, it’s even clearer that reposurgeon hasn’t died. Either there’s some way to get SIGPIPE that doesn’t involve process death or that exporter message is leading us down a garden path.
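For what it’s worth, here is one way that sequence of messages can arise without the reading process dying – a sketch of my own, not reposurgeon’s code: if the reader bails out mid-stream and its end of the pipe gets closed, the exporter’s next write fails with “Broken pipe”, while the reader survives to notice the exporter’s nonzero exit.

import subprocess

# Sketch only, not reposurgeon's code: a reader that bails out mid-stream and
# closes its end of the pipe leaves the exporter to die of SIGPIPE on its next
# write, yet the reading process survives to report the failed export.
proc = subprocess.Popen(["git", "fast-export", "--all"], stdout=subprocess.PIPE)
try:
    for i, line in enumerate(proc.stdout):
        if i == 100:                       # stand-in for hitting some unhandled case
            raise ValueError("reader bailed out mid-stream")
except ValueError as err:
    print("reader error:", err)            # the reader is still alive here
finally:
    proc.stdout.close()                    # exporter now sees EPIPE/"Broken pipe"
    if proc.wait() != 0:
        print("git fast-export returned an error.")

On that reading, the exporter’s complaint would be a downstream symptom of whatever made the reader bail, rather than the root cause.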
Incidentally, inside of the .rs directory, there is no blobfile that corresponds to this blob. There are files that correspond to all blobs earlier in the file, but it stops right as it reaches this blob. This implies to me that it never even tried to open the new blobfile in the first place, let alone read data from the stream to pass into the blobfile.
Actually I read it wrong. It’s stopping slightly earlier than I thought. It writes out through blob :265, and the fast-export stream errors while trying to write blob :271. However, reposurgeon has apparently failed to write blobs :267-:270 as well (:266 is a commit).
AH HAH! Found it! It was submodules after all. Commit :266 includes the first reference to a submodule in the project.
M 160000 a384cd155c318f70aa8289a27bfd332aa7bd5898 AssessmentKit
This pesky line is causing reposurgeon to silently fail.
>This pesky line is causing reposurgeon to silently fail.
OK, lemme look at the code…I see where the special case for mode 160000 needs to go. The FatalException getting swallowed I don’t understand, but…
….my turn to AHA! That was a subtle one. I used a fancy technique – context-manager protocol – in popen_or_die, and didn’t grok what would happen if I raised a FatalException when the __exit__ method was itself tickled by a FatalException.
Shipped as reposurgeon 0.6:
Tweaked to pass through git submodules without failing.
Also contains a fix for a subtle bug in error handling.
Fear the reposturgeon!
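For readers following along, the submodule fix amounts to treating mode-160000 filemodify entries as opaque commit references rather than blobs. A minimal sketch of the idea, with made-up names rather than the code that actually shipped in 0.6:

# Sketch only, with hypothetical names; not the shipped reposurgeon code.
# A mode-160000 ("gitlink") entry names a commit in the submodule, so there
# is no blob content to spool to disk; its sha1 is simply carried through
# from the input stream to the output stream unchanged.
class FileModify:
    def __init__(self, mode, dataref, path):
        self.mode = mode        # "100644", "100755", "120000", or "160000"
        self.dataref = dataref  # a mark like ":277", or a raw sha1
        self.path = path

    def needs_blob(self):
        # Only ordinary file entries have blob data behind their dataref.
        return self.mode != "160000"

    def emit(self):
        return "M %s %s %s" % (self.mode, self.dataref, self.path)

op = FileModify("160000", "a384cd155c318f70aa8289a27bfd332aa7bd5898", "AssessmentKit")
assert not op.needs_blob()      # nothing to fetch or store for a gitlink
print(op.emit())                # re-emitted exactly as it came in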
I’m not sure why, but the FatalExceptions seem to be getting swallowed up, so their message is never emitted. This is what caused my confusion. Any idea why that’s happening?
Also, the number of comments I’m making here is a strong indication that some sort of issue-tracking site would be a welcome change ;)
BTW, looks like the error message got squelched because popen_or_die() raises a new FatalException() in __exit__ if it can’t close the pipe. This completely obliterates the old exception. Inserting the following at line 1366 fixes this problem:
if type:
    complain(value.msg)
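For context, the masking being patched here is a property of the context-manager protocol: if __exit__ raises a new exception while another is already propagating, the original is replaced and its message is lost. A self-contained sketch of the shape of the problem and of the suggested fix – the definitions of popen_or_die, FatalException and complain below are hypothetical stand-ins, not reposurgeon’s actual implementation:

import subprocess

# Sketch only; FatalException, complain and popen_or_die are hypothetical
# stand-ins, not reposurgeon's actual code.
class FatalException(Exception):
    def __init__(self, msg):
        self.msg = msg

def complain(msg):
    print("reposurgeon:", msg)

class popen_or_die:
    "Run a command, hand back its output pipe, raise FatalException on failure."
    def __init__(self, command):
        self.command = command
    def __enter__(self):
        self.process = subprocess.Popen(self.command, shell=True,
                                        stdout=subprocess.PIPE)
        return self.process.stdout
    def __exit__(self, type, value, traceback):
        # The suggested fix: report any in-flight FatalException now, because
        # raising a new one below would replace it and its message would
        # otherwise never be seen.  (Assumes the in-flight exception carries
        # a .msg attribute, as FatalException does.)
        if type:
            complain(value.msg)
        self.process.stdout.close()
        if self.process.wait() != 0:
            raise FatalException("command failed: " + self.command)
        return False   # if nothing was raised above, propagate the original

With a FatalException raised inside the with-body (say, on a stream entry the reader does not understand), the complain() call guarantees its message gets printed even when __exit__ goes on to raise its own FatalException that would otherwise hide it.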
>BTW, looks like the error message got squelched because popen_or_die() raises a new FatalException()
Yup. I spotted that, fixed it, went off to dinner, came back to ship 0.6, and saw your comment. Good catch.
Actually this is encouraging in a kind of backhanded way. When your first bug in field testing is that subtle, it usually means the code is basically sound.
Just as no battle plan survives contact with the enemy, so, too, does no program survive contact with users. The amount of damage that first contact causes is a good measure of program quality; I wonder if the same applies to battles?
Huzzah! reposurgeon 0.6 can now fully read my work project. Thanks!
>Huzzah! reposurgeon 0.6 can now fully read my work project. Thanks!
I shall be most interested in any feedback about the range and effectiveness of the surgical operations. Also, if you have any new ones to suggest, fire away.
BTW, I don’t know if you care about listing ways to get reposurgeon, but I’ve been submitting your reposurgeon releases to Homebrew, an OS X package manager. This means on OS X, if you have an up-to-date copy of Homebrew you can just type `brew install reposurgeon`. Homebrew currently lists 0.5 but I just submitted the 0.6 release.
>BTW, I don’t know if you care about listing ways to get reposurgeon, but I’ve been submitting your reposurgeon releases to Homebrew, an OS X package manager.
Is there a way for me to publish a URL to the installable?
>Is there a way for me to publish a URL to the installable?
If you are interested in listing ways to get Reposurgeon, you could simply say that if you are running OS X then it is available via Homebrew, using the command `brew install reposurgeon`.
reposurgeon 0.8 has shipped.
This is a beta. It is likely the next release will be 1.0.
Expunge now saves deleted material into a new repository, so it can be used to carve up repositories by file path match.
New ‘renumber’ command, in case importers ever care about marks being consecutive.
Allow Passthrough events to be merged.
After a cut operation, option and feature events in the original repo will be duplicated onto the late fragment as well as remaining on the early one.
Fear the reposturgeon!
@esr and Jakub,
Sorry, I didn’t notice that you replied to my comment of November 3.
> I think you’re pushing the article’s argument harder than the author would
> approve of. It’s not typical for Subversion and CVS projects to have merge
> histories so complicated that a DAG-based DVCS can’t capture them; it’s
> merely possible, in the presence of cherry-picking and some ugly CVS mixed-tag
> tactics.
Well, I’m the author of the article that I quoted :-)
Here are two examples of simple and common practices in CVS and Subversion that are not representable in DAG-based DVCSs:
1. In CVS it is common for people to use a “released” tag that is added to filewise revisions one at a time as changes are reviewed and approved for release. This practice makes use of the fact that a CVS tag really is tag-like – an object that can be attached to an arbitrary revision of each file. Checking out the “released” tag results in a working copy that includes the currently-approved version of each file, even though those revisions never appeared in trunk contemporaneously. Nevertheless, it is possible to trace the ancestry of the tagged revision of each file on the “released” tag. The same practice is possible in Subversion (though I presume it to be less common because Subversion “tags” are conceptually different from CVS tags). AFAIK it is impossible to represent something like this in the DVCSs because they only allow entire project snapshots to be tagged.
2. In Subversion it is possible to cherry-pick single commits, or even changes to single files from a commit, from one branch to another and to keep track of what was done in the svn:mergeinfo. For example, if a later merge spans parts of a branch that were already cherry-picked, Subversion will exclude the previously-cherry-picked changes from the merge. AFAIK there is no way to represent this in the DAG-based DVCSs; if one branch is merged into another, then *all* of the changes on the merged-from branch are incorporated into the merged-to branch. Conversely, if a DAG node has two parents, the implication is that *all* of the changes in the ancestry of both parents are included in the merge node. Obviously, it is possible to do cherry-picks using the DVCSs; the point is that they do not record the fact that a cherry-pick was done or use that fact when performing future merges.
The impedance mismatch is actually caused not by the fact that the DVCSs are DAG-based, but by the fact that the nodes of the DAG represent full repository snapshots, as opposed to the separate-DAG-per-file model of CVS or the mishmash model of Subversion.
So I stand by my assertion that it is impossible to perfectly represent an arbitrary (or even a typical) CVS or Subversion history in a DAG-based DVCS.
I am not claiming that this makes CVS and Subversion superior to the DVCSs; in fact I believe that in most cases the restrictions of the DVCSs bring other big advantages and are worth accepting.
> @Michael Haggerty: The `svn:mergeinfo` property does not represent merge
> history, and contrary to what is described in various documentation is not
> about merge tracking (at least as I understand it). It is about tracking merged-in
> revisions, from which you can derive merge history (i.e. which revisions were
> results of merging which revision with which revision).
Since you are using the DAG-style meanings of “merge history” and “merge tracking”, you are entirely correct. But the whole concept of a “merge” is different in Subversionland – it is more akin to what is called a cherry-pick in DVCSland. So a Subversion user would claim that svn:mergeinfo *is* recording merge history and enabling merge tracking, and he would be equally correct. It is this conceptual mismatch that prevents a DVCS from representing an arbitrary Subversion history.
> I would consider complicated svn:mergeinfo to be a case of mishandling
> Subversion repository, most probably caused by the fact that Subversion
> doesn’t have branches as first-class objects (cause by model of branches
> used by Subversion: “branching is copying”). Similarly to how one would
> consider mishandling creating single revision which span multiple branches,
> or creating revision on a tag.
My feeling is that even in the Subversion world the design decision to confound the branch/tag and path namespaces is recognized as a mistake. And the Subversion implementation of svn:mergeinfo was also constrained by that design decision. But svn:mergeinfo would be complicated in any design that allows cherry-picking, with or without first-class branches.
@esr How does reposurgeon compare to git_fast_filter by Elijah Newren?
[1] http://thread.gmane.org/gmane.comp.version-control.git/116028 and links therein
>@esr How does reposurgeon compare to git_fast_filter by Elijah Newren?
reposurgeon is designed for both interactive and scripted use, not as a plugin library for custom Python scripts. Near as I can tell, git_fast_filter can’t be used in an interactive/exploratory mode at all.
The range of operations supported out of the box is much wider, including topological cut, merge, commit deletion, and coalesce. It is possible that git_fast_filter may ultimately be more flexible if you are willing to put in the programming effort in Python.
I don’t see an analogue of the selection-set resolver.