I’m taking a management-approved break from NTPsec to do a repository conversion that dwarfs any I’ve ever seen before. Yep, more history than Emacs – much much more. More backtrail than entire BSD distributions, in fact about an order of magnitude larger than any repo I’ve previously encountered.
Over 255000 commits dating back to 1989 – now in Subversion, formerly in CVS and (I suspect) RCS. It’s the history of GCC, the Gnu Compiler Collection.
For comparison, the entire history of NTP, including all the years before I started working on it, is 14K commits going back to 1999. That’s a long history compared to most projects, but you’d have to lay 18 NTPsec histories end to end to even approximate the length of GCC’s.
In fact, this monstrous pile is so huge that loading it into reposurgeon OOM-crashed the Great Beast (that’s the machine I designed specifically for large-scale repository surgery). The Beast went down twice before I started to get things under control. The title isn’t quite true, but I post it in commemoration of Mark Atwood’s comment after the first OOM: “You’re gonna need a bigger boat.” Mark is the manager signing the checks so I can do this thing; all praise to Mark.
I’ve gotten the maximum memory utilization under 64GB with a series of shifty dodges, including various internal changes to throw away intermediate storage as soon as possible. The most important move was probably running reposurgeon under PyPy, which has a few bytes less overhead per Python object than CPython and pulls the maximum working set just low enough that the Beast can deal. Even so, I’ve been shutting down my browser during the test runs.
So I hear you ask: Why don’t you just put in more memory? And I answer: Duuuude, have you priced 128GB DDR4 RAM with ECC lately? Even the cheap low-end stuff is pretty damn pricey, and I can’t use the cheap stuff. The premise of the Beast’s design is maximizing memory-access speed and bandwidth (rather than raw processor speed) in order to deal with huge working sets – repository surgery is, as I’ve noted before, traversal computations on a graph gigabytes wide. A bit over three years after it first booted there probably still isn’t any other machine built from off-the-shelf parts more effective for this specific job load (yes, I’ve been keeping track), but that advantage could be thrown away by memory with poor latency.
Next I hear you ask: what about swap? Isn’t demand paging supposed to, you know, deal with this sort of thing?
I must admit to being a bit embarrassed here. After the second OOM crash John Bell and I did some digging (John is the expert admin who configured the Beast’s initial system load) and I rediscovered the fact that I had followed some advice to set swappiness really really low for interactive responsiveness. Exactly the opposite tuning from what I need now.
I fixed that, but I’m afraid to push the machine into actual swapping lest I find that I have not done enough and it OOMs again. Each crash is a painful setback when your test-conversion runs take seven hours each (nine if you’re trying to build a live repo). So I’m leaving my browser shut down and running light with just i3, Emacs, and a couple of terminal instances.
If 7 to 9 hours sounds like a lot, consider that the first couple tries took 13 or 14 hours before OOMing. For comparison, one of the GCC guys reported tests running 18 or 19 hours before failure on more stock hardware.
PyPy gets a lot of the credit for the speedup – I believe I’m getting at least a 2:1 speed advantage over CPython, and possibly much more – the PyPy website boldly claims 7:1 and I could believe that. But it’s hard to be sure because (a) I don’t know how long the early runs would have taken but for OOMing, and (b) I’ve been hunting down hot loops in the code and finding ways to optimize them out.
Here is a thing that happens. You code an O(n**2) method, but you don’t realize it (maybe there’s an operation with hidden O(n) inside the O(n) loop you can see). As long as n is small, it’s harmless – you never find it because the worst-case cost is smaller than your measurement noise. Then n goes up by two orders of magnitude and boom. But at least this kind of monster load does force inefficiencies into the open; if you wield a profiler properly, you may be able to pin them down and abolish them. So far I’ve nailed two rather bad ones.
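To make the trap concrete, here is a minimal illustration – not reposurgeon code, and the function names are invented: the loop you can see is O(n), but the membership test on a list hides another O(n) scan inside it, so the whole thing is quietly O(n**2). Swap the list for a set and it drops back to linear.

    def dedupe_slow(commits):
        seen = []                      # membership test on a list is O(n)
        out = []
        for c in commits:
            if c not in seen:          # hidden linear scan inside the visible loop
                seen.append(c)
                out.append(c)
        return out

    def dedupe_fast(commits):
        seen = set()                   # set membership is O(1) on average
        out = []
        for c in commits:
            if c not in seen:
                seen.add(c)
                out.append(c)
        return out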
There’s still one stubborn hot spot – handling mergeinfo properties – that I can’t fix because I don’t understand it because it’s seriously gnarly and I didn’t write it. One of my cometary contributors did. I might have to get off my lazy ass and learn how to use git blame so I can hunt up whoever it was.
Today’s bug: It turns out that if your Subversion repo has branch-root tags from its shady past as a CVS repository, the naive thing you do to preserve old tags in gitspace may cause some sort of obscure name collision with gitspace branch identifiers that your tools won’t notice. You sure will, though…when git-fast-import loses its cookies and aborts the very last phase of a 9-hour test. Grrrr….
Of such vicissitudes is repository surgery made. I could have done a deep root-cause analysis, but I am fresh out of fucks to give about branch-root tags still buried 4 gigabytes deep in a Subversion repository only because nobody noticed that they are a fossil of CVS internal metadata that has been meaningless for, oh, probably about fifteen years. I just put commands to nuke them all in the translation script.
Babysitting these tests does give you a lot of time for blogging, though. When you dare have your browser up, that is…
You could rent a 1433GB machine at Google for one-off-ish use.
>You could rent a 1433GB machine at Google for one-off-ish use.
I actually considered the cloud back when we were contemplating the Beast’s design. The problem is that, while it’s relatively easy to get a virtual machine with enough RAM and disk, the virtualization overhead means that memory latency and bandwidth are both pretty crappy and variable compared to actual physical hardware.
An alternative (I use them exclusively at work) is https://www.packet.net/bare-metal/
An AWS I3 (well, an i3.4xlarge or larger) will almost certainly be more performant in every respect than your workstation, if you use its local NVMe storage.
https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/
Not only is the 128GB of RAM expensive, you also usually have to go to an expensive CPU and motherboard to install it. Though if you’re using ECC memory you’ve already got the expensive motherboard and maybe the CPU. The usual consumer motherboards don’t support it. Nor do Intel’s mainstream CPUs; you have to move to a Xeon. (AMD is better about that; the entire Ryzen line has ECC support, though not all motherboards do. But you have to step up to a Threadripper to get support for 128GB, or an Epyc if you can manage to get your hands on one.)
> Though if you’re using ECC memory you’ve already got the expensive motherboard and maybe the CPU.
That I do. It’s a Xeon 3 hexacore.
@esr: Babysitting these tests does give you a lot of time for blogging, though. When you dare have your browser up, that is…
What happened to the old 32 bit machine the Great Beast replaced? If you still have it, and it still boots, blog from it. You can do other things from it too, and not be reduced to thumb twiddling because the Great Beast is preoccupied.
(And as for RAM, will the mobo in the Great Beast take 128GB, even if it were affordable? If you are crashing due to out-of-memory conditions, that is what swap is for. The drawback is that it will take longer, but converting GCC will take a long time no matter what you do.)
This is the sort of thing I’d think about doing on an Amazon s3 cluster, rather than locally.
>Dennis
>What happened to the old 32 bit machine the Great Beast replaced?
Down the basement near the daybed. It’s the guest machine.
Actually, I find I’m using my *phone* for browsing during tests and not being too unhappy about it.
Actually, I find I’m using my *phone* for browsing during tests and not being too unhappy about it.
Which phone do you have? I occasionally browse from my Android tablet, but a 7″ screen in landscape mode at 1280×800 resolution is at best adequate. My cell phone is the smallest, cheapest, least powerful feature phone Samsung makes. All it does is calls and SMS, because that’s all I want it to do. Anything else is something else’s job. And it’s something else’s job because I tend to need a bigger display area than a practical phone can have.
If you can browse from a phone during tests and not be too unhappy, more power to you. I wouldn’t be happy at all.
>Dennis
>Which phone do you have?
Original 1+1
>I wouldn’t be happy at all.
Probably not. I have more tolerance for small, low-refresh-rate displays than most people do these days.
A smartphone display need not be low-refresh-rate. The problem is that you’d have to admit Apple products into consideration, which you probably won’t do.
Plenty of Android devices, including the 1+1 mentioned, don’t have a low refresh rate.
>Plenty of Android devices, including the 1+1 mentioned, don’t have a low refresh rate.
The 1+1 is actually a very nice phone – or was, it’s now discontinued. I can’t recommend the line, though, as I’ve heard persistent rumors of the manufacturer cheaping out on parts in later models and stiff-arming customers when the resulting problems came home to roost.
Indeed. I use a Nexus 6 but I doubt I’ll get any of Google’s later (Nexus or Pixel) offerings. Partly due to build quality issues as well as the decision to remove the audio port on Pixel 2.
Does refresh rate have any relevance at all to reading text on an LCD screen?
Anyway, my Nexus 4 (yes, I kept it for over 4 years) started having hardware issues recently, so I bought a year-old model basically on clearance on Black Friday. 5.5″ 1440p screen. Oh baby.
With mosh and screen sessions, any serious work you can do on a command line isn’t really limited by the screen but only by the keyboard.
http://www.businessinsider.com/apple-battery-throttling-gives-customers-reason-to-distrust-2017-12
I’ve been a Mac user since 1989.
I’m not going to get rid of my current Mac, but my *next* laptop will probably be a Dell, with System 76 in second place.
Apple will have to stop sucking and stop being Evil before I’ll go back.
>This is the sort of thing I’d think about doing on an Amazon s3 cluster, rather than locally.
I already mentioned the virtualization-overhead issue, which hits hardest on the exact figures of merit that are most important for this job.
Another point worth noticing is that “cluster” is no damn good here. With only one minor exception near search-and-replace on the metadata, the core algorithms reposurgeon uses are intrinsically serial. It’s an unusual load – you really want low memory latency and high memory bandwidth, you want high speed on a single CPU core, but more CPU cores are mostly just space heaters.
I noted that the core algorithms reposurgeon uses are intrinsically serial.
My implementation language matters too. Even if it were possible for reposurgeon to get more use out of concurrency, it’s written in Python, which still hasn’t solved its global interpreter lock problem. Here again my job load is unhelpfully extreme – the expensive parts are pure computation in core, exactly the sort of thing the GIL thwarts parallelizing. If the code blocked on I/O the GIL would be much less relevant.
There is some good news here, though. I noted in the OP that PyPy rocks really hard. What I didn’t say is that it’s so effective (probably within 2x to 3x of hypothetical compiled C on this job) that I am probably not going to have to move reposurgeon out of Python after all.
I had been considering such a move very seriously, with an eye on Go. Now it looks like that won’t be necessary…at least not until and unless I have to do a repository multiple times the size of GCC’s, if ever. This seems unlikely.
I’m trying to imagine what (in the Open Source world) would be larger or more complex than the GCC code. Can someone enlighten me?
Unix (including its derivatives)?
No, because you can do multiple parts of Unix separately. What’s bigger than GCC that has to be done all in one piece?
>What’s bigger than GCC that has to be done all in one piece?
NetBSD and pkgsrc have been pointed out, but both of those could be carved into chunks if it were required.
Single-piece projects….hmmm. It would have to be a very long-running project with lots of contributors.
The only possibility that leapt to my mind was Apache, but according to Atlassian Apache is just 225MB. That’s large but not huge; GPSD is 188MB, NTPsec 60MB, GNUPLOT 39MB. By comparison GCC is 46GB (these are Git repository sizes, not commit counts).
I just accidentally stumbled over Facebook’s git repo being 54GB. That is almost certainly multi-project, however.
To overtop GCC we’d probably need something the age of GNUPLOT (1987 or earlier) with a huge volume of contributors year over year. I think I’d know of any plausible candidate. I don’t.
Is that 46GB before repacking? git fast-import creates very inefficient repositories; after repacking with aggressive settings I’d expect a size similar to the GCC git-svn mirror (which is under 2GB). (“git gc –aggressive” for a clone of that mirror takes over 45 minutes wall clock time and 40GB memory on a 16-core/32-thread Xeon. Presumably resource usage for repacking the conversion done with reposurgeon would be similar.)
>Is that 46GB before repacking?
Before, and I should have thought of that. I know this is an issue. But I was unconsciously deferring repacking until I have a Git repo I think of as production-ready.
If you had to choose between naive virtualization and naive swapping, virtualization would almost certainly be the better choice. Virtualization might reasonably add ~3x latency. Going to disk is probably ~1000x as bad.
It’s absolutely possible to get better performance out of one or the other, but I doubt you have the resources to spin up a ~100 person commercial development team for optimization.
I’m the only one using ssh into the faster machine from a fast laptop? :)
>I’m the only one using ssh into the faster machine from a fast laptop? :)
I’d do this, but switching displays is a PITA when your desktop setup is dual 4Ks and your laptop has a smallish screen because you chose it for light weight and road use.
Am I missing something? If the point is to have a box you can do other things on while you wait for the Great Beast to process a repository, all an ssh session needs is to open a terminal window where you can monitor progress.
Your standard userid profile is set assuming two 4K monitors and switching is a pain? Don’t use your standard ID. Set up a separate one for use specifically when going in remote through SSH, with the appropriate permissions to monitor the Reposurgeon process. The remote ID profile assumes single small screen and is configured accordingly. (Offhand, the ID logs into a CLI, not the GUI.)
>Dennis
Indeed, I don’t know why the Beast should even have X running on it. Under normal circumstances, it could run headless. Access ought to be SSH to screen (so that losing the connection doesn’t terminate anything). This assumes, of course, that the purpose of the Beast is reposurgery, and not to be a workstation as well.
The beast is my desktop as well as the repository surgery machine. Otherwise, it would be sitting idle a lot of the time, which would be silly.
The beast also keeps eric’s feet warm.
For me, the most surprising thing I read in this is that you still use a single machine for HPC tasks and web browsing. I thought nobody does that any more (or ever did?).
As you point out, the two workloads probably need very different optimizations as well. Web browsers are optimized to burn lots of RAM and cores, and don’t care much about latency. All the things that make the Beast great are wasted on such menial tasks.
>I thought nobody does that any more (or ever did?).
We do when managing two 4K monitors is part of the switching costs.
Besides, I really do want to have a browser on my workhorse. To use the web interface on GitLab, if for no other reason.
> We do when managing two 4K monitors is part of the switching costs.
I have those too, but I plugged them into hardware designed and marketed for gamers. Also, the machines that can run my HPC workloads generally live in places that are hard to connect monitors to.
The hot new term for this is megatasking. The classic example is freelance artists who alt-tab into a browser or game while a video is rendering or whatever. This is a good fit for how I work. I personally like to leave everything running at once and that means I need lots of RAM and lots of cores. You really _can_ tackle both workflows on the same machine, but you’ve got to be ready to spend some money.
I do wonder if reposurgery is an inherently serial job, or if there are any tricks that would make it possible to parallelize it. Perhaps a trick similar to the parareal method (a parallel algorithm from numerical analysis, used for the solution of initial value problems).
Graph operations like breadth-first-search, or pagerank, etc. can be parallelized (even on GPU).
>I do wonder if reposurgery is an inherently serial job, or if there are any tricks that would make it possible to parallelize it.
I’ve been studying the problem closely since 2010, and I don’t think so.
>Graph operations like breadth-first-search, or pagerank, etc. can be parallelized (even on GPU).
Yeah, unfortunately those aren’t the ones I need.
Most of the things reposurgeon does can be broadly characterized as: grind through a time- or topologically-and-time-ordered list of commits, doing something with each one. The thing that makes these tasks intrinsically serial is that the “something” may – often does – require lookback arbitrarily far into ancestor commits that may have been modified by previous steps of the operation. Anything that needs to know about the DAG structure is like this; “know about” unpacks to looking backward for branch splits.
In some cases you have the same non-locality problem going forward in time. Any operation that needs to query or modify branch merges will be like this. Those are much less common than the operations that are nonlocal backwards, though.
This is also why you can’t section huge repos into chunks to process separately. Your lookback (or lookforward) fails at the chunk boundaries. Algorithm fall down go boom.
The few exceptions are the operations for which there’s no lookback or lookforward, notably search or search and replace.
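For the curious, here is a minimal sketch of the shape of the problem – not reposurgeon’s actual code; the commit attributes (.parents, .branch, .is_branch_root) are hypothetical stand-ins. The point is that each step’s lookback can walk arbitrarily far back through ancestors that earlier steps may already have rewritten, which is exactly what breaks at a chunk boundary.

    def branch_of(commit):
        # Walk back through ancestors until we find where this line of
        # development split off.  The walk can reach arbitrarily far back,
        # over commits that earlier iterations may already have modified.
        node = commit
        while node.parents:
            if len(node.parents) > 1 or node.is_branch_root:
                return node.branch
            node = node.parents[0]
        return node.branch

    def rewrite_serially(commits):
        # commits must arrive in topological-and-time order, as one piece;
        # cut the list into chunks and the lookback fails at the boundary.
        for commit in commits:
            commit.branch = branch_of(commit)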
> This is also why you can’t section huge repos into chunks to process separately. Your lookback fails at the chunk boundaries.
Would it be possible to do the parareal method trick, namely chunk it, take the result as a first approximation (e.g. simplify by dropping decoration, that is, leave only the structure of the graph), then go once over it, hopefully faster?
Though the problem that reposurgeon solves is discrete in nature, not the continuous problem of differential equations that the parareal (parallel-in-time) method is for.
>Would it be possible to do the parareal method trick, namely chunk it, take the result as a first approximation (e.g. simplify by dropping decoration, that is, leave only the structure of the graph), then go once over it, hopefully faster?
Probably not. What makes parareal work, if I understand it correctly after a few minutes of research, is the combination of continuity and limited nonlocality. Graph computations fail both conditions, being discrete and requiring arbitrarily far lookaround.
I was thinking more about combining parallel-in-time with predictor-corrector approach, namely converting each chunk of history in parallel, then fixing that conversion.
That assumes that most of the result doesn’t need fixing, and that fixing is less time consuming than doing it from start.
I don’t know what issues require arbitrarily far lookarounds.
Could you do it simultaneously from both ends? That is, start from the end and go backwards at the same time as you start from the beginning and go forward?
I understand you can’t parallelise by commit, but perhaps you could by branch? You’d have a serial operation at startup to build a representation of the graph without having to actually operate on each node. Thereafter, you have N worker-threads, each capable of operating on a single node as long as its ancestors are already processed, and one manager-thread looking at the graph and assigning nodes to the worker threads.
The manager thread’s job looks something like the following:
1. Mark as “processed” any nodes on which the worker-thread successfully completed its operation.
2. Find all nodes currently “available”, that is, having no unprocessed direct ancestors.
3. If there are multiple available nodes, prioritise by the path length to the most distant descendant (counting both sides of any intervening loops).
4. Wake a worker-thread and assign it the highest-priority available node, marking that node as unavailable.
5. Repeat the above step until you run out of either available nodes or worker-threads.
6. Sleep until woken by a worker-thread, resuming from the beginning.
The worker thread’s job looks like this:
1. Operate on the assigned node.
2. Wake the manager-thread to report successful completion of the job.
3. Sleep until assigned another node, resuming from the beginning.
I’m not sure whether operating on a node can change its list of ancestors, but that would raise the possibility of a situation in which a job is begun, and only partway through is it discovered that not all required ancestors are processed yet. In this case, the worker-thread should end by reporting failure, prompting the manager-thread to add the necessary dependencies to the graph and recompute the available nodes. The prioritisation scheme can’t foresee such modification, but I don’t think that’s avoidable – and in any case, you still end up processing all the nodes, just in a potentially suboptimal order.
In the case of a straight-line graph, this collapses to serial processing (processing one node marks the next one available). In the case of a tree-shaped graph with long-lived branches and minimal merging between branches, this is effectively parallelising by branch (each linear chain of nodes can be handled in parallel). In other cases, it’s somewhere in between (each side of a loop can be handled in parallel, but merges have to wait until both are completed).
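For concreteness, here is a rough sketch of the scheduling idea described above – not reposurgeon code; process() and the data structures are hypothetical, and the priority step is omitted for brevity. (A pure-Python version of this buys little for compute-bound work because of the GIL, quite apart from the nonlocality problem.)

    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    def process(node):
        """Stand-in for the real per-commit operation."""
        return node

    def run_parallel(nodes, parents, workers=4):
        # nodes: iterable of node ids; parents: dict mapping node -> set of parent ids
        unprocessed = set(nodes)
        done = set()
        in_flight = {}                      # future -> node
        with ThreadPoolExecutor(max_workers=workers) as pool:
            while unprocessed or in_flight:
                # "Available" nodes: those with no unprocessed direct ancestors.
                ready = [n for n in unprocessed if parents[n] <= done]
                for n in ready:
                    unprocessed.discard(n)
                    in_flight[pool.submit(process, n)] = n
                if not in_flight:
                    raise RuntimeError("dependency cycle or missing parent")
                finished, _ = wait(in_flight, return_when=FIRST_COMPLETED)
                for fut in finished:
                    node = in_flight.pop(fut)
                    fut.result()            # propagate any worker exception
                    done.add(node)
        return done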
>I understand you can’t parallelise by commit, but perhaps you could by branch?
That would work for some operations…but, unfortunately, they’re the ones that are cheap enough for processing time not to be a big issue.
The things you really want to parallelize are expensive operations like translating the parse AST built from a Subversion dump file to a gitspace commit sequence. Your branchwise hack could do that – but then you’d take an O(n log n) hit to sort all the results into commit-number order at the other end; you have to do that, or you could emit a commit before one of its ancestors and crash git’s importer.
When you add the cost of that sort phase to the complexity cost of the manager for the parallelism, it starts to look like Not A Good Idea.
Maybe I’m not understanding you – some commits may change order if processed in parallel, but I don’t see how you can emit a commit before its ancestor, since no commit gets marked “available” until its ancestors have all been processed. The commits that change order should then be independent of one another (in the sense that neither is the other’s ancestor) and free to commute.
>no commit gets marked “available” until its ancestors have all been processed.
Ah, right, I missed that. OK, no sort phase. But very tricky to parallelize the expensive part. One reason is that the parser for the dump file can’t be de-serialized at all.
Looking at the profiling results, the only place that might be worth this effort is mergeinfo handling.
Is it possible to do some opportunistic parsing? Like the parser starts somewhere in the middle, recognizes something that could be parsed, parses it, but is ready to throw it away if invalidated by data closer to the beginning.
>Is it possible to do some opportunistic parsing? Like the parser starts somewhere in the middle, recognizes something that could be parsed, parses it, but is ready to throw it away if invalidated by data closer to the beginning.
Just barely possible, I think, but insanely complicated. The only way to do this would be to have multiple opens into the stream file and manage a bunch of seek pointers. The threads would have to share a set of parsed revision numbers so they didn’t duplicate work…and none of this would work on a stream piped in. Which is a real case.
As someone who’s currently doing something vaguely similar in my day-job, it’s absolutely not worthwhile trying to rework this case. Handling the data serially is a challenge, and the common case is going to be repositories which can be processed on my smartphone.
A quick bit of pricing online shows that you can build a completely new machine with lotsa memory for under $6k. Though not cheap, it’s less than the cost of a good software developer for a month.
In my mind, the question is:
Would the cost to multithread the thing, with all the associated complexities, plus debugging, plus all future development overhead due to code complexities, be less than a month?
The answer is almost certainly no. The algorithmic improvements Eric is talking about will help all cases and probably not add much complexity to the code.
And if you have spare slots and can simply add another 64GB, the cost is around $1k, which is a week’s worth of effort. Definitely not worth the hassle when there is a straight-forward work-around.
>In my mind, the question is: Would the cost to multithread the thing, with all the associated complexities, plus debugging, plus all future development overhead due to code complexities, be less than a month?
Agreed. That is the real question, and I concur with your answer. The two and a half days I spent reducing the maximum working set were efficient, but a lifetime cost of over two months of additional maintenance overhead would be bad.
>And if you have spare slots and can simply add another 64GB, the cost is around $1k, which is a week’s worth of effort.
Alas, not quite that cheap. The Beast’s slots are full up at 64GB – I’d have to fill ’em all with 128GB memory. So, two week-equivalents of effort, roughly.
Can the parser itself be made to do less, delaying some of the work to later stages where parallelisation is possible? So on reading data, it parses out only some minimal information to add a corresponding node to the graph (this is revision ID X; its parents are revision IDs Y and Z) and treats the rest as a binary blob. Then using the branchwise-parallel approach above, the worker-threads examine each binary blob to determine its meaning. The cheaper you can make the graph-building phase, the more you stand to benefit from opening up the remainder of the problem to attack by multiple cores at once.
>Can the parser itself be made to do less, delaying some of the work to later stages where parallelisation is possible?
Heh. What “later stages where parallelisation is possible”, is the problem.
The existing parser isn’t one-phase, it’s more like six-phase. Only phase 1 reads the disk and intrinsically cannot be parallelized because sequential I/O. The five passes following are all in-core transformations of the data structures – the expensive ones are where gitspace commits get built in pass four and mergeinfo properties get handled in pass six.
Here’s where we run head on into the Global Interpreter Lock problem. You effectively can’t parallelize anything in Python that doesn’t wait outside the GIL. If I’m ever forced to translate the code to Go, this will probably be the reason why.
But even before we get to that problem there’s the nasty nonlocality issue I keep trying to explain and all you hopeful people cannot seem to grasp. In general, you can only parallelize modification operations on a graph that have well-bounded locality – that is, you can break the graph up into subdomains such that there won’t be race conditions at the edges when you modify nodes because of data dependencies reaching across subdomain boundaries.
With only one or two trivial exceptions, reposurgeon operations are never like this – they’re all potentially extremely nonlocal and there’s no way to know until you actually do them. The time-expensive operations are especially nonlocal. Almost viciously so – their superlinear time cost is directly tied to their nonlocality.
This nonlocality problem is what drove the design of the Great Beast. It’s exactly why I optimized for speed of access to memory rather than total processing throughput from multicores. If you want to do fast work on a dataset with vicious nonlocality you have to slurp it all into core and then be able to hop all over that working set without falling off a performance cliff too often because of cache misses.
I know you all mean well, but if there were easy wins here I’d have collected them five years ago. This kind of algorithm-dense semi-mathematical programming is right smack in my wheelhouse – if it weren’t I wouldn’t have been able to write reposurgeon at all.
I’m beginning to smell more than a whiff of Too Clever By Half stench in all these solutions. While there’s a strong temptation to “do it better (somehow)”, each proposed solution adds almost unmentionable complexity to a problem that could be solved by throwing more of an easily available resource at it. (Note: “easily available” doesn’t necessarily mean “cheap”, but hardware is still cheaper than programming talent.)
Sometimes, when you have a Damned Big Problem®, about the only way to solve it is with A Bigger Hammer®.
>I’m beginning to smell more than a whiff of Too Clever By Half stench in all these solutions. While there’s a strong temptation to “do it better (somehow)”, each proposed solution adds almost unmentionable complexity to a problem that could be solved by throwing more of an easily available resource at it.
I was intending to be nice to people who are, in a perhaps misdirected way, trying to be helpful. But since you bring this up…yeah. People, reposurgeon is already extremely complex and algorithmically dense. Suggestions that would make its internal behavior even more difficult to comprehend are Not A Good Idea.
>Sometimes, when you have a Damned Big Problem®, about the only way to solve it is with A Bigger Hammer®.
Ken Thompson: When in doubt, use brute force.
> I was intending to be nice to people who are, in a perhaps misdirected way, trying to be helpful.
Oh, you were being nice, and I appreciated seeing the walk-throughs of the various possibilities. But having followed (via this blog) most of the history of the reposurgeon, ISTM that this was not a problem that could be solved by fancy-assed coding.
“More <STRIKE>Voltage</STRIKE> Memory, Igor!”
;-}
“Ken Thompson: When in doubt, use brute force.”
This story reminds me of a former colleague who told me in the 1980s how he used to do BIG matrix inversions in the 1970s on a computer with 8k (?) of RAM.
I think I am glad I do not remember how he did it. But I believe it involved writing and reading tapes repeatedly (and data entry was by punch card).
> reposurgeon is already extremely complex and algorithmically dense
These facts may be true, but I wonder if they’re necessary or important.
I once attempted to convert a repo four times the size of the GCC repo using reposurgeon. I let reposurgeon run for a few months before, then while, writing a one-off Perl script to do the job instead.
Granted, it was really several one-off Perl scripts, to cope with specific idiosyncrasies of various tools that came in and out of fashion over the length of the SVN history, and the one or two commits which were just easier to handle with an ad-hoc exception case. The resulting “tool” is as repo-specific as the equivalent reposurgeon command stream; however, the total time including development time for this repo import was less than the equivalent run time in reposurgeon, and I currently blame reposurgeon’s complexity for that.
My tool was sed-like, processing each SVN commit individually and feeding it to git fast-import before moving on to the next. I had a “save/restore git repo refs state” feature that let me work on the repo a few tens of thousands of commits at a time, and rework parts of the repo that weren’t converted satisfactorily the first time by an earlier iteration of the tool.
Unfortunately, the result, while complete and technically correct, was unusable. Nobody wants to clone a 224GB git repo, especially me.
>total time including development time for this repo import was less than the equivalent run time in reposurgeon, and I currently blame reposurgeon’s complexity for that.
You should probably blame CPython (mostly – see below). The speedup I’m getting from PyPy is pretty impressive – as I’ve noted before, I think I’m now getting throughput only 2 to 3x under compiled C and pretty certainly much faster than interpreted Perl (which, granted, is faster than CPython). JIT compilation to machine code, man – when it works, it really works. Perl won’t give you that.
But. This is where I admit there was a performance boojum in my code, until last week, that could have bit you hard. There was a pass doing commit canonicalization that never made it out of the measurement noise even on the Emacs history but blew up hideously on GCC’s. I think it was O(n**3) on something though I’m not sure what and now probably won’t find out.
There’s still a bad spot at handling mergeinfo properties. Your Perl probably didn’t do that. If I manage to linearize that pass, or even bring it down to O(n log n), I think my GCC translation times will halve.
>My tool was sed-like, processing each SVN commit individually and feeding it to git fast-import before moving on to the next.
OK, I know what limitations that has. There are a couple of common operator errors that will make it fall down hard – I’m thinking particularly of botched branch creations where a shell copy is followed by svn file copies. You can’t do mergeinfos (which may be OK; older SVN repos seldom used them). There will be trouble near inheritance of the executable bits on files, or any other metadata carried by per-node Subversion properties.
It’s possible you made a good trade by hand-patching around those issues, then – depends partly on how many metadata malformations there were in the history. It would be academically interesting to see how reposurgeon does now under PyPy with the canonicalization blowup gone. At a guess, scaling up from the size of GCC’s repo, I’d predict a through time of somewhere between 18 and 81 hours.
>Unfortunately, the result, while complete and technically correct, was unusable. Nobody wants to clone a 224GB git repo, especially me.
I hear that. I think the GCC conversion is going to be teetering on the edge of usability. I’m thinking of suggesting that they pick a recent cleave point, like GCC 4 or 5 maybe, archive the older part, and only take the newer part live.
They can always use filter-branch to split the history into multiple repos and then use git-replace to connect them together.
Pretty soon you’ll have reinvented the mainframe.
250k revisions is smaller than either pkgsrc or NetBSD src. We do the automatic conversions from CVS to Git and Mercurial for NetBSD on a machine with 64GB RAM, and that’s primarily so that the vast majority of all write operations stays entirely in RAM. One of the central design issues here seems to be depending on Python for string storage. It’s bad at that. Very bad. The per-object overhead is 24 bytes for integers and ~35 bytes on 64-bit platforms, so memory use adds up very fast. That’s one of the reasons why cvs2fossil was designed around SQLite as the storage layer — the main memory consumption is small, and as long as the working set fits comfortably into the cache, the scaling works well.
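(If you want to see that overhead for yourself, sys.getsizeof gives a quick read; the exact numbers vary by interpreter, version, and platform, so treat the output as illustrative rather than authoritative.)

    import sys

    print(sys.getsizeof(1))          # a small int still carries tens of bytes of header
    print(sys.getsizeof(""))         # even the empty string is several dozen bytes
    print(sys.getsizeof("x" * 40))   # payload plus the fixed per-object overhead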
>One of the central design issues here seems to be depending on Python for string storage. It’s bad at that. Very bad.
On the one hand, that is so, and it is a problem I’m well aware of.
On the other hand, Python has virtues that pay for its faults. A major one, back when I was first proving the concept, is that it is extremely friendly to exploratory programming. Nobody had ever tried writing anything much like reposurgeon before; choosing a language that bought runtime efficiency at the cost of raising the overhead of rapid experimentation would have been a bad trade. If I’d been restricted to C, forget it – reposurgeon wouldn’t have happened at all, it would have been just too difficult to get the resource management right.
Later, when it was a production tool, the value of exploratory flexibility fell less relative to efficiency than you might expect. Each conversion brought challenges that often required significant extensions – for example, Emacs needed a macro facility, and GNUPLOT needed ChangeLog analysis.
On the gripping hand, reposurgeon looked like it was hitting a hard performance ceiling on the Beast – but then PyPy turned out to be an apparently ideal match for the job load. I’m glad now that I didn’t hare off on a Go translation when the idea first looked appealing, because getting performance within 2x to 3x of C and keeping Python’s late-binding flexibility is a really excellent outcome.
What do you use for CVS to git lifting?
“cvs2fossil” and then “fossil export”. I could likely write a direct converter with output for git-fastimport, but since the code works, and given the occasional history mess-up it is desirable not to go directly to git (*cough* forced commit mess), it was never high enough on the priority list.
Exploration for handling issues was the other big reason why I went with SQLite initially. Being able to stop after the initial RCS parsing and being able to script processing is a huge win. Given that I have the option of fixing up the real repository, it means I can solve an issue permanently and don’t have to add ad-hoc logic for it in later conversion steps. That simplifies the code :)
Man, you were not kidding about it being costly:
https://www.newegg.com/Product/Product.aspx?Item=9SIA7S667E4006&cm_re=128GB_DDR4_RAM_with_ECC-_-9SIA7S667E4006-_-Product
> Man, you were not kidding about it being costly:
Yeah, we (I) had been looking at some Crucials from newegg.com, probably organized 8 x 16GB (the GBoM’s mobo has 8 memory slots, and you get somewhat better performance if you fill them all and let the memory decoder hardware interleave across all 8), and the best price I could find was very close to that (if not a tiny bit more).
o_0
Even at that price, it’s probably cost-effective. Programmer time is much more expensive than hardware. It sounds like Eric is doing this job gratis, but if he weren’t, he would have a very good case for going to the bill-payer and saying “look, you can buy a pile of memory for me, or you can keep paying me for another N days while I figure out how to do without it; your call.”
>Even at that price, it’s probably cost-effective.
I’d say the working-space problem cost me about 2.5 days, but there’s another reason to be glad I put in that time. PyPy, which was the key to pulling maximum working set down, sped up my test runs by a large factor, shortening what would be the billable hours if I were billing.
Damn. I’m used to that being the price for a whole computer, not just one part.
Initially, I was going to make a smart-ass remark about getting some crowd-funding going, but given the importance of GCC to the wider open-source ecosystem, isn’t perhaps asking for funds to throw some hardware at the problem justified?
@esr
The most important move was probably running reposurgeon under PyPy
How difficult was this? Any lessons learned?
>How difficult was this? Any lessons learned?
Not even a little bit. The only port problem I ran into was one predicate that used “is” where I should have used “==” – in retrospect I think it only worked accidentally in CPython.
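A tiny illustration of that class of bug – a hedged example, not the actual reposurgeon predicate: “is” tests object identity, “==” tests equality, and CPython’s interning of small ints and some strings can make “is” appear to work right up until it doesn’t.

    prefix = "git"
    a = prefix + "-fast-import"    # built at runtime, so no constant folding
    b = "git-fast-import"
    print(a == b)   # True: equal contents
    print(a is b)   # usually False: distinct objects; any True here is an
                    # interning accident, which is exactly the trap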
Something that could have been a problem if I hadn’t been persnickety about writing pretty code well before the move is a subtle effect of the fact that PyPy uses mark-and-sweep GC while CPython does reference counting. This changes the timing of object finalizers in a way that is usually invisible but can cause problems with file closes.
In CPython, if you write
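    # Representative example of the pattern being described (the exact snippet
    # from the original is not preserved here):
    open(path, "w").write(data)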
the close finalizer for the file object returned by open() will fire as soon as its reference count drops to zero, which should happen as soon as the whole expression is done evaluating. In PyPy the finalizer isn’t called until the unreferenced file object is noticed and its memory tossed in the free pool by the next GC pass. Hello, resource leak!
The correct, idiomatic way to write this in modern Python is:
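    # Representative form, using the "wp" name the next sentence refers to:
    with open(path, "w") as wp:
        wp.write(data)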
where the point of the “with” block is that wp’s finalizer is called on the way out. This works in both dialects.
Hmf. I got it right the first time. I like Python ‘with’ a lot; the way it enlists visual scoping to make data lifetimes explicit is sweet. I miss it in Go.
Good to hear. Looks like PyPy has become a serious contender.
>Good to hear. Looks like PyPy has become a serious contender.
Oh, hell yeah. I’ve been seeing the results and reading their documentation and that devteam has got some serious game goin’ on. I’m impressed.
I’m reminded of a legend from my youth of the early days of the addition of virtual memory (paging) to the DEC-10 / TOPS-10 system. A LINK-10 run that normally took tens of minutes suddenly took tens of hours instead when the job “went virtual”.
TOPS-10 object code had in-place back-chaining for external symbols, so the process of resolving a symbol required a backward scan through arbitrary amounts of memory. Multiply that by hundreds of symbols and tiny working sets, and instant owie.
… virtual memory (paging) to the DEC-10 / TOPS-10 system
On VAX/VMS the truism was “when it swaps it stops”.
The most indelible memory for me of a machine thrashing itself to death on swap is at some point when I was a kid (before I knew what swapping was), when I was drawing something big in MS Paint. I discovered that the (already huge) image size I was using wasn’t big enough for whatever I was doing, and increased the size of the image in both dimensions by some fairly large factor (I want to say something like 10x, for an image already thousands of pixels wide). I was rather bewildered when the computer locked up. Years later, I learned about swap, and the latencies involved in disk vs. RAM access.
Some time after that, I remembered that incident:
“Ohhhhhh. *That’s* what happened.”
@esr –
> Nearly two years after it booted ….
Uh, try a bit over three.
“The Great Beast is here!
Posted on 2014-12-07 by Eric Raymond”
Funny how the time flies when you’re hacking repos….. ;-}
>Uh, try a bit over three
Fixed, thanks.
> Over 255000 commits
> 14K commits
> you’d have to lay 168 NTPsec histories end to end to even approximate the length of GCC’s.
I don’t understand how these numbers match up. 255/14 = 18. A missing zero? A distinction between ‘NTP’ and ‘NTPsec’?
>I don’t understand how these numbers match up. 255/14 = 18. A missing zero? A distinction between ‘NTP’ and ‘NTPsec’?
No, just a stupid typo. Fixed.
gcc? The coming posts are going to be interesting.
And if you had written Reposurgeon in C, you would have had a smaller footprint and better control of memory. So much for C being obsolete. :)
@IGnatius
“And if you had written Reposurgeon in C, you would have had a smaller footprint and better control of memory.”
Probably. But would we have Reposurgeon at all if Eric, or anyone else, had tried to write it in C?
As they say, premature optimization is the root of all evil. A non-working program will not beat a working program, however inefficient.
So much for C being obsolete.
esr already answered this:
“If I’d been restricted to C, forget it – reposurgeon wouldn’t have happened at all”
http://esr.ibiblio.org/?p=7792&cpage=1#comment-1923240
That’s what C++ is for. :)
>esr already answered this:
“If I’d been restricted to C, forget it – reposurgeon wouldn’t have happened at all”
I should be more specific about this, since I think the underlying problem is one the C-forever diehards have failed to grasp. And, well, take me seriously when I speak of C’s limitations; I’ve been programming in it for 35 years and some of my oldest C code is still in wide production use.
Speaking from that experience, there are some things only a damn fool tries to do in C, or in any other language without automatic memory management.
This is another angle on Greenspun’s Law: “Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.” Anyone who’s been in the trenches long enough gets that Greenspun’s real point is not about C or Fortran or Common Lisp. His maxim could be generalized in a Henry-Spencer-does-Santayana style as this:
“Those who do not have automatic memory management in their language are condemned to reinvent it, poorly.”
In other words, there’s a complexity threshold above which lack of AMM becomes intolerable….
…you know what? Screw explaining this in a comment. This wants to be a post.
As every true hacker knows, the only appropriate tool to use on emacs code is “rm -rf emacs/” as you watch the stallmanian pestilence of emacs go into the bit bucket while you then open up vi and continue on with real work.
I was looking at $$$ cost of getting new memory modules or the political capital favors and credits cost of getting some AWS credits on a high-memory EC2 instance type machine for this work. I’m glad that 3ish days of Thinking Hard on ESR’s part solved this problem instead, uncovering and fixing a couple of performance bugs in Reposurgeon and also adding PyPy to ESR’s ready-to-hand toolbox.
Now I have to Think Hard for a while about what generalizable lessons I have learned here.
>Now I have to Think Hard for a while about what generalizable lessons I have learned here.
This is me trying to think like a manager.
The lesson I think I would take home is that when your talent goes into a situation like this, you want to think hard about the break-even point between the cost of his time and the hardware required to reduce it. Kind of set a tripwire in your head – if he looks like he’s going to go over X days, screw it, you send him a CARE package of expensive DIMMs.
You may not want him to know exactly where the tripwire is, though, or even that there is one. An acquisitive sort might game that knowledge to get the shiny hardware even when he doesn’t actually need it.
The payoff calculation is a little tricky, because the moment you reach tripwire time you also know that you would have been more efficient to ship the DIMMs sooner.
I’ve seen multiple cases of companies and projects flush with money (or worse, credit) constantly purchasing their way out of growth exposed constraints, and in so doing plant the seeds of their own self-poisoning destruction, either crashing hard when they hit a problem that can’t be purchased away or blowing away like a puffball in a drought when the money-well even barely begins to start drying up. And I’ve seen projects wither and die because all they had was Hard Thinking but didn’t reach for (or didn’t have to hand) enough bullets and butter when they were needed. It’s a Hard Problem, and the more workable solutions are cheats and lies.
>I’ve seen multiple cases of companies and projects flush with money (or worse, credit) constantly purchasing their way out of growth exposed constraints, and in so doing plant the seeds of their own self-poisoning destruction,
Point. But if you’re estimating the break-even point correctly, that won’t happen because you’ll save more in opportunity costs than you’ll spend on resources.
What your experience seems to say is that a lot of people are very bad at this kind of estimation. I guess that’s not news. I wouldn’t be confident I could do it well myself.
What makes it really hard, as you pointed out, is that by the time a manager, even a good one working with good information, knows where the breakeven point is, it’s now in the past.
So, instead, they have to predict the future. Which is hard.
And that is when the information is good and the trust is high. More commonly, the information and the trust are corrupted by everyone trying to bias preferred outcomes about deliverables, schedules, workloads, budgets, and buying shiny new toys.
@esr: Out of curiosity, who asked you to assist on the GCC repo conversion?
Was this a case similar to emacs where RMS got behind it and pushed to attract new contributors for whom not using git was a barrier to entry?
>Dennis
>@esr: Out of curiosity, who asked you to assist on the GCC repo conversion ?
I heard they were speculating about moving and volunteered.
It’s critical infrastructure. And who was, or is, better qualified to get them over the hump? Seemed like a no-brainer.
Ah. The coder version of the Cajun Navy!
>Ah. The coder version of the Cajun Navy!
Not at all an unusual thing in the open-source world. We have a strong ethos of stepping up for a job when you detect that you’re the person qualified to do it and on the spot.
Are you using __slots__ to reduce the memory usage of Python objects?
>Are you using __slots__ to reduce the memory usage of Python objects?
Yes, just as the PyPy documentation advises.
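For anyone who hasn’t met the trick, a minimal sketch – the class is invented, not reposurgeon’s: declaring __slots__ suppresses the per-instance __dict__, which adds up fast when you have hundreds of thousands of small objects, under both CPython and PyPy.

    class Commit(object):
        __slots__ = ("committer", "comment", "parents", "branch")

        def __init__(self, committer, comment):
            self.committer = committer
            self.comment = comment
            self.parents = []
            self.branch = None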
Eric,
Have you found any more exposed-by-sheer-scale problems, or was that lurking cubic bit the only one to speak of so far for this conversion?
>Have you found any more exposed-by-sheer-scale problems
There’s one in the mergeinfo handling, a particularly arcane corner of the subversion dump file reader.
It is definitely blowing up O(n**2); I have not yet been able to determine whether this is intrinsic or something that could be reduced with better code.
Dumb question again – what is mergeinfo handling quadratic _in_? By looks of things, to my untutored, definitely-not-cometary-contributor eyes, maybe number of (subversion-side?) merges?
ISTR you have a collection of regression-test repos – do you have enough to do a rough curve fit? (and possibly get a handle on whether that problem is intrinsic or reducible)
>Dumb question again – what is mergeinfo handling quadratic _in_?
I don’t know; I didn’t write that code, and my attempts to understand it have failed so far. I think what’s going on is that the cost of building FileMaps is quadratic, but I don’t grok what the FileMaps are actually doing so my visibility into what actually gets executed is limited.
>ISTR you have a collection of regression-test repos – do you have enough to do a rough curve fit?
Nice idea but this is one of those cases where the timing is lost in profiling noise until it’s suddenly the size of Jupiter….
If the mergeinfo stuff has been lost in profiler noise, even on huge repos like NetBSD, Emacs, etc, until now, where it emerges and declares “ALL YOUR COMPUTRONIUM ARE BELONG TO US”, that sounds an awful lot like that “lurking cubic bit”, to use my words.
Quadratic might only be a _lower_ bound on mergeinfo growth. As you said in the previous comment, damned if you know because you’ve hit it and bounced.
>that sounds an awful lot like that “lurking cubic bit”, to use my words.
Yes. Yes, it does.
Perhaps you should get some GPU’s into the mix?
>Perhaps you should get some GPU’s into the mix?
Er, to do what? Vector calculations aren’t going to help here.
My bad, intended to say ASICs. I have many times wondered if C/C++ could be optimized into ASICs for specific use cases. This seems like such a plausibly well-constrained problem. No?
>My bad, intended to say ASICs. I have many times wondered if C/C++ could be optimized into ASICs for specific use cases. This seems like such a plausibly well-constrained problem. No?
C is already so close to being optimal for present CPUs (and anything else approximating what Blaauw and Brooks call the “standard architecture”) that I doubt there’s much to be gained here.
just as a side note, given the overwhelming size of this repo, perhaps you can spot useful optimizations for git itself in manipulating it.
>just as a side note, given the overwhelming size of this repo, perhaps you can spot useful optimizations for git itself in manipulating it.
Unlikely, alas. I don’t know that code at all well.
Anyway, that’s unlikely to be a good investment of time – there have been a lot of very capable people looking for speed wins there, enough that my chances of just walking in and spotting one seem low.
Another dumb question – presuming such exists, is it worth trying to chase down a repo that is 3x smaller than your current Everest (and thus ~3x bigger than previous OMGHEUG repos that have made reposurgeon strain at its previous limits)?
Idea is to both get a handle on how remaining nasty bits scale and speed iteration on fixing said nasty bits. Am I getting too clever by half?
>Am I getting too clever by half?
I’m not sure. I’d rather just apply a profiler. Python seems short of good ones, alas.