The SCCS back end to SRC doesn’t support named symbolic references to numbered revisions, because SCCS masters don’t include a symbol table. This is one of the things RCS added.
Goddess help me, I’ve figured out how to shoehorn in this feature. And probably should not do it.
A nearly forgotten feature of both SCCS and RCS masters is the “descriptive text” field, one per master. When this was used, it was normally set at master-creation time to convey the purpose of the file for which the master is keeping history. But it can be modified after master-creation time.
The trick: swipe this field, if empty, and use it as an attribute-value dictionary encoded as a JSON object. If it’s nonempty and doesn’t begin with a ‘{‘, exit with a warning rather than stepping on legacy data.
Now, in the SCCS back end, one attribute (initially the only one) can be a symbol table. Voila!
I shouldn’t do this. The odds of anyone wanting to use the SCCS back end for production in 2016 are minimal. But I am tempted by a mad completist urge…
You’re not gonna need it……you’re not gonna need it…..repeat after me….you’re not gonna need it!
Take a deep breath. Now go play with Zola.
Holy heck that is dirty. I like it.
Does the text field have a large fixed length, or is it unlimited? Either way, make the is-this-JSON check work on a longer chunk of text (maybe arrange for a magic key-value pair to come first in the dictionary). I’ve had to do tricks like your proposed hack before and got hosed fairly badly with a single-character check.
>I’ve had to do tricks like your proposed hack before and got hosed fairly badly with a single-character check.
True. Obvious things to do:
1. Run a trial parse on the field to check if it’s syntactically correct JSON.
2. Add a required “src-version” attribute.
I just looked at the CSSC source and it appears that the description field is a dynamically allocated string list. No length limit.
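For concreteness, the guard logic I have in mind would look something like this (just a sketch with invented names, not code that’s in SRC yet):

    # Guard-logic sketch; function and attribute names are invented.
    import json
    import sys

    def load_attributes(desctext):
        "Return the attribute dictionary hiding in a description field, or bail out."
        if not desctext.strip():
            # Empty field: safe to claim it.  Seed it with a format stamp.
            return {"src-version": 1}
        if not desctext.lstrip().startswith("{"):
            sys.exit("src: description field holds legacy data; clear it by hand to enable this feature")
        try:
            attrs = json.loads(desctext)
        except ValueError:
            sys.exit("src: description field looks like JSON but does not parse")
        if "src-version" not in attrs:
            sys.exit("src: JSON in description field lacks a src-version attribute")
        return attrs

The symbol table would then just be another key in that dictionary, mapping tag names to SCCS revision quads.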
Must…resist…temptation…
My suggestion is: Add it at the moment there is an actual concrete use case for something like tagging (I assume that’s what you have in mind). It’ll still be an equally ugly-but-clever hack, but at least then it’ll be defensible, from a maintenance point of view.
>My suggestion is: Add it at the moment there is an actual concrete use case for something like tagging
Maybe there already is. SCCS actually has branching, but it’s ugly to the point of unusability because branches are named by dotted numeric quads. The main argument for hacking in a symbol table is so every branch can have a symbolic name (this is how the RCS backend already works).
The argument against this is that branching is superfluous for SRC’s use cases to begin with, and supporting it in the RCS back end was over-engineering. I don’t think this is quite true, but it’s close enough to make the expected utility of retrofitting branches into SCCS really low.
This is me resisting temptation…
@Joey: “Add it at the moment there is an actual concrete use case for something like tagging”
This has now become a contest to see who can come up with a plausible use-case for tagging in SCCS first. I’ll try one:
Tagging known-good versions of /etc configuration files per major Ubuntu release. That is, known-good-trusty.
Important for dealing with crappy package/config management. This would help address some of the issues I occasionally run into.
>This has now become a contest to see who can come up with a plausible use-case for tagging in SCCS first.
Yeah, but why not just use the default RCS back end that already supports tags?
The real challenge here is to come up with a plausible case where you need tags and must use SCCS. Good luck with that.
The Gods of the Unix Way are frowning.
Go take a cold shower.
;)
Give in to the dark side! The OCD is calling!
>Give in to the dark side! The OCD is calling!
Dude, if Force lightning from my hands were really one of the perks, I’d be so there…
Y’know, if you had a real filesystem with versioning built in, like say, VMS ODS, hacks like this wouldn’t be necessary, trololol.
Do it and be done with it. You want to, and you’re spending time and effort talking about it and publicly resisting temptation that could be better applied elsewhere. It never gets used? Who cares? You had fun doing it.
______
Dennis
…or you could just ‘tag’ it by scribbling a note on a post-it…with one of those durn fangled ‘pen’ contraptions.
Like a boss.
Are there any other things you could use the data dictionary for besides the symbol table?
As for use cases? That would be any system old enough that RCS won’t ever be available. So maybe the retro-computing people could use it for managing configs?
>Are there any other things you could use the data dictionary for besides the symbol table?
I thought of one about half an hour ago. I could use it to get rid of .srcstamp files, which are my workaround for the sad fact that Python utime is unreliable. To speed up src status, whenever you commit a change to a file, the length and MD5 hash are written to a corresponding .srcstamp file, which “src status” can look at without doing an entire checkout to see if the file should be shown as modified.
(This has the desirable consequence that bumping the file mod date doesn’t make it show as modified, only an actual content change.)
I could put that MD5 and length in the JSON, obviating the requirement for a separate file and ensuring the state info moves with the master.
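Roughly like this, that is (a sketch only; the key names are invented, not necessarily what SRC will use):

    # Sketch: fold the status stamp into the description-field JSON.
    import hashlib
    import json

    def make_stamp(workfile):
        "Record the workfile's length and MD5 at commit time."
        with open(workfile, "rb") as fp:
            data = fp.read()
        return {"length": len(data), "md5": hashlib.md5(data).hexdigest()}

    def stamp_attributes(attrs, workfile):
        "Store the stamp in the attribute dictionary and serialize it."
        attrs["stamp"] = make_stamp(workfile)
        return json.dumps(attrs)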
/me whispers “haaaaackk vaaaaaalluuuuuue” softly…
And when the odds are against him and there’s dangerous work to do, you bet your life ESR’ll see it through.
Go, ESR! Go, ESR! Go, ESR, go!
@John Dougan I have to wonder about the intersection between that set and the set of systems where Python is available.
“exit with a warning rather than stepping on legacy data.” – isn’t any such legacy data likely to be human-readable, and thus not really “stepped on” by, say, bundling it up into an attribute of the JSON dictionary?
>“exit with a warning rather than stepping on legacy data.” – isn’t any such legacy data likely to be human-readable, and thus not really “stepped on” by, say, bundling it up into an attribute of the JSON dictionary?
Yes, but the presence of such data implies a real interoperability issue with tools that aren’t SRC.
In situations like that, it’s rude to just assume that you can scribble on the metadata or lift it into your own private format without consequences. Better practice is to throw a warning so the user gets the option to clear the field himself, signaling that it’s OK to write on it.
Even if you don’t use the symbol table, I think the other uses of the data dictionary are compelling enough. I hate it when I have to split data across files that should be together.
@random832
The problem is less Python availability than the version of Python. The 1.x series could be compiled onto a lot of older iron and OSs, as was the need and custom at the time.
esr: “SCCS actually has branching, but it’s ugly to the point of unusability because branches are named by dotted numeric quads.”
Um, dotted numeric quads? You mean, like IP addresses?
Obvious humor omitted here…
I haven’t touched SCCS since 1994 at the latest.
@cathy “like IP addresses”
Only sorta. As I remember it:
1.0 – initial trunk version
1.1 – next checkin on trunk
1.1.1.0 – First branch from trunk v1.1
1.1.1.1 – Next checkin on that branch
1.1.2.0 – Second branch from v1.1
1.2 – next checkin on trunk.
etc.
Branching only goes one layer deep.
Been years for me too.
All I can say is that this is a really neat hack of a source control system.
And the best use of force lightning is for spot-welding stuff (Thing Mr. Welch is no longer allowed to do #1041).
How do you know it isn’t such a perk? Maybe the people who have gone wholly to the Dark Side just are good at keeping it secret. If you don’t make the experiment, you’ll never actually know.
>Maybe there already is. SCCS actually has branching, but it’s ugly to the point of unusability because branches are named by dotted numeric quads. The main argument for hacking in a symbol table is so every branch can have a symbolic name (this is how the RCS backend already works).
>The argument against this is that branching is superfluous for SRC’s use cases to begin with, and supporting it in the RCS back end was over-engineering. I don’t think this is quite true, but it’s close enough to make the expected utility of retrofitting branches into SCCS really low.
>This is me resisting temptation…
@esr:
Well then I guess what it comes down to is a cost/benefit analysis (using the term a bit loosely here) to decide whether or not you are willing to maintain the feature :)
Oh, go ahead and do it. And while you’re at it, fulfill your destiny on the Dark Side (or at least Zawinski’s Law) and make it read mail, too.
>Oh, go ahead and do it. And while you’re at it, fulfill your destiny on the Dark Side (or at least Zawinski’s Law) and make it read mail, too.
Latest news: I’ve written the code to use the description field as a key-value store, because that seems like a good idea in order to get rid of .srcstamp files, which are an ugly wart on the design.
Have not yet actually got rid of .srcstamp files, as I have real work to do and can’t spend too much time on this thing.
Wouldn’t the requirement to rewrite the whole file be a problem, or do you use some trick that makes it possible to avoid it?
>Wouldn’t the requirement to rewrite the whole file be a problem, or do you use some trick that makes it possible to avoid it?
The trick is that you compute a length/md5-hash combo whenever you modify the file. (Presently that’s stored in an associated .srcstamp; soon it may move into the JSON dictionary.)
Then, when you want to check if it’s modified, you run the same computation on the workfile. You don’t have to check out its base version, because you have the base version’s hash. Actually, in many cases you don’t even have to compute the hash, because the length will differ.
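In outline (not the literal SRC code, and assuming the stamp is a length/MD5 pair as described):

    # Modification-check sketch: compare lengths first, hash only if they match.
    import hashlib

    def is_modified(workfile, stamp):
        with open(workfile, "rb") as fp:
            data = fp.read()
        if len(data) != stamp["length"]:
            return True          # cheap case: sizes differ, no hash needed
        return hashlib.md5(data).hexdigest() != stamp["md5"]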
What about padding? Then you’d only have to rewrite the whole file to grow it, which you wouldn’t have to do on every operation.
>What about padding? Then you’d only have to rewrite the whole file to grow it, which you wouldn’t have to do on every operation.
What about padding what? I’m sorry, I don’t understand this at all.
@esr: Python utime is unreliable
Are you referring to the lack of nanosecond resolution? FWIW, that’s fixed in Python 3.3.
>Are you referring to the lack of nanosecond resolution? FWIW, that’s fixed in Python 3.3.
Sigh. I tried moving SRC to 3.x and failed utterly – incomprehensible string-vs.-Unicode errors. (I had previously succeeded with some 2.x stuff not much smaller; SRC seems to be a particularly hard case.)
If anyone with more experience at 2.x-to-3.x porting wants to take this on, I’d take that patch.
For (any?) sccs/cssc users, it would be interesting to have a Python 1.x port. Anyone? Bueller?
> What about padding what? I’m sorry, I don’t understand this at all.
Pad the text field you are repurposing (with blank spaces after the JSON object) to some larger fixed number of bytes, so that when you rewrite it (to add a new property to the JSON object) you can, unless the new JSON object is longer than the padded space allows, write it by seeking to the location of the JSON object in the file and overwriting it in place, without having to shift down everything after it.
Since the concern @ Jakub Narebski seemed to be raising was that making any modifications to the JSON object would require rewriting the rest of the file after it.
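In other words, something like this (purely illustrative; the slot size is made up):

    # Padding illustration: reserve a fixed-size slot for the JSON object and
    # rewrite it in place as long as it still fits.  SLOT_SIZE is invented.
    import json

    SLOT_SIZE = 1024

    def rewrite_slot(fp, offset, attrs):
        "Overwrite the reserved slot in place; fp must be open in 'r+b' mode."
        blob = json.dumps(attrs)
        if len(blob) > SLOT_SIZE:
            raise ValueError("JSON outgrew its padded slot; a full rewrite is needed")
        fp.seek(offset)
        fp.write(blob.ljust(SLOT_SIZE).encode("ascii"))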
>Since the concern @ Jakub Narebski seemed to be raising was that making any modifications to the JSON object would require rewriting the rest of the file after it.
Oh, I thought he was talking about dumping the entire base version in order to compare for a status check.
Anyway, description-field writes aren’t done with direct writes to the master file, but rather by an rcs(1) invocation. I’d rather leave that sort of optimization as its problem.
@esr: I tried moving SRC to 3.x and failed utterly – incomprehensible string-vs.-Unicode errors.
On a quick look, I suspect it’s because in Python 3, the file object returned by os.popen reads and writes Unicode strings instead of bytes. Which doesn’t really make a lot of sense when you think about it. Unfortunately, there isn’t a “binary mode” option to os.popen the way there is with the open builtin; the only way to get that kind of control over the input/output stream is to use the subprocess.Popen class and construct your own binary mode file objects to be the stdin/stdout for the child process.
>If anyone with more experience at 2.x-to-3.x porting wants to take this on, I’d take that patch.
I’d be willing to try it but I won’t have the time in the near future.
>the only way to get that kind of control over the input/output stream is to use the subprocess.Popen class and construct your own binary mode file objects to be the stdin/stdout for the child process.
Actually, a little experimentation shows that it isn’t quite that bad; using the stdout=PIPE option to subprocess.Popen appears to return bytes (at least on the Python 3 versions I have easy access to). This behavior isn’t documented, however.
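Concretely, something along these lines seems to work (a sketch; the command is only an example):

    # Sketch: getting a bytes pipe from a child process without os.popen.
    import subprocess

    def popen_bytes(command):
        "Run a shell command and return its stdout pipe, which yields bytes."
        return subprocess.Popen(command, shell=True, stdout=subprocess.PIPE).stdout

    # Example use; the rlog invocation here is purely illustrative.
    for line in popen_bytes("rlog RCS/foo,v"):
        print(line.decode("ascii").rstrip())   # decode at the boundary, then treat as text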
>I’d be willing to try it but I won’t have the time in the near future.
Well, it turns out I have some time :-), so I’m trying to set up a gitlab account so I can fork src and work on a Python 3 branch. Unfortunately, gitlab.com’s account setup seems to be either broken or braindead. Steps tried so far:
(1) Entered my name, desired username, desired password, and email address in their new user signup form.
(2) Their website told me I would get a confirmation email “shortly”. That was about an hour ago. (I have twice done the “resend confirmation email” thing, no joy.)
(3) I noticed that I could sign in using my existing github account, so tried that; got a 442 error from gitlab.com complaining that that email was already taken.
(4) I noticed that I could sign in using my Google account…but that account is also tied to the same email address…you can see where this is going. Tried it anyway; got same 442 error.
(5) Looked at the support forum for the gitlab.com website; unfortunately, it has no search function so I can’t try to see if anyone else is having this issue (none of the issues showing up as most recent were relevant), and I can’t create a new issue without…signing in.
(6) Looked in vain for any kind of tech support email or contact form for people who are having problems signing up on gitlab.com.
So, does anyone know how I can tell gitlab.com that I can’t sign up and ask them to fix it?
Alas, “gitlab signup trouble” is not yielding good hits.
>Well, it turns out I have some time :-)
Feel free to just close the repo and send me git am patches.
Well, I finally got a confirmation email from gitlab and was able to set up an account and fork the src project and add a “python3” branch to test the idea I described upthread. Making it work under python3 turned out to be fairly straightforward; in fact, it now runs under both python2 and python3 and passes all regression tests. The branch is here:
https://gitlab.com/pdonis/src/tree/python3
It’s ironic that, now that I’ve done this, the original reason for moving to python3 has evaporated, since the modified check now uses the brute force method and doesn’t care about python’s utime behavior. :-) However, I think there are a couple of good reasons to port anyway. One is that os.popen is deprecated in both Python 2 and Python 3, so switching to the current best practice, subprocess.Popen, seems like a good idea. The other is that, as I said, the same code runs on both Python 2 and Python 3, so the script’s shebang can be changed to /usr/bin/env python (which I’ve done in the branch), and the user doesn’t have to worry what his distro symlinks “python” to. This seems like a nice future-proofing of the code.
I want to let this incubate at least overnight, just to make sure nothing else occurs to me that I should look at. But if you’re still interested in a patch, esr, I can put in the merge request through gitlab when I’m convinced it’s ready.
>But if you’re still interested in a patch, esr, I can put in the merge request through gitlab when I’m convinced it’s ready.
Please do, because I am.
@esr: Feel free to just close the repo and send me git am patches.
Or I can do that instead of putting in a merge request through gitlab. (I had cloned the repo and done most of the work anyway while I was waiting to see if gitlab would get its act together.)
@esr: Please do, because I am.
Ok, will do. Probably some time tomorrow evening.
(I love it when posts cross…)
Peter, I looked at your changes. FYI, the reason popen_or_die has the unused write mode is because it’s a utility class I originally wrote for reposurgeon that I paste into a lot of my projects.
I don’t know how you did this so easily. Is your algorithm conscious enough that you could write it down?
@esr: the reason popen_or_die has the unused write mode is because it’s a utility class I originally wrote for reposurgeon that I paste into a lot of my projects.
Ah, ok. I included code to handle the “w” case, so the patched version should be a drop-in replacement in case you want to switch to subprocess.Popen on other projects.
>I don’t know how you did this so easily. Is your algorithm conscious enough that you could write it down?
Sure. A key thing I’m relying on is that all the data being dealt with is ASCII (the reasons will be clear from what follows). I’ll first describe the basic operation of the code after I changed it, and then the key steps I took to get there.
In order to run as-is under both Python 2 and Python 3, the simplest method is to use the defaults for string literals and console I/O. In Python 2, those are byte strings; in Python 3, those are Unicode. For all ASCII data, that’s acceptable, even if it would give some purists fits. :-)
OTOH, when doing I/O with a subprocess, or reading from the filesystem, I wanted to force byte strings in both Python 2 and Python 3, because those operations are used for things like the modified check, where it seems most robust to not do any Unicode conversion at all, not even with ASCII data–that way you’re absolutely sure that you’re doing a straight byte-by-byte comparison between old data and new data–or, in the case of, e.g., the fast_export method, you’re sure that you’re counting the size of the content as number of bytes. Also, forcing filesystem and subprocess I/O to be binary eliminates any potential issues with using the system default encoding, which might possibly be something incompatible with ASCII.
In Python 3, making those two things work together for ASCII data is simple: whenever byte string data that was read from a subprocess or the filesystem needs to be mixed with string literals or console I/O, decode it to Unicode (using the ASCII encoding). For example, in the fast_export method (and other methods with direct console output), the byte string data that was read is decoded before being dumped to stdout. And in the parse methods of the backends, each line read from the subprocess is decoded before being tested for various initial strings.
It turns out, though, that for ASCII data, the above strategy also works in Python 2! The reason is that all of the Python 2 operations we need to use let you mix byte string and Unicode data without complaining (for example, string formatting with the % operator, the startswith method of the str and unicode types, the differs from difflib). Of course for data in an arbitrary or unknown encoding, this can be disaster, which was one of the main motivations for changing how all that works in Python 3. But for ASCII data, it works fine, and lets us avoid a lot of hassle–though, again, it would give purists fits. :-)
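To make the pattern concrete, a parse loop ends up looking roughly like this (illustrative only, not the actual src code; the prefix being matched is made up):

    # Decode-at-the-boundary illustration: bytes come out of the subprocess,
    # everything downstream works on default string literals.
    def parse_revisions(pipe):
        revisions = []
        for line in pipe:                    # pipe yields bytes
            line = line.decode("ascii")      # unicode in Python 2, str in Python 3
            if line.startswith("revision "):
                revisions.append(line.split()[1])
        return revisions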
The key steps I took to implement this were:
(1) Switch to subprocess.Popen in the popen_or_die function; this gives byte string I/O in both Python 2 and Python 3. Switch to binary filesystem I/O. Make fixups as needed so all regression tests pass under Python 2.
(2) Run the 2to3 tool to fix up syntax that is incompatible with Python 3. Most Python 3 compatible syntax (including all of the syntax we’re using) is also Python 2.7 compatible (possibly with some __future__ imports like print_function), so the tool might be better named 2to2.7. :-)
(3) Convert all non-default string literals (i.e., all b”” as opposed to “”) to default (i.e., “”). Decode byte string data to Unicode where needed. Make fixups as needed so all regression tests pass under Python 3.
(4) Add from __future__ import print_function so script will run under both Python 2 and Python 3. (Other 2to3 syntax changes were already compatible with 2.7.)
>(3) Convert all non-default string literals (i.e., all b”” as opposed to “”) to default (i.e., “”). Decode byte string data to Unicode where needed. Make fixups as needed so all regression tests pass under Python 3.
This is the step where I crash and burn. When I get to “make fixups as needed”, I find myself in an incident pit where every change seems to throw more errors rather than fewer.
Are you brave enough to try forward-porting reposurgeon? That is actually more important than SRC, because reposurgeon has performance bottlenecks on very large repos that might be addressed in 3 but will never be fixed in 2.
Btw, that now-unused annotation code that you are leaving in as aji :-) can also be simplified under the new approach; there’s no need to encode the Unicode you receive from the JSON loader as ASCII bytes. I’ve updated the python3 branch accordingly.
(I agree with the aji strategy, btw; I hate having to recreate code that I should have kept the first time, even if it wasn’t immediately useful.)
@esr: When I get to “make fixups as needed”, I find myself in an incident pit where every change seems to throw more errors rather than fewer.
I’ve had this happen too. In fact I almost had it happen here; I explored trying to enforce byte string data instead of Unicode everywhere, on the theory that the safest thing would be to have the same internal data model (byte strings) regardless of Python version. That proved to be a bad idea. :-) Fortunately, I had an alternate plan up my sleeve, which, as I said, is viable because all of the data involved is ASCII, so I was able to quickly abandon the “byte strings everywhere” plan when it became clear that it was a hairball.
>Are you brave enough to try forward-porting reposurgeon?
I’ll take a look. With what I know of repository structure, a similar strategy to what I did here should be workable, because repository metadata, AFAIK, is ASCII. (Actual blobs are not, but I would expect that the blobs can be treated as opaque binary data that never has to be decoded–it just has to be byte-compared for things like modification checks.) The main question I would have is, is my understanding correct? Are there any places where reposurgeon has to parse and massage non-ASCII data in order to do its work?
>With what I know of repository structure, a similar strategy to what I did here should be workable, because repository metadata, AFAIK, is ASCII. (Actual blobs are not, but I would expect that the blobs can be treated as opaque binary data that never has to be decoded–it just has to be byte-compared for things like modification checks.) The main question I would have is, is my understanding correct?
Your understanding is correct. Reposurgeon does, and should, treat all data as uninterpreted byte streams. For a minor and obvious exception, see the transcode command. (Which, BTW, I added to deal with Latin-1 characters in comments in the Emacs history.)
I too tried to apply the make-it-all-byte-streams approach and failed. Oh well, at least I feel a bit less like an idiot now. (That is, knowing someone much more clueful about forward-porting than I am couldn’t make it work either.)
I am really pretty unhappy with Python 3’s decision to break the world this way.
@esr: reposurgeon has performance bottlenecks on very large repos that might be addressed in 3 but will never be fixed in 2.
Can you briefly describe what these are?
>Can you briefly describe what these are?
There are reposurgeon operations which could in principle be parallelized to threads running on multiple processors, but can’t be in practice because of the Python GIL.
Where this has actually bitten me is doing search-replace commit-metadata modifications on extremely large repositories. It came up in the Emacs conversion, and again working on GCC.
It is possible that the GIL problem might be solved in Python 3 someday, but effectively certain that it won’t be in Python 2.
(I have a bit set that I need to kick Guido’s butt about this. If Python doesn’t fix the GIL problem, I can foresee having to abandon it for a language that can handle parallelism, fairly soon.)
> It is possible that the GIL problem might be solved in Python 3 someday, but effectively certain that it won’t be in Python 2.
What was changed in the internal structure of Python’s implementation between 2 and 3 which may make it possible to remove the GIL? (Or at least make it not the contention bottleneck between parallel threads?)
>What was changed in the internal structure of Python’s implementation between 2 and 3 which may make it possible to remove the GIL?
Nothing, yet. I’m going by the sociology – if they fix it, they’ll do it in the leading-edge version, probably with a C API break, and leave the trailing-edge version backwards-compatible.
@esr: There are reposurgeon operations which could in principle be parallelized to threads running on multiple processors, but can’t be in practice because of the Python GIL.
Is using multiple worker processes not an option? On Linux, at least, there isn’t a lot of difference in overhead between threads and processes. The multiprocessing module in Python was designed for this purpose and has a similar API to the threading module.
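For the simple cases the pattern is just this (a generic illustration, not reposurgeon code):

    # Generic worker-pool pattern; not reposurgeon code, just the shape of it.
    from multiprocessing import Pool

    def rewrite_comment(comment):
        "Toy worker: a metadata search-and-replace on one commit comment."
        return comment.replace("Signed-off-by", "Reviewed-by")

    if __name__ == "__main__":
        comments = ["fix typo\nSigned-off-by: A", "add feature\nSigned-off-by: B"]
        pool = Pool()
        try:
            print(pool.map(rewrite_comment, comments))
        finally:
            pool.close()
            pool.join()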
>I have a bit set that I need to kick Guido’s butt about this.
I suspect that the reason not much has been done on this, in CPython at least, is that the Python dev team feels that the multiprocessing module is a sufficient response to the need.
>Is using multiple worker processes not an option?
Not in this case. There’s a dilemma because the natural process working set is the entire metadata graph of the repo, which in real cases can be more than 32GB wide. If you replicate this per worker you’re going to OOM pretty fast. So you either need threaded access to one copy in a single process (which means the GIL has to go) or you need to carve it into chunks before assigning them to the worker processes, then stitch them together afterwards. The latter is theoretically possible but so fiddly and complex that I would despair of ever being really confident it was correct.
>the Python dev team feels that the multiprocessing module is a sufficient response to the need.
If that’s what they think, they’re kidding themselves. That approach only works when working sets are small and/or easily partitioned. It fails on a case like reposurgeon – which, in a world of increasingly big data, we should expect to become the typical case.
@esr: I am really pretty unhappy with Python 3’s decision to break the world this way.
I think part of the issue is that Python has such a diverse user base, and therefore a diverse set of user opinions about how the language ought to handle things. Users who can take an “everything is ASCII” view of the world (or even an “everything is UTF-8” view of the world) tend to feel the way you do–Python went through these huge gyrations to “fix” something that wasn’t broken. But other users, who are forced to deal with Unicode data coming at them in various wild and wooly encodings, were apparently getting bitten by Python 2’s default conversions often enough that they were able to convince the Python dev team that it was a major issue that Python 3 needed to fix.
For Unix geeks, there is also a darker side to this. The Python 3 Unicode handling, at least to me, seems to be largely driven by Windows users, not Unix users. For example, in Windows it does make a kind of sense to have console and filesystem I/O default to Unicode text, since the Windows OS takes the same view; whereas in Unix it makes no sense at all. Armin Ronacher appears to have a similar theory in this article:
http://lucumr.pocoo.org/2014/5/12/everything-about-unicode/
So if you get to have that conversation with Guido, one thing you might want to ask him is why all this pandering to Windows when it’s Unix that’s holding up the sky. :-)
@esr: There’s a dilemma because the natural process working set is the entire metadata graph of the repo, which in real cases can be more than 32GB wide. If you replicate this per worker you’re going to OOM pretty fast.
Ouch! Yes, that would pretty much kill multiprocessing. Unless you want to try to spec out a super duper big brother to the Great Beast. 1TB memory, anyone? :-)
Other possible options would be alternative interpreters that don’t have the GIL, like PyPy or Stackless.
>I’ll take a look.
I’ve forked a copy of reposurgeon on gitlab and cloned it locally so I can browse the code. If I have any questions about the internal workings, esr, should I ask them here or email you? If the latter, is esr@thrsys.com good, or is there a better email?
>If I have any questions about the internal workings, esr, should I ask them here or email you?
Email me. No point in clogging the thread. esr@thyrsus.com will work.
Showing my ignorance here, but is it possible to throw the metadata graph into shared memory and have the worker processes use that?
>Showing my ignorance here, but is it possible to throw the metadata graph into shared memory and have the worker processes use that?
If we could figure out how to exile a bunch of Python structures to shared memory but keep pointers to them, maybe. Hard problem.
That is doable…I had to do that on a smaller scale with read-only data in Smalltalk. I found it easier to spray the data into the shared section flat, and build abstractions with offset pointers to wrap the shared section with a DB-like API. There were some performance issues with data format conversion from the shared format to the format supported by the runtime (endianness mostly) but I still won because of the extra processes.
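In Python terms the same trick would look roughly like this (a toy illustration; the record format and sizes are invented):

    # Toy illustration of flat shared data addressed by offset rather than by
    # Python object pointers.  The record layout here is invented.
    import mmap
    import struct

    RECORD = struct.Struct("40s I")      # say, a commit id plus a comment length
    NRECORDS = 1000

    def open_region(path):
        "Map the flat file so every worker process sees the same single copy."
        # Assumes the file already exists with at least RECORD.size * NRECORDS bytes.
        with open(path, "r+b") as fp:
            return mmap.mmap(fp.fileno(), RECORD.size * NRECORDS)

    def read_record(region, i):
        "Fetch record i by offset arithmetic, building objects only on demand."
        return RECORD.unpack_from(region, i * RECORD.size)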