To do large code changes correctly, factor them into a series of smaller steps such that each revision has a well-defined and provable relationship to the last.
(This is the closest I’ve ever come to a 1-sentence answer to the question “How the fsck do you manage to code with such ridiculously high speed and low defect frequency?” I was asked this yet again recently, and trying to translate the general principle into actionable advice has been on my mind. I have two particular NTPsec contributors in mind…)
So here’s a case study, and maybe your chance to catch me in a mistake.
NTP needs a 64-bit scalar type for calendar calculations; what it actually wants is 32 bits of seconds since a far-past epoch and 32 bits of fractional-second precision, which you can think of as a counter for units of seconds * 2^-32. (The details are a little messier than this, but never mind that for now.)
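For concreteness, here’s a tiny illustration of that 32.32 fixed-point idea (my example, not actual NTP code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* High 32 bits: whole seconds.  Low 32 bits: the fraction,
	   counted in units of 2^-32 seconds.  Here, 1.5 seconds: */
	uint64_t t = ((uint64_t)1 << 32) | 0x80000000u;

	printf("%u + %.6f seconds\n",
	       (uint32_t)(t >> 32),		/* whole seconds */
	       (uint32_t)t / 4294967296.0);	/* low half / 2^32 */
	return 0;
}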
Consequently, one of the archaisms in the NTP code is an internal type called vint64. It dates from the era of 32-bit machines (roughly 1977 to 2008). In those days you couldn’t assume your C compiler had int64_t or uint64_t (64-bit integer and unsigned-integer types). Even after the 64-bit hardware transition, it was some years before you could safely assume that compilers for the remaining 32-bit machines (like today’s Raspberry Pis) would support int64_t/uint64_t.
Thus, a vint64 is an NTP structure wrapping two 32-bit integers. It comes with a bunch of small functions that do 64-bit scalar arithmetic using it. Also, sadly, there was a lot of code using it that didn’t go through the functional interface, instead exposing the guts of the vint64 structure in unclean ways.
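Stripped of portability conditionals, its shape is roughly this (a sketch, not the verbatim header; the real thing swaps the half-word order on big-endian machines so the halves line up with the scalar views):

typedef union {
	struct {
		uint32_t lo;	/* low half */
		int32_t  hi;	/* high half, signed */
	} d_s;
	struct {
		uint32_t lo;
		uint32_t hi;	/* high half, unsigned */
	} D_s;
	int64_t  q_s;		/* whole value, signed */
	uint64_t Q_s;		/* whole value, unsigned */
} vint64;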
This is, for several reasons, an obvious cleanup target. Today in 2016 we can assume that all compilers of interest to us have 64-bit scalars. In fact the NTP code itself has long assumed this, though the assumption is so well-hidden in the ISC library off to the side that many people who have worked in the main codebase probably do not know it’s there.
If all the vint64s in NTP became typedefs to a scalar 64-bit type, we could use native machine operations in most cases and replace a lot of function calls and ugly exposed guts with C’s arithmetic operators. The result would be more readable, less bulky, and more efficient. In this case we’d only pare away about 300 LOC, but relentless pursuit of such small improvements adds up to large ones.
The stupid way to do it would have been to try to go from vint64 to int64_t/uint64_t in one fell swoop. NSF and LF didn’t engage me to be that stupid.
Quoting myself: “A series of smaller steps such that each revision has a well-defined and provable relationship to the last.”
Generally, in cases like this, the thing to do is separate changing the interface from changing the implementation. So:
1. First, encapsulate vint64 into an abstract data type (ADT) with an entirely functional interface – un-expose the guts.
2. Then, change the implementation (struct to scalar), changing the ADT methods without disturbing any of the outside calls to them – if you have to do the latter, you failed step 1 and have to clean up your abstract data type.
3. Finally, hand-expand the function calls to native C scalar operations. Now you no longer have an ADT, but that’s OK; it was scaffolding. You knew you were going to discard it.
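To make step 1 concrete: a guts reference like res.D_s.hi becomes vint64hiu(res), via accessor macros roughly like these (a sketch of the shape, not the full set from the diff):

/* Step 1: all guts access goes through named accessors.  With the
   union representation these are trivial wrappers, and lvalues. */
#define vint64lo(n)	(n).d_s.lo	/* low half (same bits signed or unsigned) */
#define vint64his(n)	(n).d_s.hi	/* high half, signed view */
#define vint64hiu(n)	(n).D_s.hi	/* high half, unsigned view */
#define vint64s(n)	(n).q_s		/* whole value, signed */
#define vint64u(n)	(n).Q_s		/* whole value, unsigned */
#define setvint64u(n, v)	((n).Q_s = (v))	/* whole-value store */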
The goal is that at each step it should be possible, and relatively easy, to eyeball-check that the transformation you did is correct. It helps a lot to have unit tests for the code you’re modifying – then, one of your checks is that the unit tests don’t go sproing at any step. If you don’t have unit tests, write them. They’ll save your fallible ass. The better your unit tests are, the more time and pain you’ll save yourself in the long run.
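The flavor of check I mean is nothing fancy. Using the accessor sketch above (hypothetical, not NTPsec’s actual suite):

#include <assert.h>

static void check_vint64_roundtrip(void)
{
	vint64 x;

	/* Store through the whole-value setter, read back through the
	   half-word accessors.  This must keep passing at every step,
	   whatever the representation underneath happens to be. */
	setvint64u(x, 0x123456789abcdef0ULL);
	assert(vint64hiu(x) == 0x12345678u);
	assert(vint64lo(x)  == 0x9abcdef0u);
}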
OK, so here’s your chance to catch me in a mistake.
https://gitlab.com/NTPsec/ntpsec/commit/13fa1219f94d2b9ec00ae409ac4b54ee12b1e93f
That is the diff where I pull all the vint64 guts exposure into an ADT (done with macro calls, not true functions, but that’s a C implementation detail).
Can you find an error in this diff? If you decide not, how did it convince you? What properties of the diff are important?
(Don’t pass over that last question lightly. It’s central.)
If you’re feeling brave, try step 2. Start with ‘typedef uint64_t vint64;’, replacing the structure definition, and rewrite the ten macros near the beginning of the diff. (Hint: you’ll need two sets of them.)
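If you try it, your answer should have roughly this shape (my sketch; note that, unlike the struct forms, most of these expansions are no longer lvalues):

typedef uint64_t vint64;

/* The unsigned set rebuilds the accessors on shifts and masks: */
#define vint64lo(n)	((uint32_t)((n) & 0xffffffffu))
#define vint64hiu(n)	((uint32_t)((n) >> 32))
#define vint64u(n)	(n)
#define setvint64u(n, v)	((n) = (uint64_t)(v))

/* The signed set views the same bits through casts: */
#define vint64his(n)	((int32_t)((n) >> 32))
#define vint64s(n)	((int64_t)(n))
#define setvint64s(n, v)	((n) = (uint64_t)(int64_t)(v))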
Word to the newbies: this is how it’s done. Train your brain so that you analyze programming this way – mostly smooth sequences of refactoring steps with only occasional crisis points where you add a feature or change an assumption.
When you can do this at a microlevel, with code, you are inhabiting the mindset of a master programmer. When you can do it with larger design elements – data structures and entire code subsystems – you are getting inside system-architect skills.
Nb, can/could you write a semantic patch for this refactoring (Coccinelle, Undebt, …)?
>Nb, can/could you write a semantic patch for this refactoring (Coccinelle, Undebt, …)?
I was unaware of these tools. Now that I’ve taken a fast look at Undebt, I think it is quite likely I could have written most of it (except for the macro definitions at the beginning) in a version of Undebt tweaked to be intelligent about C syntax.
Yes, I can. Or, at least I think I can. It looks to me like there’s a ‘q’ != ‘s’ problem.
Ideally, something like this should look mechanical – like a sed script ran and replaced one bit with another. Ideally, it *was* a sed script, but getting the corner-cases for that to work is very difficult and usually not worth the effort.
On a related personal-preference basis from someone who’s done this thousands of lines at a time, I’d recommend going with C99/C++ inline functions instead of macros. They allow for explicit type-checking to be done by the compiler and allow you to check your assumptions about data directions. M_NEG() and its friends are simply evil IMO.
Also, the integer data casts being used extensively worry me. They explicitly override what the compiler would normally do and potentially bypass automated checks. If they’re required, it would be preferable to put that bit of cruft into a small library somewhere so that the rest of the code need not do it. That would allow cleaner, more obvious code that exposes the logic being performed, with the memory-format-specific code off in a corner where it need not occupy brain-space and can be extensively unit tested.
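To make that concrete, here’s the sort of thing I mean (hypothetical names):

/* Inline-function accessors: the compiler checks argument types,
   and reads are visibly distinct from writes. */
static inline uint32_t vint64_get_hi_u(const vint64 *n)
{
	return n->D_s.hi;
}

static inline void vint64_set_hi_u(vint64 *n, uint32_t v)
{
	n->D_s.hi = v;
}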
>Yes, I can. Or, at least I think I can. It looks to me like there’s a ‘q’ != ‘s’ problem.
Can you be more specific? (If there’s something there, the unit tests didn’t catch it.)
Just to clarify: From the wording of this point, one might get the impression that most of the work was done on vint64 and the utility functions that came with it. Am I correct in assuming that this impression would be false — that most of the actual work consisted of cleaning up tons of breakage in the code that touched the guts of vint64 and now could no longer do that?
>Am I correct in assuming that this impression would be false — that most of the actual work consisted of cleaning up tons of breakage in the code that touched the guts of vint64 and now could no longer do that?
Yes. You can see this in the precondition bands of the diff.
The names of the macros preclude mechanical change:
> #define vint64lo(n) (n).d_s.lo
Why not vint64los(n), to be explicit and consistent – use a (s)igned/(u)nsigned suffix?
> #define vint64his(n) (n).d_s.hi
> #define vint64hiu(n) (n).D_s.hi
Where is vint64lou(n)?
I guess that’s because lower part is the same for signed and for unsigned.
Maybe inc/dec macros would be worth it?
Also, why did you flatten conditional operators? See e.g. https://gitlab.com/NTPsec/ntpsec/commit/13fa1219f94d2b9ec00ae409ac4b54ee12b1e93f#61b762d1e673f1ae3229f3c514638457abc5de84_389_383
https://gitlab.com/NTPsec/ntpsec/commit/13fa1219f94d2b9ec00ae409ac4b54ee12b1e93f#61b762d1e673f1ae3229f3c514638457abc5de84_806_797
WTF with the original code? res.q_s = days*SECSPERDAY + secs; would be too clear, or what?
>I guess that’s because lower part is the same for signed and for unsigned.
Correct. This is one of the things the function-call-like macros express better than the original inline structure references.
>Maybe inc/dec macros would be worth it?
I did consider it. But remember that the macros are scaffolding to be removed; as long as I can resolve that expression to an increment of the final scalar type, it’s OK.
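Had I added one, it would have been scaffolding of the same kind (hypothetical sketch):

/* Increment against the struct representation; the carry into the
   high half must be done by hand. */
#define incvint64(n) do {			\
		if (++(n).D_s.lo == 0)		\
			++(n).D_s.hi;		\
	} while (false)

After the scalar switch the same macro shrinks to ((n)++), at which point it can be expanded away like the rest of the scaffolding.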
>Also, why did you flatten conditional operators?
Not sure what you mean here.
>WTF with the original code? res.q_s = days*SECSPERDAY + secs; would be too clear, or what?
Got me. Dave Mills’s choices were sometimes a bit odd.
> I was unaware of these tools.
Coccinelle (written mainly in OCaml) was and is used quite extensively in Linux kernel development, and I think it would be the better fit for C code. Git started to use it too, for example for the literal sha1 -> object_id abstraction translation. There is also a repository of transformations.
Undebt (Python) is newer, and examples are for Python, not C.
And those are not the only open-source tools for semantic patching…
P.S. I think it is a good idea to post the semantic patch, or transformation, in the commit message.
> Can you be more specific?
https://gitlab.com/NTPsec/ntpsec/commit/13fa1219f94d2b9ec00ae409ac4b54ee12b1e93f#61b762d1e673f1ae3229f3c514638457abc5de84_396_384
You replaced:
res.Q_s -= 0x80000000;
with:
setvint64u(res, vint64s(res)-0x80000000);
which I think should be instead:
setvint64u(res, vint64u(res)-0x80000000);
>NTP needs a 64-bit scalar type for calendar calculations; what it actually wants is 32 bits of seconds since a far-past epoch and…
Wait, I’m not familiar with the guts of NTP, but doesn’t this run you into a 2k38 problem (or whenever the rollover happens with your epoch)?
In general, don’t you need at least 38 or 39 bits for any second count representing a date to be robust against overflow? The logic I’m using here is that 32 bits gets you roughly a century, half on each side of the epoch using signed integers, and each bit doubles the amount of time you can cover. 32 bits from the Unix epoch puts the overflow point closer in the future than the epoch is in the past, so it’s obviously inadequate. 38 or 39 bits will contain all of recorded history on the past side of the epoch (depending on whose figures you use for the invention of writing), and a similar length into the future, and so ought to be relatively safe.
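A quick check of that arithmetic (my sketch):

#include <stdio.h>

int main(void)
{
	const double secs_per_year = 365.25 * 86400;

	/* A signed width of b bits covers +/- 2^(b-1) seconds. */
	for (int bits = 32; bits <= 40; bits++)
		printf("%2d bits: +/- %7.0f years around the epoch\n",
		       bits, (double)(1ULL << (bits - 1)) / secs_per_year);
	return 0;
}

This prints about ±68 years for 32 bits and about ±8,700 years for 39 bits, consistent with the above.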
>Wait, I’m not familiar with the guts of NTP, but doesn’t this run you into a 2k38 problem (or whenever the rollover happens with your epoch)?
It does. Which is why I referred to messy details. The epoch of internal NTP time is actually variable. (The code expresses this with an implementation detail called a “pivot point”.)
Somehow this is reconciled with shipping a constant timescale in packets between machines. I don’t know exactly how yet (that part of the code is very poorly documented) but I think the trick is noticing that we don’t care about the high bits of the cycle – if a packet contains a date that is off by an integer number of time cycles plus a few seconds and a fraction, the time cycle difference is ignored.
@esr: >Nb, can/could you write semantic patch for this refactoring (Coccinelle, Undebt, …)?
I was unaware of these tools.
I’d say there’s a whole other blog post on what tools people use, and how they become aware of new tools they might want to add to the toolkit, or use instead of current tools.
Just keeping up on development tools can be a full time job, aside from keeping up on changes in the languages you develop in.
______
Dennis
Heh. Garret Kajmowicz found a bug. I will neither confirm nor deny any conjecture that I left it in deliberately, but it will be remarkably useful for explaining how one should spot errors in a patch like this.
In the Linux kernel at least, and probably in any large system, one system-architecture analogue of this is: to do large design changes, factor them into a series of smaller steps such that each patch series brings a well-defined benefit without reference to the changes that will follow it. If you want to rewrite a subsystem or internal API, you have to be able to break your rewrite up in this way so that each step can be meaningfully reviewed — with the result that you might end up somewhere better than where you thought you were going.
(AFAICT, that seems to have been behind the rejection of CML2.)
>(AFAICT, that seems to have been behind the rejection of CML2.)
Would that it had been that rational.
In fact, that rejection was such a clusterfuck of bad judgment that Linus apologized to me about it, in person, afterwards. And when else have you ever heard of that happening?
Patches like this make my eyes glaze over. They literally defocus until the entire screen is a blur. The code usually looks a lot better at that point.
Line 384:
setvint64u(res, vint64s(res)-0x80000000); /* unshift of half range */
Here it looks like you’re reading the value using the signed macro, but writing it back using the unsigned macro. I suspect that could lead to a problem. Would the compiler warn about it?
> P.S. I think it is a good idea to post the semantic patch, or transformation, in the commit message.
An alternative is to add the semantic patch to the collection of author tools (like author/distribution tests). Git, for example, sometimes adds such a patch to contrib/coccinelle/ as a *.cocci semantic patch file, see e.g.:
https://public-inbox.org/git/f7294ac5-8302-03fb-d756-81a1c029a813@web.de/
Also worth a look is the semantic patch gallery: http://coccinellery.org/
esr:
> I was unaware of these tools.
Another tool that *might* be of interest is Frama-C (Framework for Modular Analysis of C programs), at least using Jessie to automatically prove correctness of a program based on the C code (via compilation to the CIL intermediate language) and ACSL (ANSI/ISO C Specification Language) annotations in comments, using an automatic or an interactive prover.
The question is whether Jessie covers all the areas of interest, and whether it supports the constructs used in the parts of the code that are of particular interest. One thing it can reason about is integer overflow…
The “M_NEG(vint64hiu(res), vint64lo(res));” line is suspicious. Even without looking at the definition of the M_NEG macro, it looks like it must be setting something; otherwise, why is it here?
#define M_NEG(v_i, v_f) /* v = -v */ \
	do { \
		(v_f) = ~(v_f) + 1u; \
		(v_i) = ~(v_i) + ((v_f) == 0); \
	} while (false)
From the comment it looks like M_NEG assumes that its arguments are parts of the same number. I hope that is true; if it is, it would be better to replace it with a negvint64(x) macro.
>From the comment it looks like M_NEG assumes that its arguments are parts of the same number. I hope that is true; if it is, it would be better to replace it with a negvint64(x) macro.
You are correct. I figured that out this morning.
The code will still work as is because the present expansions of the macros are still lvalues that can be assigned to. But it will need refactoring along the lines you suggest before I can actually change to a scalar representation.
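Concretely, something like this (a sketch):

/* Whole-value negation replacing the split-halves M_NEG().  This is
   written against the union representation; after the scalar switch
   it reduces to ((n) = -(n)). */
#define negvint64(n)	((n).Q_s = ~(n).Q_s + 1u)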
> This is the closest I’ve ever come to a 1-sentence answer to the question “How the fsck do you manage to code with such ridiculously high speed and low defect frequency?”
How about “git gud”?
Fun fact: if you type “git gud” at a shell, Git will download and install the personal-use version of Perforce.
…I’ll see myself out now… >_>
My guess is that politics has played a major part on lkml since well before Sarah Sharp appointed herself den mother and started scolding people — including Linus — for not playing nice. The CML2 patchset was extensive and changed a lot, and that ruffled quite a few feathers among the higher ranks of the kernel hierarchy, which probably engendered resistance to adopting it.
Just goes to show that programming is a social activity — just like anything else — and the master programmer would be wise to take that into account, submitting only changes that the rest of the team can easily get behind (or be convinced to get behind).
Eric,
Regarding CML2, I wasn’t familiar with the acronym and had to look it up. One of the top three-ish Google results is a piece by you in which you describe the motivation for CML2:
“As Linux has grown more features and more driver support, the number of menus and prompts one must navigate to choose the appropriate subset of those features has become forbidding even to expert users, and outright impossible for the novice users Linux increasingly seeks to attract.”
This is my biggest peeve with the kernel even today, but frankly, speaking purely from the user side (not being a dev), I don’t think this is a problem that can be solved with configuration languages (I have no doubt that CML1 compounded the problem, though it was before my time). The really annoying part of rolling a custom kernel for a new machine is trying to figure out what drivers my inside-the-case hardware actually needs, and the difficulty is not in the menu system itself, but in figuring out whether “Foobartronics Widget Ethernet Adapter” on the box for my Ethernet card refers to the same device as the kernel configuration description “Foobar Holdings W series Ethernet NIC” for the CONFIG_FBRT_W kernel configuration option.

What I *really* want is a script that will take an existing kernel config, sniff /proc/cpuinfo, run lspci, etc., and from the output of that process create a new kernel config with the same options as the old config for non-hardware-dependent features and peripheral hardware (e.g., USB devices), but with options selected for internal hardware that match the results of the sniffing process. I could be happier with a much clunkier config system than currently exists if it just had that one feature.
Curious non-programmer here: When revising old programs from years back, is it common to rename variables to make them conform to more modern variable-naming conventions? Or are old variable names never (or hardly ever) changed, even if they’re from the stone-knives-and-bearskins days, because even if they’re ugly, changing them courts disaster?
>Curious non-programmer here: When revising old programs from years back, is it common to rename variables to make them conform to more modern variable-naming conventions?
No, because variable-naming conventions have been pretty stable over time. In C, at least. The story may be different in other languages.
@Deep Lurker
It is good practice to make the names of variables conform to what they are actually measuring or holding, and (thinking of flags here) to make their state conform to the positive-logic truth value of the condition.
E.g., I just had a script like this:
terminateSubProcess = FALSE
While terminateSubProcess <> TRUE Do
    stuff
    If processMustDieNow() = TRUE Then
        terminateSubProcess = TRUE
    End If
Done
I changed “terminateSubProcess” to “isSubProcRunning” and the logic state from FALSE to TRUE everywhere. It made it much easier to understand – no double negatives.
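That is, something like:
isSubProcRunning = TRUE
While isSubProcRunning = TRUE Do
    stuff
    If processMustDieNow() = TRUE Then
        isSubProcRunning = FALSE
    End If
Done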
(No points awarded for guessing the ”language” I had to program in…. :P )
When doing similar modifications for assembly code, I compare the before/after object files. If they match, I couldn’t have broken anything. If not, diff the disassemblies/hexdumps and verify the differences.
gcc -S before.c; gcc -S after.c; diff before.s after.s will tell you what really changed.
C programmers can’t decide whether a string variable should be called fname, file_name, or lpszFileName. Which gets chosen depends on the platform, the community, the standards of the team, and the experience of individual team members. I know for a fact that there are people in the embedded space writing 1970s FORTRAN in C, slinging around variable names like z3 and gamma0.
In such cases, updating the variable names to something more meaningful is indeed a valid and often necessary refactoring step.
>In such cases, updating the variable names to something more meaningful is indeed a valid and often necessary refactoring step.
Granted, but the people you’re calling out are writing 1970s FORTRAN in C; my point is that naming conventions for C programmers writing C have been pretty stable for decades. The oldest versions of NTP or giflib don’t have naming patterns noticeably different from today’s.
I guess I should qualify that to say that I’m only sure this is true of Unix C. There are probably less uniform practices (camelCasing, Hungarian notation) common among those who have had their brains damaged by Microsoft tools.
> Nb, can/could you write a semantic patch for this refactoring (Coccinelle, Undebt, …)?
Here is an example of using Coccinelle to perform transformation in C code;
this is from Git project:
https://public-inbox.org/git/869cff97-9e5f-ec17-6b64-bd1e4d9d1947@web.de/
@esr: Why, in your opinion, have Microsoft tools caused brain damage?
And which Microsoft tools were you referring to?