No, not the Central Intelligence Agency. I refer to CIA.vc, a nifty free service that monitors commits on open-source repositories in real time and echoes notifications to IRC. And not really abuse, either – rather, I just implemented a way to make it do something else useful. Others might consider doing likewise.
My GPSD project does a lot of fiddly things with packed binary protocols, and is thus more than usually sensitive to platform idiosyncrasies – little- vs. big-endianness, word length, that sort of thing. For the same reasons, it’s also prone to trip over toolchain bugs and unusual compiler decisions about char signedness, structure packing, and the like.
We have a pretty good regression-test suite, but it has had the limitation that any dev hacking on the code would tend to run regression tests only on the hardware nearest to hand. Which meant, in practice, that the code got washed on x86_64 and amd64 pretty frequently, but oddball platforms like sparc64 and older 32-bit machines didn’t get exercised much.
Two days ago I decided to fix this.
The devutils directory of gpsd now contains two small programs and a file containing a list of remote-test sites. One of the programs, flocktest, is the one a developer will call. It walks through the flock-sites file and uses ssh to remote-execute the second script, flockdriver, on each one. flockdriver updates a slave git repo, builds, runs the regression tests, and captures an error log. Both are quite small; flocktest is 160 lines of Python, flockdriver is 150 lines of shell.
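In outline, the dispatch side works something like the sketch below. This is a much-simplified illustration rather than the real flocktest; in particular, the flock-sites format shown here (one ssh target per line) and the way flockdriver gets invoked are assumptions made for the example.

    # Simplified sketch of a flocktest-style dispatch loop.  Assumes flock-sites
    # holds one "user@host" ssh target per line; the real file format and the
    # real flockdriver arguments may differ.
    import subprocess

    def spawn_tests(sitefile="flock-sites", revision="HEAD"):
        "Kick off a flockdriver run on every listed site."
        procs = []
        with open(sitefile) as fp:
            for line in fp:
                site = line.strip()
                if not site or site.startswith("#"):
                    continue                      # skip blanks and comments
                # Start the remote run but don't wait for it yet.
                procs.append((site, subprocess.Popen(["ssh", site,
                                                      "flockdriver", revision])))
        # Now collect the exit statuses.
        return [(site, proc.wait()) for site, proc in procs]

    if __name__ == "__main__":
        for site, status in spawn_tests():
            print("%s: %s" % (site, "PASS" if status == 0 else "FAIL"))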
For efficiency, flocktest backgrounds all the remote flockdriver processes as it spawns them, so they run in parallel. This means the total test time is the maximum of the individual ones, not their sum, a big advantage. But it makes getting the results back to the caller a bit of a problem. My original plan was to simply have flockdriver email them all back, but that thought made me increasingly unhappy. I imagined developers, weary of being spammed with long logs full of boring positive results, tuning out. The last thing the world needs is another spambot, even if it’s a well-intentioned one.
The obvious way to reduce the lossage seemed to be to turn the successes into one-line emails – test passed, you win, kthxbye. Better. Still not happy-making. Using email for these notifications seemed…heavyweight. Creaky. Old-school.
And then it hit me. I can make CIA do this!
See, over the weekend I’d just written a pair of Python and shell CIA hook scripts for GPSD’s git repo. Yes, there were pre-existing ones, but they were dusty and buggy in minor ways and didn’t autoconfigure themselves as cleverly as I thought they should. So I decided to do new ones right. You can see the results here, and they’re going into git’s contrib/ directory in the next point release (I just missed one yesterday).
The point was, the light XML hackery needed to feed CIA commit notifications was fresh in my mind…and I realized that CIA basically doesn’t care about the field contents. From its point of view, “Regression test succeeded” and “Regression test failed” are perfectly reasonable log messages. The commit ID for the revision tested can go in the same display space it would for a commit.
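For the curious, the delivery side boils down to roughly the sketch below. The XML field names follow the CIA message schema as I recall it from writing the ciabot scripts, and hub.deliver() is the XML-RPC entry point those scripts use; treat the details as illustrative rather than gospel.

    # Hedged sketch of a CIA.vc test-status notifier.  Field names follow the
    # CIA message schema from memory and may not be exact in every detail.
    import cgi, time, xmlrpclib

    def notify_cia(project, revision, logline):
        "Ship an arbitrary one-line status to CIA.vc, dressed up as a commit message."
        message = """\
    <message>
      <generator><name>flocktest</name><version>1.0</version></generator>
      <source><project>%s</project></source>
      <timestamp>%d</timestamp>
      <body>
        <commit>
          <revision>%s</revision>
          <log>%s</log>
        </commit>
      </body>
    </message>""" % (project, time.time(),
                     cgi.escape(revision), cgi.escape(logline))
        # CIA.vc accepts messages over XML-RPC; hub.deliver() is the call the
        # ciabot scripts use for this.
        xmlrpclib.ServerProxy("http://cia.vc").hub.deliver(message)

    # From CIA's point of view this is just another log message:
    # notify_cia("gpsd", "1a2b3c4", "Regression test succeeded on sparc64")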
A few hours of hacking and testing later, here’s how it works: both regression-test successes and failures get shipped to CIA.vc to be announced on the project IRC channel, but you only get a log mailed to you in case of failure, when you actually want it. Win!
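The result routing amounts to something like this (again a sketch, building on the notify_cia() sketch above; mail_log() here is a hypothetical stand-in for whatever actually mails the captured log, not a real function in the scripts):

    # Every outcome goes to CIA; only failures generate mail.
    def report(site, revision, status, logfile):
        if status == 0:
            notify_cia("gpsd", revision, "Regression test succeeded on " + site)
        else:
            notify_cia("gpsd", revision, "Regression test FAILED on " + site)
            mail_log(site, logfile)   # the log only lands in your inbox when it matters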
The scripts have very little in them that’s GPSD-specific. I might very well spin them out as a mini-project for other projects with similar requirements to use. Because I think, actually, this is in the spirit of what CIA is trying to enable. The point of the service is to have commit activity be part of the conversation on the project’s real-time channels. Test results as they happen are a very natural thing to add to the mix.
Sounds to me like you’ve turned CIA into a more general programmers’ “twitter”, or, at least, revealed its more general nature as such.
>Sounds to me like you’ve turned CIA into a more general programmers’ “twitter”, or, at least, revealed its more general nature as such.
Indeed. That parallel did not escape me. The thought “Something like twitter…” actually formed in my mind just before I thought of using CIA to do it.
So my question is: why weren’t you using Hudson for this? Hudson does all of this out of the box, no hackery involved, and gives you nice reports on which builds are running, waiting, and completed.
>So my question is: why weren’t you using Hudson for this? Hudson does all of this out of the box, no hackery involved, and gives you nice reports on which builds are running, waiting, and completed.
And my answer is: “Er…what’s a Hudson?”
I’ve never heard of whatever you’re talking about.
Later: Ah, a Java-land thing. That explains it. I know very little about Java tools.
What about a simple Atom or RSS feed to make announcements? It is a much more passive medium and can be read by most common browsers.
As to Hudson, a quick google revealed this: http://en.wikipedia.org/wiki/Hudson_%28software%29
@Eric, it’s not a “Java” tool so much as a Continuous Integration (http://en.wikipedia.org/wiki/Continuous_integration) tool that happens to be implemented in Java. Earlier CI tools included AntHill and CruiseControl (and its .NET implementation, CruiseControl.NET) and Mozilla’s Tinderbox. There are commercial implementations of CI tools, but they don’t have the versatility of the Open Source versions, for reasons that should be all too clear.
The big reason to use a CI tool is what you rediscovered (and described here): once someone checks in a change, you want to ensure that regression tests are run. The more platforms you support, the more important it is to have a distributed tool to run the regression tests. The more distributed your developers, the more important it is to have a tool that can distribute the results of regression testing.
CI tools have many ways to initiate builds (manual intervention, scheduled builds, polling repositories) and many ways to notify developers (RSS feeds, XMPP and SMS messaging, email, IRC). Hudson has a huge number of plugins (http://wiki.hudson-ci.org/display/HUDSON/Plugins) to integrate with other tools and technologies.
BTW: I included a link to Hudson’s website (http://hudson-ci.org/) in the previous post, but apparently your blog filtered it out for me.
I read the article on “Continuous Integration”, and my jaw dropped open.
They advocate integrating changes “at least once a day”. Er…what the fucking fuck? In what universe of collective brain-damage do people not routinely integrate on the order of once per hour?
This seems to me kind of like observing that it is best practice not to repeatedly stab yourself in the abdominal region. It’s true, but suggests something disturbing about the speaker and/or audience.
On a more positive note, I was interested to read Martin Fowler’s deprecation of branching in favor of continuous integration with trunk. I have arrived at this philosophy independently, and been in a couple of arguments about it on projects where I’m lead or senior dev. It is, I guess, not coincidence that I tend to be the most active advocate of automated testing wherever I am.
> They advocate integrating changes “at least once a day”. Er…what the fucking fuck? In what universe of collective brain-damage do people not routinely integrate on the order of once per hour?
Cathedral-oriented eight-hour workdays in a single timezone with mediocre blind developers who essentially hack together large chunks of middleware-generated crap?
Seems to work for the entire commercial game industry. :)
@Eric, Dallas is right. Remind me to tell you about my experiences at Lockheed re: Waterfall / Spiral vs Agile. My practice is to use CI to integrate on commit — and I spent most of my time at Lockheed beating that into the heads of everyone I was working with. What was sad was moving to an Agile Practices consultancy, and having to do the same thing there.
Not the entire game industry, at least not as long as Carmack is still around. :)
Also, there is Naughty Dog, a bunch of crazy Lisp hackers who manage to somehow outperform their peers, innovation-wise, in that way Lisp hackers often do despite all their detractors saying they shouldn’t be able to.
But you’re right. Mostly, it’s a ghetto.
Interesting, I enjoy reading your blog because you’ve always got interesting projects going on. Thanks for your work in advancing open source software. You’ve done a lot of good for computing, keep up the good work.
Actually, the “once per day” thing is more likely to be a concession to enormous build times. We have Hudson polling SVN every five minutes and it is happy. But go to Massive Java House and claim you should build every five minutes and they will just stare at you. Or worse, Massive C++ House.
>Or worse, Massive C++ House.
I hear you. The core code of Battle For Wesnoth is about 145KLOC of C++ and I find those build times tough to take.
esr: the ciabot.py script on your page doesn’t download, it gives ‘Internal Server Error’. Maybe Apache is trying to execute it?
>esr: the ciabot.py script on your page doesn’t download, it gives ‘Internal Server Error’. Maybe Apache is trying to execute it?
Hrm. Dunno what’s going on there. I just pinged my site admins.
The script itself just went into git contrib/. You can clone git://git.kernel.org/pub/scm/git/git.git to get it, if all else fails.
>… ‘Internal Server Error’
I emailed ‘admin’ a couple of days ago about this error. Pointing wget at the ‘ciabot.py’ link also yields a file containing the HTML of the error page. April Fools?
Gutted….I read the post title and wondered “could esr actually have balls that big?” …only to discover that it was about vc/build hackery.
Interesting, but a major comedown….damn it man, I wanted to believe ;)
…
As far as your “WTF” moment is concerned….if you’re getting your head around git, you should know that it’s not always possible to constantly integrate. You can always integrate any of the multiple branches you have available to you personally, but there’s no guarantee that this won’t conflict with something else somebody is hacking around with. I think the “integrate once per day” regimen may well be better thought of as a ‘best practice’ (*glaaark*) to encourage project members to make a best attempt at ensuring no unknown fecal blobs get lobbed at rotary agitators.
Fuck it, let’s go back to CVS….what the fuck does Torvald know about shit? ;)
…what the fuck does Torvald know about shit? ;)
Other than how to spell his own surname, that is….d’oh.
Innocent typo, Mr. Torvalds :)
> They advocate integrating changes “at least once a day”
This is obsolete. Modern CI best practice is to run on each commit. (Or more precisely, each commit with a small blackout window, so that if several come in at the same time they get lumped together)
Modern CI tools offer a vast array of build triggers, including time, repo events, other builds in the server, email messages, and so on and so on.
One common practice is to integrate on each commit and then have a daily ‘bare metal’ build where you delete everything and start from a fresh checkout. Because most often, ‘make clean’ doesn’t.
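Something like this, roughly; the repository URL and the build/test commands below are just placeholders:

    # Illustrative nightly "bare metal" job: wipe the work area and build from
    # a fresh clone, so stale objects that 'make clean' missed can't hide breakage.
    import shutil, subprocess

    def bare_metal_build(repo="git://example.org/project.git", workdir="fresh-build"):
        shutil.rmtree(workdir, ignore_errors=True)            # delete everything
        subprocess.check_call(["git", "clone", repo, workdir])
        subprocess.check_call(["make"], cwd=workdir)           # full rebuild from scratch
        subprocess.check_call(["make", "check"], cwd=workdir)  # then the test suite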
If the Java parts of Hudson make you feel a little out of your comfort zone, you might prefer buildbot; python based, and quite extensible.
I would recommend adopting one of these systems for GPSD. In the long run the superior record keeping and easier management of your build and test slaves will pay off for you.
You sir, have just re-invented a wheel.
>You sir, have just re-invented a wheel.
I looked at buildbot, and while it looks like a good design it would be overkill for my requirements. The current version of my buildbot-equivalent is just 306 lines of Python and requires no preparation on the slave machines beyond a login account to do the testing under. Being small and comprehensible is a significant advantage over more heavyweight systems.
@Sean C.
buildbot is cool. But using buildbot for what esr is using it for is a bit like swatting a fly with a tac nuke. I’d say the same thing about Hudson and other CI tools.
Better yet, I’ll say that the problem I have with a lot of workflow tools is that many of them require you to change your workflow to use the tool. Tools like these should be simple and elegant: they should fit into your existing workflow, not require you to bend yourself in knots to use the workflow tool.
@Eric, the preparation for Hudson on the master was “apt-get install hudson”; the preparation on the slaves was to create a shell account with Java and SSH access. (Also, in my case, sudo access since I’m testing Linux kernel modules.) Hudson automatically installs and updates the slave’s software as necessary.
You’re also forgetting the worst part of reinventing the wheel: maintaining it. Hudson, buildbot, and all of the other existing tools are maintained by other teams, which means you don’t have to do it yourself.
@Morgan, if you have to test on multiple hardware platforms with multiple configurations (32 vs 64 bit, RHEL vs Ubuntu, etc) then you pretty much have to bend parts of your workflow around the tool, and not the other way. For that matter, if your development practices don’t include CI, then you will probably need to bend somewhere to make best use of it (ie: writing unit tests in a consistent manner, commit changes only if it builds on your dev machine, etc.)
Apparently there are people who think the appropriate answer to build merge issues is to freeze the software for a week and to do it as infrequently as possible. True story! *insert bitter mutterings here*
Chalk it up as one more thing that a) you can say about the cathedral that’s brain-damaged and b) agile in general agrees with you on.
Let’s say… all commits that come in during a build get lumped into the next build.
In practice, I have found it necessary to lump commits in a window ranging up to 5 minutes, depending upon the VCS in use. As an example, neither ClearCase nor CVS supports atomic commits, so committing a large number of changes results in a series of commits that don’t have the same exact timestamp. Even with a VCS that supports atomic commits, a developer may choose to break different groups of changes into multiple changesets, but commit them back-to-back. Unfortunately, the more likely case is sloppy developers who don’t package all of their changes into a single changeset. In all of these cases, initiating a build on the first changeset without waiting for the VCS to stabilize results in double builds, with the first build broken. While that will (eventually) induce developers to improve their practices, it doesn’t meet the ideal that the tool should conform to the developers and not vice versa.
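Concretely, the “wait for the VCS to stabilize” rule can be implemented as a quiescence timer that restarts whenever a new commit shows up; latest_commit_time() below is a hypothetical poll of the repository, not any particular tool’s API:

    import time

    QUIET_WINDOW = 300    # seconds of repository silence required before building

    def wait_for_quiescence(latest_commit_time):
        """Block until no new commit has arrived for QUIET_WINDOW seconds.

        latest_commit_time() is a hypothetical function that polls the VCS and
        returns the timestamp of the newest visible commit.
        """
        deadline = latest_commit_time() + QUIET_WINDOW
        while time.time() < deadline:
            time.sleep(10)                                   # poll every ten seconds
            deadline = max(deadline, latest_commit_time() + QUIET_WINDOW)
        # The repository has been quiet for the whole window; it is now safe to
        # lump the pending commits together into a single build.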