Scenes from the Life of a System Architect

I’ve been doing some heavy work on the core code of gpsd recently, and realized it would be a good idea to explain the whys and wherefores to my co-developers on the project. After I wrote my explanation and reread it, I realized I had managed to generate something that might be relatively accessible, and perhaps even interesting, to people who aren’t intimate with the GPSD codebase.

I guess I’m aiming this at junior programmers and particularly curious non-programmers. It’s a slice of what software systems design — the thing that project leads and architects do — is like in the real world, where the weight of history is often as pressing as today’s requirements list. I think this note shows an example of doing it right, or at least not getting it too badly wrong.

If you find the technicalese in here difficult, it may be useful to refer back to some of my previous posts about this project:

GPSD and Code Excellence

GPSD-NG: A Case Study in Application Protocol Evolution

Why GPSes suck, and what to do about it

Those of you following the commit list will have noticed a flurry of changes to the gpsd core code recently. It occurred to me this afternoon that I should explain what I’m doing, because it’s not a good thing that I still seem to be the only person who really understands the part of the architecture I’m modifying.

0. Prerequisites

To understand what follows, you have to remember that GPSes have reporting cycles. Start of cycle is when the firmware computes a fix. It then ships a burst of sentences reporting the fix. Wait a bit – usually 1 second – and repeat. This becomes an issue because when your device speaks NMEA 0183 you have to aggregate reports from all the sentences in a reporting cycle to get a complete fix.
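
To make that concrete, here is a rough sketch of the kind of aggregation involved. This is illustrative C, not actual gpsd source – the structure and field names are invented – but it shows how a partial fix gets filled in field by field as sentences arrive and is only complete once the whole cycle has been seen:

```c
/* Illustrative sketch only -- not gpsd source.  A fix arrives spread
 * across several NMEA sentences within one reporting cycle, so the
 * fields have to be merged into one report before it can be shipped. */
#include <stdbool.h>

#define SEEN_LATLON  0x01U   /* e.g. from GPGGA or GPRMC */
#define SEEN_ALT     0x02U   /* e.g. from GPGGA */
#define SEEN_TRACK   0x04U   /* e.g. from GPRMC or GPVTG */
#define SEEN_ALL     (SEEN_LATLON | SEEN_ALT | SEEN_TRACK)

struct partial_fix {
    double lat, lon, alt_m, speed_mps, track_deg;
    unsigned seen;           /* bitmask of fields filled in so far */
};

/* Merge one decoded sentence's fields into the cycle's partial fix.
 * Returns true once the report is complete enough to ship to clients.
 * Real code also has to detect the end of cycle, which is exactly the
 * hard part discussed later in this post. */
static bool merge_sentence(struct partial_fix *fix, unsigned fields,
                           const struct partial_fix *update)
{
    if (fields & SEEN_LATLON) {
        fix->lat = update->lat;
        fix->lon = update->lon;
    }
    if (fields & SEEN_ALT)
        fix->alt_m = update->alt_m;
    if (fields & SEEN_TRACK) {
        fix->speed_mps = update->speed_mps;
        fix->track_deg = update->track_deg;
    }
    fix->seen |= fields;
    return (fix->seen & SEEN_ALL) == SEEN_ALL;
}
```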

You also need to know that the part of the code I’m modifying is the dispatcher layer. This is where the daemon’s main select loop lives, waiting on input from client sessions, active GPSes, and the special control port. The dispatcher allocates and deallocates sockets for the client sessions and the control port, and handles user commands. It calls the packet sniffer to accumulate traffic from the GPSes into packets that can be analyzed for fixes, applies some policy and error modeling, and ships reports to client sessions.
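
A skeletal version of such a loop looks roughly like this. The helper functions are hypothetical stand-ins (gpsd’s real dispatcher does a great deal more), but the shape – one select() multiplexing clients, devices, and the control port – is the point:

```c
/* Minimal dispatcher-shaped select() loop.  The helpers below are
 * hypothetical stand-ins, not gpsd's real internals. */
#include <sys/select.h>

void accept_client(int listen_fd, fd_set *all_fds);            /* new client session   */
void handle_control_command(int control_fd);                   /* control-port command */
void poll_devices_and_clients(fd_set *ready, fd_set *all_fds); /* GPS + client traffic */

void dispatcher_loop(int control_fd, int listen_fd)
{
    fd_set all_fds;
    FD_ZERO(&all_fds);
    FD_SET(control_fd, &all_fds);
    FD_SET(listen_fd, &all_fds);

    for (;;) {
        fd_set rfds = all_fds;
        if (select(FD_SETSIZE, &rfds, NULL, NULL, NULL) == -1)
            continue;                   /* real code checks errno (EINTR etc.) */

        if (FD_ISSET(listen_fd, &rfds))
            accept_client(listen_fd, &all_fds);
        if (FD_ISSET(control_fd, &rfds))
            handle_control_command(control_fd);

        /* For each active GPS with data ready: feed bytes to the packet
         * sniffer, hand assembled packets to the driver layer, then ship
         * reports to the watching client sessions. */
        poll_devices_and_clients(&rfds, &all_fds);
    }
}
```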

To do these things, the dispatcher calls out to a core library that assembles packets and a bunch of drivers to analyze them once assembled. The details of that level aren’t important for this discussion. What’s mainly significant here is the fact that they’re separated from the dispatcher logic by an API I was pretty careful about designing.
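
For flavor, a separation like that usually boils down to a per-driver table of entry points the upper layer calls through. The sketch below is generic illustration, not gpsd’s actual driver API:

```c
/* Generic driver-table sketch; names are illustrative, not gpsd's API. */
#include <stddef.h>

struct gps_device;   /* per-device state, opaque at this level */

struct device_driver {
    const char *name;   /* e.g. "Generic NMEA", "SiRF binary" */

    /* Does this assembled packet look like it belongs to us? */
    int  (*probe)(const unsigned char *buf, size_t len);

    /* Analyze one assembled packet, updating the device's fix data. */
    int  (*parse_packet)(struct gps_device *dev,
                         const unsigned char *buf, size_t len);

    /* Optional device-specific wakeup/configuration on activation. */
    void (*activate)(struct gps_device *dev);
};
```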

1. The motivation: bring back polling

The modifications I’m doing have been triggered by Chris Kuethe’s request that I bring back support for client-initiated polling in the new JSON-based protocol. At the moment, all clients are expected to set watcher mode, then poll their socket often enough to handle the bursts of time-position-velocity JSON that gpsd will send at them once per cycle.

The original gpsd protocol was polling based, but had significant design defects. One of them was that it was normal to poll for a bunch of partial reports (latitude/longitude, altitude, speed, etc.) but they weren’t timestamped, so it wasn’t easy to tell when the data was stale.

There were also some unpleasant edge cases arising from the fact that, in order to conserve power on battery-driven devices, the daemon didn’t try to activate any attached device until you polled for position data. Thus, you couldn’t actually do a single-shot poll reliably under old protocol – the first time you tried, it would wake up some device but return no data, and you would probably start getting good data on the second or third poll (depending on the timing of the next cycle edge).

There were other problems as well, and I eventually gave up on client-initiated polling entirely; when I redesigned the client API for use with the new protocol, I left it out. Chris has a decent use case for it, though, and I’m aiming towards bringing it back. The way it will probably work is that the client starts by setting a ?WATCH that doesn’t do the once-per-cycle JSON bursts, but just clues the daemon to activate devices. Thereafter the client will be able to poll for the state of the cache from the last fix(es) – that’s more than one if there are multiple sensors attached.
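
From the client side, the shape of that interaction would be roughly the following. The command strings are guesses – the syntax isn’t settled, and the port number is just gpsd’s usual default – so treat this as illustration, not specification:

```c
/* Sketch of a polling-style client.  The ?WATCH/?POLL syntax here is a
 * guess at a design that isn't final; only the overall shape matters:
 * enable watching without per-cycle bursts, then poll the cache. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(2947),                  /* gpsd's usual port */
        .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
    };
    if (sock < 0 || connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    /* Tell the daemon to activate devices but not stream per-cycle reports. */
    const char *watch = "?WATCH={\"enable\":true,\"json\":false}\n";
    write(sock, watch, strlen(watch));

    /* ...later, whenever the application wants a fix, poll the cache
     * (one report per attached sensor). */
    const char *poll = "?POLL;\n";
    write(sock, poll, strlen(poll));

    char buf[4096];
    ssize_t n = read(sock, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);                        /* JSON from the daemon */
    }
    close(sock);
    return 0;
}
```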

2. Clearing the way

But there’s major work needed to prepare the ground. The good news is that it’s entirely in the dispatcher layer and the core library, and won’t destabilize the packet sniffer or the device drivers. Changes in that lower layer are *messy* and I approach them with trepidation.

Basically, before I can re-implement client polling in a clean way I need to clear away a lot of scar tissue and complexity. I ripped out the command interpreter for old protocol weeks ago, but there’s stuff underneath the new one that’s pretty tangled. Most (but not all) of it is a result of the infamous – and now dead – J command.

Those of you who have been around for a while will remember that old protocol used to have a per-user-session policy switch controlling the circumstances under which old data was held over and merged into new data; you could have display-jitter reduction at the risk of occasionally seeing stale data from the previous reporting cycle, or you could guarantee fresh data only but pay for it with display jitter.

The reason J was needed was that the sentence mix shipped by NMEA devices is highly variable, and we didn’t have a reliable way to pin down the end of cycle – users had to know about their devices and choose the right policy. I was never happy with this, because zero configuration is one of my goals, but that UI issue was nearly trivial compared to the complexifying effect on the dispatcher internals.

Because the J policy switch was per user, you could have user sessions with opposite smoothing policies (minimize jitter vs. avoid staleness) getting data from the same underlying device. Several steps of implication later, this meant the dispatcher layer had to maintain a pair of this-fix and last-fix buffers per client session, copy data up into them from the device-driver layer, and apply some policy-specific logic before emitting an actual report.

Eventually I figured out how to do automatic end-of-cycle detection and the J command went away. But the tricky buffer-shuffling used to implement it didn’t! I knew I’d have to clean that up someday, but didn’t want to tackle it sooner than I had to. I knew it would require major surgery on the dispatcher.

When Chris made his request, though, I decided that it would be begging for obscure bugs to reimplement polling on top of the user-session-level buffering, and concluded that I needed to rip the latter out first.

One of the hard parts is already done. There still needs to be a buffer pair (this fix and last fix) for things like computing speed and course if the GPS doesn’t supply them. But it can be per-device and live in the core library rather than per-user-session and live in the dispatcher. I successfully dropped those buffers down a level a couple of days ago; as a happy side effect, this has already reduced gpsd’s memory footprint some.
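
For illustration of the remaining job those buffers do (this is not gpsd’s actual code, and it uses a simple spherical-earth model), deriving speed and course from a per-device last-fix/new-fix pair looks roughly like this:

```c
/* Illustrative only: deriving speed and course from two successive fixes
 * kept in a per-device buffer pair, for receivers that don't report them. */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define EARTH_RADIUS_M 6371000.0
#define DEG2RAD(d) ((d) * M_PI / 180.0)

struct fix { double time_s, lat_deg, lon_deg; };

struct device_buffers {
    struct fix lastfix, newfix;     /* the per-device pair discussed above */
};

/* Great-circle distance between two fixes (haversine formula). */
static double distance_m(const struct fix *a, const struct fix *b)
{
    double dlat = DEG2RAD(b->lat_deg - a->lat_deg);
    double dlon = DEG2RAD(b->lon_deg - a->lon_deg);
    double h = sin(dlat / 2) * sin(dlat / 2) +
               cos(DEG2RAD(a->lat_deg)) * cos(DEG2RAD(b->lat_deg)) *
               sin(dlon / 2) * sin(dlon / 2);
    return 2 * EARTH_RADIUS_M * asin(sqrt(h));
}

/* Fill in speed (m/s) and course (degrees true) if the device left them out. */
static void derive_motion(const struct device_buffers *db,
                          double *speed_mps, double *course_deg)
{
    const struct fix *o = &db->lastfix, *n = &db->newfix;
    double dt = n->time_s - o->time_s;
    *speed_mps = (dt > 0) ? distance_m(o, n) / dt : 0.0;

    /* Initial great-circle bearing from the old fix to the new one. */
    double dlon = DEG2RAD(n->lon_deg - o->lon_deg);
    double y = sin(dlon) * cos(DEG2RAD(n->lat_deg));
    double x = cos(DEG2RAD(o->lat_deg)) * sin(DEG2RAD(n->lat_deg)) -
               sin(DEG2RAD(o->lat_deg)) * cos(DEG2RAD(n->lat_deg)) * cos(dlon);
    *course_deg = fmod(atan2(y, x) * 180.0 / M_PI + 360.0, 360.0);
}
```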

What I’m working on now is getting rid of the channel structure. Under old protocol, client sessions listened to a specific list of devices; under new protocol they listen to everything unless they’ve selected a *single* device. The difference means that the channel structure array, which exists to manage (potentially multiple) subscriber-to-device links, isn’t really needed anymore.

Once I abolish channels I’ll have a clean, fairly simple dispatcher architecture that’s matched to the way new protocol actually works. That will be a good base on which to re-implement polling.

3. Removing code is good

One very nice thing about the work I’m doing is that it’s mostly ripping out chunks of code that aren’t needed anymore, with some refactoring to separate out those chunks first. While it’s possible to introduce bugs during code removal, they tend to be the unsubtle kind that lead to immediate crashes or gross regression-test failures. It’s thus relatively easy to have confidence in the results when I take a step forward in the process.

The line count, memory usage, and overall complexity of the dispatcher are going to drop significantly even after I have re-implemented polling. Polling will be handled by a relatively small, contiguous span of code in the command interpreter that raids the device-level buffers to generate a report; the code being removed, by contrast, was scattered all through the dispatcher, especially the grottier parts that kept data structures with different roles in proper synchrony.
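
To sketch what that span might look like once the data lives in per-device buffers – with hypothetical types and helpers standing in for the real internals:

```c
/* Sketch only: a polling handler that raids per-device fix buffers.
 * The types, fields, and the devices[] table are hypothetical. */
#include <stdio.h>

#define MAX_DEVICES 4

struct gps_fix    { double time_s, lat_deg, lon_deg; int mode; };
struct gps_device { int allocated; const char *path; struct gps_fix newfix; };

static struct gps_device devices[MAX_DEVICES];

/* Handle a client poll: emit one JSON object per attached device,
 * built from the fix cached at that device's last reporting cycle. */
static void handle_poll(FILE *client)
{
    for (int i = 0; i < MAX_DEVICES; i++) {
        const struct gps_device *dev = &devices[i];
        if (!dev->allocated || dev->newfix.mode < 2)   /* no usable fix yet */
            continue;
        fprintf(client,
                "{\"class\":\"TPV\",\"device\":\"%s\","
                "\"time\":%.3f,\"lat\":%.9f,\"lon\":%.9f}\n",
                dev->path, dev->newfix.time_s,
                dev->newfix.lat_deg, dev->newfix.lon_deg);
    }
}
```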

This is all good. Decreased line count means increased maintainability, and given the high reliability requirements of something that’s going to be used in navigational systems (and thus potentially life-critical) that’s especially important for gpsd.

UPDATE: I did successfully remove the channel structure, and I added a polling command – in 23 contiguous lines of code.

40 comments

  1. > still seem to be the only person who really understands the part of the architecture I’m modifying.

    Hey, beats a quite-usual situation where NOBODY (including leads) really understands the part of the architecture they are modifying :)

  2. DVK said:

    > Hey, beats a quite-usual situation where NOBODY (including leads) really understands the part of the architecture they are modifying :)

    and esr said:

    > I damn well ought to understand it. I wrote it :-)

    The GPSD project is lucky that the lead architect is still around. In my experience, many projects have so much staff turnover that (in sometimes as little as a couple of years) *no one* who was there at the beginning is there now. Couple that with the usually abysmal quality of documentation (especially when management is screaming “just SHIP IT!”) and you can easily reach a state where there is only an “it’s voodoo” level of understanding about key pieces of the software. This leads to comments like “DON’T TOUCH THIS CODE!”, etc. See “The Daily WTF” for numerous examples.

  3. > But it can be per-device and live in the core library rather than per-user-session and live in the dispatcher. I successfully dropped those buffers down a level a couple of days ago; as a happy side effect, this has already reduced gpsd’s memory footprint some.

    This may be a “side effect” but it is entirely predictable. It’s a cleaner design, which should also reduce the CPU cycles gpsd burns, since it is now doing less work.

  4. Here is a question for you, Eric. What are you doing to test the system, and in particular, what are you doing to make sure that, as you change the protocol, clients depending on old versions of the protocol still work? What have you done to ensure that gpsd continues to work with all the various devices you support, absent physical access to those devices?

    FWIW, in my opinion, this is far and away the hardest part of these sorts of projects.

  5. BTW, the keyword that prompted my question was “refactor.” Read into that what you will.

  6. >What are you doing to test the system,

    One of the first tools I wrote uses log files to simulate one or more GPSes talking to a gpsd instance. There are around 60 regression tests that use this and log files from a large variety of the devices we’ve seen. I run them frequently.

    Coverage is not 100%, because there are a couple of idiosyncratic device types that handshake with gpsd drivers in a way gpsfake cannot simulate. Those have to be live-tested.

    >FWIW, in my opinion, this is far and away the hardest part of these sorts of projects.

    As a general rule you’re correct, but not in this case. All gpsd ever sees is serial data streams; writing test jigs for it is dead easy compared to (say) testing a GUI.

  7. >The GPSD project is lucky that the lead architect is still around. In my experience, many projects have so much staff turnover that (in sometimes as little as a couple of years) *no one* who was there at the beginning is there now.

    The superior sustainability of open source strikes again. :-)

  8. esr Says:
    > There are around 60 regression tests that use log files from a large
    > variety of the devices we’ve seen. I run them frequently.

    Awesome. My curiosity is piqued though. In my world (Windows and related) the normal practice is to have a nightly build process, which is to say an automated process that checks out a clean copy from source control (possibly multiple branches if several are live), builds them in all delivered configurations all the way to the install media, and runs all the regression tests. This is scheduled to happen every night.

    Is that sort of thing common practice in your world, or is it just considered a waste of energy?

    1. >Is that sort of thing common practice in your world, or is it just considered a waste of energy?

      Projects don’t normally do things that way because our distribution chain is quite different. Usually, an individual project’s deliverable is a source tarball; that code is presumed when delivered to have passed the project’s own functional tests. These range all the way from “er, we ran it and it didn’t crash” through elaborate unit- and regression-test suites like GPSD’s. The tendency in recent years is towards the latter, but practice is not codified anywhere.

      There’s a separate tier of packagers, associated with distributions, that takes these tarballs and turns them into installable binary packages, which are then served over the Internet to each distribution instance’s package manager. There are a number of reasons this separation makes sense; it’s adaptive in a world of multiple distributions running on multiple hardware architectures.

      Some of these packagers and distribution integrators may do nightly builds, but I don’t think the practice is common. It strikes us as unnecessary because, despite the multi-architecture deployment problem (or actually, because of it), our build toolchains are extremely reliable. If it builds and runs for the project testers, it’s generally going to build and run just fine even if the packager is compiling and linking on a different architecture.

      The most common class of exceptions to this rule is due to version skew in service libraries. We have tools for managing this problem (package dependency declarations) and actually getting bitten by it is pretty rare. Once in a blue moon you get genuine cross-architecture issues due to (for example) big-endian vs. little-endian byte order, but those have to be caught by runtime testing anyway.
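
      For illustration, here’s the classic shape of that class of bug and the usual fix (toy C, nothing to do with any particular project):

      ```c
      /* Reading a multi-byte field out of a binary packet by casting only
       * works on machines whose byte order matches the wire format. */
      #include <stdint.h>

      /* Fragile: result depends on the host CPU's endianness (and risks an
       * unaligned access on some architectures). */
      static uint32_t read_u32_naive(const unsigned char *buf)
      {
          return *(const uint32_t *)buf;
      }

      /* Portable: assemble the value explicitly (big-endian wire format here). */
      static uint32_t read_u32_be(const unsigned char *buf)
      {
          return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
                 ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
      }
      ```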

  9. esr said:

    > The superior sustainability of open source strikes again. :-)

    I would respectfully submit that it is a deeper cultural issue, not merely whether or not software is developed in an open- or closed-source fashion. There are commercial operations that encourage longevity in both employment and project support (although they may be mostly smaller boutique shops – think of “foghat.com”). OTOH, there are also open-source projects that have become “abandonware” due to loss of interest by the parties that started them, and no one yet picking up the torch. (See, for instance, “http://www.unmaintained-free-software.org”)

    Eric, would you care to comment on this cultural difference? Do you believe that open source leads more naturally to continuity? (Not merely in the theoretic sense of “well, someone *can* just pick it up, or fork it to their own desires”.) If so, why?

    1. >Eric, would you care to comment on this cultural difference? Do you believe that open source leads more naturally to continuity? (Not merely in the theoretic sense of “well, someone *can* just pick it up, or fork it to their own desires”.) If so, why?

      Hell yes it does. Why? Because open-source projects are run by people who are self-selected to actually care about the software. As opposed to resentful wage slaves for whom control over their work product is minimal and development is just another stretch of cubicle time.

  10. >The GPSD project is lucky that the lead architect is still around.

    On reflection, this deserves more comment.

    I’m actually GPSD’s second lead; the first was Remco Treffkorn, who handed me the baton in late 2004 (he still pops up on the project’s developer mailing list occasionally). Successions like these aren’t uncommon, and the new lead is expected to grok the project thoroughly – excuses for not achieving this are simply not acceptable in the open-source world and it is vanishingly rare for a handoff to fail in that way (I’ve never heard of it actually happening).

    I don’t think GPSD is “lucky” at all. In my world, I think it’s normal. Thirteen years of history and two leads is not particularly exceptional in any way. Yes, there are a lot of mayfly projects, but when you look at the ones that are actually carried in distributions, stability over long periods and orderly successions are the rule.

    Pressed, I’ll grant that GPSD is somewhat towards the high end of the length-of-life distribution. It’s also better run than average, because I’m very good at this game (as I should be – I codified some of the rules, after all). But GPSD is not freaky – there are lots of other projects as long-lived and well-run and no one in my world is surprised to encounter them.

    It is also worth noting that “DON’T TOUCH THIS CODE!” would, in my world, be viewed as an admission of such shocking incompetence and sloppiness that hackers would be too ashamed to actually have such a comment in their code. There is one place of deep black magic in GPSD where a comment says “You are not expected to understand any of this”, but that doesn’t mean quite the same thing; it’s not that the code is fragile, it’s that the protocol it’s decoding is profoundly weird and poorly documented.

    Part of what I’m trying to convey here is that the open-source world has higher minimum acceptable standards for code quality and who gets to lead than the closed-source world does. Anyone can start a project, but you get other programmers to collaborate with you only by actually being good at coding and leading.

  11. esr Says:
    > integrators may do nightly builds, but I don’t think the practice is common.

    Interesting. One of the reasons Windows software projects tend to do nightly builds is to detect incompatible check-ins early on. That is to say, Eric checked in this change to module 1, and Jess checked in this change to module 2. Independently they work; together they do not. I would have thought that would be more common in the distributed environments you work in, but perhaps your DVCS systems are better than what we use.

    1. >I would have thought that would be more common in the distributed environments you work in, but perhaps your DVCS systems are better than we use.

      Er, much better. As in, difference between a Conestoga wagon and a Ferrari Testarossa better. This is not accidental; hackers eat their own dogfood, so they tend to turn in best performance on development tools and improve them at a speed you might find dizzying. Contrast this with, say, UI, where we don’t perform as well because we aren’t representative of the entire audience for the programs.

      But even if our DVCSes sucked, the kind of patch reconciliation you’re talking about takes place way upstream of the integrators, at the individual-project level before release tarballs get shipped. The integrators can and do assume that release tarballs will build clean. The logical people to do nightly builds would be each project’s release manager – often, but not always the project lead. I’m both the lead and the release manager for GPSD, but I’ve given my two lieutenants release authority and one of them has used it. One of my other projects, Battle For Wesnoth, has a release manager but no single lead – unusually, key decisions there are made by a loose committee of senior devs (which happens to include me).

      In practice, nightly builds don’t happen. Well, not on any project I know of, anyway. Instead, developers do their own test builds very frequently – before every commit to the VCS is normal. On a DVCS like git, here’s the typical way a patch conflict plays out:

      1. You attempt to push a commit to the project repository.

      2. The DVCS tells you you can’t because there’s a conflict with the commits since you last pulled history.

      3. You pull history from the project repo. You now have two branches in your repo, diverging at the first conflict point.

      4. You merge the branch heads. You recompile and test and all is happy.

      5. You push. The DVCS instance on the project repo sees the branch merge as a valid conflict resolution and accepts your history.

      After step 5, your repo and the project’s are synced, and there’s a bubble in the history with two paths around it leading to your merge commit.

      With this workflow there isn’t a lot of point to nightly builds. Every dev is expected to push to the project repo only source-code states that build, and the rare occasions when this doesn’t happen are occasions to point and laugh. In six years of running GPSD I think I’ve seen committers break the build all of twice. That’s in line with the frequency on other projects I’ve been in.

  12. “I damn well ought to understand it. I wrote it :-)”

    Don’t you ever have these moments when you look at your code written a year ago and think “WTF did I even _mean_ to do here?” Never?

    Ah.

    Of course.

    It is a hobby project, so you can afford to take the time to make it self-documenting and logical and transparent.

    You don’t have a client calling you on the phone at 11:30 saying they need to send a sales quote by 12:00 via an interface you wrote, which now has to handle a new kind of case you didn’t expect before…

    Actually… I wonder if I should find some hobby projects like you found(ed?) this GPSD – even though there are a thousand other, non-programming related things that need my time and attention, such as getting less fat. But maybe if one has only that sort of programming experience I described above it could lead to a burn-out. And then making a living will be hard, learning a new kind of profession etc. So perhaps it could be a good idea to do some programming for fun as well in order to avoid the burn-out. Don’t know. I’m generally not motivated by fun and enjoyment but by safety and stability coupled with the riskless sorts of intellectual curiosity, but mayhaps you sometimes have to focus on creating some fun for yourself or you will be too burnt-out to provide safety and stability. Don’t know. Maybe it is so.

    1. >Don’t you ever have these moments when you look at your code written a year ago and think “WTF I even_meant_ to do here?” Never?

      Only very rarely. In thirty-three years of programming I’ve had that experience…hm, perhaps a dozen times that I can remember. Curiously, I don’t recall any of them happening when I was writing closed-source code. Even then, I took the time to make it self-documenting and logical and transparent, because I knew that would wind up being a net saving of time over my entire development cycle. I think the few exceptions mainly happened because I was exceptionally tired or stressed or ill and really shouldn’t have been coding at all.

      While it helps that I don’t usually have a client calling me on the phone with a half-hour deadline, I think it has much more to do with good mental habits. The right motto for coding is “Festina lente”; hasten slowly. Even when those around you are in a panic, if you let yourself be rushed you will pile up more problems than you solve. Think. Plan. Do it right, so you don’t have to do it over.

  13. > Do it right, so you don’t have to do it over.

    But also plan on throwing one away and doing it over, because you will. With that in mind, if you can modularize your design correctly, you might get away with only having to do over one module instead of the whole thing.

    “No plan survives contact with the enemy.”
    “No software survives contact with the end user.”

    1. >But also plan on throwing one away and doing it over, because you will.

      Yes, that’s true too. The thing is, you don’t want to have to do things over again because your code doesn’t do what you intended, which is the kind of error you make in a rush.

      You do need to be relaxed about throwing away code that matches your intentions, when you find your intentions were based on a misunderstanding of the problem.

  14. > The right motto for coding is “Festina lente”; hasten slowly. Even when those around you are in a panic, if you let yourself be rushed you will pile up more problems than you solve. Think. Plan. Do it right, so you don’t have to do it over.

    Or as my great-grandfather once said: “The hurrier you get, the further behind you get.”

    Another thought along those lines is something my father used to say: “with accuracy comes speed.”

    But the sentiment is the same: slow down to speed up. If you take the time to do it right, you won’t have to redo it when you screw it up from being in a hurry.

    It’s a lesson I’m still struggling to teach my stepson. Unfortunately, kids never seem to listen. *shakes head in despair*

  15. esr> Part of what I’m trying to convey here is that the open-source world has higher minimum acceptable standards for code quality and who gets to lead than the closed-source world does.

    Hm… I’m not certain that’s 100% true. I’d say it is SOMEWHAT independent as far as code quality goes (although I would agree that leadership qualities would be self-selected for higher quality). Within my own universe of open source code I have looked over (CPAN modules), I have seen just as wide a dispersion between great code quality and utter excrement as I have seen in commercial stuff.

    I think the process here is somewhat different, although related – the main/most famous open source projects attract the best coders, and thanks to the open source context the pool of available best coders is much wider than in any single company. Thus, the top-level quality of open source code is higher than that of random closed-source code, whereas the low end would be just as bad or probably a lot worse (due to the lack of any natural selection killing off the awful stuff, which therefore lingers forever).

  16. … Also, forgot to mention one other factor – great coders generally have more spare bandwidth to work on open source projects, for a variety of fairly obvious reasons, than merely good/very good/competent coders, who tend to invest a much lower proportion of their effort in open source. So the best stuff gets an inordinately high level of effort, but then the curve drops sharply to hobbyist coders who have neither the talent of top-flight people nor the craftsmanship of the lower tiers of professionals.

    This has two consequences:
    – The before-mentioned higher standard deviation in the quality of open source (fewer “not brilliant but solid” developers to keep up the quality of many projects)
    [ for lack of a better term at 4am I have dubbed such developers “sergeants” ]
    – OTOH, the wider pool of really top-level people – in full accordance with The Mythical Man-Month – will improve the quality of the projects they are on, due to the lower density of developers needed per project.

  17. I think hobbyist geeks like me don’t worry too much about code cleanliness as long as it gets the job done.

    But I must admit, the engineer in me often feels that taking shortcuts is wrong. However, I’ve often felt I’m not good at “designing” software so much as getting a task done.

    That’s why I took to python because it allows for all kinds of approaches without intruding itself on my personality.

  18. The Monster Says:
    > But also plan on throwing one away and doing it over, because you will.

    I am familiar with this sentiment; however, I have more and more come to the conclusion that it is a mistake. The problem with “throwing one away” is that oftentimes you throw away a lot of valuable stuff that you don’t recover. You know, the nasty bug you patched because this algorithm failed when the network went down on the third iteration of the for loop. Or that ugly patch that fixed the SQL injection attack, or the ugly hack that prevents this system library from manifesting the pointer error when you call it with such and such a parameter. And so forth.

    Programmers like “clean” code, but often “clean” code is clean because it doesn’t deal with the nastiness of real life, which is rarely clean.

    I should also say that there is such a thing as investigative code that is not designed for production, but is a small test program to try out some important concept or technique. That certainly can be thrown away. I am referring to throwing away production code that has been refined in the fire of real-world usage.

    Of course, I still think you should “throw one away”, but I think you should throw it away piece by piece by the process of test driven refactoring, rather than “rm *.c; vi main.c”.

  19. > Programmers like “clean” code, but often “clean” code is clean because it doesn’t deal with the nastiness of real life, which is rarely clean.

    Often the (time and effort) cost of maintaining cleanliness is greater than the cost of the right solution in the first place. I am 99% sure that I would never have written many of my useful programs had I not just dived in and solved the problem as it appealed to me rather than try to design a solution.

    People so often look down on good old copy/paste as a reusability mechanism, but it’s often so time saving for medium complexity problems it’s amazing. I’m sure of course the software engineers would have something to say about that though!

  20. hari Says:
    > had I not just dived in and solved the problem as it appealed to me rather than try to design a solution.

    I’m not saying that is wrong, in fact, big design efforts are often of limited utility. However, the desire to start over is strong in programmers, and is, in my opinion, almost always wrong. Improve what you got with carefully structured and regression tested refactors, don’t throw all you have learned in the garbage.

  21. Here’s a freebie invention idea for you. The #1 use of cruise control is to avoid speeding tickets.

    I’ve been wanting a cruise control which has a speed display and a number pad. I’d like to be able to type in a speed I want to go, rather than match a speed I’m at. I’d like a few programmable presets, including 18mph for school zones.

    Reading your post, I realized programmed speed zones at GPS coordinates would come in handy. Especially if there was a switch for ‘relaxed’ and ‘paranoid’ modes.

    I’d also like a button, ideally parallel linked to a radar detector, that will shift my selected speeds to “paranoid” mode at the push of a button, or a beep of the radar detector.

    And this is a public forum, and my idea, so have at it, and nobody needs to pay me anything. I just want it on the market.

    Also, currently the GPS system has the intentional inaccuracy turned off, but it’s selectable to various radii of error by the government. What happens to all the neat GPS speedometers and whatnot when the error modes are turned back up?

  22. @hari:

    > People so often look down on good old copy/paste as a reusability mechanism, but it’s often so time saving for medium complexity problems it’s amazing. I’m sure of course the software engineers would have something to say about that though!

    I don’t deny the speediness of copy and paste, but it can lead to trouble. Often when I’m prototyping the code, I’ll start out copying and pasting, and then refactor all the common code into separate classes or whatever as I start to write more formalized code. But while it is tempting to keep it the way it is, I always do the refactoring because it’s easy to make the mistake of changing one routine and then forgetting to fix the other copies. That’s one of the advantages of object-oriented programming in the first place. When you need to make a change, you can often change it in just one place.

    @Jessica:

    Bleh. You don’t ever really throw all you learn in the garbage. Once you’ve tackled the problem once, you tend to realize your first understanding of the problem wasn’t quite right. When you start over, you’re starting over with a renewed and refined understanding of the problem.

  23. Morgan Greywolf Says:
    > When you start over, you’re starting over with a renewed and refined understanding of the problem.

    That’s true, but you also forget a lot of the details. The devil, along with the bugs, is often in the details.

  24. I think a very basic problem in programming for someone else is in the following.

    We are not chess masters – correction, *I* am not one. Some of you might be. Anyway, I can only visualize a limited number of possible future steps, combinations, whatnot. Writing code is like “X surely leads to Y, Y *maybe* leads to Z, Z leads to… to… fuck knows what”. I just have to try it out. By copy-paste and other such undijkstrian means I can get a reasonably working prototype. At this point I have to present it to the customer/boss, because I have no way of knowing whether it is what they wanted or they wanted something completely different, so there is no point in spending time on refactoring yet. I show it and it works. This is where THEY think the work is done, and I think HELL NO, the work is only just BEGINNING, because what I did before was not programming but concatenating texts together that give the expected result in exactly one test case – a prototype. Of course we always argue about why I want to spend a lot more time, and thus money, on that.

    This ur-problem of programming comes from the fact that code is pure information, like a novel or a poem, yet works like a physical product. If a lawnmower can mow a lawn ONCE, it truly is 80% done, but for code that is not true. Just as a novel that tells a story *roughly* similar to Romeo and Juliet can still be a hundred times lower in quality than it.

    Our craft is to do the impossible – to shape pure information as if it were a physical product, to people who expect something like physical product. Fun, innit?

  25. @Jessica:

    It’s like Shenpen said. Also, when I get a working prototype, it’s usually nowhere near bug-free. It works for a test case or two, but it’s certainly not going to work for every test case. I know that going in; once I have a deeper understanding of the problem, the code I’m writing will have fewer bugs than my first approach. (I won’t ever class any program significantly more complicated than “Hello, world!” as being bug-free. In fact, I’m reasonably sure that even a program as consistently excellent and thoroughly tested, mature and stable as GPSD isn’t bug-free.)

  26. I don’t deny the speediness of copy and paste, but it can lead to trouble. Often when I’m prototyping the code, I’ll start out copying and pasting, and then refactor all the common code into separate classes or whatever as I start to write more formalized code.

    Yes sure, I don’t deny that refactoring is inevitable. In fact I think the natural leaning of most programmers is to start building up a library of commonly used functions and try to make it more generalized by not tying it down to one application.

    However, we need to figure out what level of abstraction works best for us. Sometimes putting everything into classes and super-classes is too abstract for me and I cannot visualize its usage properly. Also the potential benefits as stated by OOP experts never appealed to me.

    A good example is database access. More often than not, I simply embed the raw queries directly into the program I am using because it’s the simplest solution. Writing a whole library of “DB abstraction” seems to be not worth the effort for the potential benefit. Besides most programming languages themselves offer off-the-shelf abstraction tools which one can use effectively.

  27. @hari: Have a look at Storm. The advantage of DB abstraction with something like Storm is that you don’t have to “mode switch” between the SQL and the application language (Python in this case). Another advantage is that since you’re writing at a higher level of abstraction, the implementation of the back-end isn’t very important: you could target Storm and your app will work with SQLite or MySQL or Postgres. And you could always implement a different back-end for Storm itself, which means your app could work with potentially any relational database.

  28. @Morgan: thanks for the link. I’ll take a look at that.

    Regarding abstraction of relational DBs, I think the issue I mentioned here is that SQL itself is a sufficient abstraction for accessing and manipulating RDBMSes, so theoretically if you know SQL you should be able to access all kinds of databases; only the method of connection and the internal storage should differ. The (relatively) smaller benefit of using an even higher level than SQL is offset by the time and effort cost of learning the framework.

  29. @Hari:

    In theory, you’re right. In reality — no, not reality, actuality — there are enough differences between RDBMSes that SQL written for MySQL won’t work for Oracle, for example. Also, what about databases like Google’s BigTable or other NoSQL databases? While you’re not as likely to be using an ORM there, in theory you could write a backend for those, especially since Google exposes BigTable APIs for those wanting to develop software on Google’s platform.

    As far as learning the framework, something like Storm doesn’t require much learning curve at all, and like I said, I think the time saved in avoiding mode switching outweighs any learning curve associated with the framework.

  30. But if we are talking of actualities, the fact is my little programs are unlikely to support so many database backends and my little ordinary SELECT, INSERT and DELETE queries would probably not require much tweaking at all at least on enough of those (SQL-based) backends. :)

    But I get your point on using abstraction (to an extent). To me that extent is defined by the practical cost-benefit in relation to the current problem, not future ones.

    Of course, I don’t claim to be an expert or even an intermediate programmer, and I am at heart an imperative (procedural) programmer who uses OOP concepts occasionally for practical benefits. Even my C++ project at school could have been written in C by replacing classes with structs and functions manipulating those structs. But since the course was C++ I was forced to use them.

    I never feel comfortable trying to solve future problems, and even less so future problems that are more abstract than the present one.

  31. Forgot to mention one more reason why I would prefer a lower-level but “standardized” approach over a less common higher-level abstraction: reading and understanding the lower-level code is easier when there is no reliance on a third-party library that hides a lot of stuff under the hood and forces others to read more documentation.

    So this is a particular situation in which I would avoid the higher level approach, but in general I am not too dogmatic about it. As I said before, it all depends on the cost-benefit analysis.

  32. “In practice, nightly builds don’t happen. Well, not on any project I know of, anyway.”

    Oh yeah, no important or prominent open source projects do nightly builds …

    http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/

    http://www.mozilla.org/developer/

    “…
    Nightly Builds

    Created most weekdays from the previous day’s work, these builds may or may not work. Use them to verify that a bug you’re tracking has been fixed.

    We make nightly builds for testing only. We write code and post the results right away so people like you can join our testing process and report bugs. You will find bugs, and lots of them. Mozilla might crash on startup. It might delete all your files and cause your computer to burst into flames. Don’t bother downloading nightly builds if you’re unwilling to put up with problems.”
