…but GPSD will survive!

Four days after I got the word that Berlios is dying, I have saved GPSD from being pulled under as it sinks. A couple of observations on the project migration follow.

First, for all those with an interest, the GPSD website is now at http://catb.org/gpsd/ (in my personal space at ibiblio); the repo and mailing lists are on Savannah. The old page on Berlios announces the move.

The web stuff is on ibiblio because Savannah’s static-web support sucks big time. To update your web pages, they want you to check changes into a per-project CVS repository. Then, some time later, a cron job will check out copies and put them in your visible webspace.

I know why they’re doing it; they fear security breaches if users can reach the host filesystem with scp or friends. But “clumsy” massively understates the awkwardness of this method; the cron-job delay is injury, and having to tangle with the ancient brain-damage that is CVS adds insult to it. Fortunately they let you point your “Home Page” link offsite as an alternative to dealing with this crap.

But we must be fair. Yes, Savannah is a data jail and its internals are an architectural disaster area, but as I’ve noted before essentially all existing forges have those problems. Within those limitations Savannah’s UI is quite nice, as I know from years of experience with the close variant at Gna! that’s used by Battle For Wesnoth.

The worst part of this migration was scraping my mailing-list state out of Berlios. Mailman has a decent UI and feature set, and it’s nice that you can mass-subscribe people by uploading a file of email addresses, but – dammit – you can’t get the list back out as a file! I had to grab each successive HTML page in the user-list display sequence and run them through a script to strip out the names.
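
For the curious, the scraping step boils down to something like the sketch below. This is a minimal illustration, not the actual script, and it assumes the roster pages expose each address in a mailto: link; real Mailman page layouts vary by version.

```python
# Hypothetical sketch: pull subscriber addresses out of saved Mailman
# roster pages. Assumes addresses appear in "mailto:" links; adjust
# the match for your Mailman version's page layout.
import re
import sys
from html.parser import HTMLParser

class MemberScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.addresses = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        m = re.match(r"mailto:([^?]+)", href)
        if m:
            self.addresses.add(m.group(1))

scraper = MemberScraper()
for path in sys.argv[1:]:  # one saved HTML file per roster page
    with open(path, encoding="utf-8", errors="replace") as f:
        scraper.feed(f.read())

for address in sorted(scraper.addresses):
    print(address)
```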

Ironically, the tool I wrote two years ago to extract project state from Berlios and other forges was no use, because the GPSD bugtracker is empty. Not that it would have been very useful anyway without an injector on the receiving end. That forgeplucker project stalled due to — wait for it — flakiness at the hosting site; I’m thinking I should go back there, hound the admins until they fix the problem, and pick up that project again.

The temptation to write a forge system myself is returning, too. They all suck so badly. It’s like no decent system architect has ever tackled the problem. I don’t just believe in a general way that I can do better, I know exactly how to go about it. Maybe one of these years…

58 comments

  1. I’ve never known you not to be a man who fixes the problem he sees within the culture he values. Why should now be an exception? Do it.

    1. >Does this mean you’re back to working on other projects, Eric?

      Still cleaning up stuff today. Likely to have some spare bandwidth tomorrow or Wednesday.

  2. If someone has published, or would publish, their ideas about an optimal forge design, perhaps that would help spur implementation. I for one may be headed in this direction. I would like to know what is on your mind, so perhaps I can incorporate those ideas if I do any work in this area.

    1. >There a reason you didn’t choose GitHub? I’m using it myself, I think it’s great.

      It’s proprietary, somebody else has ‘esr’ there, and it doesn’t have the entire range of services I wanted.

  3. Would this forge project be git only, or also support Hg? If it would be built in an agnostic way to support DVCS’s, I’d be interested in lending a hand.

    Knowing your tastes, I assume it would be Python-based?

    1. My design sketch would support any DVCS. Not sure about Subversion. Most likely implementation path would be via a Python ORM such as Django.

    1. >What do you think of the current Sourceforge (now based on the open source Allura)?

      Haven’t looked at it in a year. If they had added scriptability I’d have heard about it. Without scriptability, it’s just another broken, excessively web-centric forge.

      (By scriptability I mean that any operation you can do through the web interface is also available via a mail robot or RPC interface.)

      >How about Launchpad?

      I don’t like bzr, which shoots down launchpad.

  4. Well, I think GitHub has quite a few good ideas worth borrowing. Not all of them make sense if you’re not using a DVCS (pull requests, forks), not all of them are possible in VCSes other than Git (an orphan “gh-pages” branch to hold a project’s webpage), and not all of them would be easy to port (rendering (m)any lightweight markup languages – Markdown, reStructuredText, POD – as READMEs or in project webpages). Some of them are stupid for a DVCS, like the default “tree” view with information about the last commit that touched each file, as if Git were a file-based version-control system in the vein of RCS…

  5. In the not-so-distant future, I’m going to rework Savannah, getting as far away from PHP as I can… You’re more than welcome to help… I’d love for users to be more empowered…

  6. Which raises an interesting question: why is this sort of excessively web-centric design such a common failure mode today?

    You might expect people coming out of a UNIX tradition to know better… is the problem that the LAMP stack (it’s not always PHP or MySQL of course, but that basic structure seems very common) doesn’t provide a natural alternative to putting core application logic in the user interface?

    1. >You might expect people coming out of a UNIX tradition to know better…

      Yes, you might. I think – or hope, anyway – that this is a temporary episode of lunacy induced by the novelty of the Web.

  7. esr: would a REST interface (with output formatted as either XML or JSON) be enough, in your opinion?

    1. >esr: would a REST interface (with output formatted as either XML or JSON) be enough, in your opinion?

      Someday I will read a non-obfuscatory explanation of what REST is. Then perhaps I will be able to answer you.

    2. >esr: would a REST interface (with output formatted as either XML or JSON) be enough, in your opinion?

      Now that REST has been explained, I’ll say yes. My design sketch is REST-like with JSON.

  8. I suppose there needs to be an abstraction between the database model and the various DVCS algorithms, so something like an extensible generalized model of a record (scriptable set of field names) and plugin algorithms for query response. Then both DVCS and UI plugins (skins) can be built orthogonally. Need to think about how to model relational fields and indexes, and probably many other details I am not aware of. The essence of my thought is to attempt to drive towards the rudimentary model of the data, so it is abstracted away from everything else. Ideally it would employ a language agnostic plugin architecture. (It is 1am where I am, so 1 canonical day since my prior comment.)
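
    A rough Python sketch of that idea, with every name hypothetical: records are bags of named fields, and storage backends are plugins behind a common interface.

    ```python
    # Hypothetical sketch: generalized records with pluggable backends.
    from abc import ABC, abstractmethod
    from typing import Dict, Iterable

    class Record:
        def __init__(self, kind: str, fields: Dict[str, str]):
            self.kind = kind            # e.g. "bug", "message", "release"
            self.fields = dict(fields)  # free-form names, per a scriptable schema

    class Backend(ABC):
        @abstractmethod
        def put(self, record: Record) -> str:
            """Store a record; return its id."""

        @abstractmethod
        def query(self, **criteria) -> Iterable[Record]:
            """Yield records whose fields match the criteria."""

    class MemoryBackend(Backend):
        def __init__(self):
            self._store: Dict[str, Record] = {}

        def put(self, record):
            rid = str(len(self._store))
            self._store[rid] = record
            return rid

        def query(self, **criteria):
            for rec in self._store.values():
                if all(rec.fields.get(k) == v for k, v in criteria.items()):
                    yield rec
    ```

    A DVCS-backed store would just be another Backend; UI skins would consume the same query interface.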

  9. Mailman comes with a nice set of command-line tools, if you can get at them.
    In particular, list_members -f gives the members with their freeform names.
    Recent Mailman versions also seem to have grown a “number of list members per page” setting on the “general options” page.
    But now that extraction is done, that’s not useful any more….

  10. esr,

    Warning: I have read about REST interfaces. I have never used or implemented one. This means I am surely ignorant in important ways.

    As I understand it, the idea behind REST is to use HTTP methods for a computer to manipulate data on the server, thus:

    Data Manipulation: HTTP Method
    Create: POST
    Read: GET
    Update: PUT
    Delete: DELETE

    All the rest is details (lots of those) and obfuscation.

    If I understand it correctly, which I may not, REST interfaces have the wonderful property of allowing you to use a standard web browser, perhaps with an additional text editor and a Mark I eyeball (in my case, Mark II, since I wear glasses), to directly use the API.
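
    A minimal sketch of that mapping in Python (the URL and payloads are invented for illustration):

    ```python
    # Hypothetical CRUD-to-HTTP mapping against an imaginary /bugs resource.
    import json
    import urllib.request

    BASE = "http://forge.example.org/api/bugs"

    def call(method, url, payload=None):
        data = json.dumps(payload).encode() if payload is not None else None
        req = urllib.request.Request(
            url, data=data, method=method,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    call("POST", BASE, {"title": "GPS fix lost"})      # Create
    call("GET", BASE + "/42")                          # Read
    call("PUT", BASE + "/42", {"status": "resolved"})  # Update
    call("DELETE", BASE + "/42")                       # Delete
    ```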

    Yours,
    Tom

  11. Someday I will read a non-obfuscatory explanation of what REST is. Then perhaps I will be able to answer you.

    REST is essentially a descriptor for an RPC that’s cacheable and server-side stateless; NFS over UDP qualifies. These days, it’s mostly implemented over flat HTTP, though that’s not a requirement (SOAP’s also hanging on to a bit of the space).

    The “representational” part is supposed to indicate that the expression of the content of the resource being used is orthogonal to the resource’s identity—so, for example, a server might offer return values in XML, JSON, or Hessian for the same resource.

    The “state transfer” part is supposed to indicate that responses to requests provide sufficient knowledge for the client to manipulate the resource in question; i.e., the server generally shouldn’t return partial content.

  12. Tom, there are two key details that you’re leaving out, which are essential to achieving the property you’ve described. The first is that for an API to be properly RESTive, the body of the request needs to consist of nothing but the object being accessed or manipulated; no wrapping it in JSON or SOAP or anything else. The second is the lack of dependence on session state; every request is atomic and self-contained.

  13. Re: forge: More SCMs are starting to become little micro-forges; see Fossil and Veracity for examples. I like the idea, though it may be biting off more than it can chew in some other ideas that aren’t well established: p2p bug tracking and wikis, mainly. I’ve thought before that figuring out how to package up Bugs Everywhere, a web-UI addon to gitweb, and some kind of extensible wiki to go along with git would be a cool project.

    Re: REST: REST is based on a ten-year-old dissertation (http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm) that basically advocates restricting verbs to the four HTTP originally defined – GET/PUT/POST/DELETE – and thereby limiting web-interface design to defining/describing the nouns (URIs). Kind of the exact opposite of RPC: don’t define remote procedures; define remote objects you can call the four functions on. Also: use the stuff HTTP already gets you (HTTP Auth, content negotiation, language preferences, etc.), and then sit back and relax when your app continues to scale and work through layers of proxy and caching while others who didn’t heed these design principles fall over.

  14. To my understanding, REST neither requires HTTP either as a transport or as a vocabulary nor mandates a “simple” style of representational return, though it does require that the representation include enough information to be properly interpreted by the client (e.g., by including a MIME type). Most REST services have been built on HTTP simply because, as the author identified, REST is a meta-protocol to which HTTP conforms quite nicely, and HTTP is well-understood and eyeball-hackable.

  15. Off-topic: Can you make head or tails of what Nokia’s doing now?

    I’m guessing that Elop thought things over and decided he didn’t want to go down in history as the guy responsible for the total destruction of Nokia, so they’re going for a quick-and-dirty Android clone and Microsoft can go rot.

  16. REST is the web world’s answer to Agile: if you’re failing, it’s because you’re not doing enough of it. Yet apparently nobody has ever actually implemented it 100% correctly – whenever somebody claims compliance, experts come pouring out of the woodwork to point out the ways in which they didn’t – so it’s not clear where the evidence for the assertion is coming from.

  17. a Mark I eyeball (in my case, Mark II, since I wear glasses)

    I wear -8.00 contacts to correct myopia, which much of the time I partially counteract with +1.50 reading glasses due to presbyopia. Does that make mine Mark III?

    (There’s something about having one thing to partially undo what another does that offends my sensibilities, but I can’t figure out a better way to handle it.)

  18. Consider using qmail and ezmlm as your esrforge mailing-list software, Eric. It’s rock-solid, needs no hand work, and sets up an ezmlm list with a single command plus options. Qmail is eminently scriptable, with attention paid to permanent vs. temporary failures; ownership and permissions are always clear; and it’s free of bugs, stable, and public domain.

    1. >Consider using qmail and ezmlm as your esrforge mailing list software, Eric.

      The whole concept of mailing lists as separate from bugtracker items, forum posts, and RSS feed notifications is one of the things fundamentally broken about present forges.

      In my design sketch, there are only message streams. The messages may happen to be mailed to you if you have chosen mailing list presentation, but you might equally browse them through a web interface or see them as an RSS feed. Presentation is orthogonal to content; as a user, what you’d do is (a) subscribe to a tag, and (b) choose a presentation style (often implicitly by what tool you use to access the stream).

      As for content…if it’s tagged “bug” it’s a buglist item. If it’s tagged “developer-list” it’s probably a message in a development-list discussion. If it’s tagged “release” it’s probably a deliverable with a tarball as an attachment. The engine won’t care. All the engine cares about is tags and relationships like “is a response to”.
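
      A minimal sketch of what I mean, with hypothetical names; the point is that tags and a reply link are the only structure the engine knows about.

      ```python
      # Hypothetical sketch: an engine that stores tagged messages and
      # reply links, leaving presentation (mail, web, RSS) to consumers.
      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class Message:
          msg_id: str
          body: str
          tags: List[str]                    # e.g. ["bug"], ["release"]
          in_reply_to: Optional[str] = None  # the engine's only relationship
          attachments: List[bytes] = field(default_factory=list)

      class Stream:
          def __init__(self):
              self.messages: List[Message] = []

          def post(self, msg: Message):
              self.messages.append(msg)

          def subscribe(self, tag: str):
              # A mail gateway, web page, or RSS feed would all consume
              # this same iterator; presentation is orthogonal to content.
              return (m for m in self.messages if tag in m.tags)
      ```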

  19. I love it how a side comment in the blog post becomes the center of discussion in the comments thread :-)

  20. The whole concept of mailing lists as separate from bugtracker items, forum posts, and RSS feed notifications is one of the things fundamentally broken about present forges.

    1000 times yes!

    Now, how were you planning on implementing the backend storage mechanisms for this? Would it be a database? Or some series of files that gets checked into a .forge-data folder of the repo? Particularly with DVCSes, that would be kind of nice: you’d see all of those items as merely changesets, with everything locally browseable. Alternatively, it could be a separate sister repository to the main one?

    1. >Now, how were you planning on implementing the backend storage mechanisms for this?

      Look into Ka-Ping Yee’s Roundup system. That’s the right kind of architectural base.

  21. @Monster: Don’t they make contacts that change their curvature as you move from the center to the edges? If you could get some with -8.0 at center and -6.5 near the edge, you wouldn’t need the reading glasses, just look downwards like when using bifocals.

  22. I always liked Git’s content-addressable file store, with relations (directory structure) and metadata (commits, tags) as files.

    Could that not be the basis of a backend? But that is GitHub.

  23. I’m curious why you don’t consider GitHub. I see many others are mentioning git-based solutions. Good enough for Linus, good enough for me. ;)

  24. esr sez “Presentation is orthogonal to content”.

    As a web coder, that is Truth with a capital T. That it applies to information delivery is new to me, and glaringly obvious. So, why didn’t I make the connection myself?

    Thanks.

  25. @esr:

    In my design sketch, there are only message streams. The messages may happen to be mailed to you if you have chosen mailing list presentation, but you might equally browse them through a web interface or see them as an RSS feed. Presentation is orthogonal to content; as a user, what you’d do is (a) subscribe to a tag, and (b) choose a presentation style (often implicitly by what tool you use to access the stream).

    Or even as messages in a social network. Sounds like you’re doing on the backend what Eclipse Mylyn does at the UI level.

  26. Or even as messages in a social network. Sounds like you’re doing on the backend what Eclipse Mylyn does at the UI level.

    When you do too much of that kind of work on the front-end, you get huge beasts like Eclipse. It would be nice if our front-end tools could really be simple front-ends that required a lot less heavy lifting. If the metadata is just accessible in a multitude of easily-digestible forms, it becomes much easier to develop tools, rather than writing painful and complex parsers and the like to strip the data out of poorly formed presentation layers.

    One thing I’d LOVE to see on a forge is the integration of a modern HTML/JavaScript/CSS source editor. One with the proper credentials (or anonymous) could just log in to the forge, browse the file repo, select a file, make an edit, and “save” the edit and annotate the changes. Behind the scenes, this would be saved as a changeset, and sent to the maintainer for approval or rejection. This could even be integrated with some sort of automated build/test system. I think this could have the effect of having more time-constrained developers make quick bugfixes and contributions without having to have a fully set-up development environment, VCS of choice, source editor, etc. all installed and set up. Log in, push changes to maintainer, and leave. Real frictionless.

    If the HTML-based source editor was sufficiently advanced, and could replicate a good percentage of the functionality of modern IDEs, it could also replace them. I could see great value in this; every time I get a new workstation, I spend DAYS configuring my environment. Since I also do a few side jobs, often on the go, this would be nice as well, as I wouldn’t have to bring my laptop down to my parents’ place to do work.

  27. These are very rough quick ideas that I would need to spend much more time investigating.

    1. Each message/change could have a URI, with optional parameters to select the rendering context. This inverts the usual arrangement, where a URI points to a specific page (buglist, mailing list, wiki, source code) and an optional parameter selects the message. That way a message’s reference is orthogonal to its rendering context. (See the sketch after this list.)

    2. To track threads, messages need an “in reply to” field, which could be a list of URIs.

    3. All message actions in the system should function like a DVCS, so the system is decentralized. Users should be able to download a local copy and edit it locally.
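
    A toy sketch of point 1 (all names invented): the message id is the resource, and rendering context rides along as an optional parameter.

    ```python
    # Hypothetical: one canonical URI per message; the rendering
    # context is a query parameter, not part of the message's identity.
    from urllib.parse import urlencode

    def message_uri(msg_id, context=None):
        base = f"http://forge.example.org/msg/{msg_id}"
        return base + ("?" + urlencode({"view": context}) if context else "")

    print(message_uri("a1b2c3"))             # canonical reference
    print(message_uri("a1b2c3", "buglist"))  # same message, bug-tracker skin
    print(message_uri("a1b2c3", "mail"))     # same message, mail presentation
    ```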

  28. For a forge, you would need a back-end, middleware and front-ends (this is called boxology).

    I would start with the back-end. Ideally, the forge should be distributed; that would instantly solve the migration part. Every object in the forge should be globally unique, and everything should be an object. So this brings me to content-addressable files with content snapshots (commits), like the git back-end.

    The back-end would implement post, get, and put, but not delete! The middleware implements the creation and unpacking of the content-addressable files, plus commit, checkin/checkout, etc. The front end would then add semantics to the different objects: tags, textual metadata, and so on.

    The one problem with this is that it cannot handle the destruction of objects; it will only grow. This actually works, as GitHub proves.

    So why is this not a good forge structure?
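
    As a toy sketch of such a back-end (hypothetical, in Python for brevity): objects are keyed by the hash of their content, and delete is deliberately absent.

    ```python
    # Hypothetical content-addressable store in the spirit of git's
    # object database: put and get, but no delete.
    import hashlib

    class ObjectStore:
        def __init__(self):
            self._objects = {}

        def put(self, content: bytes) -> str:
            oid = hashlib.sha1(content).hexdigest()  # id derives from content
            self._objects.setdefault(oid, content)   # idempotent by construction
            return oid

        def get(self, oid: str) -> bytes:
            return self._objects[oid]

    store = ObjectStore()
    oid = store.put(b"commit: fix NMEA parser")
    assert store.get(oid) == b"commit: fix NMEA parser"
    ```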

  29. @Winter
    “The one problem with this is that it cannot handle the destruction of objects. It will only grow.”

    I’m not sure that is a problem. The entire point of a VCS is that you can go back to any arbitrary point in the past and recover the exact state at that time, including code snippets or even entire source files that have been since “deleted”. An object may no longer be referenced in the current snapshot, but it remains as a historical fact. Unless you go back and edit out history, that historical record can do nothing but grow.

    Absent something really extraordinary (e.g. court order to remove all references to certain content that was illegally contributed), there is no reason to edit out history.

  30. @The Monster
    “I’m not sure that is a problem. The entire point of a VCS is that you can go back to any arbitrary point in the past and recover the exact state at that time, including code snippets or even entire source files that have been since “deleted”.”

    Neither am I sure. It’s just that things that grow without bounds make me nervous; merely “adding” information tends to lead to a low signal-to-noise ratio.

    Btw, you can delete objects from git, but only objects that have no descendants. You could have a policy of moving things not referenced for X years, and having no descendants, to off-site storage. But with things like btrfs, such things can be handled by the underlying file-system layer. You could also split the forge into non-cross-referenced objects that could be moved to separate forges.

    So a forge could be built on top of btrfs with git. That would get you a distributed forge with history and a global file system. But that suggestion comes from ignorance, as I have no idea what else is available.

  31. @Winter, I am also ignorant of the details of DVCS, but it seems logical to me that one wouldn’t need to access the entire changelog history in order to create distributed changes on a chosen snapshot in the history. In other words, there may not even be a master copy of all the histories. This is more like real life, where individuals have thoughts and actions that the collective is not aware of. Real life couldn’t possibly function any other way. This is another example of my assertion from the theoretical physics discussion, that noise is signal to some observer and vice versa. And that no frame-of-reference is absolute.

  32. @Aaron, I don’t get it. GitHub does implement this, but I was left completely cold when they added it. This would seem to involve a workflow of
    1) Write code
    2) Commit
    3) Checkout
    4) Test
    The whole idea of committing *before* you test seems completely backwards.

  33. @Winter
    “Neither am I sure. Just that things that grow without bounds make me nervous. ”

    Me too, but that’s because one of the things I’ve done for a living is help people figure out why they’re running out of space on their servers’ filesystems, or are just accumulating so much crap it won’t fit on one piece of backup media. I rail against certain parts of my company’s software that create log files tracking every bit of <REDACTED> done, with no corresponding system in place to dispense with those that are so old they are of no value to anyone for any reason. Alternatively, my co-workers and I are that system!

    But we’re not talking about the kind of constant deluge of logs I fight at work. All in all, I tend to think that the value of being able to recover the exact state of any file at any checkpoint is worth the disk space used. I don’t know what’s going on under the hood, but if I were doing a VCS, I’d fully store only the current version of each source file and each major version, and all other versions as diffs from the fully-stored files. (Ideally you’d have diffs in both directions, so that you can recover any given version from either of the major versions that bracket it.) Then each major version and its diffs could be archived as you suggest, once the maintainers are comfortable with it.
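
    The recover-from-deltas idea can be illustrated with Python’s difflib, though its ndiff format keeps both sides of the change and so is nowhere near as compact as the binary deltas a real VCS would use:

    ```python
    # Illustration only: difflib's ndiff/restore pair can regenerate
    # either version of a file from a stored delta.
    import difflib

    new = ["int main() {", "  return 0;", "}"]
    old = ["int main() {", "  return 1;", "}"]

    delta = list(difflib.ndiff(new, old))
    assert list(difflib.restore(delta, 1)) == new  # side 1: the newer text
    assert list(difflib.restore(delta, 2)) == old  # side 2: the older text
    ```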

  34. @The Monster
    “I’d fully store only the current version of each source file and each major version, but all other versions as diffs from the fully-stored files. ”

    That was how CVS et al. worked. Linus shared your feelings, didn’t care, and stored EVERYthing. However, they then built some ZIP magic that packed the whole store into diffs of the old files. So in the end, Git takes even less room than Subversion.

  35. @Winter, okay, but until I learn otherwise, my logic says it isn’t necessary to require the change history in order to forward-difference from a snapshot of the undifferenced (flattened) content. I must be missing something, because I read the following on Wikipedia, yet I still think it could be a feature of a DVCS to download a compressed snapshot instead of the change history:

    As a disadvantage of DVCS, one could note that initial cloning of a repository is slower compared to centralized checkout, because all branches and revision history are copied.

    Also, by definition of “distributed”, not all of the history of changes in the world is required to be in one repository.

  36. Curious that nobody mentioned Gitorious or Indefero as Open Source options. Gitorious is gratis AND libre while Indefero is libre but not gratis.

  37. Indefero

    I’ve been experimenting with Indefero for a while. It’s a bit difficult to get working, but it does work well enough. The wiki markup is Markdown, so it is portable in that pandoc will transform it into other formats. Alas, there is no “project export” function.

    http://projects.ceondo.com/p/indefero/

    I’ve also looked at Fossil-SCM, which is in the “distributed forge” category: you have your own instance of the issue tracker and wiki that you sync with others or with a central repo. It exports to git.

    http://fossil-scm.org/index.html/doc/trunk/www/index.wiki

    If you are contemplating doing forge from scratch where a goal is avoidance of “data jail”, then let’s talk.
