Three Systemic Problems with Open-Source Hosting Sites

I’ve been off the air for several days due to a hosting-site failure last Friday. After several months of deteriorating performance and various services being sporadically inaccessible, Berlios’s webspace went 404 and the Subversion repositories stopped working…taking my GPSD project down with them. I had every reason to fear this might be permanent, and spent the next two days reconstructing as much as possible of the project state so we could migrate to another site.

Berlios came back up on Monday. But I don’t trust it will stay that way. This weekend rubbed my nose in some systemic vulnerabilities in the open-source development infrastructure that we need to fix. Rant follows.

1. Hosting Sites Are Data Jails

The worst problem with almost all current hosting sites is that they’re data jails. You can put data (the source code revision history, mailing list address lists, bug reports) into them, but getting a complete snapshot of that data back out often ranges from painful to impossible.

Why is this an issue? Very practically, because hosting sites, even well-established ones, sometimes go off the air. Any prudent project lead should be thinking about how to recover if that happens, and how to take periodic backups of critical project data. But more generally, it’s your data. You should own it. If you can’t push a button and get a snapshot of your project state out of the site whenever you want, you don’t own it.

When berlios.de crashed on me, I was lucky; I had been preparing to migrate GPSD off the site due to deteriorating performance, and I had a Subversion dump file that was less than two weeks old. I was able to bring that up to date by translating commits from an unofficial git mirror. I was doubly lucky in that the Mailman administrative pages remained accessible even when the project webspace and repositories had been 404 for two days.

But actually retrieving my mailing-list data was a hideous process that involved screen-scraping HTML by hand, and I had no hope at all of retrieving the bug tracker state.

This anecdote illustrates the most serious manifestations of the data-jail problem. Third-generation version-control systems (hg, git, bzr, etc.) pretty much solve it for code repositories; every checkout is a mirror. But most projects have two other critical data collections: their mailing-list state and their bug-tracker state. And on all sites I know of in late 2009, those are seriously jailed.

This is a problem that goes straight to the design of the software subsystems used by these sites. Some are generic: of these, the most frequent single offender is 2.x versions of Mailman, the most widely used mailing-list manager (the Mailman maintainers claim to have fixed this in 3.0). Bug-trackers tend to be tightly tied to individual hosting engines, and are even harder to dig data out of. They also illustrate the second major failing…

2. Hosting Sites Have Poor Scriptability

All hosting-site suites are Web-centric, operated primarily or entirely through a browser. This solves many problems, but creates a few as well. One is that browsers, like GUIs in general, are badly suited for stereotyped and repetitive tasks. Another is that they have poor accessibility for people with visual or motor-control issues.

Here again the issues with version-control systems are relatively minor, because all those in common use are driven by CLI tools that are easy to script. Mailing lists don’t present serious issues either; the only operation on them that normally goes through the web is moderation of submissions, and the demands of that operation are fairly well matched to a browser-style interface.

But there are other common operations that need to be scriptable and are generally not. A representative one is getting a list of open bug reports to work on later – say, somewhere that your net connection is spotty. There is no reason this couldn’t be handled by an email autoresponder robot connected to the bug-tracker database, a feature which would also improve tracker accessibility for the blind.
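
To make this concrete, here is a minimal sketch of such a responder in Python, assuming a hypothetical SQLite bug table and a mail alias that pipes each incoming message to the script; a real one would hook into the tracker’s actual database and reply through the site’s mail system:

    #!/usr/bin/env python
    # bug-bot: a hypothetical autoresponder that mails back the list
    # of open bugs on request. Assumes a tracker table of the form
    # bugs(id, status, summary) and a mail alias piping messages here.
    import email
    import sqlite3
    import sys

    msg = email.message_from_file(sys.stdin)
    if msg.get("Subject", "").strip().lower() == "list open bugs":
        db = sqlite3.connect("/var/forge/bugs.db")  # hypothetical path
        rows = db.execute("SELECT id, summary FROM bugs WHERE status = 'open'")
        report = "\n".join("#%s: %s" % row for row in rows)
        # A real responder would hand the reply to sendmail; printing
        # it is enough to show the shape of the transaction.
        print("To: %s" % msg.get("From", "unknown"))
        print("Subject: Re: list open bugs\n")
        print(report or "No open bugs.")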

Another is shipping a software release. This normally consists of uploading product files in various shipping formats (source tarballs, debs, RPMs, and the like) to the hosting site’s download area, and associating with them a bunch of metadata: a short-form release announcement, file-type or architecture tags for the binary packages, MD5 checksums, and so forth.

With the exception of the release announcement, there is really no reason a human being should be sitting at a web browser to type in this sort of thing. In fact, there is an excellent reason a human shouldn’t do it by hand – it’s exactly the sort of fiddly, tedious, semi-mechanical chore at which humans tend to make (and then miss) finger errors because the brain is not fully engaged.

It would be better for the hosting system’s release-registration logic to accept a job card via email, said job card including all the release metadata and URLs pointing to the product files it should gather for the release. Each job card could be generated by a project-specific script that would take the parts that really need human attention from a human and mechanically fill in the rest. This would both minimize human error and improve accessibility.
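
A job card might look something like the output of this sketch; the format and field names are purely illustrative, since no hosting site I know of accepts such a thing today:

    #!/usr/bin/env python
    # shipper: generate a hypothetical release job card. The human
    # supplies the announcement text; the rest is filled in mechanically.
    import hashlib
    import json
    import sys
    import time

    def md5sum(path):
        # Checksum each product file so the forge can verify its fetch.
        with open(path, "rb") as fp:
            return hashlib.md5(fp.read()).hexdigest()

    def job_card(project, version, announcement, files):
        return json.dumps({
            "project": project,
            "version": version,
            "date": time.strftime("%Y-%m-%d"),
            "announcement": announcement,  # the one part needing a human
            "files": [{"url": "http://dev.example.org/dist/" + name,
                       "md5": md5sum(name)} for name in files],
        }, indent=2)

    if __name__ == "__main__":
        # e.g.: shipper.py gpsd 2.40 "Bug-fix release." gpsd-2.40.tar.gz
        print(job_card(sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4:]))

Mail the output to the site’s release robot and no human ever retypes a checksum.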

In general, a good question for hosting-system designers to be asking themselves about each operation of the system would be “Do I provide a way to remote-script this through an email robot or XML-RPC interface or the like?” When the answer is “no”, that’s a bug that needs to be fixed.
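
For instance, fetching the open-bug list ought to be no harder than this; the endpoint and method names here are invented for illustration, since the point is the shape of the interface rather than any real site’s API:

    import xmlrpc.client  # xmlrpclib on older Pythons

    # Hypothetical forge endpoint; no hosting site is known to export
    # exactly this interface.
    forge = xmlrpc.client.ServerProxy("https://forge.example.org/xmlrpc")
    for bug in forge.tracker.open_bugs("gpsd"):
        print(bug["id"], bug["summary"])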

3. Hosting Sites Have Inadequate Support for Immigration

The first (and in my opinion, most serious) failing I identified is poor support for snapshotting and, if necessary, out-migrating a project. Most hosting systems do almost as badly at in-migrating a project that already has a history, as opposed to one started from nothing on the site.

Even uploading an existing source code repository at the start of a project (as opposed to starting with an empty one) is only spottily supported. Just try, for example, to find a site that will let you upload a mailbox full of archives from a pre-existing development list in order to re-home it at the project’s new development site.

This is the flip side of the data-jail problem. It has some of the same causes, and many of the same consequences too. Because it makes re-homing projects unnecessarily difficult, it means that project leads cannot respond effectively to hosting-site problems. This creates a systemic brittleness in our development infrastructure.

Addressing the Problems

I believe in underpromising and overperforming, so I’m not going to talk up any grand plans to fix this. Yet. But I will say that I intend to do more than talk. And two days ago the project leaders of Savane, the hosting system that powers gna.org and Savannah, read this and invited me to join their project team.

Comments

  1. At risk of hopelessly broadening the scope, most social-networking sites, or even more generically, hosted applications have these issues. With social networking sites some of it may be intended lock-in, but…

  2. Mike Earl Says:
    > At risk of hopelessly broadening the scope,

    Jumping quickly on Mike’s bandwagon, ESR’s “data jail” expression (which is both new to me and very descriptive) reminds me of what I think is a very serious problem that is just not setting off enough alarm bells, namely the growing influence of Apple. Many on this blog, and in the places guys like ESR frequent, are constantly railing against the evils of Microsoft, and MS has plenty of shortcomings. But I am getting to the point where I think we need to start rooting for Windows Mobile 6.5. It is undoubtedly a terrible operating system, but it has one thing that is vital: the right to program it.

    What I mean by that is that I can create a program for WM 6.5 and sell it or give it away to anyone who also has WM 6.5. It is deeply disturbing to me that Apple, Google and RIM have locked things up so tight that you need their permission to install software on your own phone. Permission that, by all indications, is both capricious at times and deeply self-serving at others.

    Those of you who hate Bill Gates and Steve Ballmer need to take a serious look at Steve Jobs. He is far more aggressive in controlling his platform and users than Microsoft ever was. Can you even imagine being in a situation where you needed Bill Gates’s permission to install software (including, for example, Linux) on your PC? Bill Gates was bad. Steve Jobs is much, MUCH worse.

    There is no doubt in my mind that a lot of personal computing is moving to the phone, and probably ultimately to the cloud. Google is a little better, but they have shown either ambivalence or incompetence in deploying their platforms and app store. What sort of world do we live in when Microsoft makes the most open and free operating system for a platform?

    What price, I would ask, for pretty icons?

  3. No, let’s not go down that rathole. Yes, the data-jail effect on social networking sites is bad, and the iPhone is worse. But I can’t solve those problems, so there is little point in trying to beat them to death in this comment thread. Don’t go there. Let’s stick to the open-source infrastructure issue, a real problem that we actually have some chance to address constructively.

  4. Jessica,

    A tightly controlled platform worked out enormously well for game consoles. Why not the iPhone?

    I think that this kind of thing is going to become more commonplace in the future: controlled platforms where the vendor serves a gatekeeper role of sorts. People are learning the harsh lesson that complete openness leads to secondary problems like profound lack of integration (Linux) or malware (Windows). In a more interconnected world, closed platforms will prevail. Sad to say, but I think the sort of freedoms the open source movement advocates are something 90% or more of end users neither want nor are prepared to handle.

    And that’s leaving aside the fact that Macintosh, and not Linux, is becoming the preferred development and personal-use platform of even open-source hackers…

  5. I’ve heard the term “Hotel California” used to describe certain “cloud” services like Gmail :)

    I think GitHub has an interesting approach. Almost by definition, your Git repo on GitHub is a copy of some other repo you pushed from, likely on your home box. Part of the problem comes from a cultural institution that stems from the “free hosting” of the 1990s, where access to your own stuff was limited in some way. That caused us to develop bad cultural habits. In an ideal world, we’d be able to ssh into our hosting accounts, and thereby scp or rsync down our entire repositories, data, etc. with a single command line; contrariwise, to maintain our sites we’d simply make local changes and rsync them up. I don’t see why, e.g., email or bug reports would be any different: you have the email archives or the SQLite database of bug reports in your home directory on the remote server, and both they and the scripts which prettyprint them for browsers are captured in the global snarfdown.

    But again, bad habits proliferated, and we began to see our sites as something to be maintained remotely rather than locally.
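
    A minimal sketch of that single-command snarfdown, assuming a purely hypothetical host that really did keep everything under one tree:

        # snapshot.py: mirror the whole hosted project state locally.
        # The account and layout are hypothetical; the point is that
        # one rsync over ssh should be able to capture everything.
        import subprocess

        subprocess.check_call([
            "rsync", "-az", "--delete",
            "user@hosting.example.com:project/",  # repos, archives, bug db
            "local-mirror/",
        ])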

  6. Jeff Read Says:
    > A tightly controlled platform worked out enormously well for game consoles. Why not the iPhone?

    Because you don’t store your life on your PS2. However, I will certainly respect Eric’s wishes and leave this thread alone.

  7. I’m both quite inexperienced in such issues and a bit drunk (it’s late night here), so forgive me if I’m saying something stupid, but isn’t the deeper problem behind this whole problem set that hosting sites are forgetting the basic principles of the Unix philosophy?

    And while I know hackers tend to be sceptical about the currently trendy stuff, if we are looking for a way to implement the Unix philosophy on the Internet and solve the problems you mentioned, isn’t the current trend of SOA, Service-Oriented Architecture, a good way of thinking about it? That every major GUI function of the hosting site should be exposed as (not necessarily, since there are other options, but for example as) an XML-RPC function – not hand-coded, but using a framework that automatically makes it so – and, in keeping with Open Source traditions, they could leave it to other people to write utilities for migrating in, migrating out, etc.?

    1. >I’m both quite inexperienced in such issues and a bit drunk (it’s late night here), so forgive me if I’m saying something stupid, but isn’t the deeper problem behind this whole problem set that hosting sites are forgetting the basic principles of the Unix philosophy?

      That’s one way to describe the design failure, yes. And it did occur to me when I was writing the rant, I just decided not to take the rhetoric in that direction.

    2. SOA is a good idea as far as it goes, but with that approach of automatically exposing service interfaces on a per-page basis you risk ending up with unwanted coupling between the flow of your web GUI and the shape of your service API. IMO.

  8. @Shenpen

    Absolutely — open web-service interfaces solve most of the problems Eric is talking about.

    I’ve actually been working with XML-RPC over HTTP a bit — a much better way to do stuff. Makes life a breeze, especially since it’s plain text going over the wire that I can see in Firebug. A big step up from the annoying AMF-based remoting I was doing prior. Better still, it is easily parsed and consumed by ANYTHING, from a fancy Flash RIA to grep.

    ESR: Have you looked at Google for mailing-list serving? They focus on making it easy to migrate away from their services. Bug tracking… I guess run Trac or something on a generic web hosting company that you contract cheaply, and make regular backups…

  9. So essentially, consider the big ball of wax that is the aggregate project files: repository snapshots and histories, mailing lists (archives, and published HTML archives), wiki content, static content such as the general web site, bug trackers and their histories, and some of the infrastructure that runs them (mediawiki, trac, bugzilla, svn, git) with its site-specific config files…

    Whoa, hold on there, that is a big ball of wax. And yes, essentially, migrating from one hosted solution to another involves careful and skilled manual labor.

    So you want to take this big ball of wax and allow it to be moved much more seamlessly from one host to another.

    Oh, and while it is hosted, you want to be able to script it, assuming something like perl, python, ruby, or even the usual unix sed, awk, and shell, so you can do things like autobackup and auto-notifications, get data in and out, and regenerate static content.

    The way I would do it is to create something like a virtual machine image running debian with the stuff pre-installed, and moving from one host to another would simply entail: load this image on your VPS, and a script would then check and log into the dns service and fix all the entries so everything would point correctly to the new IP address.

    Thoughts?

    1. >The way I would do it is to create something like a virtual machine image running debian with the stuff pre-installed, and moving from one host to another would simply entail: load this image on your VPS, and a script would then check and log into the dns service and fix all the entries so everything would point correctly to the new IP address.

      I’m looking at an approach on a completely different level – essentially, an object-broker daemon that speaks JSON to clients and has back ends to manipulate the host SQL database, a Mailman instance, and so forth. Client and daemon exchange JSON objects; some are interpreted as reports, others as requests to edit state.

      >By the way, I know you wrote an XML-RPC tool to get bug info into Bugzilla. Is there, to your knowledge, an XML-RPC interface for getting the stuff out in a sane way?

      Not to my knowledge.
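
      To give a feel for the broker idea, one request/response exchange might look like this on the wire; the operation names and the line-oriented framing are illustrative only, nothing here is a settled protocol:

          import json
          import socket

          # Hypothetical framing: one JSON object per line, over a TCP
          # connection to the broker daemon.
          request = {"op": "list-subscribers", "list": "gpsd-dev"}
          sock = socket.create_connection(("forge.example.org", 8800))
          sock.sendall((json.dumps(request) + "\n").encode("ascii"))
          reply = json.loads(sock.makefile().readline())
          if reply.get("ok"):
              for address in reply["subscribers"]:
                  print(address)

      The attraction of a scheme like this is that both sides stay trivially scriptable, which circles back to problem 2.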

  10. Given the state of virtualization technology, a hosting company ought to be able to offer its customers virtual servers to which they can connect with ssh, rsync, or whatever, and on which they can run whatever scripts suit their needs. If they do a lousy job and hose their VM, it shouldn’t affect other customers.

    Another advantage of virtualization is the ability to offer high availability even when individual physical servers may fail. If everything is in the SAN, the loss of a physical server or three from the farm shouldn’t even be noticeable to outsiders. And customers whose needs must be scaled up can be given bigger time slices and more bandwidth with no changes on the customers’ side.

    So hosting companies should be doing it anyway.

  11. Apologies in advance but I wonder what RMS would think about this thread?

    What are the chances that the technical solutions being discussed here might be developed sufficiently to respond to the RMS concerns about the cloud?

  12. The suggestion that project hosting sites offer full-blown VPS misses the point of project hosting: Hackers don’t like to be admins; it’s boring.

  13. A project called Bugs Everywhere, http://bugseverywhere.org/ , could solve (or work around) part of the problem. It incorporates the bug tracker in the vcs (multiple vcs supported) so pulling your code also gets you the bug history and outstanding bug list.

  14. Given the state of virtualization technology, and the fact that there is a rapid commoditization of computer resources driven by multiple economic factors, you are receiving exactly the quality you are paying for. The issues here are not about technology, but governance and trust. Your data, your business, and other aspects of the lives we share in this electronic medium are now surrendered to others. Many facets of our lives are held digitally, and many times they are held hostage by the partners we select to proxy our relationships with others. You are effectively “relying on the kindness of strangers” to promote and help manage your relationships. We rely on them to be trusted custodians of the data that represents parts of our lives. When something goes wrong, very wrong, most people feel violated. You should feel violated; they are violating your trust. How did we put ourselves in this position?

    We blindly relinquished our sense of responsibility to others, whether it be teachers, doctors, day care centers, lawyers, clergy, political representatives and appointees, other civil servants, or salesmen. We find ourselves repeating these same patterns with hosting providers and cool-looking web presences. Shouldn’t we stop handing over the value of our relationships, and many times the ability to earn a living, to the untrustworthy? How do we ensure the once-trustworthy are still trusted? What should we look for? We entrust our partner/provider-proxy with many important aspects of our lives; why isn’t there transparency to ensure what we share is cared for the way we expect it should be?

    More times than not, governance is not about making data available, but about denying access to it. It is often said that “possession is 9/10s of the law”; the same is true for denying access to data. If you have a website that you are conducting business on, many providers will permit you to upload all the data you would like for free… Getting that data back, however, is another issue. Depending on the provider, you may have to run a gauntlet of unclear fees and other costs. Cloud computing, the panacea of low-cost compute, is riddled with these practices. It’s like a child’s amusement park where you pay on the way out; they don’t tell you about the fees, and the fees can change while you’re in there. If you don’t pay, they keep your children.

    Does it really matter whether the tech is CORBA, RPC, XML, WebDAV, or the next grand poobah of tech? All the protocols and technology in the world won’t help you if the provider denies you access to your data. It doesn’t matter whether it’s because they can’t manage the scale of their business or because they are holding your data for ransom; you still can’t get your data, which can stall portions of your life or income.

  15. Some hosting providers actually do make a commitment to provide you with all your project data upon request; I think the issue in many cases is not malicious intent, but simply one of resourcing, because designing and maintaining a full import-and-export system is a lot of work, and existing users often put more energy into asking for features and bugfixes than into migration insurance.

    1. >Why JSON, as opposed to XML? Unless I’m using a dynamic language that can handle the stuff natively (JavaScript, ActionScript) it’s really not that convenient.

      I’ve used both and found JSON pleasantly lightweight and easy to work with, even in C. I can’t say the same thing about XML.

  16. @esr:

    Why JSON, as opposed to XML? Unless I’m using a dynamic language that can handle the stuff natively (JavaScript, ActionScript) it’s really not that convenient.

    I agree that exposing interfaces on a per-page basis is silly; one has to come up with an interface on the service level, and have the pages or other UI consume said services.

  17. I wonder if distributed bug/issue trackers, such as the mentioned Bugs Everywhere, or ditz, TicGit, CIL, Gerrit Code Review (I’m sorry for the pro-Git bias here), which store bug/issue info inside a distributed version control repository, and wiki/blog/CMS engines such as ikiwiki, wikiri, WiGit, Nuki, Tekuti, Chuyen etc. (again I am sorry for the pro-Git bias in this list), which store or can store revision information in a distributed version control system, can help here.

    For example, with (proprietary and closed-source) GitHub you have (at least) two out of three (four): code is in a distributed version control system (Git), and the GitHub Pages solution uses either specially named repositories or specially named branches for storage of web pages in a distributed version control system repository. I don’t know about the built-in issue tracker, as I have not used it; I also think that wiki pages are not stored in a git repository.

  18. What would currently be the best platform?
    What would be the platform most willing to implement the changes you (esr) advocate?

  19. I have a question for this very knowledgeable group. Why wouldn’t you just keep an image of your system and, if your current provider becomes unreliable, just migrate the entire image, OS and all?

  20. Eric,
    You might be interested to read the blog article “Time and space tradeoffs in version control” on Eric Sink’s blog (ericsink dot com — sorry, I could never work out how to embed links in WordPress comments). It discusses the effectiveness of different methods of storing multiple revisions of the same source code in minimal space, and with respect to retrieval efficiency. Eric runs a company selling a source code control system, so his insight is interesting.

    Besides the versioned source code, surely all you need is an SQL data dump, including the database schema, and you are done?

    Add a cron job and an ftp server and no administration is required.

  21. JessicaBoxer: a very serious problem that is just not setting off enough alarm bells, namely the growing influence of Apple

    It’s probably because there aren’t as many Apple users, but people like Mark Pilgrim have written about the data format problem for Apple’s closed formats.

    The thing is, there really aren’t that many (for instance) bug-tracking systems out there; the total amount of code change to provide import/export functionality is relatively small. Launchpad contains pointers to all manner of trackers, but the vast majority of them are Trac and Bugzilla. (Debian’s BTS and Sourceforge have only one instance each, but each instance is very large.) The problem isn’t so much that hosts don’t provide these services; it’s that the trackers don’t. Extending Savane in this way is a very good start. Have you considered looking at Launchpad’s “Malone” tool? It does a lot of automated bug-status fetching; it’s not a full importer, but it looks like they’ve already done a considerable portion of the work you’re facing.

  22. I want to mirror from a number of different repositories, but I am not the owner of the code, or a developer. I read your post with interest because I can say that trying to script regular updates from the popular repos has been unworkable.

    For instance, for those sites only giving access via HTTP: HTTP doesn’t give a directory listing or date, and version numbers cannot be parsed or relied on. Further, download pages can’t be reliably scraped (as though that were a solution) because they change often. Some access of this kind, such as being able to rsync the latest bundle, would be a big help.

  23. I’ve used both and found JSON pleasantly lightweight and easy to work with, even in C. I can’t say the same thing about XML.

    Indeed.

    One of Apple’s more boneheaded moves — a display of sheer fucktardedness rooted in the software fashion industry — was deprecating the NeXTSTEP proplist format (which resembles JSON in all but superficial syntax details) in favor of an XML format for Mac OS X.

  24. “The Unix Philosophy” by Mike Gancarz, Chapter 4 “The Portability Priority”.
    “Choose Portability over Efficiency”
    “Store Numerical Data in Flat ASCII Files”

  25. JSON is a better match for this task. Consider a bug report. A minimal common set of attributes can be specified, but beyond that there are a lot of differences between bug reports. An easy invariant is that exporting the project, then re-importing it into the same project framework/service, should be a no-op; so each service’s exact definition of a bug report is going to be fairly different at the protocol level.
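
    To illustrate the invariant (the record shape below is invented, not any tracker’s actual schema):

        import json

        # Invented record shape: a minimal common core plus an "extra"
        # bag holding whatever service-specific attributes exist.
        bug = {
            "id": 42,
            "status": "open",
            "summary": "loses GPS fix at 57600 baud",
            "extra": {"tracker": "example-forge", "severity": "P2"},
        }
        # The export/import invariant: a round trip must be a no-op.
        assert json.loads(json.dumps(bug)) == bug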

    XML has all the necessary support to handle this, but it’s locked up in the XML namespace mechanism. The XML namespace mechanism is, in my experience, very powerful, very well thought-out, very expressive, and pretty much botched up by every single “in-the-wild” programmer who ever touches it, which unfortunately really kills it for interop. You’re going to end up with XML soup on this project no matter what you do.

    You might as well cut straight to JSON. You’ll still have “JSON soup”, but the odds of a “normal” programmer being able to deal with it are much higher. Much, much higher. The simpler serialization format will help guide programmers down the same paths, whereas XML offers a bit too much power and freedom here, and my own experience shows few people know how to use it safely.

    XML would probably be a better choice for enterprise software where half your design involves defending yourself from the hordes of mediocre programmers working on the problem, where the validation tools XML has would be helpful (for suitable definitions of “helpful”), but I think it’s a poor match for open source.

    (The only caveat I have for JSON is that you need to be strict about it; loosey-goosey parsers are a terrible idea. The spec is simple and clear; do not violate it. This is not targeted at esr in particular, just a general observation.)

  26. This is why I’ve never hosted at a site that didn’t give me ssh access and let me see the raw data files. If you have access to these, replicating your site at a similar host is pretty simple. I opt for VPSs now, though.

  27. There’s one other tricky bit of structured data that I haven’t seen discussed yet: developer accounts (identities, rights). Who has the rights to check in code, or update/delete/add bugs, or make an official release?

    Even once you define a format for that data, you have a bit of a problem in that some of the identity information possibly shouldn’t be available even to project administrators. Maybe this should tie into something like OpenID, with some kind of project-specific database handling authorizations separate from the actual identity verification?

  28. The web-services solutions people are talking up here are fine, but for bug-tracking data I’d prefer to nuke the problem from orbit and solve it the same way that distributed version control systems solve it for source repositories: Allow me to simply have a full snapshot and history on my machine at all times. Something like JIRA backed by something like git would be a truly excellent piece of technology for those of us who do more coding than we like in remote and data-starved places (airplanes, vacations, etc.).

  29. ESR wrote:
    > No, let’s not go down that rathole. Yes, the data-jail effect on social networking sites is bad,
    > and the iPhone is worse. But I can’t solve those problems…

    Respecting your desire not to discuss this in depth here, I just want to refute the notion that we can’t solve those problems in 2010. The solution will revolve around providing a more popular set of services that are open and more generally reusable, giving rise to a proliferation of integration momentum. I also want to underpromise and overperform, so I will just say that I am in Asia now and deeply focused on this work…

  30. What’s funny is that old projects like Savane and Gna appeared precisely when similar hazards hit the FSF’s Savannah platform (bits of this history are in http://en.wikipedia.org/wiki/Gna.org).

    Then, six years later, esr, you have the same concern… welcome aboard.

    I should also point out, being an oldtimer in this field, that as early as 2001 there were already discussions about the CoopX project (http://fsffrance.org/news/article2001-06-14-01.en.html), which never actually delivered.

    Now the subject comes to the front again: glad to feel less alone.

    I should introduce the COCLICO project (http://www.coclico-project.org/), which we have just started and which intends to help address these concerns too. I hope we can join forces (as discussed on IRC yesterday). Sorry, not all the workpackage descriptions are translated yet, but Babelfish is your friend.
    The interoperability workpackage (http://www.coclico-project.org/index.php/WP2) precisely addresses such import/export tools, to diminish projects’ lock-in to the forges. We started just a few days ago and are funded to work for the next 2 years… so expect more news from us soon.

  31. We started a project in New Orleans after Katrina, and being able to move our data from one host to another, with daily rsync backups, was a first-order issue; but it’s very much solvable. The one thing I would say is that Apple’s Time Machine has rsync beat when it comes to space-conserving backups. Rsync writes hardlinks of your entire filesystem every time it runs a backup. After 180 days of daily backups, the app server had grown from 8 GB to 8.1 GB, yet the backup (essentially /home, /var, /usr, and /etc) took up 100 GB of space.
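
    For what it’s worth, the usual space-conserving recipe with plain rsync is --link-dest, which hardlinks unchanged files against the previous snapshot so that only changed files cost new space; a sketch, with hypothetical paths:

        import subprocess
        import time

        # --link-dest makes each daily snapshot hardlink unchanged files
        # against the previous one; paths here are hypothetical.
        today = time.strftime("%Y-%m-%d")
        subprocess.check_call([
            "rsync", "-az", "--delete",
            "--link-dest=/backups/latest",
            "appserver:/home/",
            "/backups/%s/" % today,
        ])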

  32. >A project called Bugs Everywhere, http://bugseverywhere.org/ , could solve (or work around) part of the problem. It incorporates the bug tracker in the vcs (multiple vcs supported) so pulling your code also gets you the bug history and outstanding bug list.

    I’ve looked at this. It appears to me to be an extremely clever solution to the wrong problem. As in – what if you just downloaded a binary, not the source repo? It looks to me like you don’t get to use Bugs Everywhere.

  33. Bug reports can be stripped off trivially from any of the major sites via their APIs. While you’re correct about the mailing lists, you’d be tied into the mailing list no matter where you hosted it. I don’t honestly see that as being any more of an armed-and-dangerous risk at a hosting site than at Google Code or your friend Bob’s mail server down the hall.

  34. I’ve looked at this. It appears to me to be an extremely clever solution to the wrong problem. As in – what if you just downloaded a binary, not the source repo? It looks to me like you don’t get to use Bugs Everywhere.

    That’s all right — projects can simply adopt the Ulrich Drepper approach: a big fat middle finger to users unwilling to check the latest source out of version control.

  35. The obvious solution is an open-source mailing list manager and bug database manager driven by a bunch of XML or JSON files that grow incrementally, plus a database. Download the files and the database, and you can host the mailing list and bug database on any computer.

    Sounds like a lot of work. I fully support someone else doing that work, and if someone else did that work I would assist by using the product and submitting bug reports.

  36. ‘I’ve looked at this. It appears to me to be an extremely clever solution to the wrong problem. As in – what if you just downloaded a binary, not the source repo? It looks to me like you don’t get to use Bugs Everywhere.’

    Hmm… not sure why you think not; just post a bug to the main/central repo or use the web interface (in development). That aside, the point was that the bug tracker state becomes a function of source control, and you just need a copy of the source to also have the bug tracker state. In the post you mention that modern distributed version control has solved the problem for source, but bug tracker state and mailing lists are still data-jailed. A tool like Bugs Everywhere solves the bug tracker state along with the source control (particularly when using git/hg/bzr as the source repository). Thus one only has to worry about the mailing list state being data-jailed.

    This idea could of course probably be extended to the mailing list, but it might not be a good fit, having the source, bug reports, and mailing list in the same data store.

  37. Sharesource addresses some of these.

    1 – All project wiki pages use Wiki Creole, which is easily ported to anything else. We should probably offer project administrators some kind of ‘download dump’ link to let them keep off-site backups.

    2 – The code that runs Sharesource is open source. You can download it and run your own copy for your own project(s). It’s then your diligence, not ours, that keeps the data safe. We only require that your changes be made public. Otherwise, yet-another-sucky-forge.com will pour in tons of venture capital, fix everything wrong, and the patches will never see the light of day.

    The idea of dumping mailman archives is a good one and I plan to bring it up. Currently, our bug system sucks .. so that’s something we’ll address after our bug system stops sucking :)

    As for the code .. we support Mercurial and soon Git. However, providing a snapshot for SVN users might be a good idea. A project using a DVCS has no need for repo snapshots.

    Sorry to comment in reverse order, thanks for your thoughts. They clearly point out room for improvement.

    Most effort lately has gone into a Xen-based compile farm for projects, so repo heads can be built and tested on many operating systems. Should you check it out, please realize that it’s a very part-time effort.

  38. There’s another approach to bugs in a DVCS: integrated issue tracking with ikiwiki. This makes the BTS a structured section of the project wiki, editable from the web or through the DVCS. So a user of a distribution’s binary could post a bug on the web, and a maintainer could pull, then make one commit that changes bug status, code, and documentation.

    Maybe too slick and integrated for real-world workflows, but all you lose if your hosting site goes down is whatever bug reports came in via the web and haven’t been pulled yet.

    1. >There’s another approach to bugs in a DVCS: integrated issue tracking with ikiwiki. This makes the BTS a structured section of the project wiki, editable from the web or through the DVCS. So a user of a distribution’s binary could post a bug on the web, and a maintainer could pull, then make one commit that changes bug status, code, and documentation. Maybe too slick and integrated for real-world workflows, but all you lose if your hosting site goes down is whatever bug reports came in via the web and haven’t been pulled yet.

      That is really, truly elegant — the most convincing attack on that specific problem I have seen yet. If I do end up building a forge system, this will quite likely be one of the components.

  39. A bit late, but I’d like to mention that TicGit solves the centralized-issue-tracker problem by including issues in a branch of your git repo. Using that and GitHub’s Pages (essentially, storing the webdocs for a project in another branch), I can have everything essential to my project mirrored on half a dozen of my computers, and potentially on thousands of others worldwide.

  40. Great article; I wholeheartedly agree that not being able to write your own scripts is becoming a feature of new technology, especially where Apple is concerned. And Xiong Chiamiov, thank you for explaining that; I am now backed up 200% and feeling secure :)
