Adobe in cloud-cuckoo land

Congratulations, Adobe, on your impending move from selling Photoshop and other boring old standalone applications that people only had to pay for once to a ‘Creative Cloud’ subscription service that will charge users by the month and hold their critical data hostage against those bills. This bold move to extract more revenue from customers in exchange for new ‘services’ that they neither want nor need puts you at the forefront of strategic thinking by proprietary software companies in the 21st century!

It’s genius, I say, genius. Well, except for the part where your customers are in open revolt, with 5,000 of them signing a petition and many others threatening to bail out to open-source competitors such as GIMP.

Fifteen years ago I pointed out in The Cathedral and the Bazaar and its sequels that buying proprietary software puts you at the wrong end of a power relationship with its vendor, and that this relationship will almost always evolve in the direction of more control by the vendor, more rent extraction from your wallet, and harder lock-in. Adobe’s move illustrates this dynamic perfectly.

But the response from its customer base highlights something else that has happened in those 15 years; open-source applications like the GIMP, and the open-source operating systems they run on, actually offer users a practical way out of these increasingly abusive relationships. Adobe’s customers aren’t being shy about pointing this out, and the company is going to feel heat that it wouldn’t have before 1998.

It’s not clear which side will back down in this particular confrontation. But the underlying trend curves are obvious; even if Adobe wins this time, sooner or later the continuing increases in the rent Adobe needs to claw out of its customers are going to exceed the customers’ transition costs to get out of Adobe’s jail.

The problem is fundamental; one-time purchase payments can’t cover unbounded downstream support and development costs. They can only even appear sufficient when your market is expanding rapidly and you can always use today’s new revenue to cover support costs from last year’s sales. This stops working when your markets near saturation; you have to somehow move customers to a subscription model to survive.

But doing that doesn’t solve an even more fundamental problem, which is that the stock market doesn’t actually reward constant returns any more; it wants an expectation of rising ones in order to beat the net-present-value discount curve. Thus, in a near-saturated market, the amount of rent you extract per customer has to perpetually increase.

But what can’t go on forever won’t. Eventually you’ll have to squeeze your customers so hard that they bolt. This may be happening to Adobe now, or it could take a few more turns of the screw. But it will happen. And as with Adobe, so with all other proprietary software.

578 comments

  1. Now, if only Inkscape and the GIMP didn’t utterly and completely suck for generating print-quality, professional-grade illustrations with proper CMYK support.

    (And yes, I am buying copies of Adobe’s last non-subscription versions, because I utterly detest Software as a Subscription model.)

  2. Other than getting greedy, what would prevent a company from developing software on a one-sale/one-license/no-indefinite-update-support model? They would have to keep producing new software as their markets saturate, but they could sell any new features they generate in new versions, and their costs would be some proportional function of their sales. That was the status quo for a while, and it seemed fair to me (developers got paid, we got software, and the DRM was limited to entering some letter-number-string).

    I absolutely hate the subscriber/thin-client model myself (my software and data are the last tiny corner of the modern world where I’m *not* someone’s tenant! My computer is mine, dangit!) and will take any and all alternatives.

  3. I own a copy of Photoshop CS 6 Extended. There’s no way in hell I’ll move to the subscription model.

    But there’s also no competition. No, the GIMP does not count. The UI, while not as sucky as it used to be, still sucks; the capabilities are not up to Photoshop’s, and the output quality just isn’t as good. I tried using the GIMP to do some work for Second Life, and wound up going back to PS.

    I’d love to be able to replace PS with something open source. It’s just not there yet. I have my doubts it will ever be.

  4. Note also: the cloud model for Adobe to date does NOT have your working data stored on Adobe’s server yet. It just has an authentication key that calls home every 30 days and threatens to turn off the software if you don’t make your payment.

    Your older version of the software will still work. And will still (mostly) open the newer files…some of which are so well documented, and such entrenched industry standards, that it will be…challenging…to embrace/extend/close off. Not impossible, but challenging.

  5. You’ll see more interest in open source because of this, but you won’t see any sort of mass migration to open source alternatives to Adobe, for the same reason things like OpenOffice and LibreOffice have not displaced MS Office.

    Photoshop is Adobe’s crown jewel. The Gimp is good, but it’s not Photoshop. Nothing else is. It has spawned an entire ecosystem around it. There are places making their living producing Photoshop plugins to expand the already considerable capabilities of the base product. When you look at job opportunities in the graphic arts, they will all specify knowledge of Photoshop and Illustrator.

    The biggest roadblock in the switch from Windows to Linux is that Linux is *different*. The same holds true for open source alternatives to MS Office and Adobe products. Users of any of them learn just enough about the OS and the applications to do what they want to do. I saw a reference to a graphic design student who was unhappy that he didn’t know all about Photoshop, and his professor correctly retorted that *nobody* knew *all* about Photoshop: you learned the parts of the application that did what you needed to do.

    Users are resistant to change once they’ve attained the basic mastery needed to use tools for particular jobs. The response of the average corporate user (and it’s the corporate user base we are concerned with) will be “I barely have time to get my work done now. I do *not* have time or desire to learn a whole new set of tools to do it.”

    The transition costs to get out of Adobe’s jail will go far beyond the difference in purchase price between commercial packages and open source alternatives. Adobe may well be charging more for what they sell under the new model. It will still be cheaper for the users than trying to switch.

    Not only will Adobe have to charge a *lot* more than they are currently charging: the open source alternatives will have to improve to become competitive with the commercial products. When you get into that range, price becomes the least important part of the purchase decision. It doesn’t matter how cheap it is if it doesn’t do what you want.

    I use the Gimp under Linux (and the open source Paint.NET under Windows), but my needs are fairly low end and simple, and they do what I need. If I were making my living in the graphic arts, they would not be adequate. The Gimp is not Photoshop. Inkscape is not Illustrator. Scribus is not InDesign. Until they are a lot closer to being so, Adobe isn’t going away.

  6. I imagine artists are as possessive of their work as anyone else is. The idea that they need to rent the products of their effort from some company, and that their work will eventually be sucked up to some server that they do not control, I think, will eventually spur them to an alternative, even if it isn’t quite as shiny.

    (There is *some* threshold of exploitation where people will head for the border, provided that there *is* a border to head to!)

    But the alternative will have to be better than GIMP. The open source software will have to get better if it wants to take in people fleeing some computational plantation. (I’ve used GIMP myself to do some doodling, and the way it handles selection and filling is pretty bad. I’ll have to see if new versions still have that problem. The million detached little toolbars floating about were a pain too, though apparently they’ve fixed that.)

    (I don’t have the money for Photoshop – ye-gods! I’ve used Paint Tool SAI, which is commercial, but works much better (for its limited functions) than GIMP when it comes to simple illustration.)

  7. Just one comment: this is not a particularly new model. It is just new to the desktop.

    I use Evernote and LogMeIn; they are both subscription web services. The former has an API to recover my data (the latter doesn’t hold any significant data).

    This really isn’t any different. In a weird sort of sense there are parallels with the open source model, in the sense that you pay for ongoing support rather than the actual software.

    Of course it is legitimate to freak out about the data jail, which I suspect is your main concern.

  8. I agree that Adobe’s move will alienate users and possibly retard their revenue growth in the longer term. Surprisingly, they have leaped over 3 intermediate steps: more frequent version updates with lower discounts for existing customers, stripped-out functionality repackaged as pricey add-ons, and charging an annual support fee and/or pay-per-call support. Maybe they saw where the winding road leads and decided to take the highway.

    In any case, customers need a destination if they are being driven away. The market demand for Adobe’s products’ functions must be satisfied. I am eagerly awaiting such alternatives but, at least for now, remain an Adobe customer.

  9. I’m actually super happy about this move. It really helps people like me: I have a full-time corporate job, but I do some web development consulting on the side to bring in more cash. I only take on about 2 small projects a year, averaging about 8 hours total per month.

    I’m a developer, not a designer, so I either have to sub-contract the designs to one of my many talented friends, or my client finds their own designer. Either way, I get the design in one form: Photoshop files. I then have to slice up the layered file into smaller images that I can use in a CSS layout.

    I spend a very small percentage of my time doing this, however. Like, over the course of the year, I need to use Photoshop twice, and for 4-8 hours at a time. Photoshop CS6 can be had for about $600. That hurts. With Adobe Creative Cloud, I can sign up just for the months I need it, and use it. Adobe offers individual parts of the creative cloud for $19.99/month (http://www.adobe.com/products/creativecloud/buying-guide.html). So, in a given year, I’d have to spend $40. Much better deal for me. I’d have to subscribe for 15 years before I made up for buying Photoshop CS6. Even if I had to use it every single month, that’s $240/year, which means I’d make out just the same as if I bought a copy of Photoshop every 2.5 years. Seeing as Adobe released a new version of its creative suite roughly every 2-3 years, it comes across as a wash.
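
    To make the break-even arithmetic above explicit, here’s a rough sketch in Python (using only the prices quoted above; a toy calculation, not Adobe’s actual rate card):

        perpetual_price = 600.0   # Photoshop CS6 boxed price quoted above
        monthly_price = 19.99     # single-app Creative Cloud price quoted above

        # My usage pattern: roughly two one-month stints per year
        occasional_per_year = monthly_price * 2
        print(perpetual_price / occasional_per_year)   # ~15 years to break even

        # Worst case: subscribed every month of the year
        heavy_per_year = monthly_price * 12            # ~$240/year
        print(perpetual_price / heavy_per_year)        # ~2.5 years per "boxed copy"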

    And with older versions of Photoshop, I had to buy upgrades every few years anyway, because the designers I work with upgraded as well. Photoshop CS5 can’t read files created by Photoshop CS6, etc. So that was roughly $600 I had to put out every couple of years. This saves me a boatload of money.

    I also echo Jay Maynard’s statement that there is no replacement for Photoshop, particularly for web development workflows. If GIMP or Pixelmator or whatever had anything that was anywhere near as good as Photoshop’s Slices + “Save for Web” feature, I’d love it. But alas, they don’t, and everything that isn’t Photoshop sucks for reading Photoshop files. Perhaps it would be better if Photoshop weren’t the de facto standard, but it is, and I gotta pay my bills, which means working with it. Photoshop is one of the primary reasons I’ve got a Mac instead of Linux as my main workstation (which really isn’t so bad, once you install Homebrew and can get all of the open source command-line love easily installed, especially replacing all of the ghastly BSD tools with the GNU equivalents).

  10. Also, I want to say if you’re doing front-end web development, and you’re not scripting lots of your tasks using ImageMagick, you’re doing it wrong. For personal stuff that I do design myself (very, very basic stuff, like http://traas.org) I use Pixelmator instead of Photoshop, and use ImageMagick to cut up my output, resize images and stitch sprites together and stuff, as well as compression with tools like OptiPNG. But this takes very careful building of your layered file, and fat chance getting a designer to do that for you.
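
    To give a flavor of what I mean by scripting, here’s a minimal sketch (assuming ImageMagick’s `convert` and `optipng` are installed and on your PATH; the file names are hypothetical):

        import subprocess

        # For each sprite, derive the low-DPI version from the hi-DPI original,
        # then losslessly crush both PNGs with OptiPNG.
        for name in ["header", "logo", "footer"]:
            hi_dpi = "%s@2x.png" % name
            lo_dpi = "%s.png" % name
            # ImageMagick: halve the hi-DPI image to get the low-DPI one
            subprocess.check_call(["convert", hi_dpi, "-resize", "50%", lo_dpi])
            subprocess.check_call(["optipng", "-o2", hi_dpi, lo_dpi])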

  11. GIMP is not a competitor to Photoshop. You want to do professional print work, you use Photoshop. Either suck it up and subscribe, as many professional shops and companies will do, or stick with a previous version, which won’t stop working.

    Open source supporters have got to realize that the most heinously licensed, buggy piece of proprietary software that does the job is infinitely preferable to an elegantly-written open source piece of software from a hippie commune that doesn’t do the job.

    Adobe writes the only piece of software that does the job. They are free to establish whatever power relationship they want with their customers, as long as that prevails. End of story.

  12. It’s like I keep saying: for many folks, a computer is a tool, not a political statement. It’s not about evil corporations vs. freedom fighters, or good corporations vs. communistic hippie communes, or any other such nonsense. It’s about getting the job done.

    If people want open source tools to take the market from proprietary tools, then the open source tools need to compete, head-to-head, politics aside.

  13. Nancy Lebovitz – Lack of money, knowledge, and time.

    Adobe makes a lot of money on Photoshop, and thus has a financial incentive to throw a lot of resources at the problem. This also has the side-effect of prioritizing more features, which people will pay for, over being faster and less buggy. Photoshop is a huge, buggy, slow pig of an application, but it simply does more than anything else out there. And though everyone I know only uses about 5-10% of its capabilities, it’s a different set for each person.

    Also, because Photoshop is used by photographers, designers, web developers, etc. in very large numbers, Adobe forms relationships with its users and adds features that those users request. GIMP doesn’t have a huge user base, so its developers don’t have as good a picture of what the “average” user needs as Adobe does. Sure, if you want a feature, you can write it yourself, but how many photographers and designers know how to code? How many web developers know how to write the kind of code you need to write for this sort of application? I mean, I have the capabilities, but it’s so far outside of what I do every day that it’d take me way too long.

    And the problem is so big. Sure, I could use GIMP for a lot of stuff if it had something similar to “save for web” and slices, but that would take me a couple hundred hours, and it’s also deficient in other ways. I couldn’t really make a dent, myself. And the UI is just… terrible and ugly.

  14. Photoshop is one of the primary reasons a lot of people have got a Mac ;-)

    Seriously, is there any hope that Gimp can actually evolve to be a real competitor of Photoshop?

    I have read that one of the problems with the Gimp project is that the developers develop what they want and/or what they think is cool. There were some similar suggestions regarding Gnome 3.

    I write lim’ricks for Goddess and me
    Now, the Gimp is not all it can be
    Gnome 3 is a piss-off
    That’s where I got off
    Thank the Goddess for Xfce!

    (My new muse – OK, the scansion sucks, but I have got to get some other work done.)

    Gimp developers obviously don’t need the features that make Photoshop the standard. Yet there are presumably many people that would use the Gimp instead of Photoshop if it did the job, and more over time. This seems to be a fundamental aspect of the Open Source model in general.

    A coalition to fund the right sort of development in Gimp maybe? Is there any set of people that would put up money to see Gimp develop in a way to compete with Photoshop?

  15. All that “Save for Web” really is is a way to adjust the compression settings — which GIMP can do, too, via a slider that lets you adjust compression levels.

    But all this does is highlight why GIMP — and open source — sucks in general: if you build in a feature but don’t make it VERY, VERY EASY FOR AVERAGE USERS TO USE, you may as well have simply not included the feature at all. People will pay quite a sum of money for an image quality adjuster called Save for Web over one called “Image Quality Adjuster”. Because the former addresses the exact problem they are trying to solve, whereas the latter confronts them with a PC LOAD LETTER problem (i.e., “what the fuck does that mean?”).

    Again, Adobe can do this because — wait for it — it CHARGES MONEY FOR SOFTWARE. The fact that they take people’s hard-earned cash puts them in a business relationship and position of RESPONSIBILITY to those people. They are therefore willing to listen to their userbase, and to give them the software their CUSTOMERS want, which makes their CUSTOMERS’ lives easier.

    This is why the software that ordinary people doing ordinary work use will always be proprietary.

    As for the subscription model, I just realized — my dad has an Autodesk subscription. It is really no big deal. He is willing to acquiesce to the subscription terms to reap the huge benefits he gets from always having the latest Inventor. The subscription model may be a good thing for Adobe users as they will always have access to software updates and support for the world’s only serious image editing program, for the same monthly fee.

  16. So, has anyone heard anything about the royally broken fork of GIMP to introduce component bit depths above 8 bits yet? Last I looked at it, it was utterly broken due to assumptions everywhere, from tools to plugins, that “of course a component is always a ubyte”.

  17. A coalition to fund the right sort of development in Gimp maybe?

    Maybe a monthly subscription would work?

  18. I would be the happiest person if GIMP were on par with Photoshop. All the components of my creative workflow can be substituted by open source alternatives. In fact, I’m writing this comment on Linux. I’m also using GIMP, yet even some older versions of PS are faster, more intuitive, and have more useful features. I wish I had the coding skills to help GIMP become what I want it to be…

    GIMP is simply not there. I want it to be, but it isn’t. Anyone who says these opinions are the result of a lack of experience is not fully aware of what Photoshop is capable of, the speed of its tools, and its performance. Photoshop is arguably the most polished Adobe product, among some really bad ones they have released (or purchased and rebranded).

    Plus, it’s not really fair to compare free software developed by volunteers with a leading software giant’s flagship product.

    I think the subscription model is not a bad thing. For the price of a boxed copy, you can subscribe for many years. If you’re making money with Photoshop, it’s clearly a win, as you’ll always have the latest.

  19. More likely would be companies getting fed up with Adobe, asking themselves how much money it’d cost to contract a programmer or two to get the features they need into GIMP, and doing so if they think it’d save them significant amounts of money over perpetual Photoshop subscriptions. Adobe’s current move may or may not yet be enough to propel movement in this direction, however.

  20. You can forget about GIMP getting anywhere near Photoshop in terms of serious image editing capability. Even if it got CMYK support tomorrow, you’d still have to license PANTONE in order to have pro-quality color support. In addition, many of the features of Photoshop — features essential to pro workflows — are covered by patents held or licensed by Adobe. So there would be significant legal challenges to getting GIMP caught up to where Photoshop was a few years ago; meanwhile, Photoshop has advanced and set new standards which have become integral and essential to the professional workflow.

    Just suck it up and buy the damn software.

  21. @ Ltw

    A coalition to fund the right sort of development in Gimp maybe?

    Maybe a monthly subscription would work?

    Laughing out LOUD!

    @ dtsund

    Yeah… I think that one or a few big companies that are fed up with Adobe putting up some money for a few developers for a while would be the best possibility.

    @ Jeff Read

    you’d still have to license PANTONE in order to have pro-quality color support

    Is this a serious problem? Could the folks that need it, license PANTONE and people like me that screw around with maps and images for websites not license it?

    features essential to pro workflows — are covered by patents held or licensed by Adobe

    Man… patents were instituted to encourage innovation… Wasn’t there news a while back about software patents not being legal or something like that? Does anyone know what happened to that?

    Are we talking about work-flow patents (the slimiest thing I have ever heard of) or algorithm patents or what? Photoshop has been around for quite a while – any patents from the early days must already have expired. Algorithm patents can sometimes/often? be worked around.

    I don’t know much patent law – is there any requirement for patent holders to license the “invention”? Again, this might only be for pros that would be willing to pay for a pro version.

  22. I for one welcome the sight of proprietary software vendors turning the screws hard on their little walled gardens. It will make the dangers of this model obvious to the mainstream before too many people get locked into it.

    This will inspire more of the open source community (plus a few forward-thinking proprietary software vendors) to offer more products that let the data get stored on *your* cloud instead of theirs.

  23. @Jeff Read: “The fact that they take people’s hard-earned cash puts them in a business relationship and position of RESPONSIBILITY to those people. They are therefore willing to listen to their userbase, and to give them the software their CUSTOMERS want, which makes their CUSTOMERS’ lives easier.”

    Jeff, I tip my hat to you with all sincerity as you are obviously an optimist of the highest caliber.

  24. I read the wired.com article to which ESR linked about unhappy Adobe customers. It says:

    Subscriptions cost $20 to $50 a month, and include several features, such as 20GB of online storage and, of course, the use of several Adobe products — including Photoshop, InDesign and Premiere.

    Am I mistaken or is 20GB not a whole lot of storage for people that spend their day creating images with Photoshop?

    I can buy a Western Digital 500GB Enterprise hard drive for $100 or a less sophisticated, less reliable 3TB WD hard drive for $160.

    Based on the higher-quality, more expensive drive:
    (20GB / 500GB) * $100 = $4 for 20GB, or 20 cents per GB
    and of course that is a one-time price.

    Of course, Adobe buys in bulk and pays a lot less than I would pay for a single drive, and they might use cheaper drives in a RAID setup.

    But even comparing to my cost: you can bet your ass that a user that wants more disk space will pay more than 20 cents per GB over the average life of a hard drive.

  25. Very well written.

    By the way, to all those complaining about the quality of Open Source software not being up to par, this article is not about the quality, which can always improve and will continue to improve as more development efforts are made and proprietary technologies are reverse engineered. It’s about the power relationship that proprietary software companies have over the customers.

    GIMP may not be Photoshop, but for the majority of non-pro users, GIMP does the job reasonably well.

    The subscription model leaves the end user with little choice but to use the “latest and greatest” even if they prefer an older or more stable version. Also, I read somewhere about people suffering from forced upgrades because a newer version of the software dropped some essential features, which is actually a regression.

  26. This amounts to a rapid-fire version of what’s already happened to Windows. Just implemented with licensing instead of practical considerations.

    I don’t think there’s anything in the license of Windows 2000 that would bar me from installing it today. But that would be insane because Microsoft stopped publishing security updates in 2010, just before a bug was discovered that could allow you to be owned by merely viewing a directory containing a poison file. If you were satisfied with W2K and not with open-source alternatives, you had to buy Windows again.

    Microsoft has put off doing this to Windows XP for quite a while, but their forbearance ends early next year.

  27. Michael, that’s true of all OSes, and totally different from a subscription model. OK, you don’t have to pay to upgrade Ubuntu, but if you’re managing a large organisation’s IT infrastructure you still have testing/rollout costs, retraining, and recompilation of internal software to work with the new version. Support for open source has deadlines too.

    It’s going to be very interesting to see if Microsoft can actually end of life XP. They’ve tried a few times already. The Adobe revolt will be nothing compared to that.

  28. In 1993 and 1995, I was fixing bugs in Fractal Design Painter (which is now Corel Painter) along with the two founders (Mark Zimmer and Tom Hedges), who were the core programmers. Mark now works for Apple and holds key patents for the iOS graphics systems. The key programmer of Photoshop was visiting our offices, and we hired away Steve Guttman, who was formerly the VP of Marketing for Photoshop (and is now a VP at Microsoft). For a time, it appeared we might be trying to compete with Photoshop; then apparently (I was gone) it settled into a collaborative marketing relationship, with Painter as the natural painting media tool and Photoshop the core image editor.

    A few times since then I have downloaded various freeware or open source image editors to find something with layers and other key capabilities of Photoshop. Most recently Photo Pos Pro (at least I don’t have to install .NET). The GIMP looked like a nightmare to get rolling quickly with, so I never tried it. Nothing is really anywhere near as polished as Photoshop.

    I am capable of attacking this problem, but what is the profitable business model for me?

    Professional users won’t switch without duplicating most of Photoshop’s ecosystem. So that leaves the non-professional users. You could pick them off with an improvement over, for example, Photo Pos Pro, but how to earn a profit?

    As with Android and the smartphone, destroying the old requires some paradigm shift that creates a new market.

    Currently I am thinking it could be web apps not based on HTML5, but on some new model that is more integrative. For example, every application that handles photos should let me adjust brightness, contrast, etc.

    Integration is the key. Think ecosystem.

  29. > Professional users won’t switch without duplicating most of Photoshop’s ecosystem.

    Hmmm… wasn’t there a plugin for GIMP allowing it to use at least some Photoshop plugins (or filters), in some circumstances?

  30. Adobe’s biggest problem with this transition is that most of their products are pretty mature. I own the CS6 suite, and probably would have paid them money to upgrade to CS7 just to get a few extra features, but I have no interest in funding their annuity.

    There’s very little I can’t do with the current version of their product, and there is really no compelling reason for me to pay them a monthly fee for the few extras they will be supplying. I’m guessing that I’m not the only one who feels this way, and it will be interesting to see what happens to Adobe in the coming months. I’m betting they will quickly back down from this when they see how few people sign on to their new scheme.

  31. @Al:

    there is really no compelling reason for me to pay them a monthly fee for the few extras they will be supplying.

    Professionals have to stay current, because of interoperability and integration, e.g. someone sends them new version files or a file that requires a plugin that has dependencies.

  32. “GIMP may not be Photoshop, but for the majority of non-pro users, GIMP does the job reasonably well.”

    Not the point. Non-pro users already either don’t use Photoshop or pirate it. And there’s clearly a LOT of money in making the de facto standard in professional image editing software.

  33. All that “Save for Web” really is is a way to adjust the compression settings — which GIMP can do, too, via a slider that lets you adjust compression levels.

    Yes, that’s what it boils down to, but the GIMP version, last I checked, didn’t allow me to adjust the compression to multiple different levels and look at the output side-by-side, or cut the layered image into “slices” and export dozens of well-named files in one pass. Or resize/resample an image on output—very big today, when I serve up 2 different sizes of an image: one for high-DPI displays (newer phones, “retina” MacBooks, Chromebook Pixel, etc.), and one for low-DPI displays (everything else).
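
    To be fair, the batch-export part is scriptable outside Photoshop; here’s a rough sketch with the Python Pillow library (no slicing UI, and the quality levels and file names are just examples of mine, not anything GIMP or Photoshop provides):

        from PIL import Image  # Pillow

        im = Image.open("mockup.png").convert("RGB")

        # Side-by-side comparison: one JPEG per compression level
        for quality in (40, 60, 80, 95):
            im.save("out_q%d.jpg" % quality, quality=quality)

        # Two sizes: full size for high-DPI displays, half size for the rest
        w, h = im.size
        im.save("out_2x.jpg", quality=80)
        im.resize((w // 2, h // 2), Image.LANCZOS).save("out_1x.jpg", quality=80)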

  34. I suppose the empirical key for Eric’s argument to work isn’t that Gimp be as good as Photoshop. Rather, it’s that Gimp be good enough for enough Photoshop users that their defection could effectively rein in Adobe’s market power. The feedback to this thread seems to come largely from users who need the full power of Photoshop. But what empirical evidence do we have on the number of low-power users, who could switch?

  35. I honestly cannot understand the people upthread claiming that the GIMP is somehow “unusable” or “not a viable replacement”. I find it works well and covers all my needs; and I, an avid commandline freak, have not once resorted to script-fu. (You bet I would if there were the /slightest/ excuse.)
    But of course “the GIMP will never be able to compete with the powerhouse of Adobe” is a much more consonant narrative than “actually, this GIMP thing’s good enough now”. Availability heuristic strikes again.
    Come to think of it, the GIMP just might be executing a classic disruption from below; when the argument for Adobe is “it has all these fancy features [that you don’t really need]”, the gold-plating has already taken over.

  36. I need a migration path from FreeHand to Inkscape in the worst way. Right now, Illustrator is the only one until Inkscape develops a FreeHand import. I’m not alone.

  37. @Al
    >Adobe’s biggest problem with this transition is that most of their products are pretty mature.

    I don’t know Photoshop, but I think this is the core reason for the new revenue models. It is certainly true of Microsoft Office, which basically peaked in Office 2003. There have been really no significant new features since then. What MS tried was a new GUI (which was utterly horrible), and now they are trying to move to a subscription model.

    The old revenue model was “upgrade every year because we fix bugs and add crucial features”. That gravy train has ended, since there really aren’t any more features to add, and most of the important bugs are fixed. So there is a need to move to a new revenue model to replace the “upgrade every two years” model.

    God forbid they used all that programming talent to come up with new products. Of course that is really hard in big, sclerotic companies like Microsoft and Adobe, which is why they tend to buy rather than build.

    So the goal now is to use a forceful model to maintain their revenue stream, a stick rather than a carrot. Not “upgrade for these new features” but “don’t upgrade and you won’t be able to open new file types, and we won’t support you.” The fact that the new file types don’t have any new types of useful information in them is beside the point.

    Similarly, BTW, with Windows 8. Windows 8 is not about making things better for the users; it is about making things better for Microsoft, and using their market force to push it onto people whether they want it or not.

  38. @Thomas Blankenhorn:

    But what empirical evidence do we have on the number of low-power users, who could switch?

    They probably already do far outnumber the professional users, but I suspect they don’t spend as much even collectively as the professional image editing market does.

    The key with Android disruption is that even though iOS users spend more on hardware and software, the profitability of Google and Samsung is not affected.

    There needs to be some tangential business model for someone to fund the improvement of the tools for the non-pro image editing markets to the point where they are sufficiently powerful to bleed the professional market, as Android is doing to iOS. That is why I wrote “think integration and ecosystem”.

    I have been toying with the idea of an alternative to HTML5 for web apps that would build into such libraries and then let everyone build the functionality into their apps. But I have too many ideas.

    @Edward Cree:

    I honestly cannot understand the people upthread claiming that the GIMP is somehow “unusable” or “not a viable replacement”.

    Because of the installation and learning curve hurdle. Last time I tried, I couldn’t even get past the huge download and other issues I don’t remember. I want to have something running on my Windows XP in 3 minutes that looks as close to Photoshop as possible, so I can finish my task and get on with my life.

    argument for Adobe is “it has all these fancy features [that you don’t really need]”

    Users maybe only need 10% of the features, but the features they need vary, and they really need them. Besides, in a pro workflow you are tied to the file formats your various team members use, so the lowest common denominator wins. Commercial vendors listen to their customer support and build in every little thing asked for. I know because I interfaced with customer support and liaisoned with the core developers. Open source features get added only if someone is motivated enough to fund them, which is more chaotic and less focused. Open source suffers from the modularity problem, which makes it costly to add features when they require large code refactoring. Commercial outfits can think holistically and do cathedral planning. I have been trying to improve this by designing a computer language that will enforce greater modularity of code.

  39. Could the folks that need it, license PANTONE and people like me that screw around with maps and images for websites not license it?

    I must not be getting it at all. What does “license PANTONE” mean from a practical standpoint? Say I need to match “Pantone 100 C”. I can consult a cross-reference and find that’s
    C=0 M=0 Y=51 K=0
    R=255 G=255 B=125
    #ffff7d
    I can then use “Pantone 100 C” in The GIMP with very little fuss. If that’s too much trouble, someone could probably write a plug-in that does the lookup “in” The GIMP. By making it separate, the GIMP team wouldn’t be responsible for the license issue (it is unclear to me exactly what IP Pantone is asserting, as I don’t see the ™ symbol associated with these color codes, and they’re too short for copyright to apply).

    Is there something else involved in how Photoshop uses PANTONE beyond translating from PANTONE numbers to CMYK / RGB decimal /RGB hex like those cross-references do?
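
    For what it’s worth, the naive lookup-and-convert step is trivial to code; here’s a sketch of mine that reproduces the cross-reference numbers above (note it ignores the ICC color management that pro print workflows actually depend on):

        def cmyk_to_rgb(c, m, y, k):
            """Naive CMYK -> RGB; real conversions go through ICC profiles."""
            r = round(255 * (1 - c) * (1 - k))
            g = round(255 * (1 - m) * (1 - k))
            b = round(255 * (1 - y) * (1 - k))
            return r, g, b

        # "Pantone 100 C" per the cross-reference above: C=0 M=0 Y=51 K=0
        r, g, b = cmyk_to_rgb(0.00, 0.00, 0.51, 0.00)
        print("#%02x%02x%02x" % (r, g, b))   # -> #ffff7d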

  40. @The Monster:

    I don’t see the ™ symbol associated with these color codes

    Pantone asserts trademarks to the color names and numbers.

    Perhaps use “Phantomb 1.0.0 C” instead ;)

    Chinese bloggers are astute at obfuscation, even misusing the literal meaning of a word to create a social wave of implied humor to subvert political criticism filters.

  41. @Thomas Blankenhorn: “But what empirical evidence do we have on the number of low-power users, who could switch?”

    Who cares? They aren’t the market Adobe is after.

    I have an ancient (v5) version of Photoshop. It’s not even installed at the moment. The last time I used it, I loaded it up because I was dealing with a truly enormous (over 100MB) image file from the Hubble telescope, and PS was the only image tool I had that would even load it. I use the Gimp under Linux and Paint.NET under Windows for the relatively low end uses I have. (For the most part, all I need to do is crop, resize, and sometimes color correct images. Lots of things can do that.)

    I do somewhat more DTP, and there, I use MS Publisher under Windows. (I have Scribus, and it looks capable, but I know how to make Publisher do what I want, and haven’t had the time or incentive to thoroughly learn Scribus.)

    Adobe products are *expensive.* In DTP, InDesign is in the same position as Photoshop for image editing. It’s what everyone in the industry uses.

    I can’t justify the expense. I don’t do DTP for a living. (I was a designer and print production guy long ago, but have been out of that business for decades.) The stuff I do in DTP and image editing is hobby work that I don’t get paid for. There is no way I will spend what Adobe charges for PS and ID for what is essentially occasional hobby projects, when there are cheaper or free projects that do what I require.

    The people Adobe aims at *make their living* using Adobe tools. The “low end” user running Photoshop likely has a pirated copy. There are too many other tools (including open source and freeware) for the low end user that cover their needs. They don’t *need* Photoshop enough to spend the money.

    There isn’t one big undifferentiated market here. There are several markets, with the basic distinction being those who get paid for what they do vs those that don’t. If you spend all day in an image editor doing work people pay you to do, you use PS, period. There is nothing else from anyone that is comparable. And because you do it for a living, it’s a tax deductible business expense.

    Adobe’s new model may stop piracy, but I don’t expect Adobe’s revenues to go up because of it. The folks who pirated it did so because they could get it free, but are unlikely to pay for it in any case. They might shift (or stick with the older version they have), but Adobe won’t miss them because it never had them.

    The people we need to hear from on the issue are pros who make their living doing this sort of thing and use Adobe tools to do it. Our opinions don’t count, because we are irrelevant to Adobe.

  42. @DMcCunney:
    I mostly agree with your comments.

    The people we need to hear from on the issue are pros who make their living doing this sort of thing and use Adobe tools to do it. Our opinions don’t count, because we are irrelevant to Adobe.

    Or perhaps as a longer odds idea, we need to hear from people who have ideas about how the professionals (or their ecosystem) could be replaced by novices (or disrupted by a platform that is also applicable to novices).

    Apple and Adobe upscale markets are supported by western advertising and consumerism. With the coming global sovereign debt implosion, the big budgets may be slashed and the “at any cost” dynamic might give way to “what can reasonably be accomplished at low expense”.

    I suppose one might be able to just start cloning Photoshop and pick off 10% of its features every year and be ready by the time the 2017 implosion hits. I could probably do much of this myself (although I am sure I would run into snags, e.g. emulating plugin idiosyncrasies). Perhaps sell it on a subscription model at $1 per month, but with open source for those portions that could benefit from community help, for those who want to jump through hoops to save a dollar per month. Hmmm.

    Apologies for multiple posts. I am very interested in this topic.

  43. @Just Saying: “Apple and Adobe upscale markets are supported by western advertising and consumerism. With the coming global sovereign debt implosion, the big budgets may be slashed and the ‘at any cost’ dynamic might give way to ‘what can reasonably be accomplished at low expense’.”

    I suspect we differ on the likelihood of a global sovereign debt implosion. But even so, I don’t see it fundamentally changing the dynamic. If I use Photoshop, I may have migrated to their cloud based subscription model, but I likely still have an older local copy to fall back on. (What do I do if there’s an Internet outage and I can’t *get* to the cloud for a bit? I have local tools and a local copy of work in progress.)

    For that matter, I know PS users still using older versions who feel no need to upgrade. (One makes his living doing photo retouching, and the version he has does what he needs. He’s disgusted by the fact that each newer release moves things around and imposes a learning curve on figuring out how to get *this* version to do what he wants. He’s staying put, and has for years.)

    Most of the commentary here seems to assume the market is lots of individual price-sensitive users who will revolt and switch as the price rises. I don’t think that’s valid, because there really isn’t anything to switch to, the principal market is corporate users, the costs go rather beyond the price of the software itself, and chances are good that either your employer pays for the software, or it’s a deductible expense for you. Price is probably the least important criterion in the what-to-use decision.

  44. @DMcCunney:
    I think you’ve supported my assumption, which is that many users are content with a static Photoshop version (or a clone that approximates the particular version they are comfortable with), and that professionals require the debt-based consumer markets that fund the corporations. If I am correct about the coming debt implosion, the disruption will thus come from wherever those older-version users go when the older versions no longer function on new hardware or OSes, or lack some key feature or interoperability that becomes important. That is why I keep stressing thinking ecosystem for the likely path of disruption. What the market really wants is a viable business model that funds the quality of a Photoshop, but with more user choice over which feature changes they want (able to mix and match). I keep thinking a more modular computer language is a key ingredient for all things open source going forward.

  45. Lol, GIMP. The poster child of why bazaar non-corporate FOSS will not replace major proprietary ecosystems.

    A key component of the FOSS ecosystem had a grand total of 2.5 devs for a significant period, delaying major releases and features.

    If Adobe falters, it won’t be because of any FOSS product but because of another proprietary app. Likewise MS Office. The threat isn’t LibreOffice but Google Apps. Which is proprietary and, for businesses, a subscription service.

    Even then, most folks have figured out that $50/mo for the entire suite is cheaper for many pros. Prosumers can get away with apps like Pixelmator except for the once in a blue moon need to pay $30 to use PS for the month.

    It’s install and use, with a 180-day grace period for check-in if you are disconnected. I would assume that you can install and keep using multiple older versions for safety and compatibility during a long project. If not, that’s a fair complaint, but fixable.

  46. The problems that Gimp needs to address to become a viable alternative to Photoshop are:

    1. Change the butt-ugly user interface. If you want to have any appeal to graphics professionals, you have to be beautiful. Use nice widgets and appealing fonts in the user interface.
    2. Do better in the workflow department. Many operations can be made much smoother. The UI designer should imagine that he has to do almost the same operations on 100 images in a row. The job is to turn the task from boring and tedious to only being boring.

  47. 1. Change the butt-ugly user interface. If you want to have any appeal to graphics professionals, you have to be beautiful. Use nice widgets and appealing fonts in the user interface.

    Hey, did you hear that they wrote a whole widget library just for GIMP? It’s called Gimp Toolkit or GTK or something. Ridiculous! They should have just used the native Macintosh widgets for a beautiful, consistent look and feel. I mean it’s not like there’s going to ever be any serious image work done on Linux or anything…

    2. Do better in the workflow department. Many operations can be made much smoother. The UI designer should imagine that he has to do almost the same operations on 100 images in a row. The job is to turn the task from boring and tedious to only being boring.

    GIMP’s solution to this problem? Build a scripting language into the image editor! I mean, come on! Do they really expect the guy who has to perform the same operation on 100 images a day to know enough to automate his task, let alone how to go about doing it?

    And that’s exactly why MySQL and PostgreSQL will never displace Clipper in the database market. You see, Clipper optimizes for the common case: when you have someone who has to enter hundreds of customer or invoice records per day. The free SQL databases don’t do much on their own, and are aimed at programmers building highly specialized applications. As we all know, programmers are completely irrelevant to the professional market.

  48. @Nigel: “Lol, GIMP. The poster child of why bazaar non-corporate FOSS will not replace major proprietary ecosystems.”

    Yes and no.

    If anything you’re being too kind to GIMP. But to generalize about all bazaar non-corporate FOSS from GIMP is stretching way too far. The UI on GIMP looks like an S&M experiment that got way out of hand. Thankfully it’s hardly representative of FOSS.

    Is Linux an example of bazaar non-corporate FOSS? I don’t think anyone could answer that. And who cares? It’s been replacing major proprietary ecosystems for a long time now.

    I don’t really care if my tools are truly bazaar non-corporate FOSS as long as (per ESR) the requisite ability to fork the code is there. Even proprietary closed-source software is not bad as long as the opportunity for vendor lock-in (a la Apple, Microsoft, Adobe) is minimal.

    But yes, if GIMP were any kind of example of the best that could be delivered by FOSS, most of us would gladly pass.

  49. Edward, I’m a low-power user…but what I do in Photoshop is make things that have to look good at small pixel dimensions that get blown up to big objects in Second Life. I’ve tried it in The GIMP. It failed miserably. Things just looked badly done, pixelated and fuzzy and cheap. There may be tricks to coerce The GIMP into producing results as good as Photoshop does out of the box…but that tricks are needed is itself a damning commentary.

  50. @Jeff Read:

    Gimp Toolkit or GTK or something. Ridiculous! They should have just used the native Macintosh widgets for a beautiful, consistent look and feel. I mean it’s not like there’s going to ever be any serious image work done on Linux or anything

    The GUI toolkit should have a user-configurable option to adopt the native look & feel. I tried using GTK for just a simple timer for an internet cafe in 2010, and I couldn’t easily get rid of the huge title bar on the small always-on-top clock window.

    A GUI toolkit is not necessarily a bad design concept, assuming it is as performant as the native GUI APIs it abstracts.
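
    (For what it’s worth, in today’s GTK 3 Python bindings the title-bar problem is two calls; a sketch, not the PyGTK API I would actually have had in 2010:)

        import gi
        gi.require_version("Gtk", "3.0")
        from gi.repository import Gtk

        win = Gtk.Window(title="Timer")
        win.set_decorated(False)    # no title bar
        win.set_keep_above(True)    # always on top
        win.add(Gtk.Label(label="00:00"))
        win.connect("destroy", Gtk.main_quit)
        win.show_all()
        Gtk.main()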

    Build a scripting language into the image editor! I mean, come on! Do they really expect the guy who has to perform the same operation on 100 images a day to know enough to automate his task, let alone how to go about doing it?

    If designed correctly, that is the right way to modularize the solution, and now someone just has to script Photoshop-clone features for common tasks and make them available preconfigured in a download for a GIMP version.
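
    For example, the “same operation on 100 images” case is already scriptable from GIMP’s Python-Fu console; a sketch from memory of the 2.x API (untested here, and the directory layout is made up):

        import glob
        from gimpfu import pdb   # provided automatically inside GIMP's Python-Fu

        for path in glob.glob("/tmp/in/*.png"):
            image = pdb.gimp_file_load(path, path)
            pdb.gimp_image_scale(image, image.width // 2, image.height // 2)
            pdb.gimp_image_flatten(image)
            out = path.replace("/in/", "/out/")
            pdb.gimp_file_save(image, image.active_drawable, out, out)
            pdb.gimp_image_delete(image)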

    @Michael Hipp:

    http://www.gimpshop.com

    I looked at the home page and I am interested in trying it next time I need to use an image editor. Comments from those who’ve tried it would be appreciated. The home page says it is optimized for Cairo, and Google’s Summer of Code has funded some of the improvements. I wonder who is funding the key developers? Does anyone know the business model?

  51. Is Linux an example of bazaar non-corporate FOSS? I don’t think anyone could answer that. And who cares? It’s been replacing major proprietary ecosystems for a long time now.

    Nope. If it were not for IBM, HP, and corporate coders, Linux would look a lot like FreeBSD in both capability and market footprint.

    The reason IBM and HP made the investment is that IBM realized it could really hurt Sun this way and take server share, and HP because, well, because HP is stupid. It worked. Sorta. Sun now belongs to Oracle, which is probably not exactly the outcome IBM was looking for.


    I don’t really care if my tools are truly bazaar non-corporate FOSS as long as (per ESR) the requisite ability to fork the code is there. Even proprietary closed-source software is not bad as long as the opportunity for vendor lock-in (a la Apple, Microsoft, Adobe) is minimal.

    There is a significant and very vocal segment of our community where FIAWOL (FOSS Is A Way Of Life) and not FIJAGWOSP (FOSS Is Just Another Ghod-Damned Way Of Solving Problems) and they are a lot more militant (and annoying IMHO) than the Fandom FIAWOL adherents.

  52. @Michael Hipp: I looked at Gimpshop a while back. It’s essentially a repackage of the Gimp to provide a UI more familiar to users of other programs. It’s similar in nature to Cream, a repackage of Vim to provide a CUA-type UI. It doesn’t add any underlying functionality to the Gimp – it just makes it easier to use if you’re coming to it fresh. If what you really need is Photoshop because the Gimp can’t do certain things, well, you still really need Photoshop because the Gimp still can’t do them.

    I use the Gimp under Linux because I *don’t* need Photoshop, but I’m not someone who does image editing and manipulation for a living.

  53. I have a degree in Fine Art with a focus in Graphic Design. Through about 1998 Photoshop, Illustrator and Quark were a major part of my life. I’ve used those three “professionally”, and supported graphic designers, art directors and photographers for a major entertainment magazine (in the 90s). I have also almost completely left that life and now try to use open source tools to do the same sort of thing. It’s HUGELY frustrating.

    In the 1994/5/6 time frame (I can’t remember exactly, and those archives are gone) I was on the developers list for what eventually became The GIMP.

    As others have indicated, The GIMP *today* isn’t the graphic design/photo tool that Photoshop was 15 years ago. Partially this is because even 15 years ago Adobe had started designing their products to work together, and partially because Photoshop is written for high-end graphic design, which makes the web design stuff trivial. GIMP is designed to manipulate images in various ways. This doesn’t make The GIMP bad; it just means that if you’re a professional graphic designer or photographer looking at print or high-end reproduction, you pay for Photoshop.

    @ESR said:

    This may be happening to Adobe now, or it could take a few more turns of the screw. But it will happen. And as with Adobe, so with all other proprietary software.

    No, no it won’t.

    There are tons of niche market tools that are proprietary and will *always* remain so. There just aren’t enough people who have the need for certain packages.

    dtsund:

    More likely would be companies getting fed up with Adobe, asking themselves how much money it’d cost to contract a programmer or two to get the features they need into GIMP, and doing so if they think it’d save them significant amounts of money over perpetual Photoshop subscriptions.

    It appears you have an odd idea of who/what buys Adobe CS products.

    I’d be willing to bet, without digging much into it, that revenues are probably 25% big companies–for values of “big” that mean “advertising company”, not General Electric–25% medium-sized companies, and 50% small companies and individual firms/designers/photographers.

    The closest they’d be able to come to contracting a programmer would be to start a kickstarter project.

    Which really, if you want to fix The GIMP is where you start.

    @Jeff Read:

    Gawd it pains me to agree with this man, with one change: Instead of “a few years” make it “a decade”:

    You can forget about GIMP getting anywhere near Photoshop in terms of serious image editing capability. Even if it got CMYK support tomorrow, you’d still have to license PANTONE in order to have pro-quality color support. In addition, many of the features of Photoshop — features essential to pro workflows — are covered by patents held or licensed by Adobe. So there would be significant legal challenges to getting GIMP caught up to where Photoshop was a few years ago; meanwhile, Photoshop has advanced and set new standards which have become integral and essential to the professional workflow.

    There is a huge, huge difference between what you have to have to put a file up on your website and what you need to if you’re doing 4 color (or more so if you’re doing 6 color or hi-color) output.

    Pantone spot colors are NOT readily translatable into 4-color CMYK space; there is not always a 1:1 mapping, and there are colors inside the Pantone range that are outside the CMYK gamut (and vice-versa, IIRC (it’s been 15 years since I needed this information)).

    Also, when doing color separations, you will seriously f* up if you built a custom color via CMYK.

    Look, this isn’t to take shots at The GIMP. It does what it does moderately well. Professional quality press output ISN’T what it does yet.

  54. @William O. B’Livion:
    Indeed, color spaces and reproduction are more complex than novices may realize. However, it doesn’t necessarily take a decade to implement, if the resources committed are sufficient (or the modules have widespread applications, see my comment to DMcCunney). Rather, the business model is the key factor. If the market is small, the price needs to be high. However, apparently the prices being charged are not just to amortize the man-hours of programming, but also to support the returns expected by shareholders. Thus if there is a market (or even a significant segment of one) that needs these features and is not price-inelastic, then there may be a business model for disruption.

    Also, perhaps more importantly, if there are features (choices) the market needs which aren’t being addressed by Adobe, e.g. the aforementioned desire for maintaining GUI consistency across new feature versions, or the need for more open file formats, those may be another potential vector for disruption. Typically proprietary software is disrupted by interoperability failure, perhaps at some key paradigm shift, such as the introduction of the browser or the smartphone. However, with such a lucrative captured market, and a fairly mature feature set with no clear disruptive technology in sight, Adobe may be in a more stable rent extraction position.

    @DMcCunney:

    If what you really need is Photoshop because the Gimp can’t do certain things, well, you still really need Photoshop because the Gimp still can’t do them

    I presume that the business model for gimpshop doesn’t support several full-time programmers with liaison to community (user) feedback, so as to achieve this continual refinement of integration and ecosystem knowledge?

    Nothing gets done with a continuum of dedication and focus without someone getting paid. Eric’s gift or reputation culture model (in The Magic Cauldron) works for defocused contributions, but it doesn’t evolve into a tight feedback loop between development and paying customers who don’t do source code. There is still an incentive to open source portions of the code in this case where paying customers need to drive continual development focus: for those portions which have reuse in other applications, the community can share the development costs without cannibalizing the business models that add some proprietary knowledge. Again, I keep coming back to the need for modular source code design concepts.

  55. @Just Saying: ” ‘If what you really need is Photoshop because the Gimp can’t do certain things, well, you still really need Photoshop because the Gimp still can’t do them’

    I presume that the business model for gimpshop doesn’t support several full-time programmers with liaison to community (user) feedback, so as to achieve this continual refinement of integration and ecosystem knowledge?

    Nothing gets done with a continuum of dedication and focus without someone getting paid.”

    Correct. The open source model in general doesn’t support that notion.

    Unless you are a developer working for someone like IBM or Google who has a substantial investment in open source, and pays engineers to hack on open source code because they use it themselves, you probably *don’t* get paid for writing open source code. You do it as a sideline and make your actual money doing something else. Your incentive will be either scratching a personal itch and hacking on something *you* need, or community status from hacking on something else.

    And because of your motivation, there is an almost inevitable disconnect between you and the user. Jamie Zawinski touched on the sort of issues that can arise in what he called the “teenage attention deficit” model of programming. His annoyance was with Gnome. Gnome 2 had a list as long as your arm of unfixed bugs. But maintenance isn’t *fun*. Writing *new* code is fun. So instead of fixes for Gnome 2, we got Gnome 3, and a whole new set of bugs (that likely won’t be fixed).

    In commercial software with paying customers, you survive by providing the features the customer asks for, not the stuff you think would be neat to do. And you fix reported bugs because you want to *keep* the customer; fun or not, maintenance is someone’s *job* and you pay them to do it.

    You won’t get that sort of attention paid to the Gimp unless someone with fairly deep pockets is willing to hire some developers, give them full time jobs at salaries comparable to developers elsewhere, and pay them to develop and improve the Gimp to make it a viable competitor to Photoshop. Whoever it was would have to be in close touch with users, learning what they do and how they do it, understanding what the Gimp lacks to be a PS competitor, and addressing those lacks.

    Who is going to spend that sort of money, and what’s in it for them? The people who would benefit are high-end corporate users in places like ad agencies, but while I can probably think of a few such outfits that have deep enough pockets, they don’t have the expertise to successfully run such an effort, and wouldn’t see the point when they have Photoshop and a working relationship with Adobe.

  56. @William O’Blivion

    Pantone spot colors are NOT readily translatable into 4-color CMYK space: there is not always a 1:1 mapping, and there are colors inside the Pantone range that are outside the CMYK gamut (and vice versa, IIRC; it’s been 15 years since I needed this information).

    If certain Pantone colors can’t be rendered in CMYK, then they probably can’t be in RGB either, which means WYSINWYG. I’m not sure how a graphic editor can keep track of distinctions that are invisible to the eye of the person operating it.

    I must admit that I have trouble wrapping my brain around how any color can’t be represented in RGB, because of how our receptors work (other than those tetrachromatic women who carry one of the color-blindness genes). But I’ve heard enough people say that there are some “corners” in the CMYK tesseract that don’t fit into the RGB cube and vice versa that I remain open to having it demonstrated.

  57. I presume that the business model for gimpshop doesn’t support several full-time programmers with liaison to community (user) feedback, so as to achieve this continual refinement of integration and ecosystem knowledge?

    Nah, to get that you’d need Cinepaint.

  58. @The Monster:

    how any color can’t be represented in RGB, because of how our receptors work

    Just off the top of my head (been 18 years since I was doing this stuff), I assume for the same reason that an analog signal can generally only be approximated by point samples. Although our analog receptors may have weighted sensitivities centered at 3 wavelengths (do they?), I assume they are not point samples at only three wavelengths.

    @Jeff Read:
    I also detest software that requires me to jump through hoops, or which is poorly implemented (Android fails even basic usability in many corners of my usage!). But I also don’t like walled gardens that cause me other problems. My current theory is the gap is caused by the inability to code software modularly, so as to have optimum orthogonality and integration efficiency between proprietary and open source portions of the software ecosystem. I am viewing the solution more holistically. Ideally I should be able to code just the improvement I need to Android’s existing apps, and get paid by everybody who wants my fix. That is my (unattainable?) goal. I am dreaming of a software ecosystem where we get paid for small modules, not only entire projects.

  59. Nb. RGB and CMYK colorspaces cover all colors if we allow negative coefficients… but anyway 3-dimensional colorspace is not how our color vision works: http://blog.asmartbear.com/color-wheels.html (and realistic color mixing is even more difficult).

    And there are ICC profiles (and there is open source inexpensive colorimeter hardware: ColorHug) to allow for mapping from one device’s RGB to another device’s RGB or CMYK.
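
    For example, a minimal sketch using Pillow’s ImageCms wrapper around littleCMS; the SWOP profile filename is just a placeholder for whatever .icc files actually describe your devices:

        # Map an sRGB image into a press CMYK space via ICC profiles.
        from PIL import Image, ImageCms

        srgb = ImageCms.createProfile("sRGB")                   # built-in profile
        press = ImageCms.getOpenProfile("USWebCoatedSWOP.icc")  # placeholder path

        xform = ImageCms.buildTransform(
            srgb, press, "RGB", "CMYK",
            renderingIntent=ImageCms.INTENT_PERCEPTUAL)  # compress out-of-gamut colors

        img = Image.open("photo.png").convert("RGB")
        ImageCms.applyTransform(img, xform).save("photo_press.tif")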

  60. @Jakub Narebski:

    RGB and CMYK colorspaces cover all colors if we allow negative coefficients

    I doubt this (nearly certain it is wrong). How, mathematically, can 3 point wavelength samples (even with negative coefficients) represent the 3 analog cone response curves in our eyes (as shown in your link)?

  61. @JustSaying: 3-component color spaces define a mapping into absolute colorspace, which looks to me like barycentric coordinates (see e.g. the image in the “Color space” Wikipedia article).

  62. @Jakub Narebski:
    My understanding is that the mapping from negative monochromatic primaries to our perception is a model with arbitrary assumptions, and is not an absolute color space match. My understanding is that because the human is measuring an analog response curve with the cones, the validity of the approximate model depends on the human discarding those response curves in the brain and retaining only a monochromatic color with linear perceptible spacing. Apparently this may work as an approximation of perception in some cases.

  63. @JustSaying: It is more complicated than “the human is measuring an analog response curve for cones”; see my link to the article about color wheels. There are two filters after the cones’ output: red-vs-green and red/green-vs-blue (plus a luminance filter).

    Anyway, negative coefficients in RGB or CMYK color spaces are about referring to a point in absolute color space that cannot be represented normally.

  64. @Jakub Narebski:
    Agreed it is more complicated (and I don’t remember it all). My understanding is that the negative coefficient representation is based on the assumption in the model that adding a monochromatic color to a target color is equivalent to subtracting it from the coefficient of the match, and on some other arbitrary assumptions mentioned in the last section of the Wikipedia link I provided. These are models that approximate human perception, and they apparently fail to match expectations in some scenarios and differing absolute color spaces; e.g. I don’t think you can reliably match, say, every Pantone color by representing it in RGB space with negative coefficients. I may be wrong, but that is my recollection and understanding.

  65. @Aaron Traas:

    The best argument you have for Photoshop is “Save for Web”? Seriously?

    So, tell me Aaron, if I write a GIMP plugin that exactly mirrors Photoshop’s “Save for Web” and Slices features, you will then stop using Photoshop?

    Right. Didn’t think so.

  66. Eric, a large part of your argument that proprietary software is doomed (doomed, I say!) hinges on that as the market saturates, in order to continue its growth a proprietary software company will have no choice but to jack up its rents, eventually crossing a threshold where the cost of switching is less than the cost of staying with the proprietary stuff.

    That such a threshold must exist is indisputable. The problem with your argument is that that threshold can be arbitrarily high. It’s a bit like Keynes’s famous retort, “in the long run, we’re all dead”. Consider as an example all the corporate infrastructure that’s been running on IBM mainframes since before I was alive, and will probably still be running on IBM mainframes long after I’m dead. Talk about your subscription models — millions of dollars a year in service and support contracts, and the companies often still don’t even own the physical hardware. Even so, the cost of engineering and executing a smooth migration to an open source system running on x86 hardware, say, with similar performance and reliability characteristics, is still much higher. Yes, there are things like Linux for System z, but that’s intended more to add internet functionality to existing OS/360 apps than to facilitate migration.

    Photoshop doesn’t have quite this level of entrenchment, of course, but Adobe still has considerable leeway in terms of how hard it can squeeze its customers, due to the simple fact that the entire industry has reconfigured itself around Adobe tools until “doing professional prepress image work” implies “using Photoshop”, by definition.

  67. @The Monster
    > I’m not sure how a graphic editor can keep track of distinctions that are invisible to the eye of the person operating it.

    By having a separate layer or channel for it and allowing the person using it to explicitly select it?

  68. @Jeff Read:
    The computer universe is expanding, so walled trajectories risk losing share of the universe. Open source doesn’t need to kill every closed market; it just needs to keep accelerating away from them. The goal is not to kill all business models which fund R&D, as that would be deceleration, but rather to accelerate away from the stored-monetary-capital rent seekers. Real capital is stored in the minds of the creators, and when most of the world’s real wealth goes to them instead of to stored monetary capital, technology will be vastly accelerated.

  69. RGB has a much larger color space than CMYK, but it still does not encompass the entire range of visible colors.

    A daydream: Apple develops top-notch versions of OpenOffice/GIMP/etc. that can truly replace MS Office/Photoshop/Creative Suite; Microsoft and Adobe have heart attacks. (OK, if Apple did it, they’d be more likely to charge $99 and they’d only run on OSX and iOS, but it would be fun to see it happen, though I doubt they’d take the risk of pissing off MS and Adobe.)

  70. @me:

    The goal is not to kill all business models which fund R&D, as that would be deceleration, rather to accelerate away from the stored monetary capital rent seekers.

    The implied point being that business models which perhaps cash out highly knowledgeable angel investors, but don’t sustain low-knowledge post-IPO investors, will concentrate capital where it is most knowledgeable and accelerate away from those that don’t. I need not invoke the Parable of the Talents ;)

    The recent crowdfunding blog post is relevant.

  71. A daydream: Apple develops top-notch versions of OpenOffice/GIMP/etc. that can truly replace MS Office/Photoshop/Creative Suite; Microsoft and Adobe have heart attacks.

    Won’t happen. Apple already makes a photography application — Aperture — and Photoshop it ain’t. After the Final-Cut-pocalypse, Apple has retooled its software lineup to be aimed squarely at the same segment of wealthy prosumers and dilettantes that they sell most of their hardware to now.

    High-end professional-grade software for specialized creative industries is very much an Apple-of-the-90s thing. Now that they’re selling Macs and iThings to mom and pop, expect their software lineup to be mom-and-pop-oriented.

  72. >Nb. RGB and CMYK colorspaces cover all colors if we allow negative coefficients… but anyway 3-dimensional colorspace is not how our color vision works: http://blog.asmartbear.com/color-wheels.html (and realistic color mixing is even more difficult).

    Luminosity, Red – Green (I’m seeing some references saying that this is actually Red + Blue – Green), and Red + Green – Blue still form a 3-dimensional colorspace, just with different axes.
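
    Concretely, that change of axes is just a linear map; a tiny sketch (the luma weights are Rec. 601, and the two opponent axes are schematic rather than a vision-science model):

        # Same 3-D data on luminance/opponent axes instead of R/G/B.
        def rgb_to_opponent(r, g, b):
            luma      = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 weights
            red_green = r - g                # positive = redder, negative = greener
            yel_blue  = (r + g) / 2.0 - b    # positive = yellower, negative = bluer
            return luma, red_green, yel_blue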

    And context sensitivity and color constancy don’t mean that our color space isn’t three dimensional, just that the brain fudges the values for each “pixel” based on the values of surrounding “pixels”.

  73. A daydream: Apple develops top-notch versions of OpenOffice/GIMP/etc. that can truly replace MS Office/Photoshop/Creative Suite; Microsoft and Adobe have heart attacks. (OK, if Apple did it, they’d be more likely to charge $99 and they’d only run on OSX and iOS, but it would be fun to see it happen, though I doubt they’d take the risk of pissing off MS and Adobe.)

    Apple appears to be moving toward the prosumer/low end pro segments with software rather than high end segments held by Adobe and Avid.

    FCPX at $299 and Aperture at $79 put pricing pressure on pro apps in those two categories. A Photoshop replacement is more work, and something Apple would likely only tackle if Adobe stopped supporting the Mac.

    Lightroom is better than Aperture, and $30 isn’t much ($79 vs $112). $299 vs $650 for Premiere Pro is a bit more, but now it’s part of that $50 subscription for everything.

    But for the prosumer or wedding videographer, going FCPX and Aperture for under $400 vs. paying $50/month makes some sense if they weren’t already shelling out for the whole CS package. They can stick with CS6 Photoshop and hope that in a couple of years something like Pixelmator fills their needs.

    Adobe’s problem is that prosumer tools are getting pretty good. GIMP’s problem is those prosumer tools work a lot better and are only $30 ($15 on sale).

  74. I use Libre Office, not for ideological reasons, not for financial reasons (Microsoft office is free for me) but because Libre Office is just better. Its UI is just more usable.

    In the days of Bill Gates, Microsoft software always had a better user interface than anyone else’s. Now Microsoft UI sucks. Consider Windows 8.

    However, Gimp sucks

    Gimp sucks

    Gimp sucks.

    Gimp sucks.

    I don’t use any weird special super duper features that only bazillion dollar corporation could afford to put in. I just use layers, selections, masks, and the usual painting tools.

    And it still sucks.

    If, for example, I want to rescale something, its rescale is so bad that I export the image to Irfanview, rescale it in Irfanview, and import it back into Gimp. Who is so stupid that they can screw up scaling?

  75. Brian Marshall said: Am I mistaken or is 20GB not a whole lot of storage for people that spend their day creating images with Photoshop?

    It isn’t.

    But the 20 gigs there is for interop with the mobile apps (i.e., so you can fiddle around with the iPad versions that can’t see your hard drive).

    You aren’t required to store “all your Creative Cloud App Data” in that 20 gigs; the expected use model is that all your files will continue to live on your local backed-up probably-RAID-ed data store.

    (Nor do you, per Adobe, have to even be online except 1) at install time and 2) once a month to ping the subscription server, with a 99 day “my connection is totes down” grace period before it shuts off.)

  76. Which sort of interpolation are you using for scaling? Linear, cubic, or sinc? They produce results of differing quality.

    Also, are you using the latest version? There was a scaling bug in 2.6.1 that was fixed in 2.6.6 I believe.
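
    For anyone who wants to see the difference, here’s a quick comparison using Pillow (filenames are placeholders; GIMP’s “sinc” roughly corresponds to Lanczos):

        # Downscale with three interpolation kernels and compare the output.
        from PIL import Image

        img = Image.open("original.png")
        size = (img.width // 2, img.height // 2)

        img.resize(size, Image.NEAREST).save("out_nearest.png")  # blocky
        img.resize(size, Image.BICUBIC).save("out_bicubic.png")  # smoother
        img.resize(size, Image.LANCZOS).save("out_lanczos.png")  # windowed sinc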

  77. The GIMP, for all it gets pushed, can’t match the capabilities and UI of the consumer-grade editors like Paint Shop Pro or Photoshop Elements, let alone CorelDRAW Graphics Suite X6 (the likely beneficiary of people moving off Photoshop) or full-on Photoshop. Cinepaint, which can match some of the capabilities (well at least when in RGB or LAB spaces) can’t match the UI, flexibility or ecosystem.

    The PS CC backlash seems to be primarily from photographers, who as a group are in a position to move away from PS, are a major market for PS, and have a long-standing hate/hate relationship with Adobe (which wants them to just move to Lightroom and shut up already, despite LR’s suckitude at sharpening, resizing and local adjustments; Adobe does not have a pleasant relationship with its userbase).

    In reality, Apple, PhaseOne (makers of CaptureOne, the premier non-Adobe RAW conversion editor) and Corel stand to be the real groups to benefit from Adobe’s shotgun to the foot. Nobody who works seriously in this area even looks at open-source products, because they frankly suck horribly; the people with the skills to make them not suck are not involved in their development, the developers couldn’t be bothered to talk to the primary user groups, and the potential users are generally rather non-technical. (Free, on the other hand, has a chance: there are several excellent free-but-not-OSS RAW converters.)

  78. > I use Libre Office, not for ideological reasons, not for financial reasons (Microsoft office is free for me) but because Libre Office is just better. Its UI is just more usable.

    How do you work around Bug 4914?

    For a huge number of people (including myself) it’s an absolute deal-killer – and yet it has persisted for over a decade now.

  79. Deep Lurker> How do you work around Bug 4914? [I]t’s an absolute deal-killer
    Now this baffles me. If you want to _edit text_, use a _text editor_*. If you want to format a document for print or display (and it’s short / unstructured), use a word processor.† (If you want to format a long / structured document for print or display, use a markup language like Docbook-XML‡… and a text editor).
    You are asking Writer to be something it ain’t supposed to be. I don’t think it’s fair to blame Writer for that.

    *This is not actually rocket science.
    †Possibly writing the body text in a text editor first, then pasting it in and formatting it.
    ‡I’m not actually much of a fan of XML; I’d much rather use an S-expression-based markup language, but there aren’t any mature Lispesque markup languages around.

  80. @Monster:

    If certain Pantone colors can’t be rendered in CMYK, then they probably can’t be in RGB either, which means WYSINWYG. I’m not sure how a graphic editor can keep track of distinctions that are invisible to the eye of the person operating it.

    I must admit that I have trouble wrapping my brain around how any color can’t be represented in RGB, because of how our receptors work (other than those tetrachromatic women who carry one of the color-blindness genes). But I’ve heard enough people say that there are some “corners” in the CMYK tesseract that don’t fit into the RGB cube and vice versa that I remain open to having it demonstrated.

    Please forgive the inadequacy of my explanation here; it’s been 15 years since I’ve had to explain this, so I’m a bit rusty.

    Pantone spot colors are *mixed inks*. When you look in the book your printer has (no, not the little box over on a stand somewhere, a *real* printer, some dude with seriously arcane knowledge of shit like this) you find your Pantone number and you see a recipe that says something like 3 parts white, 6 parts burnt umber, two parts Rhodesian red (I made that up. I think).

    When you do a “Spot Color” separation, say it’s a 2 color job, with one of them being black and the other being Pantone 349, you get 2 pieces of film. One with the black stuff, and one with the Pantone stuff.

    When you do a CMYK color sep you get 4 films. One each for Cyan, Magenta, Yellow and Black.

    What happens for inks, whether mixed by the printer in a vat, or by careful line screens as in the case of CMYK (or hi-color for more vibrant printing), is that the ink on the page absorbs most wavelengths of light, only allowing certain wavelengths to bounce to your eye. CMYK is a very, very narrow color gamut and is incredibly dependent on the paper it hits. There’s a picture here http://synergenstudios.com/?p=512 that shows some of the issues.

    What you’re dealing with inside a photo retouching/graphic design program is trying to show a photograph captured at 8, 12 or 16 bits per channel, map it into the RGB space that the monitor can show, while still preserving the dynamic range in the underlying file/image. Partially this is where the color profiles come in.

    Then you have to map THOSE values into the range of the targeted printer – and then you have to worry about things like “dot gain” (the increase in size of the dot of ink once it hits the paper) for various papers (which means you make a *slightly* smaller dot to compensate) etc. etc.
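
    As a toy illustration of the compensation step (the 16% midtone gain is invented; real presses are measured and profiled, and this single-parameter curve is nobody’s actual transfer function):

        # Crude single-number dot gain model: tone value increase peaks at 50%.
        def printed_coverage(requested, gain=0.16):
            return min(1.0, requested + 4.0 * gain * requested * (1.0 - requested))

        def compensate(target, gain=0.16):
            """Find the slightly smaller dot to request so the press hits `target`."""
            lo, hi = 0.0, 1.0
            for _ in range(40):              # bisection; the curve is monotonic
                mid = (lo + hi) / 2.0
                if printed_coverage(mid, gain) < target:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2.0

        # e.g. compensate(0.50) -> ~0.35: ask for a 35% dot to print a 50% tone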

    These are all really not easy problems. Of course they’ve been solved (several times) but many of the good transforms are patented.

    Yes, I think patents on this kind of thing are questionable, but I’m not the one enforcing or making the rules.

  81. @Edward Cree: “You are asking Writer to be something it ain’t supposed to be. I don’t think it’s fair to blame Writer for that.”

    What *do* you think Writer is supposed to be?

    LibreOffice/OpenOffice tries to be a replacement for MS Office, and Writer is a replacement for Word. Word is a word processor. So is Writer.

    The fundamental difference between a text editor and a word processor is the assumed end result. A text editor assumes the output will be a file. A word processor assumes the output will be a printed page, and that the author will care what text *looks like* on the page. So a word processor includes facilities to control the appearance of the output.

    In the old days, *nix followed the one tool for one job model, and *nix users wrote documents in something like vi, and formatted them with something like nroff for printing. This meant you had to know the markup language, and either insert the markup in the document as you wrote, or add it in an editing pass once the basic text was complete.

    Word processors, starting with dedicated devices like Wang and IBM’s DisplayWriter, incorporated facilities to create and edit text and control output format without requiring knowledge of a formatting language or a second edit pass, and word processing programs for PCs and other things followed suit.

    When you are writing, particularly when you are writing long documents, you need to be able to review your text with a minimum of extraneous stuff on screen. You still *need* that extraneous stuff: you just don’t want it visible and in the way when you are editing your text. A desire for something like Word’s “normal” view isn’t a desire to copy Word – it’s a desire for a basic feature *any* such program should have.

    You won’t get writers willing to “create/edit in one program, format with another” like they did back when. (And a large part of why they did it that way back then was that programs that *did* both weren’t available.) They want to use the same program for both, and have the markup that controls the formatting invisible unless they specifically choose to look at it.

    This issue with Writer is an example of the same disconnect between the developers and the people who will use the program that bedevils the Gimp. I know an assortment of folks who make part or all of their living writing. They live in their word processors the same way developers live in their IDEs. They all use Word.

    In part, they use it because they are familiar with it. In part, they use it because the people who will pay for what they write expect to get copy as Word documents. In part, they use it because Writer just doesn’t quite cut the mustard for them, and lack of things like a normal view is a reason.

    (Many aren’t that thrilled with Word, either – it apparently falls down on long documents, and these folks are writing *books* where a manuscript may be 250,000 words in one file. Word is simply the best of a questionable lot.)

  82. esr said “But doing that doesn’t solve an even more fundamental problem, which is that the stock market doesn’t actually reward constant returns any more; it wants an expectation of rising ones in order to beat the net-present-value discount curve. Thus, in a near-saturated market, the amount of rent you extract per customer has to perpetually increase” I don’t entirely agree.

    When people invest in the market, they expect returns. They get those returns in two ways: an increase in the value of their stock, or dividends paid on their holdings. In general, your returns on any particular stock will be one or the other, not both.

    What the market mostly wants these days is *growth*, and that means constantly rising sales. Increasingly, companies that are doing just fine in terms of revenue and profits and have healthy businesses are getting hammered in the market because they *aren’t* perceived to have growth potential.

    Adobe is in a similar position to the one MS faces. MS was the quintessential “growth” stock for a long time. They showed regular double-digit increases in revenues and profits because of ever growing sales, and got a stock price in the stratosphere in consequence. They didn’t pay dividends, but investors didn’t care. The value of their holdings was constantly rising.

    MS is facing the inevitable problems that accompany a transition from a growth company to a “mature” company. Mature companies throw off gobs of cash, but *don’t* have stock prices in the stratosphere. You can argue that Bill Gates picked a savvy time to quit. He went out a winner, having built Microsoft into the dominant player in PC software and being for a time the world’s richest man in consequence. Steve Ballmer was left holding the bag and trying to support the stock price.

    For that matter, Apple is on the cusp of a similar transition. Everyone is waiting to see if they have another category defining product in development, that will create a whole new market the way the iPad did, and provide the kind of growth the market likes. If they don’t, expect their stock price to get hammered. I don’t care how much revenue and profit their existing lines continue to generate. The investors interested in capital gains want *new* sales.

    Adobe is both selling into an arguably saturated market, and has a mature flagship product. While PS might not be “feature complete”, what sort of new features can Adobe add that are so compelling existing users will feel they *have* to get the new release, let alone lead to new sales from folks who don’t currently use PS?

    My own suspicion is that Adobe’s move is largely about their stock price. As such, I suspect it’s doomed to failure. If they can’t penetrate new markets, possibly with new products, it won’t matter *how* much revenue and profit their new pricing model can squeeze out of existing customers to investors looking for capital gains, because Adobe won’t be *growing*.

    The market seems to be reacting with something less than enthusiasm to Adobe’s move, and the stock price continues a slow downward drift. A stock buyback plan would not surprise me as another attempt to support the stock price.

  83. Jeebus, Vinge was writing in 2009 using something that looks like it’s from 1983. At best. Seriously, white type on black? I would not use this screenshot as an argument for open source writing tools….

  84. @PapayaSF: Remember that Vinge was a professor of Computer Science who wrote on the side before taking a retirement package and turning to full time writing. He wrote with what he was familiar with. It’s entirely possible he could have used black on white, but if you are used to inverse video, why bother?

    And I don’t know where you got 2009 from: the screenshot is from A Fire Upon the Deep, which was published in 1992. He also wrote it at a time when a publisher was more likely to accept a manuscript that wasn’t a Word document. The submission draft might even have been hardcopy.

    Back when, you submitted a hardcopy of your manuscript, and final copy was rekeyed by a typesetter to produce the book.

    Nowadays, manuscript submission is electronic in the form of a Word document, editing and revisions take place on that document, and final copy is imported into Adobe InDesign for typesetting and markup. InDesign produces a PDF file that goes to the printer and is fed to an imagesetter to produce the plates from which the book is printed.

    I knew a chap years back who was looking at hacking emacs to be a writer’s tool, turning it into a kind of Writer’s IDE. A variety of such things exist now, adding the ability to use outlines and keep notes on characters, timelines, objects, plot elements and the like in an ancillary database alongside the existing editing and formatting tools of a word processor, but the ones I know of can produce a Word-format document as output, because that’s what publishers want to see.

  85. @PapayaSF: Okay. So he still uses the same tools. He doesn’t see a particular reason to change his workflow. And he’s popular enough that his publisher might just let him submit something other than a Word file, or he simply imports into Word for the submission draft.

    Writers are all over the map. Fred Pohl learned his trade in the days when manual typewriters were all that was available, and you made corrections by retyping pages. He *still* first-drafts on a manual typewriter, and rekeys into a word processor for editing and submission draft. Elizabeth Moon learned WordStar as her first word processor, and though she uses Word these days, she’s still likely to invoke WordStar when struggling with a section in a book, because it provides a level of familiarity and comfort that greases the wheels a bit. Rob Sawyer is an unrepentant WordStar fan.

    The fundamental question is “How good are open source products for people writing long-form prose?”, and the answer would appear to be “Not good enough.” The ones I know may keep something like OO on a notebook as an “on the go” tool, but the main engine will be a commercial product: Word on Windows, Pages on a Mac, Scrivener or the like if they use a “Writer’s IDE”, with submission draft a Word format document because that’s more or less the industry standard for publishing. (None of them use Linux, nor would I expect them to.)

  86. @Nigel:

    GIMP’s problem is those prosumer tools work a lot better and are only $30 ($15 on sale).

    How is Apple’s minority stagnant or shrinking global market share a problem for GIMP? The computer universe is innovating away from Apple’s walled garden. Yeah, the alternatives are not always as shiny and usable (yet), but sometimes they have desirable features that aren’t allowed over there.

    For me, GIMP doesn’t have to be a useful end product; it is also a body of source code I can reference if I start writing image editing code for some purpose. This is fundamentally why open source accelerates away from closed source – knowledge proliferates faster.

    Businesses should open source what they can benefit from sharing, and they should hold proprietary what can fund their growth while preventing mere copying (as opposed to knowledgeable competition) from destroying their business model. See next point…

    @DMcCunney:
    Seems you are agreeing with my prior comment, which is that business models should target growth and that extracting rent for the dumb post-IPO investors is a withering proposition, especially since the computer universe continues to expand.

    Imagine that if other people used your open source modules in a profitable business, they sent you a small tip. You could always target growth. This is my dream. I am trying to think of how this could be accomplished. My idea thus far is to create a repository where the terms are such that if you use the open source code there in a profitable product (or one where you pay for development), above some thresholds that exclude hobbyists and experimentation, you agree to pay say 5% of gross sales (or say 10% of your development costs, whichever is greater) to all the open source modules used. I am thinking the funds would be distributed based on LOC, with the metric measured by fundamental lambdas. Then anyone can improve your module, and the percent reduction of LOC they achieve is the percent of ownership they take from you. Of course the market decides which version of a module to use.
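
    To make the payout rule concrete, a toy sketch (module names, LOC counts and the sales figure are invented; the 5% rate is the one proposed above):

        # Split a 5%-of-gross tip across the open source modules a product
        # uses, pro rata by lines of code.
        def distribute_tip(gross_sales, modules, rate=0.05):
            pool = rate * gross_sales
            total_loc = sum(modules.values())
            return {name: pool * loc / total_loc for name, loc in modules.items()}

        shares = distribute_tip(
            100000, {"image-core": 40000, "color-mgmt": 25000, "file-io": 35000})
        # -> {'image-core': 2000.0, 'color-mgmt': 1250.0, 'file-io': 1750.0}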

    This dream is why I was (am) working on an idea for a computer language that enables greater modularity (referential transparency with pure functions and higher-order semantic models of modularity).

    One thing I don’t like about my idea is the centralized repository. How to make it distributed? I am also not sure the LOC idea works. Although someone could copy modules and offer them outside (or within) the repository, my thought is that for the small tip (which could even be reduced from 5% if necessary), the market is going to want to support the knowledgeable creators who can keep the modules well maintained.

    I am interested to hear criticism and faults in the idea. I hate to waste time on something unworkable. Anyone like to help? I am tired of working alone.

  87. Your biggest typo was “Apple’s minority stagnant or shrinking global market share.”

  88. @PapayaSF:
    IDC says iOS fell from 23% to 21% and Android jumped from 52.9% to 70.1% last year. I am not factoring in PCs since I read lately this is a shrinking market overall. Care to cite a refutation? Web and app downloads share is not applicable to my point. I realize there is contention over registration versus shipments, and also Apple clearly captures greater percent of enterprise and higher spending markets. But my point is that the usershare is undeniably accelerating away from Apple. It is easy to understand that Android is less expensive and so the developing world prefers Android. My point is all about growth, not about near-term rent extraction.

  99. I am interested to hear criticism and faults in the idea. I hate to waste time on something unworkable. Anyone like to help? I am tired of working alone.

    Both the Web and open source suffer from the same fundamental problem: they abstract away the notion of author-as-subject, decontextualizing text and code from its provenance. This is a problem that Jaron Lanier has been speaking at length about, and it has a solution, albeit an unpleasant one that the likes of Eric will rebel against.

    Any such distribution system for code, in order to function, would have to do what Xanadu would have done, and what the W3C is just now considering for the Web: bake DRM into the system at a fundamental level.

    I know, I know. Suck it up. DRM is here to stay, and it will only become more standardized and pervasive. The network is the most ruthlessly efficient content distribution system ever devised, and yet for all its crypto-anarchist hacker geniuses it still hasn’t gotten the crucial “making sure authors get paid for their work” bit right yet. Until it does, the quality of what it distributes will remain low, and walled gardens — where making sure authors get paid is a priority — will attract all the decent authors. Hence Steam and iTunes Music Store and paywalled magazine sites and — all of Hollywood basically.

    So the W3C adopting standards for DRM on the Web is not only inevitable, it could be the best thing that’s ever happened to the Web. It could set us free.

  90. @Just Saying:” @DMcCunney:
    Seems you are agreeing with my prior comment, which is that business models should target growth and that extracting rent for the dump post-IPO investors is a withering proposition, especially since the computer universe continues to expand.”

    Not exactly. I was drawing a distinction between business models, but not suggesting one was more valid than another. The stock market is currently fixated on capital gains, and investors are looking for stocks whose prices have potential to rise dramatically, allowing them to make a killing on the difference between the price at which they bought and the price at which they anticipate being able to sell. That requires rapid growth, and either being an early entrant into a big market, expanding into new markets, or developing new products you can also sell to your existing market.

    You can have a strong existing company, with solid revenues, high profits, and no debt, but the market will ignore you because your stock price accurately reflects your current value, and absent one of the things I mentioned above, they *don’t* expect the price to go *up*, and *don’t* see an opportunity to profit on the change. There are companies going private precisely to get off the treadmill created by those market expectations.

    The problems come in for companies that *used* to be growth companies and are now mature. They may still have high revenues and higher profits, but their stock price will *decline*. Stockholders are never pleased when the value of their holding *drops*, and the company’s question becomes “How do we support the stock price?”, with the most likely answer being “You *can’t.*”

    If I’m correct and Adobe’s move is about the price of the stock, I suspect it will fail, because it won’t boost the stock price appreciably, pretty much regardless of how much revenue and profit it generates. They will need to expand into new markets or introduce compelling new products, or both, to support the stock.

    What *your* business model should target depends on what you are trying to achieve. The first thing *I’d* look for is sustainability. I want my enterprise to be alive and well after I’m gone. I’m not interested in growth for the sake of growth, and how big I need to grow and how fast I need to do it will be a function of what I’m doing and the market I’m trying to address.

    Given my druthers, I might *not* go public. You go public for two reasons: to get money to fund further growth, and/or to grab the brass ring and dramatically increase your personal wealth. Many startups appear to be fixated on the pot of gold at the end of the IPO rainbow, which is a recipe for disaster, because most IPOs fail. You should be willing to do what you do regardless of whether you get rich in the process, because you love it and it’s what you want to spend your life doing.

    And given the market’s hunger for massive gains in stock price, if I do go public, I risk being pressured by investors to do things that will run up my stock price in the short term but may be fatal long-term. They won’t care: their goal is to buy low, sell high, and make a big buck doing so, and if what I was asked to do proves fatal to my company later, hey, they’ve got theirs. They don’t care about me or my enterprise – only the money they think they can make off gains in my share price.

  91. @Jeff Read:
    Businesses won’t steal source code modules, because they would then violate copyright and set themselves up for a lawsuit. Businesses can’t steal out in the open – individuals can, because they are hidden. Also they won’t steal because their primary need is for the module to receive bug fixes. The cost function is set such that the incentive to support the programmers is much greater than the incentive to steal. You kiss your programmers on the arse, because they are critical to your business and they are not fungible.

    Individuals have a cost function, and if the price is low enough and the DRM is non-obtrusive enough, some will buy into it. But DRM-distributed music is, I assume, still small compared to total file sharing. I think it is possible to create a very anonymous file-sharing P2P paradigm; Wikipedia the MUTE open source project. There apparently just isn’t much demand for it, because the cost function is that those who want to steal can get away with it (e.g. in all the developing world) and the rest find the cost function for, say, iTunes or Amazon downloads to be acceptable (until the global economic implosion comes).

    I anticipate an end game where DRM isn’t needed, because songs are 10 cents (inflation-adjusted). It won’t be worth the time and hassle to steal. We aren’t there yet, because the music labels want to extract maximum rent from the debt-inflated statism before it crashes, then they will lower prices to go global.

    If the W3C tries to force DRM everywhere with the cooperation of the key browser vendors, they will become irrelevant. I hope they try, as it will open a big opportunity to replace them.

  92. For those photographers looking for an open source digital darkroom, here’s an interesting option:
    http://lightzoneproject.org/
    http://en.wikipedia.org/wiki/LightZone

    It appears to be a former proprietary product that is now open source. Not sure how active the project is, though it appears that there are recent updates on the website. It may be a better option for photography buffs than GIMP, with its larger colour space and non-destructive editing features.

  93. @DMcCunney:
    I read your comment as illustrating that courting stored monetary value destroys knowledge. There is nothing stable in nature; it is either growing or it is dying, relatively speaking, in the competitive evolution. Thus my proposal for open source modularization.

    I conceptualize that the financial model we have now is dying because knowledge (now significantly encoded in software and creative arts) isn’t fungible and can’t be just bought. Our financial system creates money from nothing, but knowledge can’t be. Knowledge is never static; it is either generating new knowledge or it is falling behind that which is. The universe is always expanding; total entropy (# of independent possibilities) of the universe is always trending to maximum (Second Law of Thermodynamics). I am thinking it is wise to shoot for permanent growth by maximizing sharing of knowledge in an economic framework where the relative knowledge is partitioned fairly to maintain incentives for knowledge production (since really, physical wealth is just knowledge, e.g. 3D printer designs). This is the essence of open source. We must get the economics right.

    Seems so far open source has largely been funded by large corporate strategies, which thus cannot be perfectly fitted to the knowledge granularity of the creative actors. I think this may be the reason open source is not always performing well against small focused teams using proprietary source. I know I would be very motivated to improve many of the complaints I read in this blog, if I could be paid for my contribution. The only way for me to do that is to go back to work for Corel, but then I am not free to work on my chosen priorities.

    @PapayaSF:
    I noticed recently in the Philippines that Android tablets have blasted off. I suspect we may soon see Apple’s tablet market share trounced, for the same reasons of cost and variability of price points and model features (freedom of choice). But I haven’t followed the comparative technical issues about tablets closely, so perhaps I am missing some key point. I read that Samsung’s very large screen smartphones are displacing some tablet sales. To Apple’s credit, I finally see iOS products widely available here, and some desire them, but it is more along the lines of “can you buy me one” (because of the high cost), while they are using an affordable Android. Most requests I’ve heard are for the iPhone 4S, yet mostly I see people buying Samsung in the retail outlets.

    @Jeff Read:
    I am thinking you see a need for DRM because you think the author should be able to maximize rents, and society shouldn’t have any veto except to not use the creation. Nature gives individuals a veto, which is to refuse to pay when the cost function is not optimized. I am thinking the knowledge creators don’t need stored monetary value (rather, it is the dumb stored-monetary-value rent-seeking parasites who do, and they destroy knowledge creation); what they need most is to balance the cost functions so that maximum knowledge is created, i.e. maximize sustained growth for all.

  94. @Edward Cree:

    Strict segregation of text and formatting is the wrong thing for most writers, and in any case “strict segregation” is a myth – the text displayed by a text editor will have formatting, however basic and minimalistic that formatting might be.

    Writers need a certain amount of formatting mixed in with the text when they’re writing. The type and amount of formatting needed varies from writer to writer, and even a single writer will need different amounts of formatting from time to time. For most writers, however, the amount of formatting needed is more than the basic & minimalistic formatting offered by a text editor. Instead, writers reach for word processors that can quickly and easily change the kind and amount of formatting that’s displayed at any given time.

    Telling a writer that he should write in a text editor (rather than a word processor) is like telling a programmer that he should code in an editor that lacks syntax highlighting and code folding (or that lacks user-customization of those features). In both cases, a minority of writers/coders are just fine without the extra features, but most writers/coders find the extra features to be highly desirable if not absolutely necessary.

  95. @PapayaSF
    “Apple does better in the US,…”

    We are close to the point that there are over 7 billion mobile phone subscriptions with over 4 billion unique users:
    http://communities-dominate.blogs.com/brands/2013/01/countdown-to-mobile-moment-when-will-mobile-phone-accounts-outnumber-humans-i-say-july-2013-we-are-a.html

    Most of these phones will be “converted” to some kind of Smartphone in the next 5 years or so for poor to very poor people. That will be some 3-5 billion, mostly cheap, Smartphones. These phones will run things like Android or Firefox OS. They will most definitely not run iOS.

    How are these demographics to square with any fantasy about iOS taking over?

    (suggestion: Maybe eric’s travels have to do with Firefox OS?)

  96. @Edward Cree:

    >Deep Lurker> How do you work around Bug 4914? [I]t’s an absolute deal-killer
    Now this baffles me. If you want to _edit text_, use a _text editor_*. If you want to format a document for print or display (and it’s short / unstructured), use a word processor.

    I had to go to our family desktop and open up Word 2003 to remember what “Normal View” does. It has a fixed (rather than window-size-dependent) line width as in page view, so that it does show more or less exactly what will be on the printable area of the page (unlike a text editor or web view), but doesn’t display the margins or page edges.

    I can see where some people would find such a thing useful, though I find it rather uncomfortable to work with myself (which is why I’ve been using page view instead for years). The OP on the bug thread makes it sound like he wants a text editor, but some of the posters downthread I think give a better idea of why people who want such a feature want it.

  97. > > I use Libre Office, not for ideological reasons, not for financial reasons (Microsoft office is free for me) but because Libre Office is just better. Its UI is just more usable.

    Deep Lurker on Monday, May 13 2013 at 7:54 pm said:
    > How do you work around Bug 4914?

    If I want to edit unformatted text, and see what it will look like unformatted, I use a text editor. If I am using Writer, it is because I want to see the text formatted, and edit formatted text.

    Word processors are word processors, text editors are text editors. Use the right tool for the job.

  98. I did not say, and never have said, that iOS was “taking over.” I have just been arguing that Android was not taking over in the way that you and Eric and others have been predicting. (I.e. that Android had reached, or would soon reach, a tipping point which would inevitably drive iOS down to a single-digit market share in the US.) It ain’t happening, bro. One reason is that apparently more people (at least in the US) move from Android to iOS than the other way around.

    Yes, the world will be filled with cheap-ass smartphones, and most won’t run iOS… at least not at first.

  99. Jeff Read on Tuesday, May 14 2013 at 12:37 am said:
    > Suck it up. DRM is here to stay, and it will only become more standardized and pervasive. The network is the most ruthlessly efficient content distribution system ever devised, and yet for all its crypto-anarchist hacker geniuses it still hasn’t gotten the crucial “making sure authors get paid for their work” bit right yet.

    These days, I watch music videos produced by amateurs for free, rather than commercial music videos. I just like them better. For example [MMD][POKERFACE][GUMI][MIKU][APPEND][NEW CAMERA] [60 FPS].mp4

    I read web comics.

    I use free android apps.

    People are prepared to do free work to gain mindspace, and worry about monetizing mindspace later.

  100. @PapayaSF:
    If my first experience with Android is any indication, I must agree it isn’t good enough to displace iOS to single-digit share. I actually had the thought of buying an iPhone, but I think my first step is to evaluate a Samsung in the store (load up my huge SMS history) instead of an HTC, and hope my experience thus far was an aberration. I can’t even queue up multiple SMS messages without the darn thing crashing every time; it is slower than pulling teeth, I assume due to a failure to optimize for big O; and there is some hardware defect or virus that causes it to self-touch the touchscreen. My phone now has a configuration from random self-touches that I would have no idea how to undo, other than a reformat (on my todo list, but from some reading I am concerned about history backup and restore working).

  101. @PapayaSF
    “(I.e. that Android had reached, or would soon reach, a tipping point which would inevitably drive iOS down to a single-digit market share in the US.)”

    I am not sure about Eric, but I hardly care about the US market share. On a global scale, less than 500M iPhones among more than 5B Smartphones sounds like single digit market share to me.

  102. You don’t have to care. Apple doesn’t particularly care about market share percentages, either, especially among the profitless low end of a market. Dell captured that in the PC world, and how are they doing these days? There have always been lots of cheap MP3 players, but that didn’t stop iPods from conquering that market.

    I agree that it’ll be meaningful when billions of poor people have cheap smartphones, I just don’t think that makes Apple irrelevant. Most of those people would prefer an iPhone, which is a factor you shouldn’t ignore. Apple has plenty of talent, skill, cash, and vast economies of scale. In 5 or 10 years, billions of iPhones are not out of the question.

  103. @JAD: OK so you don’t need the feature. But it’s arrogant to assume that no one else does, either.

    When writing multipage blocks of text I greatly prefer to see the text formatted or partly formatted, rather than unformatted. Therefore a text editor is not the right tool. But I also need to view that text without the distraction of the top and bottom margins of each page, or of the header and footer text in those margins. Therefore I need a word processor that has a “normal” or “draft” view, or a modified “page” view that hides the top and bottom margins.

    It turns out that Word 2003 has both a normal view and an option in the page view to hide the top and bottom margins. Open Office and Libre Office don’t have either option, as of the last time I checked.

    It’s like being given a choice between a programmer’s editor that shows syntax highlighting but not code folding and one that has code folding but not syntax highlighting, and then being told to “use the right tool for the job.” It turns out that the right tool is the one that can do both at once.

  104. @PapayaSF
    “Apple has plenty of talent, skill, cash, and vast economies of scale. In 5 or 10 years, billions of iPhones are not out of the question.”

    They are. For a few reasons. Mostly because one design does not fit everyone. Also, it has never ever been Apple’s strategy to serve the masses. Not when they marketed the original Macintosh, nor when they marketed other “gadgets”. Apple always goes for the high end of the market.

    Even now, when they could take over the laptop market completely with cheap Macbooks, they stick to the expensive part of the market.

    @PapayaSF
    “Dell captured that in the PC world, and how are they doing these days?”

    Windows PCs rule the market. Nobody but Dell shareholders care about Dell market share. They do care about Windows market share though. The same with mobile phones. Users do not care about Samsung market share. But users do care about Android market shares.

    Down the road, I care whether or not all 7 billion humans can get cheap and easy access to networked mobile computers, because that will make life better for everyone, including me. Apple is not going to deliver that, even if they could. Because, as you write:

    Apple doesn’t particularly care about market share percentages, either, especially among the profitless low end of a market.

  105. @PapayaSF:
    Samsung is moving upscale and innovating, e.g. with their larger screen smartphones. Android is innovating. Windows worked out for the PC, and Windows 3.1 wasn’t that great. Incomes in the developing world are rising, and Android has more price points and models to migrate through.

    The breaking point might be when the smartphone is powerful enough to use as a PC at a docking station; then the synergies from openness and marketshare might be even more important, because we will want to run all our apps and peripherals.

    I suppose most developers of generalized apps make an iOS version, because the marketshare and revenue potential are significant. But if most of your customers are running Android, then priorities in vertical markets can disrupt this.

  106. The docking station might even boost your Android or sync it, without having to pack all that powerful hardware into the phone, giving the user experience that you are not manually managing two separate devices. The apps might auto-sense when they are not running on the docking station to offer simpler interfaces and lower resource usage.

  107. >The problem is fundamental; one-time purchase payments can’t cover unbounded downstream support and development costs.

    Why should they? Take Microsoft Dynamics. There is no unpaid support at all, and as for development cost, customers can either pay 16% of the licence price a year and then get the new versions for free (this is called “paying licence maintenance”), or they can give it up, which means they must buy the next version. They usually must, because after 2 major version changes the older ones become unsupported, meaning that for example Dynamics NAV 5 is not guaranteed to run on Windows 8.
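
    Back-of-envelope, with an invented licence price, the maintenance fee amounts to re-buying the product every six years or so:

        # 16%/year maintenance vs. re-buying: when do they cost the same?
        licence = 10000.0                     # invented figure
        maintenance = 0.16 * licence          # 1,600 per year
        years_to_rebuy = licence / maintenance
        print(years_to_rebuy)                 # 6.25 years of maintenance = one licence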

    Does any company really offer unlimited support and future versions for a one time purchase price? That makes no sense.

    Unless their product is pirated so much that they simply have to throw it in as an incentive not to pirate. This seems to be the case with Photoshop. Everybody I know has pirated it at least once, even if just out of curiosity.

    Anyway the causal chain is different then: piracy -> have to offer unrealistic things to paying customers to entice them not to pirate -> unsustainable business model.

  108. @Jeff Read: “Until it does, the quality of what it distributes will remain low, and walled gardens — where making sure authors get paid is a priority — will attract all the decent authors.”

    Value is relative: something is worth what someone else will pay for it.

    Back when Hollywood was pushing SOPA/PIPA, they were making an underlying assumption I consider invalid: they were assuming that if they could just magically stop piracy, revenues and profits would soar because the pirates would buy instead.

    I don’t believe that. The pirates were willing to grab content they could get free. It wasn’t valuable enough to *them* to *pay* for it. Make it impossible to pirate, and they’d simply do without, and do what they were doing before: spend their money on stuff that *was* valuable to them.

    The market pays for value. The trick is to *provide* value, price appropriately, and make it as easy as possible to give you money. Efforts trying to prevent piracy are pointless, because the pirates likely wouldn’t buy in the first place. You’re far better served to spend your efforts on reaching those who *will* pay for what you do.

    Baking DRM into content won’t magically ensure authors get paid, because you don’t get paid if no one is buying, and applying DRM won’t ensure they do.

  109. JustSaying: “For me, GIMP doesn’t have to be a useful end product, it is also a body of source code I can reference if I start writing image editing code for some purpose. This is fundamentally why open source accelerates away from closed source– knowledge proliferates faster.”

    If open source is accelerating away, where’s the product that shows it? It ain’t The GIMP.

    JAD: “People are prepared to do free work to gain mindspace, and worry about monetizing mindspace later.”

    The danger is that there may well be no way to monetize the mindspace. People’s expectations are that they can get it all for free, and those who try to monetize what used to be free tend to face severe backlash.

    PapayaSF: “Apple doesn’t particularly care about market share percentages, either, especially among the profitless low end of a market. Dell captured that in the PC world, and how are they doing these days?”

    And don’t forget that Compaq’s downfall was a blind push to gain market share, too.

  110. If open source is accelerating away, where’s the product that shows it? It ain’t The GIMP.

    It’s Cinepaint, which has been used by many major movie studios for the past 10 or 15 years. I’m not sure why this professional-grade GIMP fork doesn’t get enough love. Curiously, GIMP developers have historically been hostile to Cinepaint-originated contributions, which boggles the mind. Its primary focus is on movie-oriented work, but it has a number of features (color management, 16-bit color components) that print pros will appreciate as well. These days it even supports CMYK.

    The feedback between Cinepaint and its user base is even tighter than between Photoshop and its user base: the same studios that use it also contribute code to it. It’s arguably more actively developed than GIMP.

    I use GIMP as an artistic program, and it suits my own needs just fine. But until the GEGL-pocalypse comes, it will be deficient for professional work. In the meantime, a possible open source alternative exists in the form of Cinepaint.

  111. @William O’Blivion

    Pantone spot colors are *mixed inks*. When you look in the book your printer has (no, not the little box over on a stand somewhere, a *real* printer, some dude with seriously arcane knowledge of shit like this) you find your pantone number and you see a recipe that says something like 3 parts white, 6 parts burnt umber, two parts Rhodesian red (I made that up. I think).

    That’s all well and good, but what actually reaches my eye is light, not pigment, and the human eyes and brain represent that light as three values. The idea that printers can only faithfully represent certain colors via use of six or seven “primaries” is therefore absurd. The YPbPr color space used by TV is probably a better representation of our perception, but it is converted back to RGB for display.

    Yes, I realize that paint colors use even more than that number of pigments, but that’s for the same reason that only the cheapest printers use CMY without the K: it’s cheaper to have more pigments to mix in smaller quantities.
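
    For reference, those three values are just a linear transform of RGB. A minimal sketch using the ITU-R BT.601 coefficients (one of several standard variants):

        def rgb_to_ypbpr(r, g, b):
            """Gamma-encoded RGB in [0, 1] -> Y'PbPr (ITU-R BT.601).

            Y is luma; Pb and Pr are blue- and red-difference chroma in
            [-0.5, 0.5]. The transform is linear and invertible, which
            is why TV gear can convert back to RGB for display.
            """
            y = 0.299 * r + 0.587 * g + 0.114 * b
            pb = 0.5 * (b - y) / (1.0 - 0.114)   # == (b - y) / 1.772
            pr = 0.5 * (r - y) / (1.0 - 0.299)   # == (r - y) / 1.402
            return y, pb, pr

        print(rgb_to_ypbpr(1.0, 0.0, 0.0))  # pure red: Pr maxes out at 0.5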

  112. @JAD “If I want to edit unformatted text, and see what it will look like unedited, I use a text editor. If I am using Write, It is because I want to see the text formatted, and edit formatted text.”

    So, you never want to see (and work with) it formatted (as in, having bold, italics, headers in large font sizes, indented paragraphs) but not paginated (as in, having margins, a defined width, page breaks)? And because you never want to do that, neither should anyone else?

  113. @The Monster

    Are you proposing having all files at all stages of the process represent pixels purely in a simple color space (whether that is RGB, CMY[K], Lab, or whatever else) and all conversion to how it should physically be printed – including selection of which primaries to use – should happen… at what level? At the print shop? Within the printing device’s firmware?

  114. P.S. Would you at least agree there’s some merit in wanting a color other than a fully saturated CMY primary show up as solid ink rather than an array of dots? That’s not just a matter of being cheaper, that’s a matter of looking better.

  115. How is Apple’s minority, stagnant-or-shrinking global market share a problem for GIMP? The computer universe is innovating away from Apple’s walled garden. Yeah, the alternatives are not always as shiny and usable (yet), but they sometimes offer desirable features that aren’t allowed over there.

    Apple doesn’t have a shrinking global market share. OSX share is increasing relative to Windows.

    With respect to iOS, the global market share looks good given their market expansion in India and China. If they actually release a lower-cost iPhone it looks even better, and if the Pegatron rumors are true, expect to see a plastic iPhone this fall.

    /shrug Folks here predicting the imminent irrelevance of iOS have been wrong thus far and have had to repeatedly move the goalposts.

    Eventually they may be right, but not anytime in the near future (i.e., the next 5 years).

  116. @JustSaying:
    Of course Samsung and Android are innovating, but Apple is not exactly standing still.

    @Winter:
    That’s a bit sweeping. Apple is rarely the cheapest option, but I wouldn’t say that the Apple ][ and all iPods and iPads and iMacs were “high end.”

    Users do not care about Samsung market share, but they do care about Android market share.

    Actually, the opposite is more true. Samsung is doing well in part because they are branding themselves well. Other Android vendors aren’t making money. Note that Samsung advertising almost never mentions Android. A huge portion of “Android buyers” are just getting a cheap smartphone and they don’t know or care what OS is on it. Thus the large portion of buyers who don’t use the web from their phones, don’t engage in the Android app system, etc. And it’s even a stretch to refer to an “Android market share” when so much of it consists of devices with old OS versions that will never be updated, incompatible forks (esp. in tablets), etc.

    Apple is crucial to driving the process of getting everyone a smartphone in several ways. Clearly they kickstarted the process with the iPhone, giving everyone something to emulate. (Recall that Android started as a Blackberry clone.) And Moore’s Law works in their favor, too: what’s high-end now doesn’t stay that way.

  117. It’s Cinepaint, which has been used by many major movie studios for the past 10 or 15 years. I’m not sure why this professional-grade GIMP fork doesn’t get enough love.

    For the same reason that Lucid Emacs didn’t get much love from the FOSS crowd: because corporate developers actually needed to get the code done to do real work, as opposed to futzing around with an egotistical “lead” developer.

    Curiously, GIMP developers have historically been hostile to Cinepaint-originated contributions, which boggles the mind.

    Because FOSS eats its own young a lot. Primarily because GIMP devs feared that Cinepaint would steal active contributors from GIMP. The coin that matters in FOSS is code, and a successful forked project attracts devs who might otherwise work on the original.

    It’s arguably more actively developed than GIMP.

    Well, that wouldn’t be all that hard. They’ve been working on 16-bit for a decade, and AFAIK it’s still in the unstable branch awaiting their “final transition to GEGL”.

    ROFL

    In the meantime Pixelmator just added vector support in 2.2. The velocity difference between Pixelmator or Acorn vs GIMP is almost night and day.

  118. @Nigel: “With respect to iOS the global market share looks good given their market expansion in India and China. If they actually release a lower cost iPhone then it looks even better and if the Pegatron rumors are true then expect to see a plastic iPhone this fall.”

    Personally, I think that iOS vs Android market share comparisons are fundamentally flawed. They’re comparing Apples to oranges (sic). If you want to do a meaningful comparison, compare vendor to vendor, and compare Apple against, say, Samsung, who is the closest thing to real competition in Apple’s space.

    Android overall may have a larger market share, but that’s spread over how many different vendors and devices? Any other vendor would *kill* to get a fraction of Apple’s sales.

    Apple targets the high end of the market, which is not price sensitive. They make enormous revenues and profits, have a stock price in the ionosphere, and until they finally decided to start paying dividends, had more accumulated cash than the total value of a lot of their competitors combined.

    Market share is meaningless if it doesn’t translate to revenue and profits, and you can have a huge market share and still be losing money if you can’t successfully charge enough for what you sell. The old joke “We lose money on every sale but make it up on volume!” applies.

    Apple’s stock price will get hammered if they don’t have another game changer product up their sleeve, but they’ll still be making enormous amounts of money. Whether Apple will release a cheaper iPhone is a tricky question. The decision will likely reduce to “Will we make enough additional sales to more than compensate for the margin we’ll forfeit on those sales?”

    If Apple *doesn’t* see increased market share translating into significant revenue and profits, they may pass on trying to get it.

  119. @Random832

    I’m saying that whatever method one uses to encode color, what our eyes and brains get out of it is <Y,Pb,Pr> or something damned close to it. Whoever thinks more data than that should be used to represent the color has the burden of proof to show why doing so would help in any way. For instance, there are vector graphics that can more efficiently encode constructed shapes, with rasterization delayed until the image is to be rendered either on a screen, printer, or some other type of device. (This does not preclude caching intermediate representations, such as a vector-based font being rasterized to a particular size, weight, resolution, etc.; then that result stored for faster reconstruction of that particular combination; with the understanding that it is merely for convenience, and can be discarded at any time and recreated as needed.)

    I’m not saying that the translation to CMYK or whatever pigments must be in the printer’s firmware, but it does belong somewhere between there and a driver that knows details about the printer that the graphics editor doesn’t need to know. Its job is to manipulate the image, not figure out how to make a particular device produce that image.
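
    The “cache and discard” idea in that parenthetical is easy to sketch. Every name below is hypothetical, with Python’s lru_cache standing in for the eviction policy:

        from functools import lru_cache

        def render_outline_to_bitmap(font, size, weight, dpi, char):
            """Stand-in for a real (expensive) outline rasterizer."""
            return f"<bitmap of {char!r}: {font} {weight} {size}pt @{dpi}dpi>"

        @lru_cache(maxsize=1024)  # eviction == "discarded at any time"
        def rasterize_glyph(font, size, weight, dpi, char):
            # Keyed on every parameter that affects the pixels, so a
            # cached result is interchangeable with a fresh one.
            return render_outline_to_bitmap(font, size, weight, dpi, char)

        rasterize_glyph("DejaVu Sans", 12, "bold", 300, "A")  # computed
        rasterize_glyph("DejaVu Sans", 12, "bold", 300, "A")  # cache hit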

  120. JustSaying said: For me, GIMP doesn’t have to be a useful end product, it is also a body of source code I can reference if I start writing image editing code for some purpose. This is fundamentally why open source accelerates away from closed source– knowledge proliferates faster.

    Spoken like someone who is not the target market for any photo-editing software.

    If GIMP is accelerating away from closed source, it’s doing so very, very slowly – it remains as useless now as it was the first time I looked at it in the late 90s.

    My observation is that Open Source is only “accelerating” in the server-tools and unix-development realm, frankly.

    That’s a big, big place. But … it’s also exactly where the Average Joe User never, ever, ever makes a software decision.

    Open Source remains unable to compete very well in non-geek end-userland; the only thing I can think of that’s compelling – rather than merely used as a known-inferior option because it’s free* – is VLC.

    Reasons why are well-established above: The programmers both aren’t the target market and have no real incentive to do ugly grunt-work to serve that market**.

    (* This describes Open Office in almost all cases I’ve seen someone suggest or cop to it. Likewise GIMP.

    ** This is exactly why I have zero worries about the OSS competitors to the commercial code I get paid to write ever putting me out of a job. The OSS competition in my field [point-of-sale] is laughable and shows no signs of ever being otherwise.)

  121. TheMonster said: “That’s all well and good, but what actually reaches my eye is light, not pigment, and the human eyes and brain represent that light as three values. The idea that printers can only faithfully represent certain colors via use of six or seven “primaries” is therefore absurd. The YPbPr color space used by TV is probably a better representation of our perception, but it is converted back to RGB for display.”

    Realize that in many cases for professional print output, “the printer” is not a Device That Mixes Some Pigments To Approximate YPbPr/CMYK.

    In many cases, it’s both a CMYK array (of literally separate passes) and actual spot color (or just spot color!).

    The software, if told by the designer, “this area is Pantone XYZ”, can’t just store “some CMYK variant of Pantone XYZ” unless you’re okay with converting to 4-color process for that output. I will give you a big hint here, when I say that is NOT OKAY in terms of output quality, typically.

    If your solution can’t handle spot color as an organic entity, your solution is useless for prepress. Period.

  122. @Deep Lurker:
    >In both cases, a minority of writers/coders are just fine without the extra features, but most writers/coders find the extra features to be highly desirable if not absolutely necessary.
    I guess I just happen to be in those minorities, and not comprehend the people who aren’t, then. While my text editor (gedit) does have syntax highlighting, I’m not convinced it’s useful (since reading this). And when I have to edit grub.conf in ed*, I’m pretty sure it’s not syntax highlighting I miss. As for code folding, if you need it, your code is probably bad.
    By analogy, if your writing needs to be formatted for you to be able to bear working on it, your writing is probably bad. I’ve used plaintext or markup pretty much exclusively for everything from technical writing to fiction and it’s never bothered me.
    So, before declaring that you need feature X (whether X=formatting or X=syntax highlighting), try working without it for long enough to give you a reasonable chance of adjusting… and see if you still need it. You might not; and if you do, you now have evidence to convince people like me, who don’t, that there’s relevant interpersonal variation.

    *Well, technically, I had the choice of ed or vi. The fact that I chose ed says something about vi…

  123. @Sigivald:
    > If your solution can’t handle spot color as an organic entity, your solution is useless for prepress. Period.
    But looking at it from another angle, and not deifying the existing process, I could equally argue “If your solution requires the designer to know about and manually specify printing-process arcana, your solution is useless for graphic design. Period.”
    Unless you happen to believe that an artist is so much better than a computer at working out how best to render an image given an available printing technology, that it actually makes sense for them to spend brainwidth on spot colour rather than on the actual, y’know, design. (Not just better, but enough better to give comparative advantage.)

  124. @DMcCunney: “Personally, I think that iOS vs Android market share comparisons are fundamentally flawed. … If you want to do a meaningful comparison, compare vendor to vendor, and compare Apple against, say, Samsung, who is the closest thing to real competition in Apple’s space.”

    Smart Mobile Device shipments for Q1 2013:
    Samsung: 87.2M units
    Apple: 59.6M
    Android overall: 183.7M

    Apple share: 19.3%
    Microsoft share: 18.1%

    So *Microsoft* has essentially tied them.

    Read the whole thing:
    http://linuxgizmos.com/android-tops-q1-2013-smart-mobile-device-shipments/

  125. Shipments aren’t sales, though. There’s no way that Microsoft sold nearly as many smartphones and tablets as Apple this year.

  126. @PapayaSF: “Shipments aren’t sales, though. There’s no way that Microsoft sold nearly as many smartphones and tablets as Apple this year.”

    Yep, and forgetting that can be deadly.

    The late Alfred Sloan was CEO of General Motors during its formative years, and wrote an autobiography called “My Years with General Motors”. He described an incident during the Great Depression when he gave one of the few direct orders he ever issued as CEO and told the divisions to cut production.

    The divisions recorded it as a “sale” when a car was shipped to a dealer. Sloan had toured dealerships and seen the growing inventory of unsold cars. He recognized that they were killing their dealers, and if their dealers died, their death would follow.

  127. > So, before declaring that you need feature X (whether X=formatting or X=syntax highlighting), try working without it for long enough to give you a reasonable chance of adjusting…

    What makes you think I haven’t? Been there, done that, wore out multiple tee-shirts. Starting with a manual typewriter, back in the 1970s.

    If I hadn’t found the new programs to be significant improvements, as they came along, I wouldn’t have adopted them. And in fact, I didn’t take to my first wysiwyg word processor (on a C64) because it didn’t have a “draft” mode. When the new version came out with a draft mode, I switched – because at that point it was a big improvement over a text-editor style program.

  128. (I should note that my “been there, done that” applies to writing only. Beyond dabbling a bit in html code and vba macros, I am not a programmer.)

  129. @PapayaSF @DMcCunney

    You seem to take it as a given that Apple is selling most everything they ship, where Microsoft, well, just couldn’t possibly be. But for all we know Apple may be the “General Motors” here.

    But then I’m anxious for any data that suggests Apple’s dominance and negative influence on this industry are being kept in check.

  130. @Michael Hipp: “So *Microsoft* has essentially tied them.”

    Well, that’s a start, though I’ll take the MS numbers with a sack of salt.

    But simple share percentages are meaningless, unless you have some idea what share works out to in revenue and profit. If you have significant share, but it’s in the low end commodity part of the business where margins are paper thin and you make pennies on the dollar, someone with much smaller share but higher prices and margins might arguably be doing a lot better than you.

  131. @DMcCunney “But simple share percentages are meaningless”

    LOL. No offense intended to you, but the Apple chorus has been fun to watch:

    – Apple sells more smartphones than anyone!
    (then Android passes it up)
    – Doesn’t matter! Apple sells more iOS devices than anyone, it’s total ecosystem that matters!
    (then Android passes it up)
    – You can’t compare Apple to all of Android, you’ve got to compare it to just one company!
    (then Samsung passes it up)
    – Market share doesn’t matter, it’s total revenue and profit that matters!
    (will this too get passed up? I dunno. But I’m sure at that time we’ll hear “it doesn’t matter!”)

    Am I the only one that sees the humor in this?

  132. @Michael Hipp: “You seem to take it as a given that Apple is selling most everything they ship, where Microsoft, well, just couldn’t possibly be.”

    I *do* think Apple is selling most everything they ship. There is a distorting effect in the smartphone market, since the price the customer pays is heavily subsidized by the carrier. (Heavily enough that it’s likely AT&T doesn’t *make* money on an initial iPhone contract. The cost of the subsidy eats their profit. They have to hope the customer re-ups and they make money down the road.) If the customer had to pay full retail, we’d likely see an effect on sales. (Never mind that over the long term, buying an unlocked iPhone at full retail, and then shopping for a good plan will likely save you money. Most consumers aren’t good at making that long term analysis.)
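
    That long-term analysis is simple arithmetic. A sketch with invented numbers (none of these figures are real carrier prices):

        # Hypothetical two-year total cost of ownership; every figure
        # below is made up for illustration only.
        months = 24
        subsidized_phone, contract_plan = 199, 90   # upfront, per month
        unlocked_phone, cheap_plan = 649, 55        # upfront, per month

        print(subsidized_phone + contract_plan * months)  # -> 2359
        print(unlocked_phone + cheap_plan * months)       # -> 1969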

    And Microsoft *is* selling everything they ship, if you are talking about the OS. They get a fee for Windows for every device a manufacturer makes that has Windows pre-installed. Actually *selling* the device is the hardware vendor’s problem. So a question to ask is how Nokia is doing, since they bet the farm on Windows Phone as the basis for their new product lines.

    Microsoft isn’t a major hardware vendor, and doesn’t want to be. I haven’t seen numbers on what sales of Surface units are like, but I consider that MS trying to prime the pump and get OEMs to climb on board.

    And once again, simple share percentages aren’t terribly meaningful unless we know what sort of revenues and profits that share brings.

  133. @Michael Hipp: “Am I the only one that sees the humor in this?”

    I see a lot of humor in it, but then, I’m not an Apple fan and don’t own Apple gear. (I *do* have an old PowerMac given to me by a friend cleaning house, waiting for me to install the last version of OS/X that runs on that class of machine, just to play with OS/X.)

    But I am an interested observer who has been watching the industry for decades, and I stand by my statements. Look at Apple’s revenue, profits, and stock price. Ask yourself how much they really *care* about market share percentage or unit shipments. Ask yourself how much they *should* care.

    I see trouble down the road for Apple, as they are arguably at the tipping point where they will no longer be a “growth” company. But most of their competitors would *like* to have that sort of problem.

  134. @DMcCunney: “I *do* think Apple is selling most everything they ship.”

    You may be right. You may be wrong. I posted some data (however imperfect).

    I tend to doubt that AT&T, Verizon, HTC, et al are sitting on *tens of millions* of unsold MS units. Surely they’re better at managing production schedules and inventory than that.

    The data does *suggest* that a) Apple’s position is far from secure, b) MS might yet be a serious player.

  135. @Michael Hipp: “@DMcCunney: “I *do* think Apple is selling most everything they ship.”

    You may be right. You may be wrong. I posted some data (however imperfect).”

    And the data is at least a start.

    I have no idea what carrier inventories might be of units running Windows Phone (and don’t especially care). I agree that they should have gained some expertise in inventory management.

    MS may well become a player, though they are playing serious catch-up. I’m even less a fan of MS than I am of Apple, but I hope they *can* compete. It’s good for the overall market.

    See my earlier comments on Apple’s position. The biggest threat they are faced with is a declining stock price. Vendors like Nokia are struggling to *survive*.

  136. @DMcCunney: “… and I stand by my statements. Look at Apple’s revenue, profits, and stock price. Ask yourself how much they really *care* about market share percentage or unit shipments. Ask yourself how much they *should* care.”

    Sure. But consider a couple of things:

    a) Strong rumors exist that Apple will be bringing out a “low cost” iPhone. If true, this would suggest that Apple considers market share important. Similarly their entrance into the previously pooh-poohed 7″ tablet market.
    b) Their stock price is a bubble as is essentially the entire equity market. Means nothing.
    c) Market share and mind share are closely linked and can be a make-or-break issue depending on how strong are network effects and “social” aspects of consumer choice. Apple by no means has this sewed up.
    d) Everything you said about Apple and everything we know of them are from the Jobs era. That era has ended and much uncertainty follows. It would be extremely naive to think “as it has always been, so will it always be”. The safe bet is that Apple weakens, the unknown is how much.

  137. @Michael Hipp:
    It’s not a given, but yes, the way to bet is that Apple is selling most of what they ship and not channel-stuffing. As DMcCunney writes, Apple’s revenues and profits are a good indication. They’ve gotten very good at supply chain management and inventory control compared to the bad old days of the 1990s, when they constantly had shortages of popular items and gluts of unpopular ones. Their focus on a few variations of just a few models helps.

    I think any “negative influence” Apple has on the industry is far outweighed by the positives. The personal computer, the GUI, the laser printer, digital music sales, MP3 players, smartphones, tablets: all either popularized or revolutionized by Apple, and all cheaper or better or both than what came before. And don’t forget smashing the control that wireless companies used to have over cellphones. If you have an Android phone without non-removable crapware and a stupid vendor skin, thank Apple for paving the way.

  138. @Michael Hipp: “@DMcCunney: “… and I stand by my statements. Look at Apple’s revenue, profits, and stock price. Ask yourself how much they really *care* about market share percentage or unit shipments. Ask yourself how much they *should* care.”

    Sure. But consider a couple of things:

    a) Strong rumors exist that Apple will be bringing out a “low cost” iPhone. If true, this would suggest that Apple considers market share important. Similarly their entrance into the previously pooh-poohed 7″ tablet market.”

    No, it means that Apple considers *growth* important. Growth is what got them that stock price in the ionosphere, and growth is what might keep it there.

    The question is what they’re willing to pay for growth. The rumors essentially require that Apple be willing to accept lower prices, less revenue, and less margin to get growth, and the question is how much less they’ll accept. My assumption is that they will do it if they think the growth they will get will be sufficient to more than make up for the lower revenue and margin they will accept to get it.

    If it works, it will result in higher market share, but the point isn’t market share, it’s money.

    “b) Their stock price is a bubble as is essentially the entire equity market. Means nothing.”

    It means *everything* if you are a publicly traded company *in* the equity market. Your shareholders are your owners, and the value of their equity is critical to them. Once you have attained a stock price in the stratosphere, your problem is keeping it there. Top management’s primary job is to preserve and increase the value of the owner’s holdings. They can be sued by shareholders if the holders don’t think they are trying to do so, and dismissed and replaced if shareholders think they *are* trying, and failing.

    “c) Market share and mind share are closely linked and can be a make-or-break issue depending on how strong are network effects and “social” aspects of consumer choice. Apple by no means has this sewed up.”

    I don’t think they have either. But I think they are in a good position, and face the challenge of staying there.

    “d) Everything you said about Apple and everything we know of them are from the Jobs era. That era has ended and much uncertainty follows. It would be extremely naive to think “as it has always been, so will it always be”. The safe bet is that Apple weakens, the unknown is how much.”

    I largely agree. I just advise not holding your breath waiting for Apple to fail.

  139. @PapayaSF: “I think any “negative influence” Apple has on the industry is far outweighed by the positives. The personal computer, the GUI, the laser printer, digital music sales, MP3 players, smartphones, tablets: all either popularized or revolutionized by Apple, and all cheaper or better or both than what came before.”

    You give them more credit than I believe is due, but that’s a matter of opinion and not a discussion that can be settled. In any event, what informs my opinion of them is the colossal long-term danger inherent in their control-freak mindset toward the customer. There is no amount of “shiny things” that can ever outweigh that, IMHO.

  140. @DMcCunney: “No, it means that Apple considers *growth* important.”

    Entering low-end markets is near guaranteed to depress ROS. I’m pretty sure they wouldn’t take that gamble if they didn’t consider it important to avoid news phrases like “declining share”, “minimal presence”, “no-show Apple”, “ceded the market”, etc.

    “It means *everything* if you are a publicly traded company *in* the equity market.”

    Guess I need to be more specific. It means nothing for the *future* as it is likely unsustainable. Even if their profits continue to do well, their stock price might very well go way down. Stock prices are mostly a numeric measure of emotion/perception, and such are fickle things.

    ” I just advise not holding your breath waiting for Apple to fail.”

    I’m neither expecting nor wanting them to fail. I just want their negative influences kept in check and for non-control-freak alternatives to abound.

  141. As a sometime designer and writer who has worked with many designers and artists and writers, I see Apple’s control-freakiness as that of a designer. They do not have a typical consumer electronics manufacturer’s attitude toward their products (make a bunch of variations to cover everyone and see what sells) or their customers (“They want a feature that doesn’t work well? So what? Give it to them!”) That is, by and large, a good thing. All design is tradeoffs. Apple wants to do a few things right, not lots of things half-assed. That does mean fewer features, fewer models, and a more constrained set of possible user experiences. Yes, I’d like them to open up a bit more, but they are avoiding many pitfalls: compare iOS vs. Android malware, user satisfaction and loyalty, etc.

    This is at the core of why esr and other hacker and open source types dislike Apple. To a hacker, customizability = freedom and is pretty much non-negotiable. A designer, though, wants limits on that, so that the user doesn’t screw things up. (Not only in the eyes of the designer, but so that the user doesn’t screw things up for themselves.) But it’s a fact that the vast majority, when it comes to their phones and computers, prefer something that just works smoothly over something that has capabilities that they won’t use. Hence, among other things, the greater defections from Android to iOS than the other way around.

  142. @PapayaSF: “As a sometime designer and writer who has worked with many designers and artists and writers, I see Apple’s control-freakiness as that of a designer.”

    Perhaps. But you’re giving them the benefit of the doubt toward altruistic motives. I think the evidence of their behavior suggests otherwise. But we can’t read their minds, we can only deal with the consequences of their decisions … and … deal with the fact that they want to reserve every possible decision for themselves. Surely it is understandable that this triggers alarm bells for many of us who aren’t prone to trust (any) big company.

    “But it’s a fact that the vast majority, when it comes to their phones and computers, prefer something that just works smoothly over something that has capabilities that they won’t use.”

    Why do you take it as a given that this is a mutually exclusive choice? Is there no one who can design something that gives a smooth out-of-the-box experience yet allows much flexibility for those wishing to venture there? If Apple designers are as good as everyone claims, would not they (of all people!) be able to produce such a thing? Would not their designs trend toward such over time as improvements are made? In fact, do their products trend that direction or the other? Perhaps, just perhaps, this is not really their motive.

  143. If Apple designers are as good as everyone claims would not they (of all people!) be able to produce such a thing.

    They did. It’s called a Mac. Every open source booster (our host included) should really try one, and check out the plethora of top-quality software that’s available for it. You will be amazed, your horizons will expand, and your life will be easier since you’re not recompiling kernels or fiddling with xorg.conf. (xnu even has a standard driver interface that works, allowing hardware vendors to ship drivers for it — fancy that!)

    True freedom is being able to do what you bought the tool to do.

  144. @Jeff Read: “They did. It’s called a Mac. Every open source booster (our host included) should really try one, ”

    I have. Unimpressed. Awkward, poorly designed, unreliable, confusing, irritating, low productivity, hard on the wrists and arms. Give me Win7 any day. But if the comparison is with Linux/FLOSS desktops I regretfully agree … they all stink.

    But perhaps you’re making my point … Apple is capable of building such, but chooses *not* to (at least in the smart mobile category), which would seem to go against PapayaSF’s benevolent-designer theory.

  145. @Shenpen:

    Does any company really offer unlimited support and future versions for a one time purchase price? That makes no sense.

    […]

    Anyway the causal chain is different then: piracy -> have to offer unrealistic things to paying customers to entice them not to pirate -> unsustainable business model.

    The point I made to Jeff Read (about DRM) and DMcCunney (about growth being most important and stock renters being parasites) upthread is that individuals have a natural veto (because copyright law can’t stop them; not even DRM can) over software companies that are not maximizing ecosystem growth and knowledge. My point is: yes to funding development of your specific knowledge, but extracting additional rents for rent seekers (the stockholders) who add no knowledge, at the cost of not open-sourcing the non-specific portions, is not maximizing growth (of knowledge and relevance) in the expanding universe. The point is that all the programmers get the wealth, as much as possible.

    @Jay Maynard & Sigivald:
    GIMP isn’t winning for all classes of image-editing users, but still the source code is there for anyone to leverage (copy, reuse, or rewrite) for any program which has anything to do with images (not necessarily an image editor). This maximizes systemic growth of software and knowledge. For example, I could take the observations here in this blog and use them to go create a better product, leveraging the existing knowledge.

    I have posited upthread that the reason open source cannot focus well is that there is no market for writing modules and getting paid module-wise, in which the open source portions could be maximally orthogonal and those with specific knowledge could earn more than they currently do at proprietary companies. I came from the proprietary side and I don’t want to earn less. I disavow Richard Stallman.

  146. @me:

    I don’t want to earn less

    What I really want to maximize is my earnings in knowledge. Money is just one fungible way of representing that, but it is more important for me to have a repository of knowledge that I am maintaining, one that sends me food and other material needs every day. The stored wealth should be in the knowledge, not in money that sits there stupidly extracting rents and (not only) retarding civilization (but also enabling gaming of production by statism, leading to horrific societal busts by delaying annealing to technological acceleration).

    The point is that all the programmers get the wealth

    The point is to cut out the parasites by eliminating the impedance mismatch that gives rise to the Theory of the Firm (thanks again, Winter, for turning me on to that some years back). So all of us earn more, and civilization improves.

    As for those designers who are afraid of losing revenue without total control freak gardens, you are actually giving (a portion of) your work to the stockholders instead and being controlled (or retarded) by a paradigm of collective retardation.

    @Jay Maynard:

    The danger is that there may well be no way to monetize the mindspace. People’s expectations are that they can get it all for free, and those who try to monetize what used to be free tend to face severe backlash.

    Agreed. The peaking statism is perhaps funding much of this. Much of this excess stupid monetary capital (including all insurance and retirement plans that invested in bonds) should be wiped out soon, since it is just an illusion of debt being propped up on the backs of the authors in the hitech space in the developed countries and the labor in the developing countries. Then the expectation of free could be replaced with a reality of not many able to fund free software (unless it is paid for in some ancillary way).

  147. @Sigivald

    If your solution can’t handle spot color as an organic entity, your solution is useless for prepress. Period.

    Finally, an explanation of what “Pantone support” means in real terms. What it really means is “spot color support”.

    It seems like this can be accomplished via a layer that encodes the spot color as its YPbPr triple, but with a tag attached to the layer saying “Pantone XYZ” (or theoretically anything else the designer can use to specify this spot color), which thereby provides the instruction to the press operator. Then simply don’t EVER flatten that layer into the other layers. Providing a special per-layer flag that locks it so that it can’t be flattened (without first clearing that flag) would probably be nice. The mechanics of displaying the image don’t change one bit by this.
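
    A minimal sketch of that layer structure – the field names are invented, not any real editor’s API; the point is the tag plus the no-flatten lock:

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Layer:
            name: str
            pixels: object                    # on-screen preview raster
            spot_color: Optional[str] = None  # e.g. "Pantone 2718", passed
                                              # through to the press operator
            lock_flatten: bool = False        # per-layer "never merge me" flag

        def flatten(layers):
            locked = [l.name for l in layers if l.lock_flatten]
            if locked:
                raise ValueError(f"refusing to flatten locked layer(s): {locked}")
            # ... composite the unlocked layers into one ...

        logo = Layer("logo", None, spot_color="Pantone 2718", lock_flatten=True)
        # flatten([logo, artwork]) -> ValueError until the flag is cleared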

  148. @PapayaSF:

    Apple wants to do a few things right, not lots of things half-assed.

    W.r.t. winning the long-term ecosystem, the most important single thing to do right is make a framework where unlimited others can “do a few things right”, so we end up with nearly everything right.

    I don’t know if the problems with Android are due to Google not achieving this with the OS and hardware. I will know soon if I start writing apps.

    I would strive to write very high quality apps, that just work. And probably replace the standard apps that come with Android if they continue to suck as with my recent experience on my HTC Desire V (soon to perhaps become a $400 paper weight or hockey puck).

    @Jeff Read:

    your life will be easier since you’re not recompiling kernels or fiddling with xorg.conf.

    Indeed. But ease of use is not necessarily incompatible with openness. Those capabilities can be abstracted to a large degree and just work easily for users.

  149. @Michael Hipp: “You give them more credit than I believe is due, but that’s a matter of opinion and not a discussion that can be settled. In any event, what informs my opinion of them is the colossal long-term danger inherent in their control-freak mindset toward the customer. There is no amount of “shiny things” that can ever outweigh that, IMHO.”

    Apple has never been an innovator in the sense of doing completely new things. All of the markets Apple is in – desktop and laptop computers, media players, smartphones, tablets – existed before Apple got into them. What Apple has been superb at is distilling and *refining* the concepts, until Apple devices set the standard for The Way Things Are Done in those markets.

    For Apple’s customer base, that “control freak” mentality you mention is an Apple *feature*, not a bug.

    Frankly, while we toss around “freedom of choice” as a mantra, what most folks really want is freedom *from* choice. They want a *reduction* in the amount of things they must consciously consider and make decisions about. They are threatened by information overload, and having so *many* choices that they might be best served by picking one at random, because they don’t have and can’t acquire the knowledge needed to make an *informed* choice.

    Apple has always been totally focused on the user experience, and that focus is what has let them lay claim to the high end of the market. Apple devices are well designed, with a UI crafted so that things generally do what you would expect them to do, and as far as possible, you don’t have to read documentation to use the device for the intended purpose. Apple also recognized that products that *looked* good *sold*, and their industrial design has been superb.

    Apple has a “walled garden” as part of the UX focus. They require apps that run on Apple devices to follow Apple design guidelines, and are fussy about what they approve to run on Apple devices. If Apple approves it, and you get it through the iTunes store, you can be confident that it will do what it says it will do, in the way you would expect it to do it.

    In part, this greatly lowers support issues for Apple, but the real purpose is comfort and usability for the user. Users may have less choice with Apple products than with others, but they don’t *care* – Apple’s offerings are broad enough that whatever they want to do, they can find an Apple-certified app that will do it, and they don’t *need* to venture beyond the garden walls.

  150. @Michael Hipp: “@DMcCunney: “No, it means that Apple considers *growth* important.”

    Entering low-end markets is near guaranteed to depress ROS. I’m pretty sure they wouldn’t take that gamble if they didn’t consider it important to avoid news phrases like “declining share”, “minimal presence”, “no-show Apple”, “ceded the market”, etc.”

    It remains to be seen what gamble they are taking. I *really* don’t see Apple trying to compete in the *low* end markets. They simply aren’t set up to do that. I *can* see them reaching down into the upper middle and perhaps middle of the market, *if* they think they can more than make up in volume what they lose in revenue and margin.

    The financial markets want growth from Apple, but they won’t reward growth that comes at the cost of profitability. Apple reached its current heights by achieving enormous sales of high-priced products on which they could make a very high margin. The financial markets want more of the same.

    ” “It means *everything* if you are a publicly traded company *in* the equity market.”

    Guess I need to be more specific. It means nothing for the *future* as it is likely unsustainable. Even if their profits continue to do well, their stock price might very well go way down. Stock prices are mostly a numeric measure of emotion/perception, and such are fickle things.”

    Well, Apple profoundly *hopes* it’s sustainable, because sustaining it is their biggest challenge and goal.

    They are in the same position as many who came before them. They made investors happy by achieving a high value for the stock, and getting that high value was why investors bought their stock. Now they must *keep* that value high.

    You’re right that profits remaining high won’t keep their stock price from dropping, and I talked about the issues involved earlier.

    The fundamental problem is that most investors who buy your stock don’t *care* about your *company*. They only care about the money they can make from owning your stock. In some cases, they care about income from dividends you pay. More often, they care about the value of their stock steadily increasing, so they can make a high return if they sell from the difference between the low price at which they bought and the high price at which they can sell. They’re in it for capital gains.

    If they think your stock price will nose-dive, they’ll bail out to make as much as they can while their stock still has a high value. This can cause a general exodus from your stock as others follow suit, and a dramatic fall in your stock price can cause larger problems down the road.

    If you can manage it, you *really* want to keep the price of your stock high.

    ” ”I just advise not holding your breath waiting for Apple to fail.”

    I’m neither expecting nor wanting them to fail. I just want their negative influences kept in check and for non-control-freak alternatives to abound.”

    Their negative influences are largely specific to them. And there are plenty of non-control freak alternatives.

  151. A tip jar is a better business model than elitist extortion. Indirectly, Adobe is forcing its customers to become more intelligent.

  152. @DMcCunney: “Frankly, while we toss around “freedom of choice” as a mantra, what most folks really want is freedom *from* choice. They want a *reduction* in the amount of things they must consciously consider and make decisions about.”

    No doubt true.

    But Apple *could* build devices that cater to both. I want to contend in the strongest terms that those are not mutually exclusive – especially to a crew of talented and well-funded designers like Apple.

    Unfortunately this don’t-make-me-think consumerism plays right into Apple’s control-freakism. You say they’re doing it for the customer. I say they’re using the customer as an excuse to justify their control-freakism. We’re both right. Obviously they make some really nice stuff and it brings in lots of money. But I do not want to see what this industry would be like if their way becomes how most things are designed and sold.

    I’m curious … will everyone still be as quick to defend/praise Apple if they move Mac OS X toward being a locked-down platform and closed ecosystem like iOS? (It seems obvious to me that’s where they’re heading.)

  153. @Michael Hipp: “But Apple *could* build devices that cater to both. I want to contend in the strongest terms that those are not mutually exclusive – especially to a crew of talented and well-funded designers like Apple.”

    Given what I’ve stated about Apple’s motivations, I’m not sure how they *could* make devices that cater to both. I suppose they could make it easy to install software not acquired through and vetted by Apple, but what would be the point? If you want to install J. Random Software and not be encumbered by manufacturer constraints, you don’t buy Apple in the first place.

    And if Apple opens the gates to the walled garden, they multiply support headaches. Another Apple selling point is that “It Just Works”, and if it *doesn’t* work, they’ll *fix* it. That becomes an order of magnitude harder if users can install what they please and it breaks. I guess Apple could open the gates and effectively say “Okay. We’ve opened the gates. You can install what you like. But if you didn’t get it through us and we didn’t approve it, if it breaks we *won’t* fix it, and the resulting issues are *your* problem”. That goes so far against Apple’s grain I really don’t expect to see it.

    I said Apple’s customer-centric approach was what got them where they are. Whether it or the control-freak aspects came first is largely a chicken and egg question, and ultimately irrelevant. Apple is what it is.

    “But I do not want to see what this industry would be like if their way becomes how most things are designed and sold.”

    Nor would I, but I really don’t see that as likely.

    “I’m curious … will everyone still be as quick to defend/praise Apple if they move Mac OS X toward being a locked-down platform and closed ecosystem like iOS? (It seems obvious to me that’s where they’re heading.)”

    I don’t think that’s exactly what will occur.

    What the industry seems to be moving to is the *same* OS on any device you happen to have. MS is largely there with Windows for the desktop/laptop, Windows RT for ARM-based tablets, and Windows Phone for smartphones. Linux is moving in that direction, as Android is a flavor of Linux, runs on smartphones and tablets, and has an alpha x86 port for desktops/laptops.

    I expect to see increasing convergence between OS/X and iOS, and at some point Apple devices will all run iOS, but I don’t see them locking down what runs on desktops/laptops the same way they do mobile devices. They have vastly different use cases.

    (A major factor is that hardware gets steadily faster, cheaper, and more powerful, and mobile devices now have enough horsepower that you *can* run the same OS on them that you run on bigger iron.)

    About half the folks I know are techs – system architects, network engineers, sysadmins, developers – and what at least half of them use as workstations are Macs. OS/X has a modified BSD kernel under the hood and a full set of GNU utilities. You can get to a bash shell and do all the standard *nix command line stuff. OS/X has a well crafted GUI that integrates well with the *nix beneath. The hardware is solid and well crafted; It Just Works, and if it *does* break, Apple *fixes* it. (You can also use Parallels and run Windows and OS/X at the same time in a virtual machine setup.)

    These are not proprietary software advocates. Some of them contribute to open source projects. They are simply getting the best possible tools to help them do their jobs, and Macs are simply better tools.

    I really don’t see Apple alienating a large segment of their power user market by trying the same sort of lock down they impose on consumer handhelds. Whatever you might think of them, they *aren’t* stupid.

  154. @DMcCunney:

    what most folks really want is freedom *from* choice. They want a *reduction* in the amount of things they must consciously consider and make decisions about.

    Most want both, with anything difficult or annoying being a roadblock.

    Apple could instead offer an *optional* certification that they put on apps in their store. Those customers who want to be 100% protected from choice, could set their store to only show certified apps.
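
    That store-side switch could be as simple as a per-user filter (all names below are invented for illustration):

        apps = [{"name": "SafeMail", "certified": True},
                {"name": "RandomFun", "certified": False}]

        def visible(apps, certified_only):
            # Show everything, or only certified apps for those users
            # who want full protection from choice.
            return [a for a in apps if a["certified"] or not certified_only]

        print(visible(apps, certified_only=True))   # -> SafeMail only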

    can find an Apple certified app that will do it, and they don’t *need* to venture beyond the garden walls

    As I understand it, they won’t allow an app that loads any scripting from over the network; e.g., a Safari browser plugin cannot load JavaScript from over the network.

    So user cross-sharing of scripts is impossible.

    So Apple would have to deny a social network or game that enables users to share scripting. Otherwise this could become a platform to destroy their oversight.

    Seems scripting could become very important in the future, if a significant share of employment turns into programming.

  155. @DMcCunney:

    but I don’t see them locking down what runs on desktops/laptops the same way they do mobile devices. They have vastly different use cases.

    Ah ha! I was thinking about Visual Basic and Office scripting and such, and then about my comment upthread on the eventual convergence of the smartphone and the desktop docking station, which means the same applications run on both.

    The walled garden is going to lose.

  156. @JustSaying
    “The point is cut out the parasites by eliminating the impedance mismatch that provides the Theory of the Firm (thanks again Winter for turning me on to that some years back). So all of us earn more, and civilization improves.”

    You’re welcome. Ignorant people think that economics is about money; smart people know that it is about productivity and efficiency.

    @JustSaying
    “Then the expectation of free could be replaced with a reality of not many able to fund free software (unless it is paid for in some ancillary way).”

    About Gratis Software. Productivity is about getting compensation for your work. Most free* software is compensated from savings in development, maintenance, and testing. You need an application, you barter in kind. It is obvious that this works. Linux and BSD are developed this way. Is this sustainable? In the long run, nothing is sustainable. But in the long run we are all dead. And it certainly looks like selling bits is not sustainable in the long run.

    There are other methods to get compensation for free* software. Find a new one and you get rich. But the world is now turning into a buyer’s market for software and customers want free* software.

    * Replace free with a word of your choosing.

  157. @DMcCunney: “And if Apple opens the gates to the walled garden, they multiply support headaches. Another Apple selling point is that “It Just Works”, and if it *doesn’t* work, they’ll *fix* it. That becomes an order of magnitude harder if users can install what they please and it breaks.

    I really don’t see Apple alienating a large segment of their power user market by trying the same sort of lock down they impose on consumer handhelds.”

    You are contradicting yourself here. If opening up the walled garden causes everything to not just work, then this is something they must do with Macs. If it doesn’t then they have other motivations for not doing it with mobile devices.

    Realize that most buyers of Macs are not the “techs” you describe, but the same pure “consumers” that are buying the mobiles. It makes no sense to treat them differently, if Apple’s motives are as you say. (Which is why I argue their motive is control-freakism for its own sake and for the money, not any concern for the customer.)

    And do you not imagine Apple coveting the 30% that is there to be earned on every piece of software installed on every Mac? An obvious source of that growth you pointed out they *must* have.

  158. JustSaying: You really need more tools than just the “statism” hammer. Not everything looks like a nail.

    DMcCunney: “These are not proprietary software advocates. Some of them contribute to open source projects. They are simply getting the best possible tools to help them do their jobs, and Macs are simply better tools.”

    Exactly.

    TomA: “A tip jar is a better business model than elitist extortion. Indirectly, Adobe is forcing its customers to become more intelligent.”

    Let us know when you come back to the real world. Tip jars are notorious for being terrible ways to make a living.

  159. @The Monster “Its job is to manipulate the image, not figure out how to make a particular device produce that image.”

    Defining which colors are important (i.e. should be reproduced as solids rather than as an array of dots) is part of manipulating the image.

  160. @Deep Lurker:
    > What makes you think I haven’t? Been there, done that, wore out multiple tee-shirts. Starting with a manual typewriter, back in the 1970s.
    I trust you’re aware of the ‘honeymoon effect’? It’s possible that, on each upgrade, by the time you’ve discovered all the pitfalls of the new way, you’ve forgotten what the old way was really like, at least to the extent that you can’t make a valid judgement as to which is more productive. That’s why I suggested trying a “downgrade”.

  161. @Monster:

    That’s all well and good, but what actually reaches my eye is light, not pigment, and the human eyes and brain represent that light as three values.

    What your brain perceives is a LOT more complicated than that.

    The idea that printers can only faithfully represent certain colors via use of six or seven “primaries” is therefore absurd.

    No, it makes sense when you know how ink, light, and paper interact. With monitors – LCD, LED, or electron/phosphor – you build a color by mixing bits of red, green, and blue. With inks it’s MUCH more difficult, because you’re putting down colors that *subtract* light, leaving only the color you want coming off the page.

    The other thing you’re not really “grokking” is the different ways to mix color. With Pantone spot colors, even on a “low end” press you get consistent, smooth color that is accurate to within the abilities of your pressman to mix colors. You also need only *1* pass through the press to get that color down.

    When you do CMYK you need to lay down 4 colors, and these do not mix on the page; they mix in your brain. You get a variety of colors out of process color by breaking the ink up into dots and then varying some quality of the dots (size, shape, placement, etc.). Older techniques (the original halftone) used lines of dots of different sizes along different angles at predictable distances. Modern techniques use consistently sized dots and vary the spacing, as well as eliminating the angles. The latter greatly reduces moire patterns, but for our discussion it doesn’t matter.

    The point is that you have relatively limited control over the way process color mixes, and because it works by reflected light, subtracting out all the OTHER colors, CMYK just *cannot* reproduce the full range of color. Keep in mind that color is a combination of RGB in varying intensities. Adding additional inks significantly increases the color gamut.

    Yes, I realize that paint colors use even more than that number of pigments, but that’s for the same reason that only the cheapest printers use CMY without the K: it’s cheaper to have more pigments to mix in smaller quantities.

    Huh? It’s cheaper to have 45 little tubes of various pigments floating around than to have 5 big tubes of red, yellow, blue, white and black?

    No, no it’s not.
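
    (To put a number or two on the additive-vs-subtractive point: below is the naive textbook RGB-to-CMYK conversion, in Python since that seems to be the language of choice around here. Real prepress tools replace this with ICC profiles and press-specific dot-gain curves; the function is mine and only sketches the idea.)

    ```python
    def rgb_to_cmyk(r, g, b):
        """Naive textbook RGB -> CMYK conversion; r, g, b are floats in [0, 1].

        Real prepress goes through ICC color profiles and press-specific
        dot-gain curves; this linear formula is only the starting point.
        """
        k = 1.0 - max(r, g, b)           # black generation: pull out the darkest component
        if k >= 1.0:                     # pure black; avoid dividing by zero
            return 0.0, 0.0, 0.0, 1.0
        c = (1.0 - r - k) / (1.0 - k)    # remaining color, expressed as subtractive inks
        m = (1.0 - g - k) / (1.0 - k)
        y = (1.0 - b - k) / (1.0 - k)
        return c, m, y, k

    # A saturated RGB orange becomes an ink recipe the press can only emulate
    # with halftone dots -- the gamut problem described above.
    print(rgb_to_cmyk(1.0, 0.5, 0.0))    # -> (0.0, 0.5, 1.0, 0.0)
    ```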

  162. It seems like this can be accomplished via a layer that encodes the spot color as its YPbPr triple, but with a tag attached to the layer saying “Pantone XYZ” (or theoretically anything else the designer can use to specify this spot color), which thereby provides the instruction to the press operator.

    No, no, no. Unless you bake support for this sort of thing right into the application, you will lose to competing apps that are actually designed for pre-press. You know, like Photoshop.
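
    (For concreteness, here is roughly the data structure The Monster seems to be proposing, sketched in Python. Every name in it is hypothetical; no real file format is being described. The point of contention is whether a tag this thin can carry enough prepress intent.)

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Layer:
        """Hypothetical image layer carrying an optional spot-color instruction.

        ypbpr is the device-independent preview color; spot_tag, if present,
        tells the press operator "this layer is one pass of this premixed ink",
        overriding any process-color separation.
        """
        name: str
        ypbpr: tuple                      # (Y, Pb, Pr) triple used for on-screen preview
        spot_tag: Optional[str] = None    # e.g. "Pantone 2718"; opaque to the app

    logo = Layer(name="IBM logo", ypbpr=(0.35, 0.20, -0.10), spot_tag="Pantone 2718")
    ```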

  163. @Edward Cree:

    @Sigivald:
    > If your solution can’t handle spot color as an organic entity, your solution is useless for prepress. Period.

    But looking at it from another angle, and not deifying the existing process, I could equally argue “If your solution requires the designer to know about and manually specify printing-process arcana, your solution is useless for graphic design. Period.”

    Then you have *no* idea of what you are talking about.

    Seriously.

    There is a HUGE array of things designers work on, from business cards to billboards to all sorts of random crap produced with bizarre printing methods.

    There are thousands of products clients want to print color on, and they want those colors to be, well, the right color. If you’re creating key-fobs for IBM you print the logo in Pantone 2718. If you don’t, you won’t get the business again. You don’t do a CMYK approximation; you specify a SINGLE color. If you want golf shirts for Shell Oil, you specify that the thread match 116C and 485C. T-shirts for Greenpeace? 363U.

    Are there CMYK equivalents? Probably, but if you only need one color, or black plus one color, why screw with all the hassle of process color (line screening, registration, etc.)?

    This gets even more complicated when you’re working across a range of materials that are to be distributed together (think mousepad, brochure, T-shirt and hat).

    Unless you happen to believe that an artist is so much better than a computer at working out how best to render an image given an available printing technology, that it actually makes sense for them to spend brainwidth on spot colour rather than on the actual, y’know, design. (Not just better, but enough better to give comparative advantage.)

    The designer knows *AHEAD OF TIME* what the end material is going to be, what the budget is, what the client wants in terms of quality, etc., and picks the appropriate color models.

    If you’re designing an advertisement for a glossy magazine you don’t *get* spot color unless you want to pay a LOT extra; instead you have to map everything into CMYK and live with what you get. If you’re designing a door hanger for a lawn maintenance company that is going to be printed on card stock, you may only have a budget for one color. You may be printing on non-white paper. You might be designing packaging for non-standard shapes, or something that is going to be screen printed (it used to be called “silk screen”, but we don’t use silk anymore) on plastic. The possibilities are almost endless.

  164. Edward Cree said: “But looking at it from another angle, and not deifying the existing process, I could equally argue ‘If your solution requires the designer to know about and manually specify printing-process arcana, your solution is useless for graphic design. Period.’”

    “If your solution requires designers to know about printing, it’s useless for making printed goods”, is how I read that.

    I don’t think that works, sorry.

    Remember that graphic design (full disclosure: I’m a trained graphic designer, though I don’t work in the field) for print (as opposed to “for the web”, etc.) is literally about printing. The job is not “to make some pretty looking picture without reference to its use in reality”, but “to make a physical object, using actual existing technology, for a physical purpose”.

    It’s not about “deifying the existing process” (of design) – it’s about the plain fact that the job of making a physical object depends on an actual, extant physical process using real methods and tools.

    The “process” that constrains the designer is that of making a printed object – and having it not look like ass. (There’s a reason the printing world doesn’t just do everything as four-color process: it’s not the best solution in many contexts.)

    Saying that the tools used to design images for print shouldn’t take into account the technologies actually used to produce printed objects is a claim I find baffling.

    Thus, “Unless you happen to believe that an artist is so much better than a computer at working out how best to render an image given an available printing technology, that it actually makes sense for them to spend brainwidth on spot colour rather than on the actual, y’know, design. (Not just better, but enough better to give comparative advantage.)” falls apart.

    We’re not talking about “how to render some picture” (pictures are in effect always four+-color process); it’s “how do I make the best looking output, with the constraints of what the customer is willing to pay for” [and in some contexts “what the press we have available can DO”, especially in say industrial contexts].

    So, yes, I believe that is exactly one of the things a designer is better than a computer at doing.

    Part of the job of being a professional graphic designer (a significant subset of the real target market for Adobe’s very expensive tools) is the ability to understand those tradeoffs and make them. Software is inherently lousy at judgment calls like that.

    No algorithm can decide that “this background should be spot vs. process because it won’t look as nice, and we can afford the extra color on the press run because we happen to have a five color press available… and maybe the benefit to upgrading to six colors would be justifiable if we did THIS here…”

    If you care about the quality of your results, you don’t just assert four-color process and tell it to export (or detect a big field of solid color and assert it needs to be spot color!); it takes human judgment to make those calls.

  165. @ Jay Maynard – “Let us know when you come back to the real world. Tipjars are notorious for being terrible ways to make a living.”

    Exactly. Hence my point that extortive business practices are similarly naive and doomed to fail.

    I guess that wit is wasted on the witless.

  166. Unless you happen to believe that an artist is so much better than a computer at working out how best to render an image given an available printing technology, that it actually makes sense for them to spend brainwidth on spot colour rather than on the actual, y’know, design.

    New rule: Every soi-disant “hacker” who thinks they know shit about design, who hasn’t been through four years of design school and doesn’t have AT LEAST one example of professional output in circulation, shut the fuck up RIGHT NOW. Y’all don’t know shit about design. Design is an industry, profession, and craft every bit as complex as yours is, possibly even more so.

    Design is not just about deciding how things should look. It’s deciding how things will be made. It incorporates elements of physics, chemistry, geometry, and (this is the tricky bit for geeks) human psychology. If you’re designing for print, for example, you have to take into consideration the type of paper being used, the way the pigments interact with the material and each other, the process being used, etc., and ensure that these elements combine to make your design look good (which is highly subjective).

    Do you know why, in the 60s, companies suddenly started changing their logos, e.g. from ornate representations to simple abstract shapes formed of geometric lines and curves? Phototypesetting, that’s why. See, optical aberration from the lens in a phototypesetter made the old logos look like ass, but we don’t perceive the aberration in an abstract logo because it’s out of the uncanny valley. So when phototypesetting displaced hot-metal type for letterheads and the like (due to being cheaper), the companies had new logos designed to better accommodate the phototypesetting process. It’s not just marketing douchebaggery that goes into these decisions but real physical concerns.

    Adobe has had a close relationship with the design community since its inception. (John Warnock’s wife is a professional designer, and even designed the Adobe logo.) Their products are entirely the result of close collaboration with media professionals doing actual work, shaped to fit the needs of those professionals. Open source class projects like GIMP? Not so much.

    If you do professional design with a digital workflow, you will be purchasing Adobe products. Period. End of story. Next question please.

  167. @Just Saying: “Apple could instead offer an *optional* certification that they put on apps in their store. Those customers who want to be 100% protected from choice, could set their store to only show certified apps.”

    Tell me why they *should*, given my prior comments.

    “They can find an Apple certified app that will do it, and they don’t *need* to venture beyond the garden walls.

    As I understand, they won’t allow an app that loads any scripting from over the network, e.g. a Safari browser plugin cannot load Javascript from over the network.

    So user cross-sharing of scripts is impossible.”

    That’s not an Apple limitation, and it’s not just Safari. Scripting exploits are a major Internet security issue. Mozilla Firefox and Google Chrome both have security restrictions in place to put scripts in a sandbox. I assume IE and Opera have measures as well, but haven’t looked. Under Firefox, I run the NoScript extension, which blocks all scripting unless the URL trying to run scripts is in a user-maintained whitelist; NoScript can optionally block Java, Flash and Silverlight, too.
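
    (A toy model of that whitelist behavior, for readers who haven’t used NoScript; the function and list names are mine, not NoScript’s internals.)

    ```python
    # Toy model of NoScript-style whitelist filtering: scripts run only if their
    # origin is on a user-maintained whitelist.  Names are mine, not NoScript's.
    from urllib.parse import urlparse

    WHITELIST = {"example.com", "ibiblio.org"}    # maintained by the user

    def may_run_script(script_url):
        host = urlparse(script_url).hostname or ""
        return host in WHITELIST or any(host.endswith("." + d) for d in WHITELIST)

    print(may_run_script("https://cdn.example.com/app.js"))   # True (subdomain of whitelisted)
    print(may_run_script("https://evil.example.net/x.js"))    # False
    ```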

    I’m not sure what you mean by “So user cross-sharing of scripts”, but it likely isn’t impossible. The restrictions will be on what the scripts are allowed to *do*.

    For instance, I run an app called Tiddlywiki. Tiddlywiki is a personal wiki implemented in a single file, and composed of HTML, CSS, and JavaScript. It can be run in any current browser that supports the features it uses. Tiddly had a problem a while back: Mozilla changes broke it on Firefox by blocking the ability for Tiddly to make changes to the local filesystem. This wasn’t aimed at Tiddly specifically – it was a general “JavaScript should not be able to write to the local filesystem” security measure. But since Tiddly lived in the local filesystem and wiki updates would have to change the local copy, Tiddly was broken. Tiddly’s creator did an interim workaround by calling a Java applet to do the updates, and eventually provided a Firefox addon that gave Tiddly the necessary permissions to work unaided. (On Mozilla SeaMonkey, use of Tiddly would pop up a dialog box stating that the app wanted to write to the local file system and did you want it to do that. Saying yes set a permissions flag and let it operate. Firefox users never got that question.)

    “Seems scripting could become very important in the future if employment will be significantly programming.”

    It’s already very important. More powerful hardware has made it possible to do things in scripting languages you used to have to do in compiled code, because the hardware can run the interpreted scripting code fast enough. esr codes primarily in Python these days, and uses another language only if he has no other option. Python on current hardware is fast enough that you probably don’t need to write in C or the like, and it’s cross-platform, so your Python code can run on Windows, Linux, or OS X.

    And development is increasingly shifting to the web, so you are coding in HTML, CSS, and JavaScript. HP’s (formerly Palm’s) WebOS, originally developed for smartphones, had a model where app development was based on HTML5. Mozilla’s Firefox OS for smartphones has a similar model.

  168. Of course a professional print designer, who’s invested years of human capital into learning about printing methods, is going to believe — or at any rate argue — that his job is indispensable.
    It’s like a textile worker in 1800 arguing that a machine can’t possibly replace him, because it doesn’t know how to thread a needle; or a messenger in 1900 arguing that this telegraph gadget can’t replace him, because it can’t ride a horse.

    Sure, it might be good to have a plugin to your image editor that does “work out how to print this (for a given cost-quality tradeoff), and show (perhaps continuously) a preview of what the process would produce”. It might even help to have a plugin “optimize for print” that takes an image and alters it slightly to make it more printable without overly affecting its appearance. Shockingly enough, this can actually be done; and instead of every designer having to know how, only the programmers writing the plugins need have in-depth knowledge of the capabilities, restrictions and results of various print processes.

    By way of analogy, consider the ULAplus; it’s not a print process, but like a print process it has various limitations on what images it can produce and how good they look. There are tools for producing ULAplus images directly (that is, paint programs that are aware of the restrictions and enforce them while drawing). There is also a tool that converts arbitrary images, working out how best to represent them with ULAplus. (Full disclosure: I wrote it.)
    This tool does not require the user/artist to have any understanding of the hardware’s capabilities and limitations (which are somewhat abstruse and arcane, though perhaps less so than print processes), and moreover it is generally better than humans at the all-important process of palette optimisation.
    Thus it is used more widely, to produce a greater quantity of images, with better quality, than the manual approach.

    Computers are better at your job than you are. When the same becomes true of programming (and, perhaps sooner than you expect, it will), I hope I am more accepting of my obsolescence than you seem to be of yours. And to temporally invert this analogy, saying that a design tool should permit and require the designer to choose which spot colours to print his design with is like saying that an IDE should permit and require the programmer to choose which opcodes to compile his code to.
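
    (The skeleton of a converter like the one described above is genuinely small. The sketch below is a toy, not the actual ULAplus tool: it maps each pixel to the nearest entry of a fixed palette, where a real converter would also optimise the palette itself and dither.)

    ```python
    def nearest(color, palette):
        """Palette entry closest to color by squared RGB distance.

        A real converter would use a perceptual color space and dither;
        plain RGB distance is the simplest stand-in.
        """
        r, g, b = color
        return min(palette, key=lambda p: (p[0] - r) ** 2 + (p[1] - g) ** 2 + (p[2] - b) ** 2)

    def quantize(pixels, palette):
        """Render an arbitrary image within a hardware-limited palette."""
        return [[nearest(px, palette) for px in row] for row in pixels]

    # A toy 4-entry palette standing in for the hardware-restricted one.
    palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (255, 255, 255)]
    image = [[(250, 10, 10), (200, 220, 210)]]
    print(quantize(image, palette))    # -> [[(255, 0, 0), (255, 255, 255)]]
    ```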

  169. Exactly. Hence my point that extortive business practices are similarly naive and doomed to fail.

    Lol. Tipjars fail because they aren’t extortive but depend on the kindness of strangers.

    Turns out that strangers often aren’t kind, and if you want to be paid a fair price for fair work, you need to require payment for work. That’s not extortive.

    Why is it you guys are for free markets except when it comes to software? At which point you simply want it for free and not what the market will bear.

    Do you feel that creating software is somehow less hard than creating any other kind of product that’s sold?

  170. It’s like a textile worker in 1800 arguing that a machine can’t possibly replace him, because it doesn’t know how to thread a needle; or a messenger in 1900 arguing that this telegraph gadget can’t replace him, because it can’t ride a horse.

    Or Mel Kaye of Royal McBee Computer Corp. arguing that an optimizing compiler can’t replace him, because it uses separate constants and doesn’t know how to locate values on the drum head.

    Except that, as it turns out, Mel-types are still not hurting for work, particularly in high-performance applications like programming DSPs. Because “sufficiently smart compilers” are always far enough in the future as to effectively be vapor. And no machine can make the kinds of finicky optimizations it takes to get really tight, small, low-latency code that maximizes use of the chip.

    Designers are in the same space with an additional gotcha: Designers have to make their output look beautiful. Machines are pretty good at optimizing for simple fitness functions with objectively measurable output. But show me the machine that can recognize beauty, let alone optimize for it, and I’ll show you a SKYNET candidate.

  171. You are contradicting yourself here. If opening up the walled garden causes everything to not just work, then this is something they must do with Macs. If it doesn’t then they have other motivations for not doing it with mobile devices.

    There’s no contradiction. One is an information appliance. The other is a computing platform.

    Most folks didn’t want computing platforms but information appliances; until relatively recently, though, laptops and netbooks were what you got. Today you have iPhones and iPads.

    Expectations are different and these are much more limited devices. Having a closed app store that checks for certain minimum levels of quality and adherence to rules works better to meet expectations on an information appliance.

    Steve Jobs’ Trucks vs Cars analogy provides insight into Apple’s thinking on the matter and why the Mac and iPhone behave differently when it comes to app security and their respective app stores.

    http://allthingsd.com/20100601/steve-jobs-session/

    With trucks you expect power and towing capacity; with cars you expect a more refined ride and more speed.

  172. @Nigel: “There’s no contradiction. One is an information appliance. The other is a computing platform.”

    The contradiction is not in the different use cases; the contradiction is in the idea that something can’t “just work” while also being versatile and powerful. Jeff Read says the Mac proves it can. DMcCunney implies such is not possible. Which is it?

    “Steve Jobs’ Trucks vs Cars analogy provides insight …”
    http://allthingsd.com/20100601/steve-jobs-session/

    Might want to read that a little closer and ponder the implications of his closing sentence on the topic (especially in light of recent trends in OS X): “But I think we’re headed in that direction.”

  173. @Nigel: “Why is it you guys are for free markets except when it comes to software? At which point you simply want it for free and not what the market will bear.”

    It’s not just software, and it’s largely human nature. We all want the best possible deal and want to pay as little as possible for what we buy.

    We are all in favor of competition, too, when it benefits us in terms of lower prices and better choices, but we are less thrilled when *we* must compete.

    The issues arise when you start thinking about what a “fair price for your work” is. You know what *you* want to make, but value is relative. Something is worth what someone else is willing to pay. If you can’t find someone to pay you *your* idea of a fair price for what you do, you have problems. That’s inherent in “charge what the market will bear.”

    Some free market fans haven’t considered all the implications of what they espouse, and squawk when those implications bite, like when the market *won’t* bear what they want to charge.

    The fundamental problem is that open source as an alternative to closed-source proprietary software requires that the open source product *be* a reasonable alternative that *can* replace the proprietary product. In the specific case – Adobe Photoshop – there *is* no open source replacement. There are open source packages that do subsets of what Photoshop does, but even the subsets aren’t done as well.

    The question is whether the open source development model *can* produce a replacement. The disconnect between the developer and the customer inherent in that model may mean it can’t.

  174. @ Nigel- “Tipjars fail because they aren’t extortive but depend on the kindness of strangers. Turns out that strangers often aren’t kind and if you want to be paid a fair price for fair work you need to require payment for work. That’s not extortive.”

    Kindness is opening a door for an elderly woman who may be a stranger to you. People most commonly contribute to tip jars because they value the blog/blogger and want it to continue in existence. Adobe can no longer derive the revenue it requires from one-time software sales and is now morphing into a fee-for-service business model. If you’re hooked on Adobe products (and require the ongoing service), then you’ll willingly pay the fees and life will go on. No one here is suggesting that it should be otherwise. But at some point, business evolution will kick in and an acceptable open source equivalent will arise, and this in turn could push Adobe into extinction. Unwittingly, Adobe has just hastened the process.

  175. @Jeff Read:
    >But show me the machine that can recognize beauty, let alone optimize for it, and I’ll show you a SKYNET candidate.
    Except that you’re conflating two meanings of ‘beauty’ here. No-one’s asking the computer to distinguish a pulchritudinous belle from a plain jane, nor even a snazzy logo from a dull one. The only kind of “beauty” we’re asking of the computer is a rather functional one, of colours and tones and such-like. A computer that can’t design a gearbox can still turn a gearbox design into the lathe commands to make it – and a computer that can’t design a corporate logo can still work out how to make your logo look good on the page. These are things that are simple enough to optimise for, as I believe my ULAplus example indirectly demonstrates.

    Besides, taking the outside view, most predictions about what computers can’t do have proven almost hilariously near-sighted; Hubert Dreyfus is perhaps the most egregious example.

    As for Mel, DSP, and embedded, when was the last time you placed-and-routed your FPGA by hand? Or even your PCB, come to think of it. (And don’t try to argue low-latency with me. I work for Solarflare; I think I know a thing or two about low-latency — and how few people are actually employed to write anything resembling Mel-code, compared to how many are writing, say, Python. Your line of work may not be completely eliminated, but that doesn’t mean you’ll still be able to get a job doing it.)

  176. @Edward Cree: “Computers are better at your job than you are. When the same becomes true of programming (and, perhaps sooner than you expect, it will), I hope I am more accepting of my obsolescence than you seem to be of yours. And to temporally invert this analogy, saying that a design tool should permit and require the designer to choose which spot colours to print his design with is like saying that an IDE should permit and require the programmer to choose which opcodes to compile his code to.”

    You aren’t a designer.

    Computers are superb at doing things that can be reduced to an algorithm, expressed in code, and run on suitable hardware. An increasing number of jobs *can* now be done by computer, because the algorithms can be designed, code expressing them can be written, and hardware powerful enough to run them effectively is available and cheap enough to be economically used for the purpose.

    Tell me what the algorithms are for aesthetics. Tell me how to write a program that can design something that looks good to another human being.

    You can’t. We are still in the early stages of understanding *why* various things look good to us, and *what* looks good will vary between cultures.

    There are lots of tools to help designers do their work. I was a designer/print production guy back in the days before DTP, when the tools of the trade were T-squares, X-Acto knives, non-reproducing blue pencils, rubber cement, waxers, and typeset galleys off phototypesetting machines. I was *delighted* when DTP software appeared to help. I didn’t have to tediously do paste-ups and mechanicals for pieces I designed. I could do the work in a DTP program, see what the piece would look like as I did it, and when it passed muster, generate a PDF for the printer, who would feed it to an imagesetter to make the plates he would mount on his presses to print the job.

    The computer could do work that had been done by specialized workers, like typesetters and mechanical artists. It could *not* do the actual design. That required someone like me. It will continue to require someone like me for the foreseeable future.

    I do not design in a vacuum. It is not just me being creative. I am not a fine artist painting what I feel like painting.

    I am designing *for* a client. There are similarities to writing code, as I must find out what the client wants (which may not be what they *need*), craft a design that the client approves, and execute it. The client will be trying to reach other people through the work I do, and I must understand what they will need to see for the client’s message to get across.

    I will have constraints on what I *can* do both in what the client wants, and what the client’s resources will cover. My challenge is to come up with effective design within those constraints, and a lot of the creativity is in the use I make of what I have to do the job.

    I don’t expect to be alive when a computer is available that can do that. It would require a true AI, and we are nowhere near building one.

  177. Edward Cree:

    Of course a professional print designer, who’s invested years of human capital into learning about printing methods, is going to believe — or at any rate argue — that his job is indispensable.

    Except I’m not a professional print designer. I was one for a few years during and after school, and I supported Graphic Designers, Art Directors and typesetters at my first full time job out of college.

    I know the range of work Graphic Designers take on and what their job is.

    It’s like a textile worker in 1800 arguing that a machine can’t possibly replace him, because it doesn’t know how to thread a needle; or a messenger in 1900 arguing that this telegraph gadget can’t replace him, because it can’t ride a horse.

    No, it’s not. Textile workers are factory workers. They are like the guys who set type out of California job cases (BTDT, but not professionally), who ran the Linotype machines, who did the kerning and letter spacing by hand in the optical type days, and the guys who did paste-up in the newsrooms.

    And those guys *are* gone. The last newspaper converted away from Linotype, IIRC in the 1970s.

    Graphic designers are *designers* and are in some sense like Architects, they have to consider the physical media (and it’s vastly more complicated than Mr. Read indicated. Even using the same underlying paper but with a coating can radically change the look of a printed item).

    Sure, it might be good to have a plugin to your image editor that does “work out how to print this (for a given cost-quality tradeoff), and show (perhaps continuously) a preview of what the process would produce”. It might even help to have a plugin “optimize for print” that takes an image and alters it slightly to make it more printable without overly affecting its appearance. Shockingly enough, this can actually be done; and instead of every designer having to know how, only the programmers writing the plugins need have in-depth knowledge of the capabilities, restrictions and results of various print processes.

    You’re focused on the wrong thing. Photographs aren’t the problem here FOR DESIGNERS (well, good photo retouching is part of the issue, but a minor one). We already have software that can do the transforms for various output devices (ICC color profiles) and I don’t remember if this includes stuff like dot-gain etc.

    But what the software cannot do (and once it can, Skynet is a botched Perl script away) is sit down with the client, understand their needs, and figure out how best to meet them. It cannot work with the print broker to determine requirements. It *might* be able to work with the pre-press house in some ways (do those even really exist any more?), but only once all the other factors have been hashed out.

    By way of analogy, consider the ULAplus; it’s not a print process, but like a print process it has various limitations on what images it can produce and how good they look. There are tools for producing ULAplus images directly (that is, paint programs that are aware of the restrictions and enforce them while drawing). There is also a tool that converts arbitrary images, working out how best to represent them with ULAplus. (Full disclosure: I wrote it.)
    This tool does not require the user/artist to have any understanding of the hardware’s capabilities and limitations (which are somewhat abstruse and arcane, though perhaps less so than print processes), and moreover it is generally better than humans at the all-important process of palette optimisation.
    Thus it is used more widely, to produce a greater quantity of images, with better quality, than the manual approach.

    See, you do not understand. You do not understand what a designer does, how they work or what they expect.

    Computers are better at your job than you are. When the same becomes true of programming (and, perhaps sooner than you expect, it will), I hope I am more accepting of my obsolescence than you seem to be of yours. And to temporally invert this analogy, saying that a design tool should permit and require the designer to choose which spot colours to print his design with is like saying that an IDE should permit and require the programmer to choose which opcodes to compile his code to.

    You are already obsolete. You don’t pay attention and you don’t listen. That is the IT/programmer model of the last century. Today you have to listen and do what the business requires, not what you think is technically better. Technology exists to solve problems and get shit done, not to satisfy your desire for a nifty hack.

    You do NOT know better than people who have studied and done this for years.

    First off, the choices of how to print something are made well before the software is even launched. Designing an advertisement for inclusion in a local newspaper is (admittedly slightly) different from designing an advertisement for distribution to newspapers nationwide, and is different from designing for *a* magazine (the printing in National Geographic is different from Time/Newsweek). The way ink goes into newsprint on an offset press is very, very different from the way ink goes onto clay coat off a rotogravure press. It is not a matter of optimizing on the back end; it’s a matter of knowing ahead of time what will and will not work.

    When you’re printing on newsprint you get a LOT of dot-gain from the ink flowing into the soft and fibrous paper. This means that you have to carefully select any photographs, that type (fonts) will change its look slightly, and that certain type sizes (let’s say “7 point” for the sake of argument; it might be smaller these days) will simply turn into blobs. When you use rules/lines you have to use 1pt or larger, as “hairlines” will break badly. You have to know if you can buy space on 4 color pages (even full color newspapers only print certain pages in color; knowing that the front page is in color means knowing that page 16 is also in color).

    OTOH, if you’re putting an ad in Better Homes and Gardens 4 color rendition is MUCH better.

    And we won’t even get into magazines like Playboy that use (or used to) two different printing technologies for different parts of the magazines.

    As for spot vs. process:

    There are two points in time, speaking generally, when a designer will choose a spot color. One is during the creation of what is called “an identity” (or was 17 years ago when I was doing this). This is the process, depending on the size and anal retentiveness of an organization, of developing the standards for all “printed” (to include the web) material. The development of a corporate visual identity is very critical to some organizations. IIRC the one for KPMG when I worked there was like 25 or 30 pages long. Note that at this time there IS NOTHING BEING PRINTED. Colors are specified as a Pantone (or one of the other color systems) spot color because when you say “Pantone 235u” everyone knows how to find out what that is. In the modern world a good designer will pick Pantone colors that do have reasonable maps to 4 color printing. Otherwise the degradation (if there is any) will just be accepted by the client.

    The other case is when working with a client on specific material that is NOT going to be run on a 4 color press. In that case one picks a Pantone color because it’s the easiest way to tell the printer what color you want. You might not even care very much; to use the example from my previous post, if you’re printing a lawn care door hanger in 1 or 2 colors your client may only say “I want it grass green”. Uh, which color grass, Kentucky bluegrass or zoysia? Either way you grab your Pantone color guide and leaf through it until you get a good green, write the number down, and ship it. If it’s going to be a one color job you may never even specify the color in your software (of course you would, but it’s not required, because any separations printed will be black and “grey” if you’ve used halftones).

    The one other time some designers use Pantone is when they want an easy way of picking colors; then they (if they know what they are doing) flatten to process colors. Lesser designers will get confused as to why their colors change when this happens. Really bad designers get a phone call from the pre-press shop asking if they REALLY wanted to do an 18 color press run. (Had a fellow student get really freaked out when her assignment called for process color separations and she got 18 pages because of that.)

    In short, graphic design is inherently about understanding the eventual object being produced, and while it may someday be possible to build all of that into software, that day isn’t even in my grandkids’ time.
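
    (A sketch of the “reasonable maps to 4 color printing” point above, in Python. The CMYK values are placeholders, NOT real Pantone data, which is licensed; the shape of the lookup is the point.)

    ```python
    # Hypothetical spot -> process lookup.  The CMYK values are placeholders,
    # NOT real Pantone data (which is licensed).
    SPOT_TO_PROCESS = {
        "Pantone 2718": {"cmyk": (0.67, 0.41, 0.00, 0.00), "close_match": True},
        "Pantone 485":  {"cmyk": (0.00, 0.95, 1.00, 0.00), "close_match": True},
        "Pantone 801":  {"cmyk": None, "close_match": False},   # out of the CMYK gamut
    }

    def flatten_to_process(spot_name):
        """Return a CMYK fallback, or explain why the job needs a real spot ink."""
        entry = SPOT_TO_PROCESS.get(spot_name)
        if entry is None:
            raise KeyError(spot_name + ": not in the shop's lookup table")
        if not entry["close_match"]:
            raise ValueError(spot_name + ": no acceptable process approximation; "
                             "budget for a spot pass or accept the color shift")
        return entry["cmyk"]

    print(flatten_to_process("Pantone 2718"))   # -> (0.67, 0.41, 0.0, 0.0)
    ```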

  178. “hard on the wrists and arms. Give me Win7 any day”

    Fascinating. I don’t use Win PCs regularly but I had no idea that keyboard/pointer design had diverged so much from Macs. That Jony Ive is a revolutionary.

  179. As an oversimplification (the space is multivariate), tip jars are at one end of the spectrum and 100% enforceable DRM is at the other end; both will minimize sales. The optimum, rather, is to match the cost functions of producers and consumers.

    That Apple views smartphones and tablets as appliances and not as general computing devices should tell Google exactly how to disrupt them. Google went for a general purpose operating system with multi-threading and is now waiting for the hardware to catch up. Once the desktop and mobile are seamlessly integrated, the walled garden won’t have enough degrees-of-freedom to survive.

    The forward looking programmer who wants to build a long-term viable business should be looking at apps that integrate with the web and the desktop, and ignore iOS. It simply isn’t worth jumping through hoops when the future will belong to Linux everywhere.

  180. About scripting: I can load JavaScript in a browser plugin in Mozilla, Chrome, and Opera, but apparently not in Safari. The security issue is typically with cross-site scripting on web pages, because users are not agreeing to give extra security permissions to a plugin and are trusting the browser to sandbox all web pages.

    Apple can’t both enforce certification and have general user programmability.

    The problem I have with coding for iOS is that the restrictions can change at any time. We don’t know what a future Apple decision might be. They could destroy my investment overnight. No way will I invest in a closed OS again. One day you will wish you had not, either.

    Adobe will be disrupted, primarily because we are moving away from print media.

  181. @William O’Blivion:
    >See, you do not understand. You do not understand what a designer does, how they work or what they expect.
    I don’t care what a designer does, how they work, or what they expect. I care what the people who currently employ designers want, how what they want can be achieved, and what they can reasonably expect — and those criteria do not produce a requirement for a guy who is both creative enough to design logos and well-enough versed in print arcana to undergo the drudgery of working out how to print them.

    @DMcCunney:
    >Tell me what the algorithms are for aesthetics. Tell me how to write a program that can design something that looks good to another human being.
    You’re still conflating two kinds of aesthetics. Maybe a program that can design a good-looking logo, given a description of the company it’s for, is AI-complete. But a program that can take an image, run a little brute-force search (When in doubt…) and find the rendition that optimises some measure of similarity and quality, based on principles of visual perception? That is not even difficult, as ULAplus proves; it is hardly even AI in the weak sense. If you want also to consider the costs of various processes or numbers of inks, you just put those costs in the objective function and the program searches over that too, and comes back saying “I reckon you should do a 7-ink that looks like this; anything less and you may as well stick to 4-colour, which looks like this.” — and this still is not remotely Skynet-level AI.
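
    (A minimal sketch of the objective-function search being described, with invented quality scores and ink costs; nothing here is from a real prepress tool.)

    ```python
    # Invented ink counts and quality scores, purely for illustration.
    PROCESSES = {
        "4-colour process": 4,
        "6-ink extended gamut": 6,
        "7-ink extended gamut": 7,
    }

    def quality(image, process):
        """Stub: perceptual similarity of a simulated proof to the original image."""
        return {"4-colour process": 0.80,
                "6-ink extended gamut": 0.92,
                "7-ink extended gamut": 0.95}[process]

    def best_process(image, cost_per_ink=0.01):
        """Pick the process whose quality, net of ink cost, is highest."""
        return max(PROCESSES, key=lambda p: quality(image, p) - cost_per_ink * PROCESSES[p])

    print(best_process(image=None))    # -> '7-ink extended gamut' at these numbers
    ```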

    >You aren’t a designer.
    And you don’t know squat about AI.

  182. @DMcCunney:

    Tell me why they *should*, given my prior comments.

    They *shouldn’t*, because they are beholden to maximizing short-term profits for their stored monetary capital (stockholders), not to maximizing knowledge production and long-term growth. Their business model can’t be patient enough to optimize the cost function between producers and consumers such that the modular knowledge portions of the business are open-sourced, i.e. focus on core expertise and let others do likewise while leveraging their work (don’t try to be omniscient, because it is impossible). And thus I wouldn’t dare make a long-term investment in iOS, because they are forced to destroy the long term.

    The same logic can also be applied to this Adobe debate. Only very narrow markets will stay, and the broader markets will leave both Adobe and Apple. They don’t care though, right? ;) We will see, and I know how this story ends; we’ve seen it happen before.

  183. I define growth, long-term, and modular by the example that Eric claims his gif code is still in wide use some decades later.

  184. The contradiction is not in the different use cases, the contradiction is the idea that something can’t “just work” while yet being versatile and powerful. Jeff Read says the Mac proves they can. DMcCunney implies such is not possible. Which is it?

    That’s the point: the use case defines what “just works” means. A tractor trailer that can’t haul stuff doesn’t “just work” even if it rides like a car and is easy to maneuver and park.

    A car that’s hard to control/use doesn’t “just work” even though it does truck like things.

    Arguably Apple doesn’t make semis (servers) and barely makes any big trucks (Mac Pros), but only makes expensive large comfy pickups (iMacs and MBPs) that can tow a fifth-wheel trailer, expensive sedans (iPads), and motorcycles (iPhones). There are load restrictions on OSX (I wouldn’t use it as a server OS) but it does have leather seats… at least until Jony Ive removes the skeuomorphic elements…

    Anyway, back to the point: use cases do define what “just works” means, because they define the “works” part of the phrase. Hence OSX will likely always maintain the capability to run apps from outside the mac app store even if the cost is less stability and higher security risks.

  185. @nigel
    As they say, the only thing that just works is a nipple. And as every mother can tell you, even that might require painful maintenance.

  186. Kindness is opening a door for an elderly women that may be a stranger to you. People most commonly contribute to tip jars because they value the blog/blogger and want it to continue in existence. Adobe can no longer derive the revenue it requires from one-time software sales and is now morphing into a fee-for-service business model. If you’re hooked on Adobe products (and require the ongoing service), then you’ll willingly pay the fees and life will go on. No one here is suggesting that it should be otherwise. But at some point, business evolution will kick in and an acceptable open source equivalent will arise, and this in turn could push Adobe into extinction. Unwittingly, Adobe has just hastened the process.

    This is like saying that because there are handy, friendly neighbors who will help you fix your toilet for a beer, business evolution will kick in and push plumbers into extinction.

    The only way an open source equivalent will arise is if another very large business entity views giving away Photoshop for free aids their business model.

  187. @nigel: “The use case defines what “just works” means.”

    No. Apple defines it. Always in their own self interest.

    “Hence OSX will likely always maintain the capability to run apps from outside the mac app store even if the cost is less stability and higher security risks.”

    Note that this directly contradicts what everyone claims is the benevolent and puritanical motive behind Apple’s every move.

    But I find your faith inspiring. Hope you’re not disappointed.

  188. @DMcCunney

    The question is whether the open source development model *can* produce a replacement. The disconnect between the developer and the customer inherent in that model may mean it can’t.

    I guess you have to first define what you mean by the open source development model: the idealistic one, where hobbyists band together to produce open source for free, or the realistic one, where companies decide it’s in their business interest to pay developers to work on FOSS.

    The first probably can’t get you beyond something like GIMP. The latter requires someone with a business model that can write off the development of the Creative Suite in order to sell something else.

    For Linux, IBM could see that dethroning Solaris and HP-UX would result in more sales of expensive servers and more services work, since AIX wasn’t getting the job done. What’s the business model for Creative Suite? So Apple can sell more Mac Pros?

    And it’s risky…the ROI on OpenOffice for Sun must be hugely negative and it failed to disrupt MS Office at all. Google Apps costs money for business use and isn’t open source.

  189. No. Apple defines it. Always in their own self interest.

    Yes and no. Its self-interest depends on meeting the needs of enough users to pay for an expensive device.

    This is like saying that BMW defines performance sedans in its own self-interest. Sorta, but if they try to sell a Yugo dressed up as a BMW it’s not going to go well for them, because even BMW fans won’t buy one.

    “Hence OSX will likely always maintain the capability to run apps from outside the mac app store even if the cost is less stability and higher security risks.”

    Note that this directly contradicts what everyone claims is the benevolent and puritanical motive behind Apple’s every move.

    But I find your faith inspiring. Hope you’re not disappointed.

    The company espouses the credo that if they make great products that profit will come. That’s, of course, a bit of bullshit mixed in with the truth but for the most part they seem to walk the walk of user experience and design first.

    My faith has nothing to do with it and any disappointment will be because another company has moved from the pursuit of excellence to chase market share. When Apple stops making great products I’ll stop paying extra for Apple products. It’s a very simple relationship and requires little faith on my part.

    /shrug

    Win7 works well enough and I’m guessing Win9 will be better than Win8 is if Apple starts creating crappy products.

    I like Ubuntu, but the whole X/Wayland/Mir thing strikes me as yet another PulseAudio kind of fiasco for end users. There’s going to be churn there for a few years, I think, for something as basic as putting pixels on the screen, just as there was churn for something as basic as outputting audio.

  190. @edward

    I don’t care what a designer does, how they work, or what they expect. I care what the people who currently employ designers want, how what they want can be achieved, and what they can reasonably expect — and those criteria do not produce a requirement for a guy who is both creative enough to design logos and well-enough versed in print arcana to undergo the drudgery of working out how to print them.

    This is like saying “I don’t care what a developer does, how they work, or what they expect”, and that there is no requirement for a guy who is both creative enough to develop software products and well-enough versed in computing arcana to know how compilers, operating systems and system architectures work.

    Yah, there are a good number of coders out there who don’t seem to understand computing “arcana”, but they are limited in what they can accomplish. Not understanding their language implementation, they end up selecting poorly performing structures and algorithms and don’t seem to understand why some things don’t work well.

    For a designer, not understanding fundamentals like how different inks look on a page is like not having the proper tools of a craftsman. You can replace assembly line workers with automation, and if that’s the kind of coding you do… yeah, your ticket to obsolescence will come with time… although 30 years ago they told me that coders would be obsolete in 5 years, so even that may be a little while yet.

    There appears to be a class of things that are perpetually 5 years in the future. Eventually they’ll be right. Voice recognition is almost there. Maybe. I remember working with that at the Army Research Lab in the 80s as a high school intern. Siri and Google Voice Search seem tantalizingly close; maybe in 5 years voice control will be as natural as a swipe is today.

  191. @nigel: “Sorta, but if they try to sell a Yugo dressed up as a BMW it’s not going to go well for them because even BMW fans wont buy one.”

    That’s not been my experience… a significant component of Apple’s customer base will buy whatever they sell and defend it with house-shaking emotionalism, insisting that it simply must be better than everything else. The PPC-to-Intel transition comes to mind. But even back in the mid 90s, when Apple was obviously making junk, it was no different.

    “Win7 works well enough … I like Ubuntu but the whole X/Wayland/Mir thing strikes me as yet another PulseAudio kind of fiasco for end users.”

    I used Linux as my main desktop for some years and tried hard to remain a believer, but in the end the desktop experience is just too inferior. Win7 is very solid and a really good UI, but underneath it’s still MS. I would really, really, really love to have my desktop be something built on a *nix core, but MS is too stoopid to build such a thing, and I don’t want to support Apple’s brand of Scientology even if I could get past the infuriating OSX UI. Ubuntu is the only one really trying, but they just don’t have the resources to fix all the things that are broken in the Linux desktop.

    So I stick with Win7 as the default choice and wonder if this industry can get any more messed-up.

  192. TomA: “But at some point, business evolution will kick in and an acceptable open source equivalent will arise, and this in turn could push Adobe into extinction.”

    You have yet to show just what the business model is for an open source replacement. By now, it’s obvious that any open source replacement will need to be far more than The GIMP, with far more specialized capabilities. Where’s the specialized knowledge coming from? Who’s going to pay for it? Tipjars ain’t going to cut it.

    Edward: “I don’t care what a designer does, how they work, or what they expect. I care what the people who currently employ designers want, how what they want can be achieved, and what they can reasonably expect — and those criteria do not produce a requirement for a guy who is both creative enough to design logos and well-enough versed in print arcana to undergo the drudgery of working out how to print them.”

    Those criteria do produce a requirement for a designer. If a customer wants his logo printed, then hiring someone who doesn’t know what printing truly involves is not likely to lead to success. Until you solve the strong AI problem, you’re not going to get a computer capable of judging whether a printed logo looks good, since that’s an inherently human judgment call. You can talk about maximizing performance functions all you want, but your clients won’t care about mathematical solutions. If they don’t like the result, you lose.

  193. @nigel:

    The use case defines what “just works” means.

    @me:

    Their business model can’t be patient to optimize the cost function between producers and consumers such that the modular knowledge portions of the business are open-sourced

    Good thing Apple and Adobe were impatient in creating the iPhone and Photoshop. Too much fiddling around with creating new open source would have resulted in Android and GIMP, which apparently have inferior user experiences for the primary use cases.

    The market benefits from closed-source exponential booms and busts, where the modular open source wasn’t already available for reuse.

    I wrote I wouldn’t invest in iOS for the long-term, but I didn’t exclude short-term investments.

  194. @nigel:

    This is like saying because there are friendly neighbors that are handy that will help you fix your toilet for a beer that business evolution will kick in and push plumbers into extinction.

    The only way an open source equivalent will arise is if another very large business entity views giving away Photoshop for free aids their business model.

    Equivalent isn’t desired. Open source is disrupting print media with the open source browser(s).

    Degrees-of-freedom isn’t limited to equivalents and omniscience; only walled gardens need to think inside the Coasian box.

    Per my proposal upthread, I hope that open source isn’t always funded monolithically.

  195. That’s not been my experience … a significant component of Apple’s customer base will buy whatever they sell and defend it with house-shaking emotionalism that it simply must be better than everything else. The PPC to Intel transition comes to mind. But even back in the mid 90s when Apple was obviously making junk it was no different.

    There are zealots in every customer base. At this point Apple is so mainstream that most users aren’t in this category unlike the 90’s when at best you could say that Apple was “beleaguered” and many were saying “at death’s door”.

    As for equating Apple with Scientology… we’re long past the OS wars of Mac vs. PC. It’s an OS, not a religion or way of life. Back to the FIAWOL vs. FIJAGDH way of thinking.

    One reason I like Apple is what Steve Jobs said one year:

    “If we want to move forward and see Apple healthy and prospering again, we have to let go of a few things here. We have to let go of this notion that for Apple to win, Microsoft has to lose. We have to embrace a notion that for Apple to win, Apple has to do a really good job. And if others are going to help us that’s great, because we need all the help we can get, and if we screw up and we don’t do a good job, it’s not somebody else’s fault, it’s our fault.”

    We should let go of the notion that for open source to win, closed source has to lose. We have to embrace the notion that for open source to win, open source has to do a really good job.

    For some things open source does a really good job. For others, not so much. Both have their place.

  196. I wrote I wouldn’t invest in iOS for the long-term, but I didn’t exclude short-term investments.

    If you invest in any single computer technology for the long term you end up being a COBOL programmer… lucrative at times, perhaps, but very, very narrow in the kinds of jobs available.

    On the other hand, learning any ecosystem is not really a short-term investment if you want to be truly proficient.

  197. @nigel: “One reason I like Apple is what Steve Jobs said one year:

    “If we want to move forward and see Apple healthy and prospering again, we have to let go of a few things here. We have to let go of this notion that for Apple to win, Microsoft has to lose.”

    I submit that your mistake is that you believe what people *say*. Much better to judge them by what they *do*. If Apple were really living that they wouldn’t be suing everyone on the planet for every ridiculous and trivial patent our hopelessly broken government awards them.

    I’ll be glad to stop viewing Apple as Scientology when the faithful stop chanting and singing and at least show a semblance of willingness to view Apple objectively.

  198. @nigel:

    If you invest in any computer technology for the long term you end up being a COBOL programmer

    Unix was invented in the early 1970s.

    Open source (and closed-source if you have access) algorithms can be translated to new languages.

    @Michael Hipp:

    Ubuntu is the only one really trying

    Chrome OS and Firefox OS.

  199. @JustSaying: “Chrome OS and Firefox OS.”

    Good point. I’m just not yet sure whether to take them seriously or not.

  200. I submit that your mistake is that you believe what people *say*. Much better to judge them by what they *do*. If Apple were really living that they wouldn’t be suing everyone on the planet for every ridiculous and trivial patent our hopelessly broken government awards them.

    “And boy have we patented it!”

    – Steve Jobs 2007 iPhone Keynote

    http://www.youtube.com/watch?v=A1gISYqsApI

    Back when Google and Apple were friends, Eric Schmidt had a piece of the keynote. Google can’t claim they didn’t know, and given that Schmidt was on the Apple board at the time, you can understand why Steve had a bit of animosity over Android.

    “As a board member you get one of the first ones”…to get disassembled in a Google lab somewhere.

    Companies are run by humans. Getting shafted by a buddy is often worse emotionally than getting shafted by a stranger.

    /shrug

    I believe that Apple does what they say far more than Google does what it says.

    Apple suing Samsung and other handset makers was mildly surprising as opposed to directly suing Google. I suspect that Apple legal told Steve not to overreach and go for solid base hits as opposed to swinging for a home run.

    Using patents to protect your IP is what they’re there for. Apple generally isn’t suing companies that aren’t obsessively trying to copy them.

    I’ll be glad to stop viewing Apple as Scientology when the faithful stop chanting and singing and at least show a semblance of willingness to view Apple objectively.

    If you viewed open source fans only through the FSF faithful, you would end up with a very skewed view of them, and might compare open source to a cult.

    I submit that you are brighter than that.

  201. Equivalent isn’t desired. Open source is disrupting print media with the open source browser(s).

    Degrees-of-freedom isn’t limited to equivalents and omniscience; only walled gardens need to think inside the Coasian box.

    Given that Apple was the creator of the current disruption of print media, through usable smartphones and usable tablets, I’m thinking that walled gardens and degrees-of-freedom are largely orthogonal.

  202. @nigel:
    Even if iPhones and tablets are as important to disrupting print media as browsers (e.g. news sites disrupting newspapers, banner ads disrupting print ads and thus all print media), I see no logic from that correlation to the orthogonal assertion that walled gardens don’t impact some degrees-of-freedom.

  203. @ Jay Maynard – “By now, it’s obvious that any open source replacement will need to be far more, and have far more specialized capabilities, than The GIMP.”

    The rate at which GIMP evolves toward Photoshop capabilities will likely be influenced by the magnitude of any negatives associated with continued use of Photoshop. If Adobe keeps the cost of the addiction very low, then hackers won’t have much incentive to improve GIMP. Conversely, if Photoshop grows into an expensive bloatware product requiring lots of front-end training and ongoing support, then investing the time to improve GIMP starts looking like an attractive alternative. As a side note, even in the technology domain, evolutionary cycles are still relatively long and it might be a few decades before the smoke clears on this issue. But the trend is pretty obvious.

  204. @nigel: “Given that Apple was the creator for the current disruption of print media through usable smartphones and usable tablets”

    Uh, the current print media disruption was created by the *Internet*. Having more devices to read on certainly cements the trend but it was well underway beforehand. And you better not leave out Amazon.

    And just when I thought you weren’t religious about Apple :-)

    “I’m thinking that walled gardens and degrees-of-freedom are largely orthogonal.”

    I’m thinking that walled gardens limit degrees-of-freedom by *definition*. That’s their purpose.

  205. @Just Saying: “About scripting, I can load Javascripts in a browser plugin in Mozilla, Chrome, and Opera, but not Safari apparently. The security issue is typically with cross-site scripting on web pages, because users are not agreeing to give extra security permissions to a plugin and are trusting the browser to sandbox all web pages”

    What plugin? JavaScript is interpreted by the rendering engine, and is part of the core browser code. In IE, that’s Trident. In Mozilla, it’s Gecko. Chrome was based on Apple’s WebKit, but Google has forked WebKit to build a new rendering engine called Blink and is migrating to it. Opera used to use a proprietary engine, but has decided to adopt and support Blink too. Safari is still based on WebKit.

    Plugins come in for other types of work. Adobe Flash runs in a plugin; media like video run in a plugin. (Though HTML5 is increasingly popular because it offers the possibility of not needing a plugin: you can embed the video directly using the <video> element. Lots of folks are salivating at Adobe Flash going away. It’s consistently in the top crasher reports for Firefox, and Firefox’s plugin-container, which provides a sandbox in which plugins can run and not take down the browser when they crash, exists largely *because* of Flash.) For that matter, going forward, Adobe is dropping support for Flash on mobile devices, and has a toolkit in development to migrate Flash to HTML5.

    You need a codec to actually display video, but there are even open source video codecs with sufficient performance. Browser development’s current direction is in part “Plugins are bad. The user should be able to do everything from within the browser without needing them.”

    Safari supports JavaScript, too. I haven’t played with it lately, so I’m not sure what restrictions it imposes, but this sounds specific to Safari, and it’s not the only browser out there.

    “Apple can’t both enforce certification and have general user programmability.”

    Sure it can, as long as the programs conform to the certification.

    Any program will run under the OS on the device being targeted by development, and will call upon the OS to do various things. What functions it can call upon will be determined by the API provided by the OS, and what the API exposes for the application. This is true for any program on any OS.

    The issue for developers is “Can I write an application that does X that will run under Y OS?” The answer for OS/X and iOS is almost certainly yes, and the enormous number of apps for the iOS platform are evidence.

    Apple’s restrictions require the developer to observe various conventions in how the program behaves and how the user will interact with it, but don’t arbitrarily restrict *what* the developer can write.

    The restrictions you are complaining about are on what the user can install, but as mentioned previously, there is a broad enough selection of apps for Apple products that users have no need to *care*: they can get an app that does what they want.

    “The problem I have with coding for iOS, is that the restrictions can change at any time. We don’t know what a future Apple decision might be. They could destroy my investment over night. NO WAY! I will invest in closed OS again. One day you will wish you did not too.”

    And this is different from any other platform *how*, exactly?

    As systems develop and mature, things change. New functionality is added. Old functionality is deprecated and eventually removed. The process is gradual (and has to be by nature.) Developers get advance warning of changes and how they need to adjust their code. And vendors try to maintain a basic level of backwards compatibility. It does not help a vendor to at one stroke invalidate existing work. Apps written to earlier versions of the applicable standard will still run. They simply won’t have access to the new stuff.

    Say what you like about Apple, but they aren’t *stupid*. What you fear is so unlikely to happen I consider the chances as being as close to zero as doesn’t matter.

    “Adobe will be disrupted, primarily because we are moving away from print media.”

    I’ve been hearing about that, and predictions of things like the “paperless office”, for decades. We are using less print, but print is far from going away. It’s another thing I don’t expect to see in my lifetime. And Adobe is involved in other things besides print, so print going away won’t put them down.

  206. >> “The problem I have with coding for iOS, is that the restrictions can change at any time. We don’t know what a future Apple decision might be. They could destroy my investment over night. NO WAY! I will invest in closed OS again. One day you will wish you did not too.”

    > And this is different from any other platform *how*, exactly?

    > As systems develop and mature, things change. New functionality is added. Old functionality is deprecated and eventually removed. The process is gradual (and has to be by nature.) Developers get advance warning of changes and how thet need to adjust their code.

    The keyword is RESTRICTIONS, as in which apps are allowed into their walled-garden App Store; the policy is opaque and subject to change whenever they feel like it (e.g. after developing equivalent functionality in-house).

  207. > Using patents to protect your IP is what it’s there for. Apple isn’t generally suing companies not trying to obsessively copy them.

    These are mainly SOFTWARE patents and DESIGN “patents” (like a patent on a black rectangular device). They are not used to protect “IP” (a misnomer that tries to put different kinds of state-granted limited monopoly powers under one umbrella, giving it the misleading name of “property,” which it isn’t); they are used to get rid of successful competition (that copied them better).

  208. @DMcCunney: “Apple’s restrictions require the developer to observe various conventions in how the progam behaves and how the user will interact with it, but don’t arbitrarily restrict *what* the developer can write.”

    This is simply not true.

    Among other things you cannot write something pornographic or other “offensive” material, and you can’t write anything that substantially duplicates/replaces functionality of existing Apple apps or features. And you have no guarantee Apple won’t deny your app and then steal your idea and write their own copy of it. Those are just a few of the restrictions I’m aware of; there are probably others.

  209. @JustSaying:

    It’s been fascinating listening to your ideas on this topic. One thing, though: Adobe won’t be disrupted unless and until A) the need for print media disappears or B) someone else, open source or otherwise, disrupts them by matching their print media feature set.

    All those Adobe fanboys out there who claim they don’t work in print media, but claim to “need” Photoshop don’t actually know what they need.

    Mind you, when I say “print media,” I mean something other than printing to a color inkjet or color laser printer. I mean offset printing. Printing to a color laser printer or a color inkjet printer or other color printer doesn’t need CMYK color support, CMYK seps, none of it, because there aren’t any printers that produce accurate color matching anyway. Period. Go ask a real designer and get off my lawn if you don’t believe me.

  210. Uh, the current print media disruption was created by the *Internet*. Having more devices to read on certainly cements the trend but it was well underway beforehand. And you better not leave out Amazon.

    And just when I thought you weren’t religious about Apple :-)

    Yes, and Amazon as well with eReaders, but I think color was important, as is the mobility versus even a laptop. Phablets are better than phones in this regard, but I kinda wouldn’t want to go smaller than a Kindle Fire or iPad Mini for lots of reading.

    If I didn’t have an iPhone and an iPad I’d get a Note 2 if I could only have 1 device.


    “I’m thinking that walled gardens and degrees-of-freedom are largely orthogonal.”

    I’m thinking that walled gardens limit degrees-of-freedom by *definition*. That’s their purpose.

    Meh… depends on what you mean by degrees of freedom. In terms of the kinds of apps, the number of apps, and the use cases you can meet, I think not so much.

  212. @DMcCunney:

    What plugin? JavaScript is interpreted by the rendering engine

    I meant extension, not plugin. Tangentially, if I am not mistaken, JS is interpreted by the JS virtual machine, not the rendering engine.

    Browser development’s current direction is in part “Plugins are bad. The user should be able to do everything from within the browser without needing them.”

    I discussed this with Patrick Maupin in an earlier blog.

    “Apple can’t both enforce certification and have general user programmability.”

    Sure it can, as long as the programs conform to the certification.

    I meant the end user doing the programming: users running novel scripts at will, any time of day or night.

    And this is different from any other platform *how*, exactly?

    Open vs. closed source.

  213. @Jay Maynard:

    The danger is that there may well be no way to monetize the mindspace. People’s expectations are that they can get it all for free, and those who try to monetize what used to be free tend to face severe backlash.

    Perhaps (some of) you are championing closed source, walled gardens, and DRM because of fear that all revenue streams would otherwise eventually disappear?

    The only reason anything is free is because someone is willing to pay to develop it. If they are not getting a return on investment, then they are funding from debt (from somewhere in the statism). There is no other possibility. Debt can not go on forever. I think we are already at roughly $40+ trillion globally (not including unfunded liabilities for welfare and pension promises), with nearly a quadrillion dollars of derivatives to keep that sovereign debt propped up.

    Open source and paying for software are not mutually exclusive. One model is to close a portion of the source. There are other models that work for open source. I have proposed a new model, where the designated open source is never free to use in a commercial application and everyone is always paid a % of gross sales for the modules, with non-commercial volumes exempted.

  214. I didn’t mean to imply that all free software is funded from debt. There are many models by which software can be free and funded.

    1. Ad-ware
    2. Upsells
    3. Corporate strategic move.
    4. Author is wealthy and is paid in user appreciation.
    5. Author lives incredibly modestly on a tipjar.
    etc…

  215. @Jay Maynard:
    > Until you solve the strong AI problem, you’re not going to get [aesthetics].
    Evidently, you don’t know squat about AI either.

    Or perhaps you’re just so jaded from AI’s past failed promises that you can no longer even believe in the things that have already been achieved. Which is understandable if you started watching several Winters ago, I suppose; but it’s still epistemically irrational.

  216. Yes, it has probably been quite a few winters since Jay was an arrogant young fuck.

    Computational aesthetics is an interesting subfield I have not kept up with as much as I’d like, except when it appears in SIGCHI journals I don’t seem to have time to read anymore.

    My recollection is that it’s not nearly as mature as you make it out to be but I can inquire with some (old) folks that do actual research in that area and see.

  217. @Jay Maynard:

    You can talk about maximizing performance functions all you want, but your clients won’t care about mathematical solutions. If they don’t like the result, you lose.

    Many of my classes in school were taught by working designers (they all were supposed to be, but a couple of them were only “working” in the sense that they worked for non-profits for free).

    One of my instructors had a guest in to talk about his work one day–about how you’re really supposed to do design with clients, not the sort of masturbatory school work BS, but the real thing. One of the “rules” was to always give the client 5 designs to select from at first, and then narrow down, improve etc.

    On the eve of a meeting with a client this particular designer only had 4 designs. He really needed 5. So he whipped out this awful design in orange.

    http://en.wikipedia.org/wiki/File:Jewel_Food_Logo.PNG

    The Customer *LOVED* it.

    It’s awful. Especially in the original orange.

    AI can’t fix bad taste.

  218. @Morgan Greywolf:

    All those Adobe fanboys out there who claim they don’t work in print media, but claim to “need” Photoshop don’t actually know what they need.

    I’m not claiming that people need Photoshop. I’m asserting that people need the sort of features that Adobe products generally provide.

    XPress by Quark is (or at least was) a REALLY good page layout program. I don’t know of any non-Adobe *good* vector-based drawing program (I need to spend more time in Inkscape, but it’s not Illustrator), because Adobe absorbed FreeHand and then I left the industry.

    Mind you, when I say “print media,” I mean something other than printing to a color inkjet or color laser printer. I mean offset printing. Printing to a color laser printer or a color inkjet printer or other color printer doesn’t need CMYK color support, CMYK seps, none of it, because there aren’t any printers that produce accurate color matching anyway. Period. Go ask a real designer and get off my lawn if you don’t believe me.

    I thought some of the higher-end large-format inkjets that were really CMYK/hexachrome could get really, really close if you fed them the right paper?

  219. TomA: “evolutionary cycles are still relatively long and it might be a few decades before the smoke clears on this issue. But the trend is pretty obvious.”

    In a few decades, the way we use computers will be completely unrecognizable to folks today. For that matter, in a few decades, I probably won’t be here; I expect to live 25 years or so more at most.

    Until then, people claiming that open source will crowd Photoshop out of the market are just blowing smoke.

    JustSaying: “Open source and paying for software are not mutually exclusive.” In theory, no, and Eric’s done a lot of work in showing how they don’t have to be.

    The problem is that it’s largely theory, and there’s not a lot of people making a living, or even substantial money, off of open source in the real world. How many Red Hats are there?

    People have gotten an expectation of open source being freely available. Breaking that expectation is going to be even more difficult than breaking the expectation that content on the Internet should be freely available, because there’s a large majority of open source developers who buy into the idea.

    “I have proposed a new model, where the designated open source is never free to use in a commercial application and everyone is always paid a % of gross sales for the modules, with non-commercial volumes exempted.”

    BTDT. Eric will be happy to point you at a discussion of just what the problems are with this kind of approach, starting with defining just what constitutes “use in a commercial application”. The problem is much harder than it would seem intuitively.

    Edward: “Or perhaps you’re just so jaded from AI’s past failed promises that you can no longer even believe in the things that have already been achieved.”

    Say what? How in the precise hell can a computer optimize aesthetics? That is if the concept even has any meaning at all…

    Detailed, verifiable examples, or I’m calling bullshit.

    Nigel: “Yes, it probably has been quite a few winters since Jay has been an arrogant young fuck.”

    Yeah. Now I’m a crotchety old fart.

    William: “AI can’t fix bad taste.”

    +1000. Further, it can’t predict, or optimize, for what the customer’s tastes are, and that’s what matters. If your computer does manage to optimize the aesthetic function, but your customer hates it, you lose.

  220. Jay, FWIW, I don’t think there is any reason to believe that aesthetics is not computable. Let me offer a concrete example: it is fairly well established that the “beauty” of a face is computable, based on the symmetry of the features, and also the relative size and spacing of the core features (eyes, nose, lips, ears, cheekbones, etc.). From what I remember there is considerable academic work on this subject. Similarly, a waist-to-hip ratio is readily computable, and it is known to be considered optimal around 0.7 for females.
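
    To make “computable” concrete, here is a minimal sketch in Python. The landmark coordinates, the linear falloff, and the averaging are all invented for illustration; real studies fit such measures against human ratings rather than hand-picking formulas:

        import math

        def waist_hip_ratio(waist, hip):
            return waist / hip

        def whr_score(waist, hip, optimum=0.7):
            # 1.0 at the oft-cited 0.7 optimum, decaying linearly away from it
            return max(0.0, 1.0 - abs(waist_hip_ratio(waist, hip) - optimum))

        def symmetry_score(pairs):
            # pairs: [((xl, yl), (xr, yr)), ...] of left/right facial landmarks
            # in a frame whose y-axis is the face's midline. Mirror each left
            # landmark and measure how far it lands from its right partner.
            err = sum(math.hypot(-xl - xr, yl - yr) for (xl, yl), (xr, yr) in pairs)
            return 1.0 / (1.0 + err / len(pairs))  # 1.0 means perfectly symmetric

        print(whr_score(70, 100))  # ratio 0.70 -> 1.0
        print(symmetry_score([((-3.1, 5.0), (3.0, 5.1)),
                              ((-2.0, 1.0), (2.1, 0.9))]))  # ~0.88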

    I’m no designer, but I am sure there are many rules about measures of beauty outside of the human form, though I am also pretty sure that only a tiny fraction of the rules the human brain uses have been discovered, never mind encoded in AI.

    My point is that, with sufficient work, they could be, and once they are, I personally doubt they would be particularly hard to compute.

    That would be a cool plug-in: something that gave an objective measure of the attractiveness of your image.

  221. @Jessica

    I haven’t looked at this stuff since at least 2007, but a quick literature search of 2012 CAe conference (co-located with SIGGRAPH) papers on computational aesthetics indicates that while progress has been made, none of it struck me as earth-shattering. More classifiers and feature-extraction research, more use of neural nets and heuristics, etc. Good solid progress, but if there was something that was ready for market and ready to cause a paradigm shift, I missed it in my quick and dirty appraisal of the SOTA.

    In terms of “objective measures of attractiveness of your image,” my recollection, reinforced by my quick perusal, is that many of the approaches relied on learning from human scoring of aesthetically pleasing pictures and correlating that with specific measurable/extractable features in the image (saturation, hue, exposure, rule of thirds, etc.) to predict a high/low aesthetic score for an image. I dunno if that qualifies as “objective,” but it’s possibly useful for an amateur photographer.
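
    As a rough sketch of the shape those approaches take (Python; the three features and the training data are invented stand-ins, and the real papers use far richer feature sets and scoring):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def features(img):
            # img: HxWx3 float array in [0, 1]. Toy stand-ins for the kinds
            # of features the papers mention (exposure, saturation, composition).
            brightness = img.mean()
            saturation = (img.max(axis=2) - img.min(axis=2)).mean()
            h = img.shape[0]
            thirds = img[[h // 3, 2 * h // 3]].mean() - brightness  # crude rule-of-thirds proxy
            return [brightness, saturation, thirds]

        rng = np.random.default_rng(0)
        imgs = rng.random((50, 32, 32, 3))          # invented "photographs"
        X = np.array([features(im) for im in imgs])
        y = (X[:, 1] > X[:, 1].mean()).astype(int)  # stand-in for human high/low labels

        model = LogisticRegression().fit(X, y)
        print(model.predict_proba([features(imgs[0])])[0, 1])  # predicted aesthetic score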

  222. edit: There was an abstract on affective motion textures that looked very interesting, but I wasn’t logged in so I couldn’t get the paper. Maybe on Monday I’ll go figure out how to access my account so I can read it. That looked like it had real-world application in the near term.

  223. @Jay
    “The problem is that it’s largely theory, and there’s not a lot of people making a living, or even substantial money, off of open source in the real world. How many Red Hats are there?”

    The same could be said for lawyers and doctors. Their knowledge is not “proprietary”, and the number of lawyers and doctors working for global law and medical companies is minuscule. But I would not say there is no money in law or health care. And you can make a living doing medical and legal research.

  224. The objective computational measures of aesthetics would be useful in tools for non-designers that are replacing Adobe on the low end. And I am positing that pro designers will be a smaller percentage of the internet design market than of the print market, although the nominal demand for designers will probably increase. So Edward Cree wins that debate on the balance of future market effect?

    @William O. B’Livion:

    http://en.wikipedia.org/wiki/File:Jewel_Food_Logo.PNG

    The Customer *LOVED* it.

    It’s awful. Especially in the original orange.

    Reminds me of the Target and Jiffy Lube logos. So apparently bad taste is popular.

    Remember the web design back in the early days on Yahoo GeoCities, with saturated-color GIF animations and blinking text?

    Oh wait I see Jeff Read linked to an example of that. Hilarious. Hahaha. No wonder designers feel the need for totalitarian control. I think I’m getting the culture now.

    @Jay Maynard:

    The problem is that it’s largely theory, and there’s not a lot of people making a living, or even substantial money, off of open source in the real world.

    I assume that is true. If someone changes that, they will change the world.

    BTDT. Eric will be happy to point you at a discussion of just what the problems are with this kind of approach, starting with defining just what constitutes “use in a commercial application”. The problem is much harder than it would seem intuitively.

    *Please* do. I would very, very much appreciate reading the prior discussion on this.

    I was thinking of a definition based on installed units. I believe in mostly an honor system, because corporations don’t want to get sued or attract bad press (public image).

    I am thinking they pay some percent of gross sales or their development costs, whichever is greater.

    I am most challenged about how to divide the royalties among the module authors fairly, and how to divide the ownership of modules amongst all contributors to each module. I have some ideas which I think are reasonable, but need to test them against peer review and then the market.
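
    To show the shape of what I am proposing, a sketch in Python (the 2% rate and the ownership shares are hypothetical placeholders, and this encodes only one reading of the “whichever is greater” rule):

        def royalty_due(gross_sales, dev_costs, rate=0.02):
            # "some percent of gross sales or their development costs,
            # whichever is greater" -- taking the greater figure as the base
            return rate * max(gross_sales, dev_costs)

        def split_royalty(fee, shares):
            # shares: {author: fraction} summing to 1.0 for a given module,
            # e.g. a lead owner who gifts a percentage to contributors
            assert abs(sum(shares.values()) - 1.0) < 1e-9
            return {author: fee * frac for author, frac in shares.items()}

        fee = royalty_due(gross_sales=1_000_000, dev_costs=250_000)
        print(split_royalty(fee, {"lead": 0.8, "contrib_a": 0.15, "contrib_b": 0.05}))
        # {'lead': 16000.0, 'contrib_a': 3000.0, 'contrib_b': 1000.0}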

    Now I’m a crotchety old fart.

    And I bet you can still write a lot of code if you saw an opportunity that could have a big impact.

  225. So Edward Cree wins that debate on the balance of future market effect?

    No, despite all his “you don’t know anything about AI” trash talk, my assessment is that the current state of the art is nowhere close to the bullshit he’s spouting, and I suspect that actual folks in the field of study would agree it will be a while before designers and their specialized knowledge are “obsolete”.

    Given that Adobe is very active in this arena (go figure), if there were a magic algorithm to do what he suggests, it would already be baked into Photoshop to make it even more irreplaceable to the designer workflow. The whole idea is to make the user more efficient vs using lesser, cheaper tools.

  226. “I bet you can still write a lot of code if you saw an opportunity that could have a big impact.”

    These days, what it takes to get me to sit down and write serious amounts of code is either a personal interest in using that code (hence my work on Second Life stuff) or real, hard, cold cash. I’m well beyond being burned out on change-the-world idealism; I’ve seen far too much of it fade into nothingness.

    1. >I’m well beyond being burned out on change-the-world idealism; I’ve seen far too much of it fade into nothingness.

      Tch. This just means you haven’t been doing it right.

  227. The perception of specimen “beauty” is a visual selection trait that reinforces the physical evolutionary cycle. It is neither abstract nor absolute. If you are attracted to someone’s appearance, it is because your evolutionary programming has biased you toward traits that may enhance fertility and robustness of offspring. Pheromones also play a big role in the perception of beauty/mating appeal. As do voice and verbal intelligence cues. I’m not sure if an AI can mimic this ongoing long-term process, but if so, it will have to incorporate evolutionary reinforcement correlations. As a side note, individual disparities from the social/cultural norms are the mutations in the process.

  228. @nigel
    > Good solid progress but if there was something that was ready for market and ready to cause a paradigm shift I missed it in

    To be clear, I am not claiming that aesthetics can be computed today; I am rather claiming they are computable. Another commenter pointed out that attractiveness of a person is based on more than visual cues, which is plainly right (though pheromones? I’m skeptical.) But these things are also measurable with suitable peripherals too. I was just offering human aesthetics as a specific example of an aesthetic that has been studied and is measurable computationally. The experiments I have read tended to indicate that symmetry, appropriate relationships of the distances between features, and equal sizes of bilateral features are extremely accurate predictors for the partial, and possibly full, ordering of attractiveness of face pictures. But I’m not an expert by any means.

    > learning from human scoring of aesthetically pleasing pictures … I dunno that qualifies as “objective”

    It is objective in the sense that it is computationally predictive. “What humans like” is encoded into our brains by various genetic, memetic and experiential means, and, absent a means to dismantle a functioning brain, a black box reverse engineering approach is the only practical way.

    My arc of meaning here is that there is nothing special about aesthetics. We are meat computers, and meat computers can be duplicated in silicon. Sometimes writing the spec is very difficult, and sometimes you need a lot of silicon, nonetheless, there is no magic here.

  229. Tch. This just means you haven’t been doing it right.

    Some of us have to put food on the table, and don’t have the time or resources to commit to improving our world-changing technique.

    1. >Some of us have to put food on the table, and don’t have the time or resources to commit to improving our world-changing technique.

      Excuses, excuses. The real problems most idealists have have nothing to do with “time or resources”; they are (1) false-to-fact ideas about how things and people work, and (2) insufficient drive and ruthlessness.

  230. @esr: “This just means you haven’t been doing it right.”

    Sounds like fodder for a new post.

  231. The fact that a discussion of the inadequacies of Gimp (which are much more numerous than lack of support for proprietary color mixing systems) has degraded into a fallacious argument that AI can ascertain aesthetic value is exactly why the Gimp will never be a Photoshop replacement.

  232. Excuses, excuses. The real problems most idealists have have nothing to do with “time or resources”; they are (1) false-to-fact ideas about how things and people work, and (2) insufficient drive and ruthlessness.

    Discretion is the better part of valor; the better part of idealism is knowing when your opponents have you licked. I learned this the hard way, but am unwilling to discuss details here.

    Closed source has open source licked in just about any user-facing application software used for professional work that isn’t programming. There simply isn’t a replacement for Photoshop, Illustrator, or InDesign coming from the open source camp — and there won’t be any time soon because of structural limitations in the open-source development model. Thinking otherwise is entirely due to “false-to-fact ideas about how things and people work”.

    GIMP is shit, Inkscape is shit, Cinepaint is slightly less shit but doesn’t address anywhere near the spectrum of professional workflows that Photoshop does. None are in a position to displace Adobe.

  233. @esr:

    false-to-fact ideas about how things and people work

    Seems developers go look for an API or tool that accomplishes some well-defined task, e.g. your gif library. So modules can’t be too granular, else developers probably won’t be able to figure out what to use them for.

    The problem with charging for open source code is that the owner is ill-defined. My thought is that if the module (project) is granular enough, then there is probably a lead who did most of the work, who should be the owner and gift some percent to the other contributors.

    Businesses license libraries so I don’t see the problem there.

    “Jay, FWIW, I don’t think there is any reason to believe that aesthetics is not computable. Let me offer a concrete example: it is fairly well established that the “beauty” of a face is computable, based on the symmetry of the features, and also the relative size and spacing of the core features (eyes, nose, lips, ears, cheekbones, etc.). From what I remember there is considerable academic work on this subject. Similarly, a waist-to-hip ratio is readily computable, and it is known to be considered optimal around 0.7 for females.”

    This is for a living species. This is formed by evolutionary imperatives. There is a very narrow range of aesthetic expression (i.e., the range of body variations in humans is much smaller than what can be done with sound, visual media, or language). Now perform the same exercise for a purer art, something less directly tied to evolutionary biology… take literature for example:

    Is James Joyce’s Ulysses more or less aesthetically pleasing than Albert Camus’s The Stranger? Why? Why do different people disagree? Why do the same people disagree with themselves over different time intervals? Does the aesthetic value of the work change over time or is it the culture itself that is changing? If it is changing, was any evaluation tied to any one specific time ever valid to begin with? If it’s determined that Ulysses is more aesthetically pleasing than The Stranger, does it follow that a work that is parametrically similar to Ulysses (say The Recognitions by William Gaddis, or Proust, or a Pynchon novel) is also superior to The Stranger? Vice versa? How much of the determination is subjective, bound by individuals and cultures, and what can be determined to be objectively “beautiful”? Can this AI produce a work of superior aesthetic value?

    Aesthetics is most assuredly THE strong AI problem. Most philosophers consider it fundamentally intertwined with the largest problems in philosophy, on equal footing with ethics or even above it, and those who do think they can tackle and formalize aesthetics have made complete fools of themselves.

  235. Tim F. on Friday, May 17 2013 at 3:27 pm said:
    …take literature for example…
    A few months ago, I saw a BBC (?) special about some computational linguists taking on the works of Agatha Christie. The object was to determine, at least to some degree, why she has been the best-selling writer in English after Shakespeare.

    They didn’t come up with much, but they did find a few things. So there is at least the possibility of success in “computational aesthetics” for literature.

  236. Umm, no. Because a BBC special covered someone trying to explore something does not demonstrate that there is the possibility of success for “computational aesthetics” for literature.

    This is like saying someone jumping off a cliff to their death shows there is a possibility that humans can someday fly without mechanical assistance.

  237. @Tim F.:
    Then how can any design attain widespread agreement? By triggering some common primitive instinct that overpowers the chaotic whims? Is this what advertising and “reality distortion field” fanboyism is all about?

  238. @Tim F.
    > Now perform the same exercise for a purer art, something less directly tied to evolutionary biology… take literature for example: Is Jame Joyce’s Ulysses more or less aesthetically pleasing than Albert Camu’s The Stranger? Why?

    But you are making the mistake of thinking “computational” necessarily means a contextually independent numeric measure. Plainly that isn’t the case. This particular example is confounded rather by the broader question of how a computer can understand human language. And, from what I see, the main problem here from a purely computational point of view is the fact that computers don’t experience human life: human language is profoundly bound up with both culture and experience.

    FWIW, this is best answered by thinking about how a computer would answer this question: “Which is more expensive, a laundry machine costing two ninety nine or laundry detergent costing three ninety nine?” (A human knows the machine is $299 and the detergent $3.99; nothing in the words themselves says so.) To answer this question the computer would need to have gone grocery shopping.

    So, I guess what I am saying is that this is an unfair question. Whether a computer could determine the relative qualities of these texts is confounded by the fact that the computer can’t understand those texts, and so is a special case. Were the computer able to understand the text, I imagine some rules of quality could readily be reverse engineered from the judgement of humans on the matter.

    And since a judgement of aesthetic quality for such a thing is reverse engineered out of a human brain (which is to say the definition of “beautiful” or “superior” is encoded into the brain) there is plainly a spectrum of such judgement criteria, and consequently, the measure has to be parametrized with the context.

    To give a computer example: which is better for creating a document, TeX or Microsoft Word? Obviously the answer to that question depends entirely on both the nature of the document, and the skill and experience of the user. Don Knuth writing TAoCP: TeX, for sure. Grandma making a “Did you see our lost kitty” poster? Microsoft Word, for sure.

    These are both objective measures, and so the choice of document system is objective too (even if there are no readily usable metrics for either.)

    @Rich Rostrom: I found this article, which is presumably on the same special you are referring to: http://suite101.com/article/agatha-christie-addiction-a74978

    There are several flaws with this (but maybe the special answers the questions raised better than this article):

    1. Firstly, it appears they were attempting to ascertain a code to popularity rather than aesthetics. The difference between the two is enormous.
    2. This appears to be a conventional formalist analysis, with a little structuralism thrown in. Anyone familiar with a little literary theory will see the problem with this (Viktor Shklovsky was already discerning the flaws of formalism by the 1930s; all of the leading structuralists of the 50s and 60s, except for Levi-Strauss, had become post-structuralists by the late 60s…)
    3. This very same analysis has been performed, and likely done better, for the last 50 years. 3 out of 7 students forced to take a literary theory course at the college level probably performed this same analysis (on Christie specifically or any other crime author or on the genre generally because the structures are readily apparent).
    4. Their alleged “code” is not unique for Agatha Christie whatsoever. It may be true to all of her works, but it’s also true of 90% of all detective fiction. Additionally, I don’t see how this code accounts for the differing receptions of her own works amongst her audience even when the very same “code” is being utilized.

    It seems to me you fell for a puff popsci documentary that was likely entertaining, informative, and revealing but in no way at the forefront of aesthetic or literary theory, never mind a computational model for determining, evaluating, and creating aesthetic value with machine intelligence.

  240. “that buying proprietary software puts you at the wrong end of a power relationship with its vendor.”

    And this never happens with open source. Just ask the people who invested time in learning Gnome, and now have to suffer the green booger known as Gnome 3, or have to look for alternatives (Unity, Cinnamon, etc.) and invest learning time again. Because the Gnome folks abused their power as project leaders. No, wait! Wrong example.

    At least with proprietary software, the vendor has a financial interest in keeping you not too disappointed, lest you start looking for other proprietary packages or some company emerges to fill the gap if there are no alternatives. But with open source, it’s more like “you don’t pay me, so I’ll do anything I want; no, I don’t care if you have invested learning time in it or if you have a productivity mechanism dependent on it”. See Gnome. Or Ubuntu when they introduced still-alpha-stage PulseAudio and broke users’ audio (people who had set up audio production systems around Ubuntu and Audacity weren’t amused).

    But with open source you can always fork and split the already small developer manpower in half. Yay.

    @ Jessica Boxer – “though pheromones? I’m skeptical”

    A photograph or painting does not emit pheromones or verbal cues, and therefore the aesthetic value judgment is strictly visual. But photos and paintings are an evolutionarily recent phenomenon. Our ancient ancestors used a succession of perceptions in order to evaluate potential mates, and these were essentially distance-dependent. The first was typically a visual cue from a distance. Then pheromone detection (sense of smell can convey fertility or disease, or even an emotional state such as desire or fear). And if contact is made, then voice and verbal cues complete the picture, so to speak. Afterward, the cumulative perception is summarized as attractiveness; of which, the beauty value judgment pertains to the visual cues.

    “But you are making the mistake of thinking “computational” necessarily means a contextually independent numeric measure.”

    Nope, no, I am not. Feed your AI every language, every book, every literary theory, every philosophical treatise, and whatever else in the universe you’d like to provide for context. It’s still not going to succeed.

    “To answer this question the computer would need to have gone grocery shopping.” I don’t remotely see an analogy. Or it’s a tautology. Yes, a computer will need to know what “beauty” is in order to evaluate it. It’s my position that you won’t be able to teach a computer what “beauty” is.

    “Were the computer able to understand the text, I imagine some rules of quality could readily be reverse engineered from the judgement of humans on the matter.”

    I find that wholly suspect in every regard. Having read tons of crap that tries to be Joyce or Faulkner (and actually does so in a very thorough, knowledgeable, and even well-crafted manner) and utterly fails, the chances that you could remotely define rules for what makes Ulysses work on its own are minuscule. Never mind approaching how something like The Stranger uses completely different “rules” to achieve a comparable expression of “beauty”. We have 50 years of modern literary theory and hundreds of years of philosophical thought; no one today is going to claim they have an aesthetic theory that explains what is and is not beautiful (at least not without looking very foolish in the end).

    “And since a judgement of aesthetic quality for such a thing is reverse engineered out of a human brain (which is to say the definition of “beautiful” or “superior” is encoded into the brain) there is plainly a spectrum of such judgement criteria, and consequently, the measure has to be parametrized with the context.”

    I see nothing “plain” about critera for beauty. What are these plain criteria?

    “To give a computer example: which is better for creating a document, TeX or Microsoft Word? Obviously the answer to that question depends entirely on both the nature of the document, and the skill and experience of the user. Don Knuth writing TAoCP: TeX, for sure. Grandma making a “Did you see our lost kitty” poster? Microsoft Word, for sure.”

    This isn’t an example. It’s not even an analogy. This is like saying 1 + 1 is an example of a proof of Fermat’s Last Theorem.

  243. And, again, the point is: rather than create a practical alternative to Photoshop (which, if it arose, would most assuredly come from one of the many commercial alternatives that exist today, developed by developers who are both artists and developers, and NOT Gimp), the solution that has been suggested here is to solve just one of the Gimp’s many flaws (circumventing the commercial Pantone system) by creating AI capable of aesthetic judgment. Does that sound remotely realistic to anyone? Even those of you claiming it is viable? Seriously?

  244. “This particular example is confounded rather by the broader question of how a computer can understand human language.”

    Actually, it’s made quite a bit easier really. If people are positing that an AI can “understand” beauty, it sure as hell better be capable of understanding language. Literature, being tied to language, should be vastly easier to evaluate for beauty than generalized sound or visual media which can be far more detached from semantics.

    Just take a few scribblings from some preschoolers and some highly regarded abstract art and see how much more difficult it is to evaluate the aesthetics of visual media than media created from language.

  245. @Tim F.
    > Yes, a computer will need to know what “beauty” is in order to evaluate it. It’s my position that you won’t be able to teach a computer what “beauty” is.

    But parents teach a computer what beauty is with every new generation of children. The fact that the substrate isn’t silicon is an engineering detail.

    > no one today is going to claim they have an aesthetic theory that explains what is and is not beautiful

    So? I don’t claim to either. My claim is that it is computable, not that I know the algorithm currently. Why do I make that claim? For the reason I already gave. There is no magical essence floating around in the cellular bundles contained in our heads, just computational elements.

    > I see nothing “plain” about critera for beauty. What are these plain criteria?

    Really, you don’t agree that the judgement criteria for beauty are different for painting and for novels? That they are different for the short story and the haiku?

    “But parents teach a computer what beauty is with every new generation of children. The fact that the substrate isn’t silicon is an engineering detail.”

    Huh?

    “There is no magical essence floating around in the cellular bundles contained in our heads, just computational elements.”

    Huh?

    “Really, you don’t agree that the judgement criteria for beauty are different for painting and for novels? That they are different for the short story and the haiku?”

    Huh? I asked what these plain criteria are. I do not see me ever expressing doubt that different works have different criteria or attributes.

    Your answer to licensing available, commercially-valuable information is “babies are computers; someday maybe we’ll give birth to a silicon baby.” Again, this is why open source loses.

  247. @Tim F.
    > I do not see me ever expressing doubt that different works have different criteria or attributes.

    Oh, we must have a disconnect; I thought that is exactly what you said when you asked about my use of “plain” in “plainly a spectrum of judgement”. Nonetheless, let’s not strain at that gnat excessively.

    > Your answer to licensing available, commercially-valuable information is “babies are computers; someday maybe we’ll give birth to a silicon baby.”

    You are mistaking me for someone else. I was only arguing that aesthetics were computationally derivable, I have made no claim about licensing, or commercially valuable information or any such thing.

    > Again, this is why open source loses.

    Why? Because peripherally people like to discuss deeper philosophical questions? That is silly. And again, you are mistaking me for someone else. I am not an advocate for OSS (though not an opponent either); as a matter of fact, I am a professional software developer who mostly makes my living from selling commercial, closed source software.

    Furthermore, open source plainly didn’t lose. It is used in nearly every data center in the world, probably on most of the embedded devices in your house, your phone, your web server, and probably pretty soon in your watch. I tend to agree with some here that OSS doesn’t seem to work real well for some kinds of software, but I don’t agree really strongly. I could readily be convinced otherwise. But just because a tool is not good for some things doesn’t mean it isn’t good for anything, or that it loses.

  248. “You are mistaking me for someone else. I was only arguing that aesthetics were computationally derivable, I have made no claim about licensing, or commercially valuable information or any such thing.”

    No, I understand that you didn’t make the claim originally. However, yes, I think pursuing this digression (and incorrectly) under this topic is certainly indicative of something.

    “I am not an advocate for OSS (though not an opponent either), as a matter of fact, I am a professional software developer who mostly makes my living from selling commercial, closed source software.”

    Did I say somewhere that you couldn’t possibly be a professional software developer who makes money selling commercial, closed-source software?

    “Furthermore, open source plainly didn’t lose.” Didn’t? When? It most certainly does lose. Often.

    “It is used in nearly every data center in the world, probably on most of the embedded devices in your house, your phone, your web server, and probably pretty soon in your watch.”

    And all of which also use closed-source software.

    “But just because a tool is not good for some things doesn’t mean it isn’t good for anything, or that it loses.”

    Who said it loses because it isn’t good at some things? I said it loses because it’s not incentivized by profit, it’s dependent on disparate interests doing whatever they find intellectually compelling rather than what is most useful or what people are most willing to pay for, it’s largely not performed by the leading experts of the field, and it’s foreclosed from interacting with some other commercial works.

  249. @Tim F.
    > However, yes, I think pursuing this digression (and incorrectly) under this topic is certainly indicative of something.

    If you think that pursuing a digression somehow has to relate back to the OP, then I can only assume you are new here.

    > I said it loses because it’s not incentivized by profit,

    Of course it is incentivized by profit, unless you are using the word “profit” in a very narrow pecuniary fashion. As Winter pointed out above, economics is not the study of money.

  250. I recently created a logo employing the silhouette of a man’s face. I didn’t like the aesthetic feel of any of the (expression and facial structure of the) available silhouettes I found as existing clipart, so I found a line art of a face I liked, traced it in Photo Pos Pro, then filled the selection and made a few other tweaks to get it to render well at 16×16 icon resolution as well as higher resolutions. I am not a professional designer. My art skills stopped with reproducing color album covers (e.g. Journey) by hand on the white-painted walls of my bedroom in high school.
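
    As an aside, the fiddly step of making a traced shape read cleanly at 16×16 is mostly a resampling problem. A minimal sketch with Pillow (the file names here are made up):

        from PIL import Image

        # Hypothetical file names; the source is the traced, filled silhouette.
        src = Image.open("silhouette_512.png").convert("RGBA")

        # At 16x16 the choice of downsampling filter matters far more than at
        # display sizes; Lanczos keeps thin strokes from vanishing outright.
        icon = src.resize((16, 16), Image.LANCZOS)
        icon.save("silhouette_16.png")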

    It seems to me professional designers are going to be displaced (in terms of share of graphic design in the market) by clipart and simple manipulations by normal people who need a quick design for electronically displayed media. The pro designers will be called in for those trial balloons which succeed to the point of justifying the effort and expense.

    These sorts of tasks are not necessarily a strong AI problem, and AI could probably be realistically applied.

  251. Could anyone give me a reason why, if there were a library of high-quality and professionally maintained (by varied paid developers with core expertise) orthogonal open source APIs that solved many desired functions in your favorite language, you wouldn’t be willing to pay a few percent of gross sales (perhaps sliding to a fraction of a percent at high volumes), especially if the library eased your development costs by much more than a few percent?
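
    To make “sliding to a fraction of a percent at high volumes” concrete, here is one hypothetical schedule (the brackets and rates are invented purely for illustration):

        # Marginal rates fall as gross sales rise, like tax brackets in reverse.
        BRACKETS = [
            (1_000_000, 0.03),     # first $1M of gross sales at 3%
            (10_000_000, 0.01),    # next $9M at 1%
            (float("inf"), 0.002), # everything above $10M at 0.2%
        ]

        def library_fee(gross_sales):
            fee, prev_cap = 0.0, 0.0
            for cap, rate in BRACKETS:
                portion = min(gross_sales, cap) - prev_cap
                if portion <= 0:
                    break
                fee += portion * rate
                prev_cap = cap
            return fee

        print(library_fee(500_000))     # 15000.0
        print(library_fee(50_000_000))  # 30000 + 90000 + 80000 = 200000.0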

    The only logical reason I can think of is if you can find it for free elsewhere, where another company is funding a larger project monolithically (without such funding you could not assume maintenance will continue), but then the APIs may be designed for that monolithic project and not for maximum modularity and reuse in other projects.

    What am I missing? Is it just because it hasn’t been done before so there is no evidence that it can be done, or is there some fundamental problem?

  252. I posted a comment about Michael Pettis’s hypothesis on China’s capital over-investment model, which I related to my comments here theorizing about the boom and bust caused by shareholder capital versus an optimal long-term growth model.

    There is a counter-perspective on China’s ghost cities that says it is not fundamentally different from the Western equivalents, only different in appearance (aggregated scale) due to central planning and the property law model:

    Michael’s article explains why it is different and fundamentally more bankrupt to build ghost cities in the provinces.

    I have been posting thoughts about the efficiency of stored capital in knowledge production.

    The conclusion is that monetary capital causes booms and busts to the degree there is an impedance mismatch between instances of stored knowledge capital. The relevancy to this blog is that capital investment has to be optimally matched to value of the production that can be generated by the existing knowledge capital, in order to minimize booms and busts.

  253. …tackle and formalize aesthetics have made complete fools of themselves.

    One of the criticisms of computational aesthetics is that most approaches seem to use formalistic aesthetics as the basis. This makes a lot of sense from a computer science perspective but is arguably flawed from an artistic one. The Artificial Artist papers I’ve read appear to do this.

    That’s probably a good enough approach for the “custom art” produced factory-line style for tourists in some places…

  254. “As Winter pointed out above, economics is not the study of money.”

    Relative to the profit I am discussing versus what you and Winter are discussing, your “profit” is asymptotically approaching zero, i.e., it’s not incentivized by profit.

    “It seems to me professional designers are going to be displaced (in terms of share of graphic design in the market) by clipart and simple manipulations by normal people who need a quick design for electronically displayed media. The pro designers will be called in for those trial balloons which succeed to the point of justifying the effort and expense.”

    There has always been a low end of non-professional “designers” who will do low-quality work cheaply. There is no evidence that amateurs will ever displace an entire industry. Someone will always be making enough profit to care to have a better designer produce higher-quality work.

    Do you actually think Apple is going to start hiring you and your clip art in your basement?

  255. JustSaying, there’s far more to the problem of open source competition with commercial software than just making a freely available library. You have to turn that library into an integrated, usable application, and that’s something that open source developers aren’t real good at either.

  256. “If you think that pursuing a digression somehow has to relate back to the OP, then I can only assume you are new here.”

    I think that the discussion here of aesthetics is pathetically embarrassing… and shows that geeks like to daydream on subjects they know nothing about rather than actually learning about them.

  257. @ JustSaying – “in order to minimize booms and busts.”

    The long-term physical evolutionary process frequently benefits from booms and busts, particularly in which environmental stress and mutation combine to accelerate genetic change. Economic booms and busts may also play the same role in social/cultural evolution, and consequently minimizing them could be disadvantageous in the long run. Affluent societies frequently suffer from robustness atrophy because governments tend to insulate their citizens from the low grade hardships that are needed to build resilience.

  258. Tim F. on Saturday, May 18 2013 at 8:57 am said:
    > Relative to the profit I am discussing versus what you and Winter are discussing, your “profit” is asymptotically approaching zero, i.e., it’s not incentivized by profit.

    “Asymptotically”, really? What are the axes on that graph? I really don’t understand what you are talking about. But if you think the profit creators get from making open source software is approaching zero, you are plainly wrong.

    > I think that the discussion here of aesthetics is pathetically embarrassing… and shows that geeks like to daydream on subjects they know nothing about rather than actually learning about them.

    Geeks do like to daydream about subjects they know nothing about, but they are also usually very good at learning them, and they have a disturbing habit of asking “why” a lot, which often utterly undermines the fundamental assumptions of other skillsets. People don’t like uppity outsiders asking uncomfortable questions.

    However, it is the nature of the work that we programmers do to ask these sorts of questions. We frequently have to write software that operates in domains we are not familiar with. A while ago I wrote software for a medical device, and I don’t know much about medicine. But as I was designing the plan for the software I had to understand and attack some of the basic assumptions the medical staff had about process and analysis. This requires stripping it down to the bare bones of what is actually needed, so that the best program model can be built. Which is to say, challenging assumptions in domains in which we are not experts is a core part of our job.

    My experience is that you can tell real experts from faux experts by observing how the putative expert reacts to the questions from uppity outsiders, especially when those questions are serendipitously challenging. Real experts find the questions refreshing and interesting, or, if the uppity outsider is plainly wrong, clearly and simply explain why the uppity outsider is off track, and helps them see it in a new way.

    Faux experts appeal to authority, call the uppity outsider pathetic, embarrassing, say things like “it’s too complicated to explain, you’d never understand, just do it my way” and storm off in a rage.

    I’m a real expert on some things, and a faux expert on others. This description would characterize my behavior in those domains too.

  259. Jessica Boxer, I haven’t seen the remotest evidence of anyone here presenting a good economic argument for replacing a trivial expense ($3-5 thousand dollars of software annually, hell, even call it $10 grand) with open source software that, thus far, shows no real quality or profit potential to attract the true experts in this field. If you are unable to see that for designers the ROI on (seemingly, to freetards) expensive software is the easiest expense they have to justify, that this, in turn, creates relatively large numbers of profit opportunities for the skilled developers in the field, and that relative to the open source alternatives this comparison is a good living versus zero profit, that’s for you to contend with.

    I haven’t seen one intelligent question in this discussion about aesthetics or the capability to reproduce aesthetics using AI, never mind one which is “serendipitously challenging”, whatever you hope that phrase to mean. Sorry if that offends you. I prefer reading a book to listening to stupidity. And, yes, I find unchallenging, misinformed discussion a distraction endemic to ideologues of an OSS bent.

  260. Additionally, the “problem” being “solved” here is one which is already solved. Relatively trivially. And the “attempts” to “solve” the problem here haven’t remotely scratched the surface of trying to understand the domain of design, the processes of a designer. So you can talk all you want about needing to inform yourself of a new domain to provide a programming solution to a software problem — but this is not what’s occurring here. Instead, it’s all pie-in-the-sky nonsense about someday, maybe being able to solve the strong AI problem in order to avoid a trivial licensing expense.

  261. @Tim F.
    > I haven’t seen the remotest evidence of anyone here presenting a good economic argument for replacing a trivial expense ($3-5 thousand dollars of software annually, hell, even call it $10 grand) with open source software that, thus far,

    You are confusing free beer and free speech. You should read Eric’s book. But I will give you a personal example, though not from graphic design, since it isn’t an area I work in.

    I did some work with a company that wanted to integrate their corporate address book so that it downloaded into a program from UPS, the shipping company. This UPS software printed shipping labels. UPS makes money from shipping, not software, but I think you had to pay $50 for the software.

    It had an import function, but unfortunately that import function had a bug in it, which clipped one of the address fields to 50 characters. The software itself could store 255 characters, but the import program clipped it. Unfortunately this limitation made the import function useless, and people had to actually type this stuff in by hand.

    No doubt it was a trivial bug, and I could have fixed it in twenty minutes, however, the software was closed source, and so I couldn’t. The problem here wasn’t the price of the software, it was its closed nature.
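
    To make the shape of that bug concrete, here is a minimal sketch in Python; the names and limits are hypothetical, just following the story above:

        MAX_STORED = 255   # what the address book itself could hold
        IMPORT_CLIP = 50   # the importer's accidental limit

        def import_address_field(value):
            # The bug: clipping at 50 even though the store holds 255.
            # The fix is the one-line change to clip at MAX_STORED instead.
            return value[:IMPORT_CLIP]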

    One of the places where this is most troubling is with data jail, where programs put in place control over the accessibility of your data files, either by secret formats, or by actually physically controlling the data. Again, a personal story, I bought an early ebook from amazon (I think it was amazon) and then they switched their service, so that my book was no longer available to me.

    I am not an advocate of OSS particularly; as I say, I make my living selling closed source software, but at least I do understand the arguments in its favor. If you think Richard Stallman’s concern was that the software’s price was too high, you are completely wrong. Again, if you want to have a semblance of an idea of what this is about, you should read Eric’s books.

    > Sorry if that offends you. I prefer reading a book to listening to stupidity.

    It doesn’t offend me, and if you prefer reading a book, why don’t you? Last I heard, participation was voluntary.

    > And, yes, I find unchallenging, misinformed discussion a distraction endemic to ideologues of an OSS bent.

    And you think that is somehow especially true in the OSS community? I hear foolish, uninformed discussion all the time, arty types are particularly prone to unrealistic models of the world. I guarantee you the quality of discussion here is much higher than on any other blog I have read, notwithstanding that we tolerate a few crazies.

  262. Jessica, here’s how I would summarize the “problems” in this post:

    1. Adobe wants to avoid the revenue loss from pirates who really do not need $700-1500 design software. Those who pirate it will be no loss to Adobe. The small sliver of a market that may be lost where the change in the product offering does become a real financial choice will not be a great loss; in fact, they may recoup it through the clients who still have the incentive to pay a subscription even if the cost is greater than the previous cost of ownership. Anyone who actually “needs” their software will not chafe at the cost of subscriptions; the user concerns will be whether the actual product change is a degradation, an equivalent offering, or an increased benefit.

    2. OSS’s challenge, if they want to challenge Adobe, is to create software that is orders of magnitude greater than what Adobe provides. Why? Because even a poor, low-end designer is likely charging $100 per hour — they have recouped this expense with one or two 10-hour jobs, never mind the tax deductions. In fact, they are probably spending less money on design software than they are on physical or digital assets (fonts, stock photography, textures, clip art, paper, ink, etc.), never mind other expenses like, you know, salaries and benefits. How you create this imagined, far-superior software without being able to attract the best developers in the domain of design with profit incentive, while avoiding patents and commercially-licensed technology is an extremely tall order. A tall order not best served by daydreaming about AI, but rather by focusing on real-world problems and questions. And even then it’s highly unlikely.

  263. “You are confusing free beer and free speech. You should read Eric’s book.”

    No, I am not. I have; it hasn’t aged well (I presume you mean The Magic Cauldron).

    “And you think that is somehow especially true in the OSS community?”

    Yes, most assuredly. Does thinking so mean that I think no other domain has the same problem? Absolutely not. However, do I think that the digressive, anything-that-interests-you quality of OSS is most assuredly one of its biggest problems and that it’s far less of a problem for other domains (i.e., it actually can directly lead to success and profit in art, as an example)? Absolutely. Or if it is a significant problem, does it improve the situation for OSS to simply say, “we aren’t the only ones with this problem”? Absolutely not.

  264. @Tim F.:

    There is no evidence that amateurs will ever displace an entire industry.

    Perhaps you missed the sentence where I posited that the *share* of graphic design for electronic media will decline for professional designers, although the nominal demand for pro designers may increase (because the electronic media space is larger than print media ever was).

    Declining share of operating systems is being displaced to Linux, which was initiated and substantially created by an amateur Linus Torvalds and others.

    I haven’t seen one intelligent question in this discussion about aesthetics or the capability to reproduce aesthetics using AI

    You and others have basically argued that aesthetics is personal opinion and discordant. So I explained that many people can design something they are personally satisfied with, using freeware or shareware.

    maybe being able to solve the strong AI problem in order to avoid a trivial licensing expense

    No need. We can apply weaker AI to improve tools for non-professionals to produce their own cookie cutter designs. And disrupt the share of the professional aesthetics market.

  265. @Jay Maynard:

    You have to turn that library into an integrated, usable application, and that’s something that open source developers aren’t real good at either

    Closed source folk license libraries and use open source libraries, so they could do the integration glue in closed-source for their target market. Modularity is really the key I think, otherwise the closed-source knowledge (and effort) leaks into the open source, which is arguably a fundamental incompatibility.

    @TomA:
    I wasn’t arguing for eliminating booms and busts, rather for making them correct more frequently and less extremely, thus adapting more rapidly with less overshoot waste. This ends up being seen as long-term growth, e.g. all the corrective open source edits on a module such as Eric’s gif code giving it a long life and expanding usage (and thus royalties in my proposed model). I posit the way to accomplish this is to more optimally match the cost functions of producers and consumers (I explained this in more detail upthread).

  266. “Perhaps you missed the sentence where I posited that the *share* of graphic design for electronic media will decline for professional designers, although the nominal demand for pro designers may increase (because the electronic media space is larger than print media ever was).”

    No, I didn’t miss anything. As someone who was in graphic design from 1988 to 2002 and remains closely associated with the field and the people in it, I’m fully aware of the shifting medium and the various revolutions afforded by technology. However, this just creates a cycle of talent: the new amateurs become the old pros. Even if print were to die entirely (unlikely for a very, very long time, since branding will still be placed on physical products), the GIMP’s problems are not isolated to color reproduction in spot printing.

    There have always been and always will be amateurs who can do a sufficient job themselves, for free or cheaper than a professional designer. However, once their business actually achieves success, the cost of a professional designer using professional software is negligible in comparison to the return from a professional design. You are lightyears from demonstrating that the amateur with amateur tools is just as capable as the professional using pro tools.

    “Declining share of operating systems is being displaced to Linux, which was initiated and substantially created by an amateur Linus Torvalds and others.”

    That first clause makes little sense or I’m missing it, but even if I try to read into it, what is the relevance to the problem being discussed?

    “You and others have basically argued that aesthetics is personal opinion and discordant.”

    I would say this is not even remotely my view.

    “So I explained that many people can design something they are personally satisfied with, using freeware or shareware.”

    And when they want a business that appeals to a customer base larger than their own, individual personal satisfaction?

    “No need. We can apply weaker AI to improve tools for non-professionals to produce their own cookie cutter designs. And disrupt the share of the professional aesthetics market.”

    No, you can’t. And if you tried, you’d be solving the wrong problems in the most inefficient manner. Just saying over and over that it can be done does not make it so.

  267. Two open software libraries I use regularly: JFreeChart and WorldWindJava.

    If you don’t really care what your charts look like, JFreeChart is great; but the output looks like you don’t care about UI.

    WorldWind Java is great, but FSF zealots don’t like the license; weirdly, Google Earth is somehow acceptable to them.

    Eh, some OSS libraries work, but weird FOSS politics and a lack of attention to aesthetics hamper the effectiveness of others.

  268. @Tim F.:
    I think you have now missed my point three times: the majority of the internet is websites not earning a (significant) commercial profit, e.g. blogs. I am positing that most of the publishing will be done without professional designers. Of course pro designers will be employed for the declining percentage of the publishing market which is commercial.

    In short, I am positing your much vaunted professional aesthetics is too slow for a dynamic publishing market, and is being pushed towards irrelevance.

    You can also refer to my comment upthread (which you admitted you did not understand) where I took your point about aesthetics being highly discordant and variable, and posited that basically commercial advertising is appealing to some primordial instinct to induce a groupthink, e.g. sex.

    No, you can’t

    I suspect you have no clue that I meant we can create tools for non-pros that facilitate their cookie cutter style of design and potentially automate it with weak AI. Refer to my description upthread of the logo I created using clipart as a starting point and then tracing and filling to achieve a silhouette. That would be entitled the Silhouette feature (filter).

  269. A “typical” OSS story I saw unfold, about why free is important.

    A complete field of research depended on a closed source piece of software. This software had once been developed at Bell Labs, but had been spun off years ago. Then one day, MS bought the company just for the developers. All license extensions and all development were cancelled immediately. The users begged to be allowed to take development into their own hands, which was cheerfully refused by MS.

    The users migrated to a competing GPLed product and paid the developers a few bucks to polish a few rough edges. The whole field vowed to stay open source.

  270. Winter, there was also a case of Microsoft destroying a market by acquiring and then open sourcing one of the major players. (Bars & Pipes for the Amiga, in case you’re curious; and Microsoft acquired it to get access to its developer, who went on to create DirectMusic for Windows.) The destructive effect of open source that Jay Maynard describes is real; it puts real people’s livelihoods at risk; and Microsoft is not ideologically opposed to using it to further its own business aims.

  271. @Tim F.
    > 1. Adobe wants to avoid the revenue loss from pirates who really do not need $700-1500 design software.

    I think that is a generous analysis of what Adobe is doing. My view is that the old days of upgrading every two years to get useful new features and significant bug fixes are gone. That was the implicit subscription model of the past. Since it is gone, they are moving to a more explicit one.

    However, I don’t particularly have a problem with that. I subscribe to a variety of web services that have a very similar model. However, I am picky. I subscribe to ones that let me pull my data out of their jail. The thing that worries me about both this move by Adobe and the walled garden of Apple is the taking away of control from users over the software they have paid for, and the content they create with it.

    > 2. OSS’s challenge, if they want to challenge Adobe, is to create software that is orders of magnitude greater than what Adobe provides.

    Not true, they need to meet the needs of specific users, accounting for their growth, and provide suitable interoperability. I don’t claim they do this, in fact the temperature here seems to be that the GIMP doesn’t. I don’t design stuff for a living, so don’t have an opinion on the matter. Nonetheless, aside from legal barriers, there is no reason why they can’t.

    > How you create this imagined, far-superior software without being able to attract the best developers in the domain of design with profit incentive, while avoiding patents and commercially-licensed technology is an extremely tall order.

    Creating a piece of software with a profit incentive, and with patent support and with commercially licensed technology is a tall order too. However, you only look at one side of the coin. It is extraordinarily difficult to make good software in large companies, because large companies have sclerotic software development departments. Furthermore, it is just a plain fact that one highly motivated, highly skilled programmer can get more useful work done on a weekend than a group of ten or twenty mediocre programmers can in a week, or even a month, especially so when the weekend guy is not weighed down by the burdens of the red tape of super large companies. I’ve seen it, the productivity level of most corporate programmers is dreadful, it may be below zero in many cases.

    And finally, the curious thing is that the very best programmers in a domain are attracted to OSS, because they create it out of an intrinsic love for the problem domain. I have hired, managed and fired many programmers, and I can tell you intrinsic motivation is a thousand times more powerful for creating awesomeness than the extrinsic motivation of a paycheck.

    Nonetheless, I do agree that there are ugly legal barriers, such as patents and copyright that are hard for OSS to overcome. That of course is because they are evil, but Adobe certainly has a fiduciary duty to get in bed with the devil to boost their EPS.

    > No, I am not. I have; it hasn’t aged well (I presume you mean The Magic Cauldron).

    No, I mean The Cathedral and the Bazaar. The Magic Cauldron is a chapter in that book. You can get it at your local library, or many fine book retailers. It is the manifesto, and the best economic analysis of OSS. You obviously aren’t familiar with its arguments, or you wouldn’t confuse free beer and free speech as you do.

    I doubt there are many people here who care about paying for the software, they do care about paying to have someone put handcuffs on them, and their work.

    > Absolutely. Or if it is a significant problem, does it improve the situation for OSS to simply say, “we aren’t the only ones with this problem”? Absolutely not.

    It is the nature of curious people. It is in fact a second order consequence of the thing that makes OSS successful, namely the innate curiosity of hacker-folk. Without that curiosity they’d follow your blind path and genuflect to anything the man tosses their way.

  272. @Jeff Read
    > The destructive effect of open source that Jay Maynard describes is real; it puts real people’s livelihoods at risk

    So? That is like saying the destructive effect of fuel injection technology on carburetors is real, it puts real people’s livelihoods at risk. The purpose of software development is to get people the software they want and need, its purpose is not to provide employment for programmers.

    The plain fact is that there is a shocking shortage of even mildly competent programmers in the USA. I know, I hire and interview them and I’d say that perhaps 1 in 30 of the people I talk to even reach the standard of mediocre. So good software developers need not worry. Bad ones should go do something else.

    1. >That is like saying the destructive effect of fuel injection technology on carburetors is real, it puts real people’s livelihoods at risk. The purpose of software development is to get people the software they want and need, its purpose is not to provide employment for programmers.

      Careful. You’ve followed a valid counterargument with an invalid one here.

      It is indeed ridiculous to fret about programmers being done out of jobs by open source, and for exactly the same reason we shouldn’t be upset that buggy-whip-manufacturers lost markets when automobiles became common. As human beings invent better, more efficient ways to do things job markets are going to perpetually turn over in response; so far, the net effect of such changes has always been to generate more economic activity and jobs than they destroy.

      But you’re on shaky ground when you propose that software development is all demand-driven. As with fine art and many other forms of useful artisanship, the expressive desires of the makers shape what is produced. What demand is there for robotfindskitten?

  273. @esr
    > we shouldn’t be upset that buggy-whip-manufacturers lost markets when automobiles became common.

    Dude, I totally went out of my way to avoid the old cliche of “buggy whips” with my brilliant new cliche of fuel injection. See that is the problem with cliches, they are SO overused.

    > As with fine art and many other forms of useful artisanship, the expressive desires of the makers shape what is produced.

    That is a fair point, it is certainly a push/pull thing. And you would be right to say I don’t get to define the purpose of software. But I do know that the purpose of nearly all commercial entities is to produce a profit. It is their duty to boost demand and minimize the cost of satisfying that demand.

  274. @esr:

    the expressive desires of the makers shape what is produced

    The freedom-of-expression is critical. Jay Maynard wrote that at his age, he only wants to work really hard if on something that has personal interest to him. As I wrote above, I don’t want to have to go work for Corel and be subservient to the stockholders’ incongruent demands (see my reply to DMcCunney upthread), in order to touch on innovation of tools for graphic design and image editing.

    The potential of open source is that I can go take some existing code and morph it to what ever I want to accomplish, with much less work. If we can increase modularity of code, perhaps even one person could create whole projects which are mostly mashups of pre-existing code. Then the Theory of the Firm is more disrupted.

  275. JustSaying, your argument that software can be broken down into a bunch of building blocks and then bolted together is true and yet misses the point utterly. Software modularity and reuse only goes so far, and I think you’ll find that once you get out of the common problem spaces, the availability of modules is thin to nonexistent. (Not every module is as widely applicable as Webkit.) If I write a module that’s only good for one problem, then have I written a module at all?

    And this doesn’t even begin to address the problems that Stallmanite politics and related licensing differences throw into the problem. If I’m not interested in the Stallmanite utopia, and the only module available is GPLd, then I’m SOL.

  276. Eric: ” As with fine art and many other forms of useful artisanship, the expressive desires of the makers shape what is produced.”

    True. OTOH, how does this help free the world from the tyranny of Adobe? The expressive desires of the makers of open source image manipulation software haven’t managed to produce something usable.

    1. >True. OTOH, how does this help free the world from the tyranny of Adobe?

      I didn’t write that sentence to involve myself in that argument. The fundamental economics tells me that the proprietary model is in general doomed (with specific exceptions I’ve described in my papers). That doesn’t tell me specifically how the doom will eventuate, nor does it guarantee that the open-source culture you and I know will write the Photoshop-killer. It only tells me that at some point in the future somebody (or some consortium of somebodies) will find that the internalized cost of writing a replacement is lower than Adobe’s rent.

      I’m seeing a lot of logic errors on both sides of the Photoshop-vs.-GIMP argument in this thread. But for the moment I’m going to limit myself to pointing out yours…

      >The expressive desires of the makers of open source image manipulation software haven’t managed to produce something usable.

      By whose definition of “usable”? I successfully used the GIMP to edit the cover of The Art of Unix Programming. The draft version had a blank space where the disciple is now; I scanned him out of a copy of Zen Comics and composited him in. Nobody at Addison-Wesley tried to tell me the result wasn’t professional-quality; about all their artist did was add a drop shadow and that is by Goddess what we shipped.

      I’m not unique. There are any number of web designers and graphics artists who rely on the GIMP and proudly advertise the fact. And given that it sometimes enables people like me who are not professional designers to do work pro designers don’t sneer at, I think the line that it’s unusable garbage is pure horse exhaust.

  277. Jessica: “The purpose of software development is to get people the software they want and need, its purpose is not to provide employment for programmers.”

    The connection between the two is called the free market. People who provide software users want and need are able to sell it and make money at it, and thus goes the economy, just as with any other producer/customer relationship.

    There are holes there, to be sure; there is software in demand that does not manage to actually pay people money for maintaining it. Those people maintain it anyway, as a labor of love, with the result that some of the people we depend on most are living on other people’s couches and eating ramen. I’m sure that there are leftists who will cite this as an example of the failure of free markets; to me, it’s more that the market has found people willing to provide their labor at low or no cost, and so it has reached the price equilibrium.

  278. That doesn’t tell me specifically how the doom will eventuate, nor does it guarantee that the open-source culture you and I know will write the Photoshop-killer. It only tells me that at some point in the future somebody (or some consortium of somebodies) will find that the internalized cost of writing a replacement is lower than Adobe’s rent.

    Yes, and in the long run we’re all dead.

    By whose definition of “usable”?

    As has been said many times before, not by yours.

    Hackers have more adaptable brains than most, and are more tolerant of wonky and just plain bad user interfaces than most. (If you don’t believe me, look to the extreme longevity of vim and emacs as an example.) So something that gets the job done for you may be completely unusable to 99% of the people out there.

    Secondly, the industry has literally evolved around Adobe products. Just about everybody doing prepress work has used Photoshop. When such a designer sits down in front of GIMP, they expect their Photoshop muscle memory to work and get frustrated when it doesn’t. Then they get justifiably angry at the developers for expecting them to be retrained on this new tool, completely killing their productivity during the retraining process. Yes, this is GIMP’s fault, because GIMP did not properly address its ostensible target audience — people for whom Photoshop is the industry standard.

    Understanding that UX matters and that most people’s concept of a good UX in no way matches your own is key to understanding why the Macintosh has so flattened Linux on the desktop — even in technical fields — that Linux is back down to statistical-noise numbers in terms of desktop market share.

    I successfully used the GIMP to edit the cover of The Art of Unix Programming. The draft version had a blank space where the disciple is now; I scanned him out of a copy of Zen Comics and composited him in. Nobody at Addison-Wesley tried to tell me the result wasn’t professional-quality; about all their artist did was add a drop shadow and that is by Goddess what we shipped.

    Well, you can expect your publisher to make that sort of accommodation because you’re Eric fucking Raymond. It’s just like how Vernor Vinge can use emacs, or the butterfly effect if he wanted, but if you’re a new author shopping your manuscript around, it has to be in Word.

    It may be interesting to inquire how the publisher prepped the cover image you submitted for press. Doubtless it still had to go through a color-separation phase into CMYK, PANTONE, or other spot colors. I betcha the program they used to do that was Photoshop. It sure as fuck wasn’t the GIMP.

    See, that’s what separates a professional tool from an interesting toy for dilettantes: using Photoshop, the professional designer can color-sep his image for press at his desk, then go back and edit it some more if he doesn’t like the results. He can submit it to the publisher already separated, saving them some steps. He can make multiple phases of the workflow more productive. By choosing the proprietary, expensive, buggy, DRM-encrusted Photoshop, over the open source, free-love, multiplatform, scriptable but completely inadequate GIMP.

  279. @Jeff Read
    You tell us professional designers refuse to use anything but 100+% Photoshop & Pantone. Then you add it is illegal in the US to produce a 100% copy of those applications and that designers do not believe in an alternative and won’t invest time in it.

    So, how is the fact that there is no demand for, no belief in, and an illegal status of a suitable alternative a fault of the open source development model?

    1. >So, how is the fact that there is no demand for, no belief in, and an illegal status of a suitable alternative a fault of the open source development model?

      I’m not sure that even matters. The point Jeff (along with several other GIMP-haters here, including Jay Maynard) is determinedly ignoring is that there is a large and identifiable constituency of people whose needs the GIMP does in fact meet. They conflate a real issue (GIMP can’t do CMYK separation and a couple other high-end tasks specific to print design) with a spurious one (its alleged unusability, a claim that is refuted by the fact that quite a few web designers and people like me editing production graphics do actually use it quite happily).

      If the GIMP-haters stuck to real problems, they’d have a real case. But they don’t, and this makes their judgment on the whole topic suspect. Matters are not helped by the fact that several of the GIMP’s defenders here have babbled a lot of stupid shit about some imaginary future magic wand of computational aesthetics. This is missing the point just as surely as asserting that the GIMP sucks because its UI doesn’t behave like a Photoshop clone.

      Both sides need to get their heads out of their asses. When I speak of rent overrunning the transition costs, the transition costs include both developing the missing features and retraining for a new UI. It doesn’t matter how you measure those costs. What matters is that as Adobe’s rent goes up, it will eventually exceed them.

      One possible result is parallel to what happened to Apache and the Linux kernel – large companies with graphic-design requirements seconding designers and programmers to make the GIMP do what they need it to do, knowing that by pooling talent they can reduce costs without giving away core business knowledge. This model is a very general way to fund open-source infrastructure that doesn’t require anyone to make large asymmetrical commitments to commons development before seeing benefits.

  280. Furthermore, it is just a plain fact that one highly motivated, highly skilled programmer can get more useful work done on a weekend than a group of ten or twenty mediocre programmers can in a week, or even a month, especially so when the weekend guy is not weighed down by the burdens of the red tape of super large companies. I’ve seen it, the productivity level of most corporate programmers is dreadful, it may be below zero in many cases

    Bullshit. You are applying zero value to documentation, quality control, and most importantly scale when you make such pronouncements.

    Large corporate teams are large because the projects are larger in scope. I have never seen even a modest team of 20 FTEs assigned to a small or even medium task. Likewise I have never seen a single developer complete a medium sized production quality project (100ksloc+) solo in any reasonable amount of time (aka not measured in years) even if they can slam out 20k sloc on a good weekend project.

    This is exactly like comparing a great handyman’s ability to construct a shed from scratch in a weekend with the hordes of workers that can’t do anything in a week in the construction of a building that needs to meet code.

    And finally, the curious thing is that the very best programmers in a domain are attracted to OSS, because they create it out of an intrinsic love for the problem domain. I have hired, managed and fired many programmers, and I can tell you intrinsic motivation is a thousand times more powerful for creating awesomeness than the extrinsic motivation of a paycheck.

    The best and brightest are getting paid to code. That they tend to code anyway when their work doesn’t correspond to their passion is not directly tied to any merit intrinsic to OSS. These weekend projects today are just as likely, if not more, to end up as proprietary android and iOS apps.

  281. @Jessica
    “I’d say that perhaps 1 in 30 of the people I talk to even reach the standard of mediocre.”

    I knew the distribution of coding skills was skewed. But I never imagined 29 out of 30 programmers were below average ;-)

  282. @Winter
    > I knew the distribution of coding skills was skewed. But I never imagined 29 out of 30 programmers were below average ;-)

    Average is not the same as adequate. The average politician is certainly not adequate, the average pizza is more than adequate.

    Lest we be confused as to the mathematical meaning of average, did you ever consider that the vast majority of people have an above-average number of legs?
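
    To spell out the legs arithmetic: a handful of people with fewer than two legs drags the mean just below 2, so nearly everyone is above it.

        legs = [2] * 999 + [1]        # one amputee per thousand people
        print(sum(legs) / len(legs))  # 1.999: 999 out of 1000 are above average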

  283. @ Jessica Boxer – “Average is not the same as adequate. The average politician is certainly not adequate, the average pizza is more than adequate. Lest we be confused as to the mathematical meaning of average, did you ever consider that the vast majority of people have an above-average number of legs?”

    Jessica, you are a gem.

  284. Bullshit. You are applying zero value to documentation, quality control, and most importantly scale when you make such pronouncements.

    She is making an argument about quality of programmers’ code output, in which documentation and quality control are separate, downstream concerns — particularly in a corporate environment in which devs, tech writers, and QA personnel are distinct persons, each having one role.

    Large corporate teams are large because the projects are larger in scope.

    I’m with Jessica on this. Though she has more real-world experience with development in large corporate shops than I.

    My observation has been that large corporate teams are large as a form of insurance against the old “bus error” problem. It’s possible to get a lot done with a small team of one, two, or up to ten really good programmers; and by the time they’re finished they will have produced less code that does the same thing as the code a much larger team would have produced, because of Conway’s Law. But you are putting a lot more chips on each individual programmer; if any one leaves the company or is incapacitated you have lost a lot more intellectual capital. Plus, good programmers are more difficult to find, retain, and train.

    “Well,” reasons management, “n mediocre programmers are equivalent in brainpower to one good programmer; and if we lose any one mediocre programmer, our losses are much smaller. Plus, we can find a replacement on the order of weeks rather than months; and provided that they have the requisite number of years in J2ee servlet container factory factories, we can train them up on the order of months rather than years.” Except it doesn’t work this way at all. Fred Brooks knew this back in the 60s. Jesus Christ, man.

    Fun fact: I once tried building a J2ee app on my personal box, and quickly ran into a solid brick wall just trying to get “hello world” up and running. Then I realized — I’m not supposed to be building these apps by myself at all. The entire framework is designed to be used by twenty-man teams at the smallest, each of whom is coding one component of an overall whole no one member of the team can truly understand. And it’s built that way so that different people on the team don’t step on each other’s toes — for no other reason than this. Not for “scalability” reasons. Tim Berners-Lee wrote an information service that scales to planet-size on a fucking NeXTstation.

  285. I knew the distribution of coding skills was skewed. But I never imagined 29 out of 30 programmers were below average ;-)

    Primarily because 97.32% of all statistics are made up.

    DeMarco and Lister, two decades ago or so, did coding war games.

    http://dwp.bigplanet.com/pdkconsulting/nss-folder/pdfdownloads1/Why_Measure%20_DeMarco3.20.01.pdf

    When you can point me to any study at all that indicates that most programmers are barely competent/marginal/adequate/whatever I’ll believe that our industry is mostly composed of incompetent people. My experience is that most coders know what they are doing or quickly move out of actual code development into something else. On the other hand I have been relatively picky where I work but it ranges from lowball defense contractors to government agencies to startups to academia.

    If you’re calling 30 people for phone interviews and 29 are idiots, then you’re not pre-screening correctly. I can believe that 29 out of 30 resumes don’t warrant a call back for a variety of reasons.

    Definition of productivity is problematic across organizations and sometimes within organizations. What do you want to use as a common unit of measure? Function Points? Use Case Points? Story Points? Team velocity for small projects will be much higher than aggregate velocity of multiple teams working a large project.

    Does functional code for incorrect features count for anything? Obviously someone blew it on the requirements analysis side but often the customer is part of the error there. Does functional code for hard to use features *cough*GIMP*cough* count? If the users largely reject a solution is any of that code actually useful regardless of code quality?

    When counting the mythical effectiveness of the open source developer, do you use the bullshit metrics provided by Ohloh? Because for the few projects where I have some visibility into actuals, I can tell the values provided are hugely overestimated. For example, Ohloh lists WorldWind Java as 122 staff years effort. Given it started in the 2005-2006 time frame that’s 15-17 staff years per year and I know the WWJ team wasn’t nearly that big. And that’s a set of government developers (actually all the ones I have met are really sharp but there’s probably a negative impression of devs that work for the USG vs say ones that work for Google).

    GIMP has an “estimated cost” of $11M for 201 staff years effort. LOL.

    I guess their excuse is that they are using COCOMO for costing estimation but I see these numbers used by FOSS proponents and I wonder if they really don’t know better or if they don’t care that they are lying by statistic.

  286. She is making an argument about quality of programmers’ code output, in which documentation and quality control are separate, downstream concerns — particularly in a corporate environment in which devs, tech writers, and QA personnel are distinct persons, each having one role.

    Industry average varies from 10 to 50 lines of code per day. That includes requirements analysis, design, testing, documentation, etc. These are the industry average numbers that software estimation models use in lieu of you having historical metrics to use.

    Anyone who believes that the average corporate developer isn’t able to write large amounts of code in a day on days where they actually spend coding has a very very dim view of our industry. And in my opinion a very incorrect one. Even when you take into account offshoring.

    Anyone that doesn’t include requirements analysis, design, documentation, q/a, etc isn’t IMHO doing good software engineering. A discipline that is 90% useless (meaning you don’t need a SwE as long as you maintain a semi-rational development methodology) except when you are talking about developing large scale software (1 MSLOC or more) or mission critical software.

    I’m with Jessica on this. Though she has more real-world experience with development in large corporate shops than I.

    For large developmental teams the code size is huge. As you move beyond 1 MSLOC the code complexity increases tremendously and you have to apply more rigor to keep things moving.

    Do you really believe that the managers at Adobe have not heard of Brooks? Do you really think that Adobe hires mediocre dev staff as a matter of policy because some might leave? Or Apple? Or IBM? Or MS? Or Google? Or even Yahoo?

    Let me clue you in. No matter how good you are, your leaving is rarely more than a temporary inconvenience. ESPECIALLY in an organization composed of mostly top 10%ers anyway. Top software companies don’t hire mediocrity as a matter of policy. These are the companies creating large software products.

    And Tim wrote an information service that could scale to “planet size” on a NeXT box because in 1989 the internet “planet” was pretty damned small at the time and if all 1.1M users decided to hit his web page at CERN at once the magnesium case would have caught fire and burned down the building.

    For me a “very large team” is along the lines of the Win7 team – 25 feature teams of 40 or so people each. This includes all coders, testers, writers, managers, etc. For MS the ratio of testers to coders tends to be 1:1.

    A modestly “large team” is about the size of the core Adobe Photoshop team (about 100). CS6 was 4.5+ MSLOC. That’s a very modestly sized team for maintaining and expanding a codebase of that size but a lot of it is legacy and it’s been around a long while.

    For mid-sized teams of around 20-30 developers it’s often the worst of both worlds. Small teams can often beat medium sized teams on small projects (100 KSLOC) because you typically can’t efficiently decompose a small problem sufficiently to get working team sizes down to the 4-6 range. But no small team (4-8…whatever you feel is the largest practical for 2 pizzas) is likely to be able to complete a 1 MSLOC project in the time frame of most product cycles. A properly structured team of 20-30 developers + support staff can.

  287. @Nigel
    >Bullshit. You are applying zero value to documentation, quality control, and most importantly scale when you make such pronouncements.

    Not at all. In fact those mediocre programmers are even worse at documentation and QC than they are at coding. Furthermore, documentation for most large software systems is close to useless most of the time, except certain specific types of documentation (specifically stuff that is auto generated out of the code, or extracted from the code) and some higher level architecture documentation that the drone coders aren’t allowed near.

    Scale is a different matter. First of all, let me say that most big software projects are big because of incompetence, shockingly poor problem analysis, even worse design, and bureaucratic bullshit. Nonetheless, there are no doubt some projects that have to be big.

    However, those crappy developers are THE WORST thing you could have on large projects. They just compound the bucketloads of bullshit that such projects already are. Big software projects remind me a lot of the government: governments do everything very, VERY badly, but some things can only be done by the government. Nonetheless, those things that can only be done by the government are still done very, VERY badly.

    So with large software projects. Sometimes you have to throw a quarter of a billion dollars at a project (like one I am pretty familiar with), nonetheless you have to do so with the realization that 95% of that will be wasted on bullshit. However that is the price you have to pay for the 5%.

    FWIW, OSS has proved pretty good at certain types of very large projects. Though they also have a shockingly high amount of waste too, it is just less visible.

    > never seen a single developer complete a medium sized production quality project (100ksloc+)

    But there is your problem right there, Nigel. I know of one company that spent millions of dollars and > 100klocs developing a bug tracking/task management system for their project. Took two or three teams to get it right. On another project, people just installed Bugzilla. So there you go. One guy did the equivalent of 100kloc of code in an afternoon.

    That is the whole point, really good developers don’t do the same things as these screwballs, but still achieve the same results. This isn’t fantasy. I see it all the time. It is absolutely common practice in commercial software development.

    > This is exactly like comparing a great handyman’s ability to construct a shed from scratch

    You are completely wrong. Programmers and handymen/builders are completely different in character. There is no engineer that can build a bridge 100 times faster than the average engineer. However, that is just commonplace in the software development world. Your analogy simply doesn’t work because the nature of the engineering is very different.

    > The best and brightest are getting paid to code.

    Some are, lots aren’t and regardless, lots of OSS comes on the back of those paid gigs. Lots of people write code for money, code that never gets used. They then go home and write jquery, or php, and change the world.

    > These weekend projects today are just as likely, if not more, to end up as proprietary android and iOS apps.

    This is a new phenomenon. I don’t know that world well enough to comment.

  288. “They conflate a real issue (GIMP can’t do CMYK separation and a couple other high-end tasks specific to print design) with a spurious one (its alleged unusability, a claim that is refuted by the fact that quite a few web designers and people like me editing production graphics do actually use it quite happily).”

    I can’t speak to professional designers. What I can speak to is my personal experience in trying to use the GIMP to do what I had been doing in Photoshop.

    Objects in Second Life are made up, for the most part, of primitive shapes with faces to which textures are applied. A Second Life texture is a JPEG, GIF, PNG, or TGA image in a power-of-2 size not larger than 1024 pixels in either dimension; the viewer converts the image to JPEG2000 (why, I have no idea at all, but it does) before uploading it to the Second Life asset servers. It’s sent back as JPEG2000 and then displayed by any viewer that has the object or avatar in its viewing area. (This is a simplification, but nothing essential to this discussion is lost.)
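
    For concreteness, the size rule encodes to a few lines of Python; the helper name here is mine, and it just restates the constraints described:

        def is_valid_sl_texture_size(width, height):
            # Each dimension must be a power of two and no larger than 1024.
            def power_of_two(n):
                return n > 0 and (n & (n - 1)) == 0
            return all(power_of_two(d) and d <= 1024 for d in (width, height))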

    What I was trying to do was make a sign. I wanted to make a 512×512 texture with a background image and text. I have templates for this that are PSD files with the background on one layer and different pieces of text on different layers, with various fonts, colors and layer blending options used to create a consistent design style. I originally did this in Photoshop because that’s the tool I had available; I’ve owned legitimate copies of Photoshop since 2001.

    I took the PSD into the GIMP and started working on it. The UI I could get over; while it sucked compared to Photoshop, I figured that was a matter of getting used to it, and I didn’t do this job enough that the learning curve was a significant cost. What I couldn’t get over was that the output just plain sucked. Despite several attempts, I was unable to make a sign that looked like the signs I’d made in Photoshop.

    This result was not acceptable to me. I went back to Photoshop and haven’t looked back.

    There’s no CMYK separation involved, no spot coloring, no high-end design work. Just manipulating graphics and text to produce an image, something the GIMP users here claim it can handle well. That was not my experience. If it won’t do that job right, how can it be taken seriously?

  289. @nigel
    > Industry average varies from 10 to 50 lines of code per day.

    See, this is the essence of where you and I are on a completely different page. For one thing 10-50 lines of what sort of code? Perl is very different than Ada. But the main point why loc is such a dreadful measure is that it doesn’t measure the end result, and what alternative paths could have been used to achieve it.

    If I want to render a gif image, I could write fifty thousand lines of code to process the gif format, or I could just use an off the shelf library in one line of code. Who is more productive? The 50,000 line guy, or the 1 line guy?
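
    For illustration, the one-line version, assuming an off-the-shelf library such as Pillow is available:

        from PIL import Image

        img = Image.open("picture.gif")  # all the GIF decoding is the library's problem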

    If I refactor a large system to extract out common features into, for example, a base class, or a shared function library, my total LOC for that timespan could easily be negative, perhaps large negative. Is that not a productive activity?
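
    A tiny sketch of that kind of refactor, with hypothetical classes: hoisting a duplicated save() out of two classes into a shared base shrinks the system’s net line count even though a new class appears.

        class Persistable:
            # Logic that previously existed as two duplicate copies.
            def save(self, path):
                with open(path, "w") as f:
                    f.write(self.serialize())

        class Invoice(Persistable):
            def serialize(self):
                return "invoice data"

        class Receipt(Persistable):
            def serialize(self):
                return "receipt data"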

    And that is the difference between great programmers and ridiculous ones — knowing when not to write code, and when they do write it, how to write it in such a manner as to meet the core quality criteria that are being sought with the minimum amount of effort.

    10-50 lines of code a day is a totally bullshit measure. It is a classic management blunder that comes from a failure to recognize how different software engineering is from other types of engineering. Managers need to measure something, even if the measure is meaningless, which LOC absolutely is.

  290. They conflate a real issue (GIMP can’t do CMYK separation and a couple other high-end tasks specific to print design) with a spurious one (its alleged unusability, a claim that is refuted by the fact that quite a few web designers and people like me editing production graphics do actually use it quite happily).

    As Jay pointed out, his problem had nothing to do with color separation. Rather, GIMP couldn’t scale his Second Life textures down without making them look like ass. You’re in pretty sad fucking shape as an image editor if you can’t even scale images properly.
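
    For comparison, decent downscaling is a few lines with an ordinary imaging library. A sketch using Pillow, where picking an explicit high-quality filter is the whole game:

        from PIL import Image

        src = Image.open("texture_1024.png")
        dst = src.resize((512, 512), Image.ANTIALIAS)  # high-quality (Lanczos) resampling
        dst.save("texture_512.png")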

    And GIMP is just full of boners like this. GIMP “plug-ins” are separate processes that snarf in image data through… pipes or shared memory or something, I don’t know. Some dev wanted to stroke his throbbing Unix dick. The result is that doing any sort of image processing is sloooooooooooooow. And that’s for C plugins; I haven’t even gotten into the use of Scheme yet.

    By contrast, Photoshop plug-ins are, effectively, DLLs; they go right into the Photoshop process and have access to the exact same image data in RAM that Photoshop does. Usually, they are compiled so as to be optimized for the latest hardware Apple has available, so they are as fast as an image-processing algorithm can possibly be, operating on something as complex as a PSD in a pro workflow.

    So it’s not just that GIMP can’t do CMYK or that it has a shitty UI. These are just symptoms of general poor quality and the fact that the GIMP team does not take the goal of making a decent image editor seriously.

  291. Another thing: it wasn’t until very recently that GIMP had truly sizeable brushes. Before that, it had this PC Paintbrush bullshit where each brush was a bitmap and you could apply a scaling factor to the bitmap. So if you wanted, say, a size 30 brush, the biggest was 19. But you could apply a scale factor of 2 to a size 15 brush. Or something.

    Anyway, it was bullshit, and meanwhile, Photoshop had brushes that could be dynamically resized in size and hardness, either manually or via the pressure axis of your Wacom tablet. Or both.

    Ah, but does Photoshop have the pepper brush?

  292. @nigel
    Let us take a concrete example: Windows CIFS/SMB and Samba.

    Now compare team and code size and quality of documentation.

    Hint, Windows SMB is at the core of MS’s business network stack. They have assembled the best team money can hire. Samba is one of those Free software projects run by a few volunteers.

  293. Just a few numbers. Samba is around 2MSloc and the team has around 40 members with commit rights of which ~15 seem to be active.

  294. So with large software projects. Sometimes you have to throw a quarter of a billion dollars at a project (like one I am pretty familiar with), nonetheless you have to do so with the realization that 95% of that will be wasted on bullshit. However that is the price you have to pay for the 5%.

    After analysis of the failure rate of large software engineering projects, it didn’t appear to me to be much worse than that of any other large engineering discipline. Civil engineering, a far more mature discipline, had its fair share of hundred million+ fiascos.

    A 250M software project is very large. Using a burdened rate of 180K/developer/year, that’s over 1,000 staff years (roughly 250M / 180K ≈ 1,390).


    > never seen a single developer complete a medium sized production quality project (100ksloc+)

    But there is your problem right there, Nigel. I know of one company that spent millions of dollars and > 100klocs developing a bug tracking/task management system for their project. Took two or three teams to get it right. On another project, people just installed Bugzilla. So there you go. One guy did the equivalent of 100kloc of code in an afternoon.

    That’s a really bogus example and you know it. That many folks have used off the shelf solutions vs custom development is not any sort of indicator of the individual productivity of developers.

    In any case, I would almost never pick Bugzilla over something else. It also isn’t a very effective task management system. In the OSS vs proprietary cost analysis I would have to consider the costs between an Atlassian license for Jira/Fisheye/Bamboo and manually integrating Redmine, Jenkins, etc.

    Want to bet that most OSS developers would make the wrong choice in that scenario?


    That is the whole point, really good developers don’t do the same things as these screwballs, but still achieve the same results. This isn’t fantasy. I see it all the time. It is absolutely common practice in commercial software development.

    The point is that your example is one of management failure not developer incompetence.


    > This is exactly like comparing a great handyman’s ability to construct a shed from scratch

    You are completely wrong. Programmers and handymen/builders are completely different in character. There is no engineer that can build a bridge 100 times faster than the average engineer. However, that is just commonplace in the software development world. Your analogy simply doesn’t work because the nature of the engineering is very different.

    Except that it’s not. The observed difference between the best and worst developers is 10-1.

    http://dwp.bigplanet.com/pdkconsulting/nss-folder/pdfdownloads1/Why_Measure%20_DeMarco3.20.01.pdf

    It’s sad that 30 years later we’re still quoting the same study but if you have a study that shows the differential is 100-1 you can provide it and I’ll believe your made up statistic isn’t made up.

    And you completely miss the point. The increase in complexity from the small projects solved by small teams to the large projects solved by large software teams is not linear.

    You cannot simply multiply the productivity of a small team on a 100KSLOC project and project the performance of that team on a 1MSLOC project. It doesn’t simply take 10 times more effort.

    Anyone with large project experience knows this.

  295. > Industry average varies from 10 to 50 lines of code per day.

    See, this is the essence of where you and I are on a completely different page. For one thing 10-50 lines of what sort of code? Perl is very different than Ada. But the main point why loc is such a dreadful measure is that it doesn’t measure the end result, and what alternative paths could have been used to achieve it.

    A large range is provided in part due to language differences. Given you are an expert in large scale software development, presumably you are more than aware of this from all the large scale software estimation models (COCOMO II, PRICE, UCP, etc).

    Those models that do not use KSLOC as a unit of measure often provide a translation between their unit of measure (Function Points, Use Case Points, Pixie Fairy Dust Ratio, whatever) and KSLOC, or provide their own “rules of thumb” metrics when it comes to average software developer productivity (average developer productivity is 23 unadjusted Pixie Fairy Dust units/fortnight, which is then multiplied by the Blue vs Red Dust Technical Complexity Factor – BVRDTCF).
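
    To show the flavor of these models, a sketch of the older basic COCOMO ’81 formula in organic mode (coefficients are Boehm’s; COCOMO II layers many adjustment factors on top of this):

        def cocomo_organic_effort(ksloc):
            # Basic COCOMO '81, organic mode: effort in staff-months.
            return 2.4 * ksloc ** 1.05

        print(cocomo_organic_effort(100))  # ~302 staff-months for 100 KSLOC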


    If I want to render a gif image, I could write fifty thousand lines of code to process the gif format, or I could just use an off the shelf library in one line of code. Who is more productive? The 50,000 line guy, or the 1 line guy?

    Bogus example. Code reuse is a metric that many large corporate software development companies track. Also, you are ignoring the licensing cost for that “off the shelf library”, whether that is a straight monetary cost or that of something like the GPL.

    If the royalty cost for redistribution of the library in my final product exceeds my total cost for developing 50 KSLOC then rolling my own is potentially the right choice. If using that library must mean I need to release my proprietary code as GPL then rolling my own is almost certainly the right choice if there is no other alternative on a proprietary product.


    If I refactor a large system to extract out common features into, for example, a base class, or a shared function library, my total LOC for that timespan could easily be negative, perhaps large negative. Is that not a productive activity?

    Of course. Is your contention that corporate software developers do not also do this or do less of this? Citation please.
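    Nobody disputes the value of the activity itself. For the record, a toy sketch of the LOC-negative refactor being described, with invented names:

        # Before the refactor, invoice_total() and quote_total() each
        # carried a private copy of the summing logic. Extracting the
        # shared helper leaves the diff net-negative in lines while the
        # behavior is unchanged.
        def _total(items, rate):
            return sum(items) * (1 + rate)

        def invoice_total(items):
            return _total(items, rate=0.20)

        def quote_total(items):
            return _total(items, rate=0.15)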

    And that is the difference between great programmers and ridiculous ones — knowing when not to write code, and, when they do write it, how to write it so as to meet the core quality criteria being sought with the minimum amount of effort.

    A platitude that does not indicate to anyone that OSS devs are significantly better than corporate devs.

    10-50 lines of code a day is a totally bullshit measure. It is a classic management blunder that comes from a failure to recognize how different software engineering is from other types of engineering. Managers need to measure something, even if the measure is meaningless, which LOC absolutely is.

    Yes, LOC over time is the worst measure of software productivity except for all the others. Fifty years since the founding of software engineering as a discipline, and after many forays into alternatives, I find that all the other metrics result in equally subjective, if not more subjective, measures of productivity.

    FP, UCP, etc all depend on fairly subjective assessments of complexity/difficulty as part of the equation. In practice I find that if an FP or UCP estimate fails the gut check of the lead system engineer, the unit complexity estimates and environmental assumptions get tweaked.

    The only reason any of these models work, and the reason they only really work on large projects, is that the individual estimation errors cancel out over the long run and you approach the historical productivity values for your organization. The easiest way to gather this very important metric is to run a KSLOC count against past projects and compare with total project cost and schedule if you don’t have a very mature metrics program (and even if you do, the results give you a sanity check on your metrics collection practices).

    The rule of thumb for developer productivity is bullshit EXCEPT that most organizations, whether FOSS or corporate IT, have no freaking clue what their historical productivity rates are (not even the simple KSLOC/project cost one).
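    Gathering that baseline is not hard. A sketch, with entirely made-up project records:

        # Derive a historical productivity baseline from past projects.
        # Every figure below is hypothetical.
        past_projects = [
            # (name, KSLOC, staff-months)
            ("billing rewrite",  120, 180),
            ("inventory system", 450, 900),
            ("reporting portal",  60,  75),
        ]

        for name, ksloc, months in past_projects:
            print("{}: {:.0f} SLOC/staff-month".format(name, 1000.0 * ksloc / months))

        total_ksloc = sum(p[1] for p in past_projects)
        total_months = sum(p[2] for p in past_projects)
        print("org baseline: {:.0f} SLOC/staff-month".format(1000.0 * total_ksloc / total_months))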

    So yes, KSLOC is a terrible unit of measure, with the only redeeming value being that it’s an objective measure, as opposed to pretty much every other measure. As an objective measure, you can compare your organization’s efficiency in terms of KSLOC per Function Point or Use Case Point or Pixie Fairy Dust Ratio.

    If your average FP/UCP/PFDR requires less code than those of peers using the same language and in the same problem domain (no fair comparing business enterprise software against aerospace software), you have a good indicator that your developers probably are very efficient in terms of code count.

    If not, then you have an indication that your developers may be writing more code to get the same amount of work done.

    Thus far your assertions (29 in 30 developers suck; the best developers are two orders of magnitude better performers than average; OSS developers are better than corporate developers) strike me as unsupported by any studies or published literature, and as much higher than presumed by the SwE body of knowledge.

    Finally, large-scale software engineering is not actually all that different from other large-scale engineering disciplines, because the human factor is a larger determinant of success or failure than the technical ones. 1000 staff-years’ worth of software engineering has the same human communication, teamwork, and system complexity issues as 1000 staff-years’ worth of civil engineering. The people problems are much harder to solve than the technical ones, as most people with the T-shirts will tell you.

    It is thus no surprise that software engineering project disasters look amazingly like civil engineering project disasters, with the same sorts of root causes and, IIRC, about the same failure rates (I’d have to go hunt for a citation, but that’s my recollection from my Sys Eng and PjM course work; the case studies for things like Denver Airport and nuclear plant blunders read very much like the case studies for the classic software engineering blunders).

  296. @nigel: “A platitude that does not indicate to anyone that OSS devs are significantly better than corporate devs.”

    Has someone here claimed that OSS devs are significantly better than corporate devs? Note that this is a different question from whether OSS as a methodology is better/worse than CSS. Red Hat is corporate and OSS.

    “50 years since the founding of software engineering as a discipline…”

    I’m still looking for someone to make a convincing case that the creation and maintenance of software really is an engineering discipline (as compared to, say, civil, mechanical, industrial, etc.). To me it still appears more like artistry than engineering. In which case Jessica’s 100:1 would be perfectly reasonable, even expected.

  297. Let us take a concrete example: Windows CIFS/SMB and Samba.

    Now compare team and code size and quality of documentation.

    Hint: Windows SMB is at the core of MS’s business network stack. They have assembled the best team money can hire. Samba is one of those Free software projects run by a few volunteers.

    I haven’t had a chance to figure out the size of the SMB team at MS nor their SLOC count. If you have these, great. Then a comparison can be made. I haven’t read either the Samba or the SMB documentation, but MS does a very good job of documentation in general.

  298. @nigel
    > Civil engineering, a far more mature discipline, had its fair share of hundred-million+ fiascos.

    At one time software projects were failing to deliver full expectations at a rate of something like 80%. The two failure rates aren’t comparable at all.

    > That’s [the Bugzilla rewrite] a really bogus example and you know it.

    No it isn’t. I’ll grant you it is an extreme example, but it is certainly representative of the sort of dumb-ass decisions that get made all the time. And they are not primarily management decisions; they are usually pushed up from the development team, especially the senior members, onto managers who don’t have the technical skill to distinguish. (Which they shouldn’t have; they hire programmers to advise them.)

    Here is another example, from Joel Spolsky, who worked on the Microsoft Excel development team. Apparently the Excel team wrote their own C compiler for the project, rather than use the Microsoft one. You judge for yourself if that was a good decision.

    One time I spent a couple of weeks refactoring a bad piece of software. I cleaned it up and deleted thousands of lines of code a day. You must think I was the worst programmer in the world those two weeks.

    > Except that it’s not. The observed difference between the best and worst developers is 10:1.

    This is plainly incorrect. My personal observation is that between 10% and 20% of “professional programmers” make a net negative contribution on projects. This comports with the experience of a dozen other senior programming people I know in my area. If that is the case then plainly it is not 10-1.

    > And you completely miss the point. The increase in complexity between a large engineering projects solved by large software teams and the complexity of small projects is not linear.

    I know it isn’t; I’d say it grows faster than quadratic. But the negative effects of bad programmers actually grow faster than the positive effects of good programmers, and consequently big projects are the worst type of project to have bad programmers on.

    Which is a dilemma, because you have to fill the slots. Again, most big projects are ill-conceived ideas in the first place, primarily because of the subject we are discussing: management types think that if you throw enough bodies at the problem it goes away. The key to making big software is in what you don’t do rather than what you do do.

    >If you’re calling 30 people for phone interviews and 29 are idiots then you’re not pre-screening correctly. I can believe that 29 out of 30 resumes don’t warrant a call back for a variety of reasons.

    Unfortunately people lie on their resumes all the time. I ask people to write code during interviews; often the reaction to that request is enough to determine if they are going to be a fit.

    I ask tough questions like: “So you have 10 years SQL experience? If you had a table with one column for dollar sales amount and another for the date of the sale, please write a query that shows the total monthly sales for the past two years.” Or of course the classic Fizz Buzz problem.

    Really, if you can’t do either of these things you basically have no skills at all. But at least 50% of people whose resumes show a decade of project experience have no clue where to start.

    (BTW, usually on a programming blog, posting about the fizz buzz problem makes people want to post their solutions… please don’t…)
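    (The SQL question is fair game, though. One reasonable shape of an answer, sketched against a hypothetical SQLite table named sales with columns amount and sale_date:)

        # Total monthly sales for the past two years, via SQLite.
        # The table and column names are hypothetical.
        import sqlite3

        conn = sqlite3.connect("sales.db")
        query = """
            SELECT strftime('%Y-%m', sale_date) AS month,
                   SUM(amount)                  AS total_sales
              FROM sales
             WHERE sale_date >= date('now', '-2 years')
             GROUP BY month
             ORDER BY month;
        """
        for month, total in conn.execute(query):
            print(month, total)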

  299. @michael

    “And finally, the curious thing is that the very best programmers in a domain are attracted to OSS, because they create it out of an intrinsic love for the problem domain.”

    “I’ve seen it, the productivity level of most corporate programmers is dreadful, it may be below zero in many cases.”

    As far as whether software engineering is an engineering discipline: I’m rather tired of folks underestimating the complexities of modern software projects. What do you call getting several million moving parts of anything to solve a problem? The space shuttle has 2.5M parts, and while it is obviously far more complex than a 2.5M SLOC software project, arguing that software at this scale isn’t an engineering discipline but an art form trivializes the hard engineering problems that must be solved at this level of complexity.

    Frankly, there’s a large level of artistry/craftsmanship that goes into spacecraft engineering as well.

  300. @Nigel
    “I haven’t read either of the Samba or SMB documentation but MS does a very good job of documentation in general.”

    MS was unable to come up with documentation of the CIFS protocol when ordered to do so by the European courts. Rumor had it that MS engineers had to resort to the Samba documentation to get it right. Actually, it was widely believed that there was no one at MS who understood the protocol. The code was the protocol.

  301. @nigel: “As far as whether software engineering is an engineering discipline I’m rather tired of folks underestimating the complexities of modern software projects. What do you call getting several million moving parts of anything to solve a problem?”

    Please switch your Comprehension setting to True.

    Calling it artistry rather than engineering isn’t trivializing it, it is elevating it. Engineering is characterized by applying well understood solutions using well understood methodologies. IMHO, software has neither and possibly never will. IMHO, it is orders of magnitude more complex and more abstract.

    The space shuttle is a bad example because they were trying to do things that had never been done before and they mostly succeeded. Software projects still fail massively even when doing things that have been done many times before in many places.

  302. “The draft version had a blank space where the disciple is now; I scanned him out of a copy of Zen Comics and composited him in. Nobody at Addison-Wesley tried to tell me the result wasn’t professional-quality; about all their artist did was add a drop shadow and that is by Goddess what we shipped.”

    Wow, you were able to cut-and-paste black/white line art and paste it on a white background? It’s amazing what The Gimp can do! You are right, this does suffice for all the work a graphic designer may encounter!

    What program did the publisher use to put in that drop shadow?

  303. At one time software projects were failing to deliver full expectations at a rate of something like 80%. The two failure rates aren’t comparable at all.

    “As a result, the nuclear industry has a very poor track record in predicting plant construction costs and avoiding cost overruns. Indeed, as shown by data in a study by the Department of Energy, the actual costs of 75 of the existing nuclear power plants in the U.S. exceeded the initially estimated costs of these units by over 200 percent. The following table shows the overruns experienced by these 75 nuclear plants by the year in which construction of the nuclear power plant began.

    In fact, the data in the previous table understates the cost overruns experienced by the U.S. nuclear industry because (1) the cost figures do not reflect escalation and financing costs and (2) the database does not include some of the most expensive nuclear power plants built in the U.S. – e.g., Comanche Peak, South Texas, Seabrook, and Vogtle. For example, the cost of the two unit Vogtle plant in Georgia increased from $660 million to $8.7 billion in nominal dollars – a 1,200 percent overrun.

    There were a number of significant consequences as a result of these cost overruns. First, only one-half of the nuclear power plants that were proposed were actually built and ratepayers frequently had to bear many millions of dollars of sunk costs for abandoned projects. Second, the cost of power from completed nuclear power plants became much more expensive for ratepayers than the proponents had claimed. In some instances this led to rate increases so large that they spawned the term “rate shock.””

    http://www.synapse-energy.com/Downloads/SynapsePaper.2008-07.0.Nuclear-Plant-Construction-Costs.A0022.pdf

    (picking a google result at random)

    Also, the 80% failure rate is bogus. I’d have to find the DeMarco (I think it was DeMarco) article that debunks this myth again, but here’s a typical example of the software statistic:

    “The Standish Group, which exists solely to track IT successes and failures, sets out very strict criteria for success. For its Chaos Report, The Standish Group surveyed 13,522 projects last year and showed that unqualified project successes are well below 50 percent, 34 percent to be exact. Out-and-out failures, defined as projects abandoned midstream, are at 15 percent. Falling in between the two are completed but “challenged” projects. The report says challenged projects represent 51 percent of all IT projects and are defined as projects with cost overruns, time overruns, and projects not delivered with the right functionality to support the business.”

    http://www.infoworld.com/t/business/it-myth-5-most-it-projects-fail-008

    You’ll want to compare only very large software projects vs very large civil engineering projects (like nuke plants) rather than the whole spectrum of the 13K projects, but the failure rates are VERY comparable.

    This is plainly incorrect. My personal observation is that between 10% and 20% of “professional programmers” make a net negative contribution on projects. This comports with the experience of a dozen other senior programming people I know in my area. If that is the case then plainly it is not 10-1.

    Funny how anecdotal evidence isn’t evidence, and the plural of anecdote isn’t data. Find me any study of 600 developers from 92 companies that indicates a 100:1 difference in productivity. Heck, find me ANY published study with vaguely reasonable statistical value that shows a 100:1 difference in productivity between the median professional software developer and a top-end developer, or even that the mode is skewed far to the left of the median.

    Plainly incorrect because you say so? LOL.

  304. Calling it artistry rather than engineering isn’t trivializing it, it is elevating it. Engineering is characterized by applying well understood solutions using well understood methodologies. IMHO, software has neither and possibly never will. IMHO, it is orders of magnitude more complex and more abstract.

    And now you’re trivializing other large engineering disciplines. And calling it artistry is a typical method of denigrating software as a non-engineering discipline lacking sufficient rigor. Which is true, but no more true than of other engineering disciplines at scale.

    Software projects still fail massively even when doing things that have been done many times before in many places.

    See the above comparison of software “failures” with nuclear plant project “failures”.

  305. @nigel: “And calling it artistry is a typical method of denigrating the software as a non-engineering discipline lacking sufficient rigor.”

    It would only be denigrating it if such rigor were both possible and available. I’m not certain it is possible (at this time). I am almost certain that no one to this date knows how to do it.

    Picture the Highway Department in your locale and their ability to build bridges. Generally they can build one bridge after another, sometimes in projects spanning multiple years, and deliver exactly what is expected and with solid control over timeline and budgets. And most of the people on the project will never have set foot on a college campus – indeed a significant number will have a resume that essentially reads “day labor”.

    I don’t think building bridges is actually simple but it does not rival the interacting complexity of software. But, more importantly, bridge building is a well understood and well developed discipline and has been for a long time (centuries?). Can any of those things be said for software?

  306. Once software professionals routinely deliver large systems that have been proven not to fail under real-world use conditions — under pain of being fired or jailed — then I will call software an engineering discipline.

  307. @michael

    Google bridge cost overruns. Bridge projects appear to be typically 5-8% overrun with large projects like the San Fran bay bridge incurring much larger overruns and problems.

    Here’s a gem:

    “Every Calatrava-designed Bridge Built in the Past Five Years Has Had Massive Cost Overruns, So Why Are We Surprised?

    Yesterday, the Dallas City Council approved funding for the second Santiago Calatrava-designed bridge in the city, the Margaret McDermott Bridge. The price tag is $115 million, $12 million higher than the last estimate, and $41 million higher than the original. Really, though, the Council shouldn’t be surprised. Just look at other “signature bridges” designed by Calatrava.

    In 2009, Samuel Beckett Bridge opened in Dublin. Original estimate? Ten million euros. Final cost? Sixty million. The cost of the Jerusalem Chords Bridge grew from NIS 80 million to NIS 129 million to NIS 246 million, or roughly $70 million. Calgary’s Peace Bridge climbed from $19 million to $25 million (contractors picked up the cost increase on that one, due to a pretty smart city contract). And Ponte della Costituzione in Venice’s cost grew from 6.7 million euros to 11.8 million euros, an increase that caused protests in the streets.

    The point is simple. We can complain all we want about the cost of these bridges, but a simple look through history shows that these structures a.) take a long time to approve/build, and b.) usually run over-budget, grossly.”

    http://frontburner.dmagazine.com/2013/01/24/every-calatrava-designed-bridge-built-in-the-past-five-years-has-had-massive-cost-overruns-so-why-are-we-surprised/

  308. @Jeff You can get a PE in software soon. Which means the software engineering PE that signs off on a large project that fails will be subject to the same penalties as a civil engineering PE that signs off on a large project that fails. Fines, jail time, etc.

  309. @JustSaying:

    “I think you continue to miss my point three times now, which is the majority of the internet are websites not earning a (significant) commercial profit, e.g. blogs. I am positing that most of the publishing will be done without professional designers. Of course pro designers will be employed for the declining percentage of the publishing market which is commercial.”

    No, I did not. You are wrong.

    “In short, I am positing your much vaunted professional aesthetics is too slow for a dynamic publishing market, and is being pushed towards irrelevance.”

    You’ve provided zero evidence of this claim, and you are wrong.

    “You can also refer to my comment upthread (which you admitted you did not understand) where I took your point about aesthetics being highly discordant and variable, and posited that basically commercial advertising is appealing to some primordial instinct to induce a groupthink, e.g. sex.”

    I thought your comment made no sense and you never clarified (or I missed it). I’ve already stated that my viewpoint is far from “aesthetics being highly discordant and variable.”

    “I suspect you have no clue that I meant we can create tools for non-pros that facilitate their cookie-cutter style of design and potentially automate it with weak AI.”

    I fully understand and still think you cannot accomplish it.

  310. @michael

    Oh, another gem:

    “Cost Escalation and Its Causes

    On the basis of the first statistically significant study of cost escalation in transport infrastructure projects, in a previous paper (Flyvbjerg et al., 2003b) we showed that cost escalation is a pervasive phenomenon in transport infrastructure projects across project types, geographical location and historical period. More specifically, we showed the following (all conclusions highly significant and most likely conservative):
    * Nine of 10 transport infrastructure projects fall victim to cost escalation (n = 258).
    * For rail, average cost escalation is 45% (n = 58, SD = 38).
    * For fixed links (bridges and tunnels), average cost escalation is 34% (n = 33, SD = 62).
    * For roads, average cost escalation is 20% (n = 167, SD = 30).
    * Cost escalation exists across 20 nations and five continents; it appears to be a global phenomenon (n = 258).
    * Cost escalation appears to be more pronounced in developing nations than in North America and Europe (n = 58, data for rail only).
    * Cost escalation has not decreased over the past 70 years. No learning seems to take place (n = 111/246).
    The sample used to arrive at these results is the largest of its kind, covering 258 transport infrastructure projects in 20 nations worth approximately US$90 billion (1995 prices).”

    http://flyvbjerg.plan.aau.dk/COSTCAUSESASPUBLISHED.pdf

    Someone tell me again that software and civil engineering are not comparable. Arguably the oldest and youngest engineering disciplines share very similar behaviors.

    (I hope it wasn’t my fault the DB crashed earlier… it seemed to happen after I cut and pasted this excerpt, but the asterisks were some funny unprintable character)

  311. @Jay Maynard:

    I think you’ll find that once you get out of the common problem spaces, the availability of modules is thin to nonexistent.

    If software is written more modularly, we can reuse it, e.g. image effects and filters could be invokable from within any application that displays an image.

    I should be able to pick which features I want to include, or at least extract or change a feature, without dragging on a spaghetti noodle that touches a plethora of unrelated source code files.

    The tiling memory manager in Adobe could be reused by any image editing application that wanted to support large image files.
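    A toy sketch of the kind of interface I mean, where filters are small self-contained callables that any host application could load (every name below is invented):

        # A minimal plugin registry: filters register themselves by name,
        # and any host that can hand over pixels can apply them without
        # knowing how they are implemented.
        FILTERS = {}

        def register(name):
            def wrap(fn):
                FILTERS[name] = fn
                return fn
            return wrap

        @register("invert")
        def invert(pixels):
            return [255 - p for p in pixels]

        print(FILTERS["invert"]([0, 128, 255]))  # -> [255, 127, 0]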

    @esr:

    large companies with graphic-design requirements seconding designers and programmers to make the GIMP do what they need it to do, knowing that by pooling talent they can reduce costs without giving away core business knowledge. This model is a very general way to fund open-source infrastructure that doesn’t require anyone to make large asymmetrical commitments to commons development before seeing benefits.

    Your logic up to this quoted text agreed with mine upthread, but this probably fails because Adobe will make sure they charge only a fraction of the production value. Not only do companies lack a strategic interest in optimizing away every irrelevant cost; the 80/20 rule says they will be penalized if they do so.

    Perhaps there will be companies that do much image editing which has very low production value, but I can’t think of what they would be.

    Rather as I posited upthread, I think the disruption will come from those markets that use image editing and graphic design less frequently, but want it always available when they need it occasionally. Most of these non-professional designers will not bother to go jump into a month of Adobe’s walled garden every so often and instead look for an always available app (as any text editor is to HTML editing) that does what they need.

    @Tim F.:
    You are apparently erroneously asserting that professional design will not lose share of internet publishing, in spite of the obvious fact that amateur blogs and self-published e-books are capturing an ever-increasing share of internet publishing. As I posited, bloggers don’t have time to wait for a professional design every time they want to publish a new article.

  312. @nigel
    > Plainly incorrect because you say so? LOL.

    Laugh away, Nigel; laughter is good for the soul. However, what you are saying does not correspond to what I and many, many people I know in the industry see every day. I don’t have the time to do a detailed methodological analysis of the surveys you cite, but I am going to guess they just aren’t using the right measure. As I said, producing code is not the same as being productive.

    Just off the top of my head I can think of a dozen people I have had the misfortune to work with who were, without the slightest shadow of a doubt, net negative in their contribution. Not because they didn’t write lines of code, but because what they did write was crap, and poisoned the whole body.

  313. “You are apparently erroneously asserting that professional design will not lose share of internet publishing, in spite of the obvious fact that amateur blogs and self-published e-books are capturing an ever-increasing share of internet publishing. As I posited, bloggers don’t have time to wait for a professional design every time they want to publish a new article.”

    I’m not arguing this at all. I’m arguing that amateur blogs and self-publishing aren’t using professional design tools so they aren’t disrupting professional design tools. I’m arguing that as blogs become more established, they move upmarket: aol properties, HuffPo, TheVerge, Bleacher Report, et al. I’m arguing that graphic design is more than print/publishing. I’m arguing that print/publishing never fully goes away.

  314. @nigel:

    You are applying zero value to documentation, quality control, and most importantly scale when you make such pronouncements.

    Large corporate teams are large because the projects are larger in scope.

    The Mythical Man Month says this comes at the cost of declining production per man hour, as the communication overhead swaps productive coding.

    My understanding is open source tends to reduce this overhead, because contributors code their patches autonomously with no centralized oversight required until commit time.

    This is another reason I dream about more granular modularity and individuals coding orthogonally with less communication overhead. Btw, I showed some concrete example code for these more modular, higher-level semantics in a long discussion with Roger Maupin and Jessica Boxer in an earlier blog.
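    Brooks’ arithmetic is easy to make concrete: with n people there are n(n-1)/2 pairwise communication channels, so coordination overhead grows quadratically while hands grow only linearly. A quick sketch:

        # Pairwise communication channels for a team of n people.
        def channels(n):
            return n * (n - 1) // 2

        for n in (5, 10, 50, 100):
            print(n, channels(n))  # 5 -> 10, 10 -> 45, 50 -> 1225, 100 -> 4950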

    When you can point me to any study at all that indicates that most programmers are barely competent/marginal/adequate/whatever I’ll believe that our industry is mostly composed of incompetent people.

    In a prior blog, someone linked to a study about the double-humped bell curve among computer science students. And there were only a few percent up in the 90+% test score range.

    No matter how good you are, your leaving is rarely more than a temporary inconvenience.

    Unless you leave and disrupt their “rigor”-mortis with competing software.

    The observed difference between the best and worst developers is 10:1.

    The worst produce something which doesn’t work or breaks what used to work. The difference is infinite. Large teams have to expend a lot of effort to contain and correct the damage done by the less talented developers.

  315. @Tim F.:

    I’m arguing that amateur blogs and self-publishing aren’t using professional design tools so they aren’t disrupting professional design tools.

    I define disruption as a declining share of global wealth for professional design tools, which follows if the non-professional publishing market is becoming the larger market for design tools and doesn’t use the professional tools.

    If most of the world no longer uses professional design tools, then they become more and more irrelevant. That is disruption.

    I’m arguing that as blogs become more established, they move upmarket

    If everyone becomes a professional designer, then the market for professional tools will be so large that Adobe will be disrupted by numerous competitors who want a piece of those huge rents. But we can be nearly certain that everyone won’t become a professional designer, and that waiting on a professional designer to finish a communicated design before publishing a blog article won’t happen at large scale. It is too slow and antithetical to the key attribute of blogging, which is spontaneity of timing and information sharing.

  316. However, what you are saying does not correspond to what I and many, many people I know in the industry see every day.

    Really. And the basis of 100:1 is what? Why not assert 1000:1? Or 300:1? Or 50:1?

    I laugh because it was a plain appeal to authority (I do this for a living therefore my claim of 100:1 must be correct and 10:1 far too low) based on anecdotal experience in the face of conflicting data.

    I don’t have the time to do a detailed methodological analysis of the surveys you cite, but I am going to guess they just aren’t using the right measure.

    Translation: I don’t like the data therefore it must be done wrong.

    The 10:1 is reasonably well supported from a variety of sources if you don’t like the results from the DeMarco and Lister study. Brooks mentions this 10:1 rule of thumb in Mythical Man Month, based on other studies. The highest ratio I can recollect was something like 30:1, the lowest around 2:1 (and in those results the mode was skewed heavily to the left, but the overall differences were not nearly as severe).

    http://page.mi.fu-berlin.de/prechelt/Biblio/varianceTR.pdf

    If you wish to state that software productivity is difficult to quantify I would agree. If you’d like to assert that software productivity is a useless thing to attempt to measure I mildly disagree. But to argue either of these and then steadfastly defend an arbitrary number like 100:1 is hilarious.

  317. “I define disruption as a declining share of global wealth for professional design tools, which follows if the non-professional publishing market is becoming the larger market for design tools and doesn’t use the professional tools.

    If most of the world no longer uses professional design tools, then they become more and more irrelevant. That is disruption.”

    But this is all nonsense. Look at Adobe’s income statement. So… you are wrong.

    Because amateur blogs do not replace professional design.

    “If everyone becomes a professional designer, then the market for professional tools will be so large…”

    Who the hell said that? Maybe English isn’t your primary language?

  318. @Nigel: “Someone tell me again that software and civil engineering are not comparable. Arguably the oldest and youngest engineering disciplines share very similar behaviors.”

    Nigel, your ability to type key words into Google is not a substitute for eyes, ears, a brain, and … much more importantly … discernment. I presume it is not intentional that you give the impression of being a preteen with good keyboard skills.

    Nothing you posted changes anything we know and observe about software projects vs. well understood engineering disciplines (i.e. that the two have little in common).

    Getting a PE in software won’t fix the problem, and the influence of government protecting the “candle makers” will likely make it worse. Perhaps there should be a PE licensing board for modern art. You do understand that all there is to a PE is a licensing board, don’t you?

  319. @Nigel
    > If you wish to state that software productivity is difficult to quantify I would agree.

    Unless it is that old reliable number of LOC per person day, huh?

    > If you’d like to assert that software productivity is a useless thing to attempt to measure I mildly disagree.

    I have never made such a claim. I have claimed that measuring useless measures is, well, useless.

    > But to argue either of these and then steadfastly defend an arbitrary number like 100:1 is hilarious.

    I’m so happy that this discussion is offering you so much mirth. It is so nice to be able to add a little to the happiness of the world, don’t you think?

    So let’s stop talking abstractly, let’s get specific. You have a project that requires a really complex GUI on a client side web app. You are given the choice: John Resig or 100 offshore Indian programmers with six years “out of school” experience working on a dozen miscellaneous ASP.NET apps. Who you going to choose? John charges you $1000 per hour, the Indians $25 per hour. Which is the best spend?

    Ah, you might say, but these big projects usually consist of hundreds of forms and reports that are sufficiently simple that the Indian newbies can do them, so it doesn’t apply there.

    So who would you rather have, some hotshot who can auto-generate the reports from whatever specifications the BAs come up with (perhaps providing them with a tool to do so) or 100 of these self same newbies, cranking out the reports in an ugly, line by line, individually hand coded fashion?

    These aren’t bullshit examples, that is what really happens in the development world. If your answer is in Mumbai, you don’t know what you are talking about.

  320. Google bridge cost overruns. Bridge projects appear to be typically 5-8% overrun with large projects like the San Fran bay bridge incurring much larger overruns and problems.

    A cost overrun of only 5-8% on a large software project might get you promoted.

    You can get a PE in software soon. Which means the software engineering PE that signs off on a large project that fails will be subject to the same penalties as a civil engineering PE that signs off on a large project that fails. Fines, jail time, etc.

    There are places where software is in fact engineering. The Space Shuttle, for instance; and avionics in general. But the practices that are driving that sort of development are not driving the development of large business applications, or Photoshop. Time-to-market and feature ticklists triumph over correctness and safety for the vast majority of business or productivity applications. From an engineering perspective, if avionics control software is building bridges, then most end-user-facing applications are Junkyard Wars.

    1. >There are places where software is in fact engineering. The Space Shuttle, for instance; and avionics in general. But the practices that are driving that sort of development are not driving the development of large business applications, or Photoshop. Time-to-market and feature ticklists triumph over correctness and safety for the vast majority of business or productivity applications.

      This is one of the major reasons the open-source competition for them is, in general, more reliable and better.

      GPSD has a Shuttle-avionics-like error rate because there was never any asshat project manager telling me to “just ship it” and demanding idiotic checklist features. The time I’d have wasted on those went into developing a really good test suite instead – something proprietary shops never budget anywhere near enough for.

      Many developers want to work to the standards of excellence manifested in good civil engineering, the kind where the artifacts last centuries with failures so rare that individual ones make the news. It is possible to do this in software, but the dysfunctional culture of proprietary development (which Jessica describes so vividly and accurately in some recent comments) won’t let them do it.

      For software engineering to become a mature engineering discipline, that culture must die and be replaced by open-source practice.

  321. @michael

    The data says otherwise. What you wrote regarding on-time and on-budget bridges is largely incorrect; mature engineering disciplines also run over budget, run late, and fail, and their failure rates are little different.

    A discipline needs more than just a board. It needs a testable body of knowledge that defines good engineering practice. The SWEBOK documentation at IEEE defines that sufficiently for PE certification to begin to occur.

    Sorry if reality is intruding on your prejudice against software professionals.

  322. @Nigel: “A discipline needs more than just a board.”

    Nigel, can you read? I didn’t say the *discipline* has only a board, I said the *PE* is only a board. But given the body of your writings here I’m not surprised you are unable to discern the difference between the two. Sorry I haven’t time to tutor you.

    You claimed “50 years since the founding of software engineering as a discipline” . You’ve made no case for any of that. Burden is on you.

    “Sorry if reality is intruding on your prejudice against software professionals.”

    Nigel, dear child, I *am* a software professional. It’s a significant part of how I make a living. (At the moment I’m working on some reports in Django and wishing I had it done already as I so despise Javascript but that’s a different rant.) And I’ve worked on both the coding and management side. I am not biased against myself, I am just grounded in reality as to how our industry operates. I am now utterly certain you are not.

  323. I wonder if there is a parallel between the historical evolution of life on earth and the recent, ongoing evolution of programming code. Life forms started out as very simple things and have evolved toward greater complexity. These life forms got very large (but not sentient) during the age of dinosaurs. Then there was a big extinction event and many species perished. Eventually, sentient homo sapiens evolved, which are smaller packages with larger organic processors. Is bloatware the equivalent of the doomed dinosaur?

  324. Michael Hipp,

    Are you a principal author of mpg123?

    Also:

    Getting a PE in software won’t fix the problem, and the influence of government protecting the “candle makers” will likely make it worse.

    Have you been outside the USA? Like, ever? Because quite frankly, the state of U.S. infrastructure sucks by world standards — and first-world countries that have stronger government involvement in protecting and enforcing engineering standards tend to have better infrastructure.

    1. >Because quite frankly, the state of U.S. infrastructure sucks by world standards

      Interesting planet you live on. I wonder what color the sky is there?

      Those of us who have lived outside the U.S. on planet Earth know how utterly ludicrous this claim is.

  325. @Jeff Read: “Michael Hipp, Are you a principal author of mpg123?”

    No, sorry.

    Also:

    Getting a PE in software won’t fix the problem, and the influence of government protecting the “candle makers” will likely make it worse.

    “Have you been outside the USA? Like, ever?”

    Yes.

    “first-world countries that have stronger government involvement in protecting and enforcing engineering standards tend to have better infrastructure.”

    I think you’re using hand-waving indirection to support a very dubious conclusion.

    Is the state of US infrastructure due to poor engineering *standards*? Or is it perhaps due to things more banal like lack of investment? Or misplaced political priorities? Or corporate corruption? etc.

    Where, exactly, are US engineering standards deficient?

  326. @Tim F.:

    I’ve already stated that my viewpoint is far from “aesthetics being highly discordant and variable.”

    You attempt to disown what you already wrote, probably because you don’t like the conclusion that can be formed from the logic which you presented:

    Is James Joyce’s Ulysses more or less aesthetically pleasing than Albert Camus’s The Stranger? Why? Why do different people disagree? Why do the same people disagree with themselves over different time intervals? Does the aesthetic value of the work change over time or is it the culture itself that is changing?

    Here you go again:

    But this is all nonsense. Look at Adobe’s income statement. So… you are wrong.

    Because amateur blogs do not replace professional design.

    Your reading comprehension is horrendous. Did you not see that I wrote the *share*, not the *nominal*? The dictionary can clarify for you the meaning of those words. Upthread I even noted that the nominal demand and supply of professional designers was likely to increase, so why are you repeating what I already said and failing to comprehend what I wrote?

    @Jessica Boxer:

    You are given the choice: John Resig or 100 offshore Indian programmers with six years “out of school” experience working on a dozen miscellaneous ASP.NET apps.

    Exactly.

    @esr:

    Those of us who have lived outside the U.S. on planet Earth know how utterly ludicrous this claim is.

    Jeff Read, take a drive down to Mexico and be careful not to drive into an open ditch that substitutes for a road shoulder. Here where I am, I renewed my car registration in January, and no car in the country has received its new license plate tags for 2014. I wonder if this is so the police have an excuse to pull over any vehicle and do some extortion to fund their mistresses.

  327. @me:
    > for 2014

    2013.

    @Tim F.:
    You claimed upthread that you understand the shift underway in publishing. Based on your later comments, I don’t think you understand that professional design is better suited to creative processes that change less frequently. We are moving beyond the era of TV and static print media, where the reader was a dumb slave, to the self-publishing era, where the reader is the creator, e.g. all of us commenting here. Where is the professional design in our comments here?

    Please stop proclaiming “nonsense” and “you are wrong” without presenting any cogent argument to substantiate such accusations. It is not only impolite; it is also a disingenuous and useless paradigm of discourse.

  328. @me:
    > swaps productive coding

    Should be “swamps”. It should be clear (by) now from my numerous typos that I have a handicap with writing. I can produce clean writing, if I take a lot of time to proof-read. (I almost missed the “by” above)

  329. @Nigel: “A discipline needs more than just a board.”

    Nigel, can you read? I didn’t say the *discipline* has only a board, I said the *PE* is only a board. But given the body of your writings here I’m not surprised you are unable to discern the difference between the two. Sorry I haven’t time to tutor you.

    Yes, I can read, and my point is equally simple: for a discipline to be accepted for PE testing, more must exist than simply the PE licensing board you describe. The fact is that SwE has had a very long road to acceptance by the rest of the engineering community as an engineering discipline. Both IEEE and NCEES accept SwE as a sufficiently mature discipline to be accepted as “engineering”.

    I would say that the burden of proof today lies on you to show how SwE is insufficiently mature to be considered such beyond handwaving that “everyone knows it’s not”.

    You claimed “50 years since the founding of software engineering as a discipline” . You’ve made no case for any of that. Burden is on you.

    I’m off by 5 years. SwE is accepted to have been “founded” in 1968. I dunno what needs to be proved here. Obviously SwE practices existed in some rudimentary form prior to the term being coined, given that it was coined for a DoD conference on military software.


    “Sorry if reality is intruding on your prejudice against software professionals.”

    Nigel, dear child, I *am* a software professional. It’s a significant part of how I make a living. (At the moment I’m working on some reports in Django and wishing I had it done already as I so despise Javascript but that’s a different rant.) And I’ve worked on both the coding and management side. I am not biased against myself, I am just grounded in reality as to how our industry operates. I am now utterly certain you are not.

    LOL. If you have more than 5 years on me I’d be surprised and I’m not sure that playing the age card in a software field is all that wise anyway.

    Yes, you are biased, and most software professionals aren’t software engineers. Like I said, 90% of the time SwE is a relatively unneeded specialty, so formal training as such is not all that required; that’s not a dig at you. Just as you typically need only one PE to sign off on a project, you need only a small number of SwEs to help organize a large project, although all software developers should be adhering to good software practices.

    Few developers ever work in an organization where such rigor is desired or required for success. DoD, NASA, and medical are the typical domains that demand high rigor, and even they don’t demand it uniformly. Most developers only read about the high-profile failures and believe that, because a lot of folks claim to be software engineers but are just coders, no real engineering discipline exists.

    I’ve provided citations that show that Software Engineering as a discipline is no more prone to schedule and cost failure than a mature engineering discipline such as Civil Engineering.

    Actually, I’m rather surprised that anyone believes that the average state department of transportation does anything on time or on budget…

  330. @esr
    >GPSD has a Shuttle-avionics-like error rate because there was never any asshat project manager telling me to “just ship it” and demanding idiotic checklist features.

    Just to be clear — quality is a feature just like any other feature, and is just as subject to trade-offs as other features. Many users prefer something that has all the features they need but is a little buggy over something that isn’t buggy but doesn’t have all the features.

    However, unlike most features, quality, up to a point, is free, in fact a negative cost. Because late bugs cost so much more to fix than early bugs, the effort to build quality in from the start (such as the aforementioned regression test suites) has a negative cost, because it reduces the combinatorial explosion of bugs that manifest late in the cycle when they cost a lot more to fix.

    However, this is only true up to a point. Once a certain level of quality is achieved, more quality starts to become more and more expensive, growing at a frightening rate. Getting avionics-level quality is insanely more expensive than Photoshop-level quality. It is for this reason that such critical devices tend to be highly isolated in nature, and then connected through to a lower-quality control device.

    To give an example (one that I worked on for a while): the device that remotely injects insulin into a patient has software that is very simple, very isolated, and VERY high quality, with many, many safeguards against error. The device that monitors and logs it, that lets you get advice on what to eat, or that reads your blood sugar off a strip, is much more complex and much lower quality.
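    To show how cheap the early kind of safeguard is, here is a minimal regression-test sketch (the dose function and limit are hypothetical, and pytest is assumed as the runner):

        # A safety property pinned down by a regression test.
        # MAX_UNITS and clamp_dose are hypothetical illustrations.
        MAX_UNITS = 25.0

        def clamp_dose(requested_units):
            """Never deliver a negative dose or more than the maximum."""
            return min(max(requested_units, 0.0), MAX_UNITS)

        def test_dose_never_exceeds_maximum():
            for requested in (-3.0, 0.0, 5.0, 25.0, 26.0, 1e6):
                assert 0.0 <= clamp_dose(requested) <= MAX_UNITS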

  331. @Nigel: …for a discipline to be accepted for PE testing more than simply a PE licensing board must exist as you state.”

    Nigel, I play the “age” card because you continually show a naivety that frankly would seem more at home on Slashdot than here.

    Example is believing that endorsement by gatekeepers to be given more gatekeeping powers proves anything.

    Any state can pass a law/regulation allowing licensing of any profession or subset of a profession. The PE&LS Board in my locale consists (last I looked) of a part-time secretary and a part-time chair whose job is to send and receive forms and payments and conduct meetings of the appointed boards. They administer tests written by someone else and award licenses based on the tests and some checkoff items. That’s about all it proves – you passed some tests.

    “Both IEEE and NCEES accept SwE as a sufficiently mature discipline to be accepted as “engineering”.”

    See above. Does this really impress you?

    “I would say that the burden of proof today lies on you to show how SwE is insufficiently mature to be considered such beyond handwaving that “everyone knows it’s not”.”

    Again, show me how software consists of well developed solutions applied to well understood problems using well accepted methodologies.

    Is Agile SwE? How about the other fad-of-the-moment ways of attempting to manage the complexity?

    “LOL. If you have more than 5 years on me I’d be surprised and I’m not sure that playing the age card in a software field is all that wise anyway.”

    That’s comical. In all true engineering fields gray hair, experience and maturity are considered valuable and nearly indispensable. Wonder what is different?

    “most software professionals aren’t software engineers.”

    You’re the one that equated the terms.

    “90% of the time SwE is a relatively unneeded specialty”

    What percentage was it of made-up statistics? So is it an engineering discipline or a relatively unneeded specialty? With real engineering disciplines, the discipline is applied to all projects in the domain using things like the NEC that codify it into a form that can be followed by an 18-year-old helper with a GED. Can software artistry approach that in any fashion?

    “… that no real engineering discipline exists.”

    I’m still rooting for you to learn to read, my expert reframing friend. Essentially every professional coder attempts to apply some discipline and best practices. But that’s a long way from claiming that an entire field is a fully fleshed-out and performing engineering discipline.

    Now certain groups have some pretty amazing track records. I read as much about the software team for the Apollo project. Quite an amazing story, and they are not the only ones. But here’s the rub: such high-performance software teams usually have to invent much of their methodology and practices on the spot. There is no rigorous and well-established engineering discipline that can be applied in semi-turnkey fashion and be expected to work. The complexity almost always defeats it.

    “I’ve provided citations that show that Software Engineering as a discipline is no more prone to schedule and cost failure than a mature engineering discipline such as Civil Engineering.”

    You’ve done no such thing.

    “Actually, I’m rather surprised that anyone believes that the average state department of transportation does anything on time or on budget…”

    Really? Who exactly said that? Or are you reframing again, as seems to be your primary talent.

  332. Unless it is that old reliable number of LOC per person day, huh?

    Yep. As I said, as a metric it’s the worst one except for all the others. Very smart folks like Capers Jones would vehemently disagree with me regarding FP vs KSLOC as the best of the worst.

    They might be right, but my feeling is that for something which is likely to be wrong anyway, a simpler model accepted with larger error bars beats a more complex model with more hidden assumptions that is more accurate only in specific circumstances and in the hands of very competent practitioners.


    > If you’d like to assert that software productivity is a useless thing to attempt to measure I mildly disagree.

    I have never made such a claim. I have claimed that measuring useless measures is, well, useless.

    It would be a better claim than a tautology based on a dubious premise. Some folks do argue that software productivity is in fact not measurable in any useful sense. While they have some good points, like I said I mildly disagree.


    > But to argue either of these and then steadfastly defend an arbitrary number like 100:1 is hilarious.

    I’m so happy that this discussion is offering you so much mirth. It is so nice to be able to add a little to the happiness of the world, don’t you think?

    Yep. Much better than getting pissed off about something as silly as software.


    So let’s stop talking abstractly, let’s get specific. You have a project that requires a really complex GUI on a client side web app. You are given the choice: John Resig or 100 offshore Indian programmers with six years “out of school” experience working on a dozen miscellaneous ASP.NET apps. Who you going to choose? John charges you $1000 per hour, the Indians $25 per hour. Which is the best spend?

    Ah, you might say, but these big projects usually consist of hundreds of forms and reports that are sufficiently simple that the Indian newbies can do them, so it doesn’t apply there.

    So who would you rather have, some hotshot who can auto-generate the reports from whatever specifications the BAs come up with (perhaps providing them with a tool to do so) or 100 of these self same newbies, cranking out the reports in an ugly, line by line, individually hand coded fashion?

    LOL. Let’s get specific with an unrealistic scenario designed to elicit the outcome you desire? Okay, sure.

    I don’t personally know John, but I suspect it would be very hard for me to lure him from Khan, where he’s doing work that makes (IMHO) a huge real-world difference in education, even for $1000 an hour, to work on ye olde boring corporate web app. Well, okay, for $1000 an hour probably, but I suspect that John would tell you he’s not the best fit for this sort of thing, because a) it’s not interesting and b) UI design isn’t necessarily his thing; doing really cool stuff in JavaScript is, which is related but not quite the same. Like I said, I don’t know John, but looking at his body of work, that’s the feel I get.

    The “real world” answer, as opposed to an academic exercise, is neither, as both would more than likely fail. A “really complex” UI is extremely difficult to do well procedurally, so you either end up with a UI that works but looks bad, or a scenario where John lacks sufficient support to complete the task in the desired time to market at a sufficient level of UI quality, even automating the production of some screens procedurally or via templates.

    For a well crafted, complex UI many of the elements of that UI will be hand crafted by good designers and developers.

    A UI that consists of a bunch of simple forms, even when there are a lot of them, isn’t “really complex”, just mind-stupefyingly tedious for both developer and user.

    Someone who suggests a procedurally generated UI for any complex interaction leaves me thinking that I wouldn’t want to use their product, because therein lies a programmer-designed UI of property lists of attributes and a poorly laid-out workflow (there’s an inside, self-deprecating joke here, so I’m not picking on you).

    100 Indian developers with the wrong skill set aren’t likely to get you from here to there either. Also, 100 Indian developers at $25 per hour is $2500 per hour…

    So the “best spend” in your contrived example is $1000 per hour of John’s time + $1500 per hour worth of designer and other support for him. From India if need be. Indian ASP.NET developers if absolutely need be. At which point John would be earning every penny of that $1000/hour fee…


    These aren’t bullshit examples, that is what really happens in the development world. If your answer is in Mumbai, you don’t know what you are talking about.

    This is a very bullshit example even ignoring your budgeting issue.

    The last time I had Indian offshore developers, we always had some very sharp developers at the beginning to sell us on the idea that XXX corporation had really sharp developers for really cheap $$$. For example, the lead dev we had at the very start was really sharp. As sharp as John Resig? Maybe not, but he would rank as a top performer in most US software companies. The first set of devs from this particular Tier 1 company were all reasonably good.

    We were also never offered, nor would we have considered, anything as bogus as 100 random junior devs with experience in the wrong technology for $25 per hour, which at the time was (and still is today) overpriced for a relatively junior offshore dev, even in a Tier 1 city like Mumbai. Plus, the Tier 1 companies aren’t going to offer 100 junior devs with the wrong skill set. Well, not up front anyway. They’ll try to rotate these folks in later, but probably not so many that the probability of failure is 100%. They do have a reputation to maintain.

    So it is doubly important to specify key staff when working with offshore developers, because there, like here, contractors will bait-and-switch you, moving their A team on to capture the next customer and trying to stick you with their B and C teams. It is also important to maintain significant oversight of the work produced AND to include stiff penalties and the assumption of liability in the contract for IP violations (don’t ask, but that was a pricey lesson for someone to learn).

    It requires good management and rigor but is workable, if a bit risky. Too much of a hassle for my tastes, but for the right company with the right needs and the right expectations it can work. I would not say it is always the wrong answer, as you do.

    For a $2500/hour burn rate on web app UI development I’d get a core of 3-4 very solid HTML5 developers, a couple of backend support developers, a manager, and a couple of designers at $150-$250/hr loaded, backed by junior programmers, graphic artists, tech writers, and testers at around $50-$100/hr loaded. The exact mix, team size, number of teams, etc. depends on a lot of details not provided.

  333. Nigel, I play the “age” card because you continually show a naivety that frankly would seem more at home on Slashdot than here.

    Heh. You mean like believing that bridges and roads are built on time and on budget?

    Attempting to pull the I’m-older-than-you card on an old guy still seems really silly.

    An example is believing that endorsement by gatekeepers, to be given more gatekeeping powers, proves anything.

    Any state can pass a law/regulation allowing licensing of any profession or subset of a profession. The PE&LS Board in my locale consists (last I looked) of a part-time secretary and a part-time chair whose job is to send and receive forms and payments and conduct meetings of the appointed boards. They administer tests written by someone else and award licenses based on the tests and some checkoff items. That’s about all it proves – you passed some tests.

    “Both IEEE and NCEES accept SwE as a sufficiently mature discipline to be accepted as “engineering”.”

    See above. Does this really impress you?

    I know where we were in the 80s and where we are today. The IEEE SWEBOK is more mature and more nuanced than the SEI CMM was. It is highly unlikely that we would have been able to convince NCEES twenty years ago that Software Engineering was engineering and not just coders putting on airs. Part of that is politics, but a larger part is the actual maturing of the discipline in the intervening decades.

    I am not overly impressed by titles nor by certification, but there’s a point where a discipline emerges from alchemy into an actual practice based on science or engineering. I believe that we have reached that point and our peers seem to agree. That’s a positive development.

    That there is art and craftsmanship involved in software is not an indication that there is no reproducible rigor underlying our profession.

    Again, show me how software consists of well-developed solutions applied to well-understood problems using well-accepted methodologies.

    Is Agile SwE? How about the other fad-of-the-moment ways of attempting to manage the complexity?

    Agile is included in the SWEBOK as one development methodology. Is it a “fad”? No more than any other development lifecycle model or practice.

    Are agile concepts a fad in civil engineering or simply part of their body of knowledge?

    http://www.infoq.com/news/2010/01/adaptive-reuse


    “LOL. If you have more than 5 years on me I’d be surprised and I’m not sure that playing the age card in a software field is all that wise anyway.”

    That’s comical. In all true engineering fields, gray hair, experience, and maturity are considered valuable and nearly indispensable. Wonder what is different?

    That’s a fair point, but all the practicing software engineers I know have gray hair. /shrug

    “90% of the time SwE is a relatively unneeded specialty”

    And what percent of made-up statistics was that?

    True again. To rephrase, my position is simple: typically the projects that require the rigor of full-up SwE, like sign-off by a PE, are of larger size, higher complexity, or higher criticality. Most projects don’t need this level of rigor to be successful, any more than you need to draft complex blueprints and have a PE review them to build a shed or barn, except when demanded by code.

    My opinion is that a lot of what we build today is the equivalent of sheds in civil engineering. There are also houses, which do require more rigor that we, as a profession, typically do not currently apply and should. But our equivalents of skyscrapers do apply the engineering rigor required by our discipline, or they predictably and repeatably fail.

    So is it an engineering discipline or a relatively unneeded specialty? In real engineering disciplines the discipline is applied to all projects in the domain, using things like the NEC that codify the discipline into a form that can be followed by an 18-year-old helper with a GED. Can software artistry approach that in any fashion?

    While the definition of an engineering discipline does include codes and standards, it more importantly requires a body of knowledge with methods based on an understanding of the underlying theory and science, methods verified by empirical testing. Our underlying basic science is logic and mathematics. The SWEBOK, and the CMM before it, codify our higher-level methods, while things like basic algorithms, data structures, encapsulation, and abstraction codify our lower-level methods. The success and failure of specific methods and practices within our BOK is documented by studies and by the empirical results of CMMI Level 5 organizations, with their high levels of metrics collection and repeatability of work.


    Now, certain groups have some pretty amazing track records. I read as much about the software team for the Apollo project. Quite an amazing story, and they are not the only ones. But here’s the rub: such high-performance software teams usually have to invent much of their methodology and practices on the spot. There is no rigorous and well-established engineering discipline that can be applied in semi-turnkey fashion and be expected to work. The complexity almost always defeats it.

    I believe that you cannot apply any method in a semi-turnkey fashion in any engineering discipline and reliably expect it to work when any significant complexity is involved, whether that is a large unique structure in civil engineering or a large software project in software engineering.

    Hence similar failure rates seen in studies.

    In the case of SwE the knowledge areas (methodologies) for requirements analysis, configuration management, testing, etc are not invented by every high performance team from scratch but build on the existing body of knowledge. That high performance teams often will expand our set of effective methodologies is part of an engineering discipline not an indication that it does not exist.

    “I’ve provided citations that show that Software Engineering as a discipline is no more prone to schedule and cost failure than a mature engineering discipline such as Civil Engineering.”

    You’ve done no such thing.

    Really? There wasn’t a link to a study showing that civil engineering projects have poor on-budget success rates? Are the numbers provided in that study not similar to the numbers shown for software projects, which are the basis for the perception that 80% of software projects fail in terms of meeting schedule, budget, or expectations?

    Really? Who exactly said that? Or are you reframing again, as seems to be your primary talent.

    Heh. Here you go:

    “Picture the Highway Department in your locale and their ability to build bridges. Generally they can build one bridge after another, sometimes in projects spanning multiple years, and deliver exactly what is expected and with solid control over timeline and budgets.”

    I get the impression you like to try to goad people into anger. That seems far more slashdotish than providing links to contrary data. Jessica believes I laugh too much, but I believe that if you don’t laugh (or at least chuckle) you’ll just get all pissed off over something some random person on the internet wrote to get under your skin. That provides too much amusement for them and too much aggravation for you.

  334. @esr:

    >Interesting planet you live on. I wonder what color the sky is there?

    >Those of us who have lived outside the U.S. on planet Earth know how utterly ludicrous this claim is.

    During the time I spent in Germany, the only noticeable difference in infrastructure quality was in public transportation, which was indeed orders of magnitude better there than here in Dallas. On the other hand, I think that had much more to do with population density than anything else. Plus, the time I spent in London on my way back from Germany left me with a very poor impression of the road infrastructure in the UK.

    1. >Plus, the time I spent in London on my way back from Germany left me with a very poor impression of the road infrastructure in the UK.

      For good reason; I used to live there and know. One of the many things naive Americans trip over when they travel abroad is how rapidly the quality of the road network degrades as population density decreases. We expect rural roads to average as good as ours and they don’t, not by a long shot.

      In Ireland (to take another nearby example) there is exactly one stretch of motorway built to the standards of a U.S. limited-access highway. Much of the rest of their net is one-lane gravelled roads regularly interrupted by sheep crossings.

      And this is before we even start on post-Communist eastern Europe or the Third World…

  335. @esr:

    Many developers want to work to the standards of excellence manifested in good civil engineering, the kind where the artifacts last centuries with failures so rare that individual ones make the news.

    Agreed, but I still think software development is engineering mixed with art, because it is impossible to design software such that it never needs to be refactored to account for unexpected future needs. The art is to make good design decisions about expectations for the future given incomplete and inexact information. We can improve, e.g. my ideas about increased granularity of modularity, yet we won’t be able to fully engineer the projections of future needs over the intended lifespan as we can with civil engineering. Software models (encodes) real-life processes, e.g. the development and refinement of a Space Shuttle, and real life is inherently dynamic.

    I assert that the most general model, the one that yields the optimum theoretical fitness for general dynamic development, is simulated annealing (cf. my prior description), i.e. many granular independent actors, and thus open source. The rigor mortis of large top-down management “software engineering” can’t come close.
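
    For concreteness, a minimal sketch of the simulated annealing idea being invoked here (the cost function, neighbor move, and cooling schedule below are toy illustrations, not anyone’s actual model of development):

        import math
        import random

        def simulated_annealing(cost, neighbor, state, temp=1.0,
                                cooling=0.995, steps=10000):
            """Generic simulated annealing: accept worse states with
            probability exp(-delta/temp) so the search can escape local
            minima, cooling over time."""
            best = state
            for _ in range(steps):
                candidate = neighbor(state)
                delta = cost(candidate) - cost(state)
                if delta < 0 or random.random() < math.exp(-delta / temp):
                    state = candidate  # accept improvement, or occasionally a regression
                if cost(state) < cost(best):
                    best = state
                temp *= cooling        # gradually reduce tolerance for regressions
            return best

        # toy usage: minimize (x - 3)^2 by random local moves
        print(simulated_annealing(lambda x: (x - 3) ** 2,
                                  lambda x: x + random.uniform(-1, 1),
                                  state=0.0))

    The analogy being drawn is that many independent actors making small local changes, with occasional accepted regressions, can out-search a single top-down plan.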

  336. @nigel
    “So yes, KSLOC is a terrible unit of measure with the only redeeming value being that it’s an objective measure as opposed to pretty much every other measure. As an objective measure you can compare your organization efficiency in terms of KSLOC per Function Point or Use Case Point or Pixie Fairy Dust Ratio.”

    SLOC is a figure related to cost. It is not related to merit nor to productivity.

    Given the size of a project in SLOC, its final costs (including testing, debugging, and documentation) can be fairly well estimated, if necessary adapted by using some measure of “hardness”, e.g., OS kernel code versus webpage JavaScript in the browser.

    Just changing the design or implementation language changes both the SLOC and the cost, without affecting the functionality.
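
    (The classic embodiment of the cost-from-SLOC claim is Boehm’s basic COCOMO model, mentioned further down in this thread. A minimal sketch using the published coefficients for “organic” projects; real estimation multiplies in cost drivers for “hardness”:)

        def basic_cocomo(ksloc, a=2.4, b=1.05, c=2.5, d=0.38):
            """Basic COCOMO, organic mode: effort and schedule from size alone.
            effort = a * KSLOC^b person-months; schedule = c * effort^d months."""
            effort = a * ksloc ** b
            schedule = c * effort ** d
            return effort, schedule

        effort, months = basic_cocomo(32)  # a hypothetical 32 KSLOC project
        print(f"~{effort:.0f} person-months over ~{months:.0f} months")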

  337. Just changing the design or implementation language changes both the SLOC and the cost, without affecting the functionality.

    Yes, it’s truly a terrible yardstick to use.

    But tell me how that is different from FP or UCP or your favorite mechanism as a unit of measure for estimation.

    Language is typically already chosen, so that’s not as much of a factor. You should already have at least a rough idea of the productivity of your team in a given language.

    While you have some idea of the design at this early point, you don’t typically have enough detail yet to modify your estimates much beyond “I think this feature will be costly to implement” or “I think we can use library X to do this”.

    So yes it is primarily about cost at the start BUT as I said, you can determine useful information about merit or productivity as a ratio of X vs KSLOC. How many defects per KSLOC (quality). How many KSLOC per use case or function point (efficiency). How many KSLOC per day (gross productivity).

    These are all useful metrics for estimating the next project or identifying areas of improvement.

    You can certainly do defects per FP, hours per FP, etc., but FP analysis and UCP analysis can differ from engineer to engineer, whereas KSLOC is a very mechanical measurement and very low cost to collect.
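
    A minimal sketch of those ratios (the input numbers are hypothetical; the point is only that each ratio is mechanical to compute once size, defect, effort, and feature counts are collected):

        def project_ratios(ksloc, defects, function_points, workdays):
            """Derive quality/efficiency/productivity ratios from raw project data."""
            return {
                "defects_per_ksloc": defects / ksloc,     # quality
                "ksloc_per_fp": ksloc / function_points,  # implementation efficiency
                "ksloc_per_day": ksloc / workdays,        # gross productivity
            }

        # hypothetical history: 120 KSLOC, 340 defects, 410 FPs, 2600 person-days
        print(project_ratios(120, 340, 410, 2600))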

  338. Many developers want to work to the standards of excellence manifested in good civil engineering, the kind where the artifacts last centuries with failures so rare that individual ones make the news.

    Given the statistics, an on-time and on-budget civil engineering project is what is rare, and civil engineering failures strike me as being as common as software failures. A failure on the order of the Mars Climate Orbiter smashing into the planet due to a software defect is IMHO as memorable as the failure of the Tacoma Narrows Bridge, because of the rarity of this kind of catastrophic failure.

  339. @nigel
    “So yes it is primarily about cost at the start BUT as I said, you can determine useful information about merit or productivity as a ratio of X vs KSLOC. How many defects per KSLOC (quality). How many KSLOC per use case or function point (efficiency). How many KSLOC per day (gross productivity).”

    Yes, but as with all “alternative uses” of primary metrics, here be dragons. SLOC measures the overall size of a completed project, and it includes testing, debugging, and documentation. Beyond that, there is maintenance cost.

    Defects per KSLOC of a finished product is indeed an enticing measure of quality. With all the caveats in rating “defects”. At least it includes testing and debugging.

    However, KSLOC per day (gross productivity) would be measured prior to testing, debugging, and documentation. Which makes it a measure designed for fraud by a crook or con man.

    And I do not see where merit enters the equation. As Jessica already stated, if one programmer cleans the code of all other programmers and removes half the lines, who gets the merit and who the blame of the final SLOC?

    Agreed, but I still think software development is engineering mixed with art, because it is impossible to design software such that it never needs to be refactored to account for unexpected future needs. The art is to make good design decisions about expectations for the future given incomplete and inexact information. We can improve, e.g. my ideas about increased granularity of modularity, yet we won’t be able to fully engineer the projections of future needs over the intended lifespan as we can with civil engineering.

    You guys have never seen airports, bridges and roads that no longer meet traffic needs? Even sometimes brand new ones? You guys don’t believe that other engineers apply “art” into the design of their artifacts or make good (or bad) design decisions?

    As with Michael, I think many folks, even practitioners, have an overly critical opinion of software and an overly rosy one of civil engineering. This bias is both amusing and maddening.

  341. However, KSLOC per day (gross productivity) would be prior to testing, debugging, and documentation. Which is a measure designed for fraud by a crook or con man.

    Nope, this measurement is taken afterwards, which is why the average KSLOC/day count for the industry is such a low number. The time measurement starts on day one and runs all the way to delivery.

    Long-term maintenance cost is typically not factored in because you don’t have that data yet.

    And I do not see where merit enters the equation. As Jessica already stated, if one programmer cleans the code of all other programmers and removes half the lines, who gets the merit and who the blame of the final SLOC?

    You don’t want to use any of these metrics as a measure of merit, because then you get really bad project results and very poor metrics collection. You use these metrics to determine overall team performance for estimating the next project, not to determine raises or promotions.

    A good manager and a good team are required to constructively determine merit or blame. If you have a high-performance team that is happy and stable, you don’t care who did what. You certainly don’t want to change the dynamics by removing someone who has poor numbers but is the catalyst for a great team.

    When you have a poor or average team then you want to figure out why the team is weak and change the dynamics. Technical skill of the individuals is only one of many factors.

  342. “In Ireland (to take another nearby example) there is exactly one stretch of motorway built to the standards of a U.S. limited-access highway. Much of the rest of their net is one-lane gravelled roads regularly interrupted by sheep crossings.”

    You don’t even have to leave North America to see this. I spent a month in Medicine Hat, Alberta one week, and to get there, I flew in and out of Calgary. I was expecting the Trans-Canada Highway to be like an Interstate out in rural areas. Nope. Once you get out of Calgary, it drops to a divided highway, like a 4-lane US highway: unlimited access, but generally designed for speeds of 70 MPH or so. I was really surprised to see a railroad grade crossing.

  343. @nigel
    > Yep. As I said, as a metric it’s the worst one except for all the others.

    Some metrics are useless, others are worse than useless. LOC is worse than useless because it pretends to convey information and it doesn’t.

    Programmer productivity is really the rate of delivering software features at the required quality level. LOC is an attempt to measure that by proxy, but it is terrible at doing so. It is like trying to judge the quality of a novel by the number of pages. Certainly, the number of pages is a reasonable measure of the amount of effort, but it tells you nothing about the delivery of the story.

    Would you measure the power of a car engine by the number of bolts used to hold it together, or the amount of metal in the block? Both are easy to measure, and both have a vague, handwavy, very rough correspondence to the actual thing you want to measure. But nobody would seriously use these as actual metrics of power. Nonetheless, what they both do is blur the difference between great engine designers and bad ones.

    Like I have said before, good programmers deliver functionality in less lines of code. Consequently, LOC is exactly the wrong measure for this discussion. The idea that someone seriously talks about 10-50 LOC per day being a serious measure of productivity bears no resemblance to the reality of software development. All it is is a readily obtained number that can be put on a spreadsheet. It gives a sense of control, it makes people who don’t know what they are talking about have some feeling of seeing progress and change, even though LOC doesn’t show that at all.

    I think a false sense of control is worse than a sense of no control. I have seen it many times where the LOC measure is going way up, because people are cranking out lots of bad code that doesn’t do what is really needed. (That bug tracking system? Super high LOC per day. That sort of work is fun and easy.)

    > Yep. Much better than getting pissed off about something as silly as software.

    Software isn’t silly. It is extraordinarily important. The fact that it is so difficult to write, and that the world is so full of really bad programmers is a serious problem.

    > LOL. Let’s get specific with an unrealistic scenario designed to elicit the outcome you desire?

    With the exception of Resig working for me (he is obviously a proxy for a REALLY good programmer) these aren’t unrealistic. They are the real decisions that are made in programming shops every day, about how to allocate resources. FWIW, most programming shops make the wrong decisions about this sort of stuff because they think the way you do, as if programmers are fungible resources, which they aren’t.

    I’m not going to pick apart your silly analysis. It is pretty simple. If you give enough monkeys enough typewriters and enough time they will eventually produce Hamlet. However, it seems more prudent to hire Shakespeare to do it in the first place. You get a lot less wasted paper and a lot less monkey poop.

    “If you give enough monkeys enough typewriters and enough time they will eventually produce Hamlet. However, it seems more prudent to hire Shakespeare to do it in the first place. You get a lot less wasted paper and a lot less monkey poop.”

    The problem is that you have to make sure you’re not hiring Tolkien, or worse (if that’s possible).

  345. @nigel:

    You guys have never seen airports, bridges and roads that no longer meet traffic needs?

    I didn’t write that constructed artifacts never fail to meet needs; rather, I wrote that designing for anticipated needs is plausible there because the targeted variables are reasonably well-defined and thus engineer-able, whereas the feature set of most software applications is open-ended and dynamic. I posit the way to attack this is to write more granular modules that thus have a wider range of reuse targets.

    You guys don’t believe that other engineers apply “art” into the design of their artifacts or make good (or bad) design decisions?

    If they are allowed the leeway of targeting unknowns not just in the magnitude of known variables (e.g. over-specifying strengths to allow for unexpected forces) but also in unknown variables, then to that extent it is no longer engineering. I doubt there is much of that in the true engineering disciplines.

    As with Michael, I think many folks, even practitioners, have an overly critical opinion of software and an overly rosy one of civil engineering.

    The issue is not the failure rate of a running software app versus a running artifact of civil engineering, but rather the failure of adaptability of software to an open-ended set of unknown future features.

  346. @Jessica Boxer:

    LOC is worse than useless because it pretends to convey information and it doesn’t.

    Programmer productivity is really the rate of delivering software features at the required quality level. LOC is an attempt to measure that by proxy, but it is terrible for doing so.

    I agree in the general context. However, I’m proposing to use LOC (actually function calls, as I posit that is a more uniform metric in a functional programming language) as the metric for deciding how to divide the royalties among module owners in my open-source module licensing repository model upthread. My assumption is that the consumer of the modules is only going to choose those that are best, thus the LOC do represent desired features. And I propose that anyone can fork a module and own the proportion of the decrease in the LOC of the forked module (the consumer will choose their preferred fork); thus module owners have an incentive to create the most compact code, and modules tend to get smaller rather than accrete features that could be separated into an orthogonal module. Thus I believe this converts LOC into a useful metric, while incentivizing desired behavior.
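
    A minimal sketch of the proposed split, under my assumptions that “size” is the function-call count of a module and that a forker earns exactly the fraction by which they shrank the original:

        def royalty_shares(module_sizes):
            """Split a royalty pool across module owners in proportion to size
            (here 'size' = function-call count, per the proposal above)."""
            total = sum(module_sizes.values())
            return {owner: size / total for owner, size in module_sizes.items()}

        def fork_split(original_size, forked_size):
            """When a fork shrinks a module, the forker owns the fraction removed."""
            reduction = max(original_size - forked_size, 0)
            share = reduction / original_size
            return {"forker": share, "original_owner": 1 - share}

        print(royalty_shares({"alice/parser": 400, "bob/render": 600}))
        print(fork_split(original_size=400, forked_size=300))  # forker earns 25%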

    …think the way you do, as if programmers are fungible resources, which they aren’t.

    +1

  347. @Jessica Boxer
    > Like I have said before, good programmers deliver functionality in less lines of code.

    I should add something to this. The one place where this is not true is in unit-test regression suites, which really today are absolutely a necessary part of a software delivery. Good programmers write A LOT of these; they are usually much longer than the actual program body itself. I’ll bet the GPSD test suite is much longer than the code itself.

    I might add that as an LOC measure, these tend to be super verbose, consume lots of LOC, and are mostly very easy to write (though exhaustingly pedantic). So that screws up the whole LOC measure even more.
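
    To illustrate: even a one-line function tends to accrue several times its own length in pedantic test code (a made-up example, not from GPSD):

        import unittest

        def clamp(x, lo, hi):
            """The one line of 'real' code under test."""
            return max(lo, min(x, hi))

        class TestClamp(unittest.TestCase):
            # Pedantic cases for one line of logic: the suite is already
            # several times longer than the code it covers.
            def test_below_range(self):
                self.assertEqual(clamp(-5, 0, 10), 0)

            def test_above_range(self):
                self.assertEqual(clamp(15, 0, 10), 10)

            def test_within_range(self):
                self.assertEqual(clamp(7, 0, 10), 7)

            def test_boundaries(self):
                self.assertEqual(clamp(0, 0, 10), 0)
                self.assertEqual(clamp(10, 0, 10), 10)

        if __name__ == "__main__":
            unittest.main()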

  348. “You attempt to disown what you already wrote, probably because you don’t like the conclusion that can be formed from the logic which you presented”

    No, your reading comprehension is horrible. Your sentence structure is horrible and often completely nonsensical, so at this point I’m assuming English isn’t your primary language.

    “Your reading comprehension is horrendous. Did you not see that I wrote the *share*, not the *nominal*? The dictionary can clarify for you the meaning of those words. Upthread I even noted that the nominal demand and supply of professional designers was likely to increase, so why are you repeating what I already said and failing to comprehend what I wrote?”

    Because you are misinterpreting what I am saying, putting words into my mouth, and what you are saying for yourself is wrong.

    “You claimed upthread you understand the shift underway in publishing.”

    Yes, I do. You, however, do not.

    “Based on your latter comments, I don’t think you understand that professional design is more suited to creative processes that change less frequently.”

    Nonsense. Professional designers do just as well with rapidly evolving media as they do with slowly evolving media.

    “We are moving beyond the era of the TV and static print media, where the reader was a dumb slave, to the self-publishing era where the reader is the creator process, e.g. all of us commenting here.”

    Nonsense. These are comments, nothing more. There is no design to your drivel.

    “Where is the professional design in our comments here?”

    There is no design here whatsoever. This is, again, a problem with OSS advocates. ESR throwing together a b/w glider emblem or using a stock WordPress template or scanning a b/w image and pasting it on a white background is the barest minimum of design, if design at all. Trying to compare this blog to, say, what Coca-Cola is going to employ designers for for the rest of their existence is nonsensical.

    “Please stop proclaiming “nonsense” and “you are wrong” without presenting any cogent argument to substantiate such accusations. It is not only impolite; it is also a disingenuous and useless paradigm of discourse.”

    I don’t find my comments lacking cogency whatsoever. It is your arguments that are devoid of cogency and, thus, the only appropriate response is to call it what it is.

    Come on, seriously: you think you OSS nerds on this blog replace, subvert, disrupt, or whatever nonsensical attribute you want to ascribe to it, the design needs of Coca-Cola or any other business not running on pocket change and lint?

  349. @Jessica
    “The idea that someone seriously talks about 10-50 LOC per day being a serious measure of productivity bears no resemblance to the reality of software development.”

    No, it is a measure of cost. If you want to reduce the cost of a piece of software, you must lower the SLOC. ;-)

    Obviously, if you hire a programmer that is brilliant enough to significantly lower the SLOC of your project, she will not be cheap.

    Btw, a really good programmer will make sure her code can be taken over by any mediocre programmer. If not, fire her.

  350. >For good reason; I used to live there and know. One of the many things naive Americans trip over when they travel abroad is how rapidly the quality of the road network degrades as population density decreases. We expect rural roads to average as good as ours and they don’t, not by a long shot.

    It wasn’t so much the rural roads that attracted my disapproval as the urban roads. Whether or not they were worse than ours, or than the roads in more urban areas, the rural roads were at least adequate. But I can think of no other words to describe the road network in London than “congestive collapse”. The bus from Berlin hit the London suburbs an hour and a half or two hours ahead of our scheduled arrival time, and poor, naive me, accustomed to US roads and a half hour drive from the suburbs to downtown in non-rush-hour conditions, thought “Wow. We’re going to be incredibly early”. We arrived pretty much exactly on schedule.

  351. @Jon Brase
    European cities were not built for cars. Only people who are desperate try to drive through the city centers of major and minor European cities. That is what public transport is for.

  352. @Winter
    > Only people who are desperate try to drive through the city centers

    Based on my experience of London traffic, there must be a LOT of desperate people there!

    Would you measure the power of a car engine by the number of bolts used to hold it together, or the amount of metal in the block? Both are easy to measure, and both have a vague, handwavy, very rough correspondence to the actual thing you want to measure. But nobody would seriously use these as actual metrics of power. Nonetheless, what they both do is blur the difference between great engine designers and bad ones.

    Your analogy is incorrect. LOC doesn’t measure the number of bolts or the amount of metal in an engine but the displacement of the engine. While an imperfect measure of final performance, it is a useful one.

    Personally, I believe that shaft horsepower is a somewhat better analogy for SLOC. How much of that power gets to the ground to do real work is dependent on many things.

    In any case, no one has yet provided an alternative metric they like better and how they use it in real life to do estimation.

    Like I have said before, good programmers deliver functionality in less lines of code. Consequently, LOC is exactly the wrong measure for this discussion. The idea that someone seriously talks about 10-50 LOC per day being a serious measure of productivity bears no resemblance to the reality of software development.

    /shrug

    I have provided several ways that LOC is useful for a software engineer. It is a poor smith that won’t use useful tools because they can be used incorrectly by others.

    All I see is a lot of handwaving when you’re the one that asserts that open source coders are more productive than corporate coders. What metric do you prefer?

    It’s easy to throw rocks.

    All it is is a readily obtained number that can be put on a spreadsheet. It gives a sense of control, it makes people who don’t know what they are talking about have some feeling of seeing progress and change, even though LOC doesn’t show that at all.

    I think a false sense of control is worse than a sense of no control. I have seen it many times where the LOC measure is going way up, because people are cranking out lots of bad code that doesn’t do what is really needed. (That bug tracking system? Super high LOC per day. That sort of work is fun and easy.)

    That’s not the way a good engineer should use the SLOC metric. It isn’t about control, it’s about understanding.

    What makes you think that you have some lock on experience? I’ve seen as much and my opinion differs. Yes, LOC as a metric has been horribly abused in the past but so have guns. You don’t blame the tool, you blame the user.


    The fact that it is so difficult to write, and that the world is so full of really bad programmers is a serious problem.

    Please, this is a fallacy. Software is amazingly successful. Every aspect of modern life is impacted by the millions and millions of lines of software written by developers all over the world. They weren’t written by the lone cowboy coder but by the thousands of developers (29 out of 30, evidently) you think very little of.

    Yes, there are a lot of bad developers out there but the vast majority of developers I believe are competent.


    > LOL. Let’s get specific with an unrealistic scenario designed to elicit the outcome you desire?

    With the exception of Resig working for me (he is obviously a proxy for a REALLY good programmer) these aren’t unrealistic.

    It is absolutely unrealistic because you provide the false dilemma of only choosing the very best or what you consider the very worst.

    Because apparently your belief is that the average developer is very very bad and only the few elite do all the real work.

    They are the real decisions that are made in programming shops every day, about how to allocate resources.

    Programming shops have to decide between top talent and offshore devs every day? That’s not “allocating resources”. Nothing in your scenario was about allocating resources; it was about whom to hire.

    FWIW, most programming shops make the wrong decisions about this sort of stuff because they think the way you do, as if programmers are fungible resources, which they aren’t.

    Where have I stated that? Never. It is my belief that if you hire the very best, everything else works out in the end. I’m much more Peopleware than process. Odd that no one seems to know who I’m talking about when I mention DeMarco.

    That said, hiring the very best individual is not a real world option for most companies because they aren’t Apple or Google or Facebook.

    And “most” programming shops do not hire developers as fungible resources in my experience. Aside from game companies anyway.

    I’m not going to pick apart your silly analysis.

    LOL… it is because you cannot pick apart my analysis. Aside from completely screwing up the budget, you and I both know that’s nothing like a real-world decision software houses need to make very often.

    My suspicion is that you’ve never actually met any developers from Mumbai or worked with any offshore teams. Some are very very sharp and much cheaper than we are which can be very scary. Some are very very bad and a waste of money but a good technical management team can typically mitigate that risk.

    But it doesn’t much matter. Our software industry is much more successful because we’re able to get usable power to the ground and achieve real performance in terms of usable product. Something India and China, despite having some really good developers, don’t do so much.

    As you say, there is little to fear if you are a good dev. I just think there are more of us than you do.

    It is pretty simple. If you give enough monkeys enough typewriters and enough time they will eventually produce Hamlet. However, it seems more prudent to hire Shakespeare to do it in the first place. You get a lot less wasted paper and a lot less monkey poop.

    So Indian developers are monkeys eh? Clueless little brown guys who can’t code like real men.

    I’ve seen good code and bad code come out of companies like WiPro. Have you? If not, keep your offensive little comparisons to yourself and think before you write.

  354. @Nigel
    > You analogy is incorrect.

    The analogy is entirely correct. The analogy is measuring one factor by a vaguely related factor. LOC is precisely that.

    > In any case, no one has yet provided an alternative metric they like better and how they use it in real life to do estimation.

    Did you consider the fact that there isn’t a simple numeric metric? Software projects are notoriously hard to estimate because there is no simple numeric metric for requirements. LOC offers no utility whatsoever, except as a false sense of comfort.

    > I have provided several ways that LOC is useful for a software engineer.

    Not that I am aware of.

    > All I see is a lot of handwaving when you’re the one that asserts that open source coders are more productive than corporate coders. What metric do you prefer?

    I have never made such a claim, not intrinsically anyway.

    > It’s easy to throw rocks.

    It is also easy to live in an empty delusion (such as the delusion that the number of newline characters in your code files somehow corresponds to amount of useful work done.)

    > Please, this is a fallacy. Software is amazingly successful.

    That is true, but it is also true that a tiny fraction of software ever written is ever actually used. Software is successful for the same reason I mentioned before: we deal with the 99.9% waste because the 0.1% is so valuable. Have you ever seen how they extract diamonds out of a mine? That is a hell of a lot of waste, but apparently the few glinting rocks more than offset the cost of the waste.

    > Yes, there are a lot of bad developers out there but the vast majority of developers I believe are competent.

    I think you are completely wrong about this. I’d say the vast majority of people who claim to be developers are terrible. I can only think you live in a bubble of excellence, or that your standards aren’t all that high.

    > It is absolutely unrealistic because you provide the false dilemma of only choosing the very best or what you consider the very worst.

    The purpose is to contrast using an expert with using lots of inexperts. That is exactly the choice people make all the time. It is why experts are grossly underpaid, and why inexperts are grossly overpaid.

    > My suspicion is that you’ve never actually met any developers from Mumbai or worked with any offshore teams.

    Ah, well there is where you would be wrong. I have worked with probably half a dozen Indian offshore firms, and a few Chinese firms, and I have never worked with an Indian or Chinese offshore developer who is “very sharp”. It is the nature of how the industry works there. The capable people don’t work for the body shops. I have worked with many very capable Indians and Chinese here in the USA, but the ones in the body shops are generally dreadful. FWIW, I have worked with offshore developers in Eastern Europe, and they are generally speaking excellent.

    > So Indian developers are monkeys eh? Clueless little brown guys who can’t code like real men.

    It is always a good strategy to attack the mode of argument rather than the substance of the argument. So good rhetorical ploy! “LOL” and “hilarious” are another excellent rhetorical strategy too, after all, derision is usually pretty effective, even if utterly insubstantial. Your conclusion that alluding to a common idiom is somehow racist should get you a show on MSNBC.

  355. @Jessica
    If you watch any subway station in central London, you will see even more people who are not desperate.

    The fastest mode of transport in central London is the bicycle. That was actually proven by Top Gear once. The same is true of other capitals in Europe (most certainly of Amsterdam).

  356. @Jessica

    Did you consider the fact that there isn’t a simple numeric metric? Software projects are notoriously hard to estimate because there is no simple numeric metric for requirements. LOC offers no utility whatsoever, except as a false sense of comfort.

    All engineering projects are difficult to estimate, yet we do so. Yes, there is a numeric metric for size, whether you want to count lines, functions, or features in a backlog for software, or cubic yards of steel and concrete for a nuclear plant.

    Given that the estimates (good ones anyway) come with error bars, there’s no false sense of comfort. Just a set of probabilities of outcomes.

    What you are implying is that you aren’t using any metrics at all or doing any estimation. If you do, what are they?

    > I have provided several ways that LOC is useful for a software engineer.

    Not that I am aware of.

    Defects per KSLOC is a useful metric. KSLOC per function point or feature is a useful metric.
    KSLOC per hour is a useful metric.

    These three are very useful in estimating the schedule and cost for the next job. You can skip KSLOC and just use FP or UCP if you like. There are tradeoffs in using KSLOC vs FP.

    Ah, well there is where you would be wrong. I have worked with probably half a dozen Indian offshore firms, and a few Chinese firms, and I have never worked with an Indian or Chinese offshore developer who is “very sharp”.

    I have personal experience that contradicts this. You can argue that we got lucky, or that WiPro offered us better initial staff than you were offered. It is hard to manage correctly and we did not, but that does not mean that there is no value in offshoring or that there is no talent over there.

    It is always a good strategy to attack the mode of argument rather than the substance of the argument.

    It’s your words and your poor choice of idiom. Given your comments here, I don’t think you see these Indian developers as much more than monkeys if you’ve never met even one person across several companies that you consider sharp.

    And no, I’m no longer laughing with you or even at you.

  357. @Nigel
    > All engineering projects are difficult to estimate, yet we do so. Yes, there is a numeric metric for size

    You are right, I misspoke; there is a simple numeric metric, it is just useless, in fact the opposite of useful. I could also measure the ratio of keywords to variable names pretty easily, but that would be useless too.

    > What you are implying is that you aren’t using any metrics at all or doing any estimation. If you do, what are they?

    SWAG based on past experience. In practice what that means is categorizing features into sizes, 1-5 (or 5+ which requires special attention), and assigning an approximate effort based on the size, factoring in the skill and knowledge base of the assignee. This is what happens in real software development shops. Sometimes there is a traceability matrix to improve future estimates; however, they rarely have much value. A sophisticated version of this would be something like Evidence Based Scheduling. I’m not convinced the feedback works better than SWAGs, but it is at least an attempt at realistic feedback.
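
    A minimal sketch of that size-bucket SWAG (the bucket-to-days mapping and the skill multiplier are illustrative assumptions, not a standard):

        # Hypothetical mapping from size category (1-5) to nominal person-days.
        EFFORT_DAYS = {1: 0.5, 2: 1, 3: 3, 4: 8, 5: 20}

        def swag_estimate(features, skill_factor=1.0):
            """features: list of (name, size_category) pairs. skill_factor < 1
            for a strong assignee, > 1 for someone new to the codebase."""
            total = 0.0
            for name, size in features:
                if size > 5:
                    print(f"{name}: 5+, needs special attention; estimate separately")
                    continue
                total += EFFORT_DAYS[size] * skill_factor
            return total

        print(swag_estimate([("login page", 2), ("report engine", 5),
                             ("data migration", 6)], skill_factor=1.2))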

    > Defects per KSLOC is a useful metric.

    No, it isn’t. Defects per week might be, or defects per “feature” or “user story”. All of these are soft, of course, and none of them are great, but defects per KSLOC is just something managers use to give themselves a false sense of control. If you really want to measure quality, you do it with code coverage of reproducible test sets, and perhaps some sort of complexity metrics. (Of course quality is much more than the technical correctness of code, but it is at least an automatically generatable metric.)
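
    For instance, a crude, automatically generatable complexity metric in the spirit of cyclomatic complexity; counting these particular branch nodes is a common approximation, not a standard:

        import ast

        BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                        ast.BoolOp, ast.IfExp)

        def complexity(source):
            """Approximate cyclomatic complexity per function: 1 + branch points."""
            tree = ast.parse(source)
            scores = {}
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef):
                    branches = sum(isinstance(n, BRANCH_NODES)
                                   for n in ast.walk(node))
                    scores[node.name] = 1 + branches
            return scores

        print(complexity("def f(x):\n    if x > 0:\n        return x\n    return -x\n"))
        # {'f': 2}

    (Real tools weight more node types; the point is only that the number is generated from the code itself, reproducibly.)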

    > KSLOC per function point or feature is a useful metric.

    I don’t think so.

    > KSLOC per hour is a useful metric.

    Seriously? Have you actually thought about any of the things I said about good developers minimizing the number of lines of code they write? KSLOC per hour as a measurement is a recipe for disaster; it has completely wrong incentives.

    > I have personal experience that contradicts this.

    OK.

    > It’s your words and your poor choice of idiom.

    Not really, it is your contemptuous tone and your desire to read the worst possible interpretation into other people’s words. Perhaps it is just me you are being an arrogant prick with, but I’d like to ask for a show of hands: anyone else here reading along think that the “million monkeys typing” thing had even a hint of racism in it?

    > Given your comments here I don’t think you see these Indian developers as much more than monkeys

    You don’t know me, so you can be forgiven for not knowing how ridiculous an accusation that is. Nonetheless, my opprobrium is reserved for sucky Indian programmers, just as it is equally applied to sucky American programmers (not a fan of sucky Belgian or Australian programmers either). Oh, just to add to your litany of my faults, I don’t care for sucky gay programmers, sucky female programmers, sucky old-people programmers or sucky transgendered programmers.

    It is the suckiness I dislike, not their national origin.

    Every day I deal with people who ask questions that take my breath away with their stupidity and ignorance. Just today, someone stopped by to tell me that the user had complained that his software was running too slow, and what should he do about it. “Make it run faster” I said. “Ah, OK,” he says. Like is that not really obvious?

    Another time a guy brings me a problem in unit tests where he has two properties: reading one did some internal cached calculations, and the other assumed that the cached calculations had already been done, meaning that the order in which you called the properties determined the results you got. He complained that the unit tests were unfairly discriminating against his code. I had to spend about forty-five minutes convincing him that his design was a bad thing. These are people making $80-$100 per hour. How can they get away with being so awful? It is because the standard is so amazingly low that even the barely mediocre count as rock stars.
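
    The anti-pattern described looks something like this (a reconstructed illustration, not the actual code in question):

        class Report:
            """Anti-pattern: reading one property populates a cache that the
            other property silently assumes exists, so access ORDER matters."""
            def __init__(self, values):
                self.values = values
                self._total = None

            @property
            def total(self):
                self._total = sum(self.values)  # side effect: fills the cache
                return self._total

            @property
            def average(self):
                # BUG: assumes .total was read first; fails otherwise
                return self._total / len(self.values)

        r = Report([1, 2, 3])
        print(r.total, r.average)  # works only by luck of ordering: 6 2.0
        r2 = Report([1, 2, 3])
        print(r2.average)          # TypeError: the cache was never populated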

  358. These are people making $80-$100 per hour.

    Shit, that’s more than I make! If I’d have known I could be paid this well to be stupid, I’d have paid off my college loans long ago.

  359. @Winter:
    >European cities were not build for cars. Only people who are desperate try to drive through the city centers of major and minor European cities. That is where public trsnsport is for.

    I’m not talking about the city center. I’m talking about the suburbs. And I’m picking on London specifically. From what I saw of the roads in downtown Berlin, things weren’t too much worse than they’d be in an American city, and when I left Germany, the bus got onto the Autobahn and out of Berlin quickly. When we arrived in London, the highway dumped us out onto surface streets in the suburbs, the surface streets weren’t laid out in any kind of reasonable grid, we were stopping regularly, and we were probably never doing more than 30 mph (50 km/h) when we were moving. As a result, what would have been a half hour drive in any city with a reasonable road infrastructure took an hour and a half to two hours.

  360. @Tim F.:
    You’ve presented no rebuttal logic, no data, and have simply continued arguing by shouting “you are wrong”. At this point, I can only assume you have insufficient IQ (or refuse to try) to comprehend the points I am making. Sorry. If my sentence structure and logic were incomprehensible, then surely someone else here would point out the logical flaws in what I have written.

    Professional designers do just as well with rapidly evolving media as they do with slowly evolving media.

    How can you fail to comprehend that if I need to publish my soccer club home page photos within the next 15 minutes, I don’t have time (nor the monetary incentive) to call up a professional designer?

    Come on, seriously: you think you OSS nerds on this blog replace, subvert, disrupt, or whatever nonsensical attribute you want to ascribe to it, the design needs of Coca-Cola or any other business not running on pocket change and lint?

    How can you fail to comprehend the point I have made numerous times upthread, that more and more publishing is being done by individuals in real-time, and thus the *SHARE* of the publishing market attributable to the corporations (such as Coca-Cola) is declining?

    If you want to argue that such a *SHARE* loss is not meaningful, then follow that line of argument, but for as long as you ignore that *SHARE* loss, then we can’t have a logical debate.

    There is no design here whatsoever. This is, again, a problem with OSS advocates. ESR throwing together a b/w glider emblem or using a stock WordPress template or scanning a b/w image and pasting it on a white background is the barest minimum of design, if design at all. Trying to compare this blog to, say, what Coca-Cola is going to employ designers for for the rest of their existence is nonsensical.

    Disingenuous (bordering on liar, or simply blind as a bat). There is some design here. Compare the look of this blog to the way it looked before a recent upgrade. Also, there is a layout here, and there are logos on the page.

    It is meaningful to compare different intensities of design effort, because it points to a cathedral-versus-bazaar model of the publishing future. You envision the world gradually all moving toward the cathedral model of top-down specification and planning for publishing. Whereas I can see the world moving away from that, toward a self-publishing paradigm in which, to remain relevant, professional designers will need to make their designs available before they are required, i.e. clipart and weak-AI models for mashing up clipart. In fact, I got some ideas from this blog for some software I could write to make a lot of money and disrupt Adobe (another idea to add to my TODO list).

  361. @Tim F.:

    Because you are misinterpreting what I am saying, putting words into my mouth, and what you are saying for yourself is wrong.

    You wrote that aesthetics is something people disagree about. Look up the definition of discordant. You wrote that aesthetics changes over time, even for the same person. Look up the definition of variable.

    I took your logic and asked why, if aesthetics is discordant and variable, professional design becomes so much more important than the functionality of the artifact. I concluded that perhaps advertising appeals to primordial instincts (addiction, sex, etc) to create a groupthink. So I am saying professional graphic designers can code up reusable templates and are otherwise becoming a much less relevant portion of the economic value in society as self-publishing must prioritize functionality over discordant and variable targets.

    I remember Google started with a textual home page. Do you claim that Google is irrelevant?

    @Winter:

    Btw, a really good programmer will make sure her code can be taken over by any mediocre programmer. If not, fire her.

    Although I agree with prioritizing the readability of code, and I agree with well-designed modular APIs that can be highly reused, I disagree that I must be limited to algorithms that can be understood by a mediocre programmer. The mediocre programmer can reuse my API but in most cases shouldn’t be touching my source code.

    @Nigel:

    In any case, no one has yet provided an alternative metric they like better and how they use it in real life to do estimation.

    Because the top-down managed style is not software engineering, but rather rigor mortis masquerading behind meaningless statistics.

    because we’re able to get usable power to the ground and achieve real performance in terms of usable product. Something India or China, despite having some really good developers, don’t so much.

    Because our culture is to work independently and get some work done, instead of over-managing ourselves with meetings, planning, requirements, specifications, liaisons with the art department, and loads of other productivity wasters that I saw the last time I worked for even a small closed-source development outfit. I can only imagine (shudder) the rigor mortis of large shops.

    Seriously? Have you actually thought about any of the things I said about good developers minimizing the number of lines of code they write? KSLOC per hour as a measurement is a recipe for disaster; it has completely wrong incentives.

    Yes, it’s covered by the KSLOC per Function Point/Use Case Point/Feature metric you failed to understand.

    If one team is able to implement features in less code than another team, that’s an indication either that one team is more efficient than the other or that your assessment of Function Points, Use Case Points, or Features is too variable to use for forecasting.

    You go on about how good programmers use less code to do the same things as bad programmers, but if you don’t track LOC vs. features, HOW DO YOU KNOW? Answer: you don’t. You’re just assuming that’s the case based on random observations of specific instances. You have no data to prove or disprove your hypothesis.

    Also, KSLOC/hour is not a control metric and is not used for incentives. It’s a metric used for forecasting, unless you happen to know that your FP, UCP, or feature estimation technique is repeatable across analysts and across projects.

    The problem is that I have found the variability of FP and UCP analysis to be quite high. An FP or UCP expert would disagree with me, but I suspect you don’t know what I’m talking about at all, because all you have done thus far is repeat the mantra that “SLOC is BAD because bad managers have used it the wrong way, therefore tracking LOC is useless”.

    Someone who understands the domain better than I do might scoff that I’m old-fashioned and should try weighted micro function points, which are both much better than LOC AND address variability (by, you know, basing the WMFPs on analysis of existing source code…).

    Finally, good programmers do not minimize the number of lines of code to do a task but rather trade off between maintainability/readability and performance. Writing 2 lines of dense, obscure code is not better than writing 20 lines of easy-to-read, maintainable code that the compiler will, most of the time, make fast enough anyway.

    SWAG based on past experience. In practice what that means is categorizing features into sizes, 1-5 (or 5+ which requires special attention), and assigning an approximate effort based on the size, factoring in the skill and knowledge base of the assignee. This is what happens in real software development shops.

    This is what happens in SOME software development shops, for projects of small size. It’s the way I prefer to roll, because these kinds of projects are more agile and do cooler work, BUT having been on more demanding projects I understand the value and necessity of rigor.

    Barry Boehm did not invent COCOMO one day because he was bored. Folks have not developed multiple parametric estimation models because “SWAG” works on large projects.

    If you do this for projects of large size, then it is no wonder you believe that software development sucks and that 99% of software written never gets used.

    EBS is not a sophisticated method of SWAGging but a lightweight and useful estimation technique for when you don’t want to pay the cost of a more formal estimation and metrics methodology. The historical feedback and the Monte Carlo simulation are key factors in its effectiveness, since they provide a confidence level for a range of completion dates.

    This is the same kind of feedback loop using historical data: you take the documented hours spent on a past project and divide by its SLOC count to determine rough team productivity and maturity with a specific domain and technology (C, C#, Java, whatever). You then pick your favorite method of determining effort, convert to SLOC, and feed the historical results into a Monte Carlo sim to get that same range of completion dates with a confidence level.
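
    A minimal sketch of that pipeline, assuming the historical record is a list of (hours, SLOC) pairs from past projects and the new job has already been sized in SLOC:

        import random

        def schedule_confidence(history, new_sloc, chunks=20,
                                trials=10000, percentile=0.9):
            """history: list of (hours, sloc) pairs. Resample the historical
            hours-per-SLOC rates (Monte Carlo) across work chunks to get a
            total-hours estimate at a given confidence level."""
            rates = [hours / sloc for hours, sloc in history]
            outcomes = []
            for _ in range(trials):
                # each chunk of the new job draws an independent historical rate
                hours = sum(random.choice(rates) * (new_sloc / chunks)
                            for _ in range(chunks))
                outcomes.append(hours)
            outcomes.sort()
            return outcomes[int(percentile * trials) - 1]

        past = [(4200, 21000), (6100, 24000), (3900, 26000)]  # hypothetical projects
        print(f"90% confident: <= {schedule_confidence(past, 30000):.0f} hours")

    (Drawing an independent rate per chunk is what spreads the distribution and makes the confidence level meaningful; a single team-wide draw per trial would collapse it to a handful of possible answers.)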

    Have you worked on a mature CMM software team (level 4+)? If not, how do you know the strengths and weaknesses of one? There are significant weaknesses, but there are also some significant strengths. It depends on the kind of project you do (if the work is repetitive, like successive avionics suites for similar aircraft, it’s often a good fit).

    Every day I deal with people who ask questions that take my breath away with their stupidity and ignorance. Just today, someone stopped by to tell me that the user had complained that his software was running too slow, and what should he do about it. “Make it run faster” I said. “Ah, OK,” he says. Like is that not really obvious?

    Gee, a helpful person might have offered their expertise with a profiler or something. “Make it run faster” was something he already knew.

  363. @Nigel:

    Finally, good programmers do not minimize the number of lines of code to do a task but rather trade off between maintainability/readability and performance.

    Any metric you attempt to measure is going to miss the art of programming. Period.

    Either a program works well, or else someone hopefully has an incentive to improve it. It is impossible to develop top-down metrics to tell you this (the non-dynamic, non-multivariate data will miss some aspects, overemphasize others, etc.). This is why open-source trounces your cathedral, because it anneals (cf. my link on simulated annealing upthread) to the optimum result in this dynamic, multivariate solution space known as software.
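
    To make the annealing analogy concrete, here is a minimal simulated-annealing sketch in Python (the objective function is an invented stand-in for “badness of a design,” not a claim about any real software metric):

        import math
        import random

        def objective(x: float) -> float:
            # Invented bumpy landscape standing in for "design badness".
            return math.sin(3 * x) + 0.1 * x * x

        x = random.uniform(-5.0, 5.0)   # start from a random design point
        temp = 5.0
        while temp > 1e-3:
            candidate = x + random.gauss(0.0, 0.5)   # random perturbation ("patch")
            delta = objective(candidate) - objective(x)
            # Always accept improvements; accept regressions with probability
            # exp(-delta/temp), so wild experiments die out as the system cools.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x = candidate
            temp *= 0.99   # cooling schedule

        print(f"annealed to x = {x:.3f}, objective = {objective(x):.3f}")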

  364. “You wrote that aesthetics is something people disagree about. Look up the definition of discordant. You wrote that aesthetics changes over time, even for the same person. Look up the definition of variable.”

    I posed several questions and views with the intent of saying that these are unanswered (though possibly answerable) philosophical questions.

    “I took your logic and asked why, if aesthetics is discordant and variable, professional design becomes so much more important than the functionality of the artifact.”

    And this is where you start to go wrong. This is not my logic.

    “I concluded that perhaps advertising appeals to primordial instincts (addiction, sex, etc.) to create a groupthink.”

    This is a crude and narrow conclusion, based only on evolutionary biology, which is often limited in its sphere (the natural world: human beauty, weather, landscapes) and doesn’t explain the bulk of aesthetic judgment.

    “So I am saying professional graphic designers can code up reusable templates and are otherwise becoming a much less relevant portion of the economic value in society as self-publishing must prioritize functionality over discordant and variable targets.”

    And this is a truly absurd conclusion. You jumped from A to Z with little reason. How do these “templates” achieve aesthetic value? Why does self-publishing need to prioritize functionality? Why do aesthetics become less relevant? Why do they get devalued?

  365. “This is why open-source trounces your cathedral, because…”

    Is it just me, or is it simply sad that even on ESR’s own blog there are folks who don’t seem to understand that bazaar and cathedral methods are orthogonal to closed and open source? That the original example was Cathedral development of Emacs and GCC vs. the Bazaar model of Fetchmail?

    While closed source is often cathedral-style, in some cases it is not. We have projects run internally in a bazaar fashion. Google is another example: the internal code base is visible to all Google devs, and some of those projects are run bazaar-style, with contributors using their 20% time on them. They are closed source and will remain so as long as they depend on the secret-sauce core Google libraries.

    And finally, closed Photoshop is trouncing open GIMP because of key advantages inherent in not depending on devs scratching a personal itch. When that itch doesn’t include something important, like higher bit depths, those features can take more than a freaking decade to reach release status. GEGL has been in development for 12 years (since 2000), is STILL only partially implemented in 2.6, and high bit depths are not part of the mainline 2.6 capabilities.

    In the case of GIMP, open source didn’t mitigate the communication corollary of Brooks’s Law (communication costs increase as the square of team size while work only increases linearly) but made it worse, because key functionality ended up in separate forks with separate code bases and separate dev teams that rarely, if ever, communicate. Which is why CinePaint has had 32-bit color depth since forever and GIMP does not. Very few commercial projects are quite this dysfunctional in this regard, even if they can be dysfunctional in other areas.

    1. >Is it just me, or is it simply sad that even on ESR’s own blog there are folks who don’t seem to understand that bazaar and cathedral methods are orthogonal to closed and open source? That the original example was Cathedral development of Emacs and GCC vs. the Bazaar model of Fetchmail?

      You’re only half-right. Yes, these axes are theoretically orthogonal, but in practice the closed-source/bazaar combination either does not occur at all or occurs only in a marginal and unsustainable fashion. (There is ambiguity and room for interpretation in the real-world evidence.) There are several reasons for this.

      1. Nobody – and I mean nobody – funding proprietary development can afford to hire the number of developers at which the many-eyeballs effect visibly dominates the O(n**2) Brooks’s Law effect. (A toy model of this is sketched after this list.)

      2. The conditions which could make the bazaar effect really, er, effective are suppressed in various important ways when everybody in the pool ultimately reports to the same manager. Too many political incentives to keep a schedule and not rock the boat.

      3. Corporations can’t allow the kind of viewpoint diversity that makes many-eyeballs really effective without sacrificing their Coasian advantage in lower internal negotiation costs. This contradiction is fundamental, not accidental.

      Because of these problems, attempts to create internal proprietary bazaars generally incur the costs of the method without delivering the benefits. And fail. Rapidly.
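
      To make point 1 concrete, here is a toy model in Python (every constant is invented for illustration, not a claim about real productivity figures): hired developers all join the pairwise coordination mesh, while bazaar “halo” contributors add eyeballs at near-zero communication cost.

          # Toy model: hires join the O(n**2) coordination mesh; bazaar halo
          # contributors (one-off bug reporters, drive-by patchers) add
          # eyeballs without adding to the core team's communication burden.
          # All constants are invented purely for illustration.

          def channels(n: int) -> int:
              """Pairwise communication channels among n tightly coupled developers."""
              return n * (n - 1) // 2

          def proprietary_value(hired: int, gain: float = 10.0, cost: float = 0.02) -> float:
              # Every hire adds linear eyeball benefit but also quadratic mesh cost.
              return gain * hired - cost * channels(hired)

          def bazaar_value(core: int, halo: int, gain: float = 10.0, cost: float = 0.02) -> float:
              # Only the small core pays the quadratic cost; the halo is pure gain.
              return gain * (core + halo) - cost * channels(core)

          print(proprietary_value(1000))   # 10.0: quadratic costs have eaten the benefit
          print(bazaar_value(5, 10_000))   # 100049.8: linear benefit, negligible mesh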

  366. @esr

    The number of participants in most open source projects on SourceForge or GitHub, in terms of active contributors (as opposed to the even more limited number of active committers), is often very small; such projects don’t have very many eyeballs despite being organized in a bazaar fashion.

    Many of our internal projects/libraries are kept on an internal GForge site with full read access for everyone. Users submit defects and patches to the committers, and more often than not, if you simply ask, you get commit privs. Code migration upstream and downstream often works as it might for an external open source project. There are probably a few hundred developers here, and we are dwarfed by companies like Google.

    Is it marginal? I would describe it more as modest. There aren’t that many internal libraries commonly used across projects.

    However, you don’t feel that Google has sufficient internal developer eyeballs to do bazaar effectively? If so, then what hope is there for an open source project with a couple of committers and a handful of developer contributors? My feeling is that even some project communities I consider vibrant are not all that big, certainly not in comparison to the number of Google dev eyeballs that are applied to their internal core codebase.

    How many developer eyeballs do you estimate GIMP to have?

    With respect to viewpoint diversity…in the FOSS world that sometimes ends in an acrimonious fork and a fragmented dev and user base.

    1. >If so, then what hope is there for an open source project with a couple of committers and a handful of developer contributors? My feeling is that even some project communities I consider vibrant are not all that big, certainly not in comparison to the number of Google dev eyeballs that are applied to their internal core codebase.

      Taking GPSD as representative of a smallish-to-midsize project, if you looked at our Ohloh stats you’d see 65 contributors, with the top 5 clearly dominating commit volume. But that probably underestimates the number of eyeballs on the code by an order of magnitude or so; many of the most valuable clues we get are from one-time bug reports coming through our patch tracker. So I think you’re seriously underestimating typical open-source developer pool sizes. Most people do, with some of the early Boston Consulting Group studies as a notable exception.

      The reason I describe GPSD as “smallish-to-midsize” is that I also know what large open-source projects look like. Take Battle for Wesnoth: 181 contributors on Ohloh, with eyeballs (judging by bugtracker volume) well up towards 10,000. Even Google would have trouble putting that much attention on a single project. It’s not so much that they can’t support an equivalent group of core devs (which smaller companies would find pretty much impossible) as that they probably can’t duplicate the halo size.

      >However, you don’t feel that Google has sufficient internal developer eyeballs to do bazaar effectively?

      For any individual project, probably not – though I concede that Google is a huge outlier and the most likely case to falsify my general claim. The problem is that their devs are spread across so many different projects that even with 20% of personal hacking time it’s hard to imagine them matching the many-eyeballs effect of 10K self-selected open-source devs in any one project.

      >How many developer eyeballs do you estimate GIMP to have?

      Ohloh says 496 contributors; that’s even bigger than Wesnoth. Probably well over 50K eyeballs and I wouldn’t be surprised if it cracked 100K.