The Smartphone Wars: Exit Steve Jobs

Steve Jobs resigned as CEO of Apple yesterday, handing the reins to designated successor Tim Cook. It could hardly happen at a more difficult juncture – for though Apple’s cash reserves and quarterly profits are eye-popping, the company faces serious challenges in the near future. Its strategic position rests on premises that are now in serious doubt, and it is on the wrong end of a textbook example of what Clayton Christensen has called “disruption from below”.

Foremost among Apple’s problems is Android. 68% of the company’s profits come from its smartphone business, and another 21% from the iPad, leaving only 11% from other sources. But Android now has #1 market share both in the U.S. and worldwide, and is growing share and customers at twice the rate Apple is.

Until very recently, the best guess was that Apple and Android were competing less against each other than for a gusher of dumbphone conversions so vast that both Apple and the Android army were production-limited. But I have been predicting since early 2011 that this would change in mid-3Q2011 – and the first signs of that change may be upon us now. WebOS is no more, Microsoft has arrested its slide, and after a tiny post-February bump Apple’s market share is flat again. There are several possible explanations for this, but a very likely one is that Android is now putting actual downward pressure on Apple’s market share.

Apple, and its fans, had promised the world that the moment Apple went multicarrier in February would be when it began to regain ground against the upstart Android. As I also predicted, for reasons fundamental to the one-vs.-many structure of Apple’s competition with the Android army, this has completely failed to occur; Android’s sales are still growing faster than the overall smartphone market, and Apple’s are not. Tim Cook and Apple’s board cannot possibly be stupid enough to find this un-worrying.

For all Apple’s bravado and marketing flair, it now finds itself running second in sales and market share and playing technological catch-up with Android handsets that have 4G capability, faster processors, and more features. Not until October at the earliest will Apple have a product that can answer where comparably priced Android phones are positioned now – and in those roughly two months the Android army won’t be standing still. The launch of the Nexus Prime could easily leave Apple as far behind on the technology curve as it is now, with no realistic prospect of recovering for many months more.

Apple’s position in tablets is also weakening. One recent study finds Android-tablet shipments have climbed to 20% of market volume. This is a huge change from three months ago when they were statistical noise. Because Apple reports units sold rather than shipped, that 20% has to be discounted by the return rate on Android tablets – but the return rate would have to be ridiculously high (enough to make front-page technology-press news) in order to drive actual Android share down to a figure that shouldn’t worry Apple.
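
A quick back-of-the-envelope sketch makes the point concrete (the return rates below are hypothetical; only the 20% shipped share comes from the study cited above):

    # How much would Android tablet returns have to discount a 20% shipped
    # share before it stopped worrying Apple? Assumes Apple's reported
    # (sold) units see negligible returns.
    shipped_share = 0.20  # Android share of tablet shipments

    def effective_share(shipped, return_rate):
        """Android share of tablets actually kept by buyers."""
        android = shipped * (1 - return_rate)
        others = 1 - shipped  # treated as all sold through
        return android / (android + others)

    for r in (0.10, 0.25, 0.50):
        print(f"return rate {r:.0%}: effective Android share "
              f"{effective_share(shipped_share, r):.1%}")
    # Even a 50% return rate only cuts Android to ~11% of the installed
    # base -- returns high enough to make the study's figure harmless
    # would indeed be front-page news.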

Indeed, the tablet market looks right now quite a bit like the smartphone market did in early 2010, with the upcoming release of Android Ice Cream Sandwich (4.0) ready to supercharge Android tablet sales in much the same way 2.x did for Android smartphones then. Any Apple executive who isn’t nervous about this possibility is asleep and not earning his salary.

The feeding frenzy surrounding the HP TouchPad is another cause for worry. Nobody wanted them at HP’s SRP (which was, basically, pegged to Apple’s). But when the product was canned and the price dropped to $150, stores couldn’t find enough to meet demand even given the unsupported software stack. This tells us something important: the first Android tablet with hardware comparable to the TouchPad and a supported software stack that goes below $150 is going to meet even more frenzied demand. If Apple doesn’t get to that price-performance point sooner than Android, it’s going to bleed tablet market share like someone slashed an artery.

The second of Apple’s major problems is that by opening patent warfare on Android handset makers it may have started a legal battle it can’t win. It is, in effect, claiming to own the critical design elements of modern smartphones. But Google, after having acquired Motorola’s patent portfolio, may well be in a position to reply that it owns critical design elements of all cellphones, including Apple’s.

In retrospect, Apple may have sown dragon’s teeth when it sued to have sales of Samsung tablets in Germany blocked. Apple is now going to have much more trouble attacking Google for overreach if Google files for TROs on Apple’s entire smartphone and tablet line based on a Motorola blocking patent. This is no longer an implausible scenario – and even if it does not actually happen, the threat must constrain Apple’s behavior. At this point, the best outcome Apple can plausibly hope for is a patent truce with Google that takes IP threats off the table.

The third problem Apple now has is Jobs’s successor. Tim Cook is, by all accounts, a superb operations guy and has been a perfect complement to Steve Jobs’s vision-centered style of leadership. But there is no sign in Cook’s prior performance of his predecessor’s flair. He lacks Jobs’s hyperkinetic charisma, his ability to will an entire product category into existence and instantly persuade everyone it’s the next big thing.

Tim Cook’s style is very different; where Jobs obsessed about design and coolness, Cook’s history is of obsession with efficiency and execution – one cannot escape the sense that he is more interested in supply-chain management than in how the product looks and feels. And while Cook’s focus could be a valuable trait in a stable business environment, what Apple now faces is anything but that.

I’ve said before that I think Apple looks just like sustaining incumbents often do just before they undergo catastrophic disruption from below and their market share falls off a cliff. Google’s entire game plan has been aimed squarely at producing disruption from below, and with market share at 40% or above and Android’s brand looking extremely strong, it is undeniable that Google has executed on that plan very well. The near-term threat of an Apple market-share collapse to the 10% range or even lower is, in my judgment, quite significant – and comScore’s latest figures whisper that we may have reached a tipping point this month.

For Apple, the history of technology disruptions from below tells us that there is only one recovery path from this situation. Before the Android army cannibalizes Apple’s business, Apple must cannibalize its own business with a low-cost iPhone that can get down in the muck and compete with cheap Android phones on price. Likewise in tablets, though Apple might have six months’ more grace there.

Of course, this choice would mean that Apple has to take a massive hit to its margins. Which is the perennial problem in heading off a disruption from below before it happens; it is brutally difficult to convince your investors and your own executives that the record quarterlies won’t just keep coming, especially when your own marketing has been so persuasive about the specialness of the company and its leading position in the industry. This is a failure mode that, as Clayton Christensen has documented, routinely crashes large and well-run companies at the apparent peak of their success.

Does Tim Cook have the vision and the will to make this difficult transition happen? Nobody knows. But the odds are against it.

UPDATE: I originally set the threshold for making a killing on an Ice Cream tablet at $99, but it has been pointed out to me that the $150 TouchPads with more flash sold out just as fast. And also that the single most important discriminator in “good enough” is probably a decent capacitive (as opposed to resistive) touchscreen. The difference is significant because there is nearly enough room in $150 to cover the bill of materials on such a device now; by 4.0’s release date, it should be possible to make a profit at that price point.

Comments

  1. Eric, there’s been a long string of $99 off-brand tablets, featuring archaic versions of Android or proprietary OSes and lacking in ease-of-use, battery life, functionality and developer support. And people have bought them (thankfully NOT as gifts for me). So the fact that people will gladly pay $99 for a tablet that’s slightly less lacking in some of those categories isn’t shocking, and doesn’t really change anything much, as far as I can see. :)

  2. Google might not be planning to strike with a Motorola patent. While the conventional wisdom says Google’s IBM patent buy was about Oracle, at least one person thinks it was actually targeted at Apple (see: http://www.patentlyo.com/patent/2011/08/guest-post-google-is-packing-heat-with-sights-on-apple-1.html ).

    Given how well re-examinations seem to be going in the Oracle case, he might be right. And one benefit of using IBM patents is that Google owns them today and can assert them now, instead of waiting for the Motorola purchase to close. Motorola’s patents might just be the next pile of ammunition.

  3. Tim Cook’s style is very different; where Jobs obsessed about design and coolness, Cook’s history is of obsession with efficiency and execution – one cannot escape the sense that he is more interested in supply-chain management than in how the product looks and feels.

    What about Jonathan Ive?

    1. >What about Jonathan Ive?

      What about him? You tell me. The guy may be a design guru but he doesn’t run the company.

      Part of Apple’s strength under Jobs was that the top design obsessive didn’t have to get permission from the alpha wolf because he was the alpha wolf. Now that’s changed. It’s hard to believe it won’t make any difference.

  4. @Dan:

    I think the TouchPad shows that the game changes once the $99 tablet gets “good enough” on the dimensions you mention, just as you’d expect for a “disruption from below”. We’ve already seen the $80 Android phone. When we see a $99 tablet that’s a full-fledged member of the Android ecosystem, the game will have changed again.

  5. The biggest problem I see for Apple is that the Reality Distortion Field might wear off. Seriously, an iPhone contains nothing new or novel at all. Every bit and piece has been done before and in many cases much better. I remember being pretty dumbfounded when they touted the iPhone as the second coming of Christ, when my OpenMoko had been doing things the iPhone could not for years, and much better.

    Lowering the price of an iPhone also goes against one of its biggest selling points: it’s a status symbol for many. As soon as it becomes an also-ran, the reason to buy one disappears. At the same time, you won’t buy one if you can get a much better phone for less than half the price.

    As for the lawsuits, I don’t know what to say; they just reek of desperation and foul play. Apple must have realized they are toast if they can’t stop the Android avalanche. The problem is that if they succeed in stopping Android, there are MeeGo and Bada waiting around the corner for a chance at the market. MeeGo is in my mind much better than Android and would be a much worse competitor for Apple than Android can ever be.

  6. “It could hardly happen at a more difficult juncture”.
    You can say this about almost any moment in Apple history. Apple has a history of falling forward, always changing its niche. It was computers, multimedia, music and movies, mobile devices, pads… Apple’s strength is in its almost absurd ability to adapt. To me it has always looked like a football player attacking and passing opponents. You watch in awe, but you know he will fall or be stopped somewhere in the end.

    1. >Apple has a history of falling forward, always changing its niche.

      This is very true, and ties into all the major reasons the loss of Steve Jobs is a big deal. That adaptability relied on Jobs’s unduplicable mix of vision, charisma, and his track record as the guy who pulled rabbits out of hats. It’s very difficult to see how Apple can be that nimble in the future without a personality as dominant as his at the helm.

  7. I’ve played around with both the Touchpad and a few Android tablets, including some in the absolute high end, and I’ve found the Touchpad to offer a much nicer experience all-round. Apart from the case design, which is a bit too plastic-y, and the dearth of quality apps, I’d have to say that it’s my favourite tablet. $99 is an absolute steal, if you can find it (which you can’t, at this point).

  8. For what it’s worth, I think ESR is right about the $99 price point. When the iPhone first appeared in 2007, I stuck with my Palm PDA and dumbphone, because I wasn’t willing to pay $600 for a phone no matter how awesome it was. But in 2010, when AT&T was offering a refurb iPhone 3G for $49 with contract renewal, I said “Why not?” And this year, I was able to upgrade to a 3GS for $19.

    For now, I’m an iPhone user, but if a sub-$99 Android phone comes along that will do what I need, I’ll switch in a heartbeat. Price and features matter a lot more to me than brand.

  9. Apple will eventually stumble, but probably not this year. Actually, the “spaceship” seems like a more ominous sign than anything going on in Android-land. There’s a longstanding business rule of thumb that any time a company builds a grand new headquarters building, it’s a good time to short the stock. The reason is that when the company is doing lots of important stuff, management is *too busy* to design a signature building and fight with the bureaucracy to get it built. Any time and effort that Apple’s best design guys, production guys, and project management staff spent hashing out details of the spaceship is time they *weren’t* spending “inventing the future”. Companies do that sort of thing when they have lots of money but no good ideas for how to invest it or when senior management wants to produce “a legacy”.

  10. @esr: Why “exeunt”?

    Beat me to it. Unless you’re claiming that Jobs has multiple personalities, you want “exit”. “Exeunt” is plural.

    1. >Beat me to it. Unless you’re claiming that Jobs has multiple personalities, you want “exit”. “Exeunt” is plural.

      You’re right. My error.

  11. We haven’t seen any evidence so far that Android can eat any of Apple’s market share. All we’ve seen so far is that Android can beat Apple in taking away Nokia, RIM and Microsoft’s market share. One doesn’t follow from the other.

    1. >We haven’t seen any evidence so far that Android can eat any of Apple’s market share. All we’ve seen so far is that Android can beat Apple in taking away Nokia, RIM and Microsoft’s market share. One doesn’t follow from the other.

      Until about a fortnight ago I would have agreed with you. Now I think we’re starting to see evidence – weak so far, but I doubt you’ll be able to honestly repeat the above 90 days from now.

  12. I think the key to the TouchPad fire sale is not that these are $99 tablets (crappy $99 tablets are readily available) – it’s that they’re $99 tablets with good hardware. Specifically, capacitive screens. This is anecdotal, but my coworkers were wild for the TouchPad fire sale specifically because the screen was not terrible.

    $99 tablet with web browser and capacitive screen = win.

    1. >$99 tablet with web browser and capacitive screen = win

      Yes. Good catch there; at the current state of the market I agree this is probably the biggest single factor in “good enough”.

      This means we can refine my prediction. Whoever first ships a $99 device with Android 4.0 and a capacitive screen is going to make a major killing.

      Google now says 4.0 will ship in real devices around Thanksgiving (end of November for you foreigners); the clear intent is that we’ll get Ice Cream for Christmas. I don’t think we’ll get the $99 price right away, but on recent trends I think we can expect it in 1Q2012.

  13. Why do you think Apple’s leadership is a key question? Your whole thesis since the beginning has been that an open platform (aka Android) will ultimately crush a closed, walled garden like Apple’s iOS. If you stand behind that, what kind of “leadership” are you imagining that can prevent Apple from succumbing to this? Apple’s software has always been closed — can you really imagine that changing?

    1. >Why do you think Apple’s leadership is a key question? Your whole thesis since the beginning has been that an open platform (aka Android) will ultimately crush a closed, walled garden like Apple’s iOS.

      I still think that’s true. The question is what Apple can do to maintain something like its historical niche share of the PC market, at least for a few years while it finds a new product category to play in.

  14. Perhaps “exeunt” because, by implication, it is the company that exited the stage.

    I am saddened to read that Jobs was back in hospital on June 29, and that he won’t be around to contribute and observe the ultimate outcome of the software age and open source. Contrasted with that, I want to eliminate his captive market for collecting 30% rents on software development and dictating the morality of apps. The competitive OPEN (not captive) balance is for investment capital to extract about 10% of the increase in capital.

    Of course Apple will lose market share in the not-too-distant future, as will every capital order that exists due to more than 10% capture of the increase resulting from its investment. Btw, this is why a gold standard can never work: society can’t progress when new innovation and labor (i.e. capital) is enslaved to pre-existing capital (prior innovation and labor).

    In my mind, the more interesting question is what happens to Google’s ad-renting captive market (I suspect taking way more than 10% of the increase in many cases) when the revolution in information-targeting fitness of OPEN (i.e. not captive) social media produces the same explosion into entropy that Android inflicted on Apple. The waterfall exponential shift won’t take more than 2 years once it begins. I suppose Google will be forced to lower their take (as Android will force Apple to), thus exponentially increasing the size of the ad market is critical. So the motivation for Android is clear, but ironically it may accelerate Google’s transformation from an ad company to a software company. But as a software company, I expect Google will be much more valuable as a collection of individuals or smaller groups, thus there will be an exodus of talent. I don’t yet know how many years forward I am looking, probably not even a decade.

  15. Apple’s business model has a problem in that it just hopes the other sides will lie down and die. In the 1980s it just assumed Microsoft would fall away and the army of hardware makers would be happy making radios and TVs. Now they just assume major phone makers will sit back and die as they produce underpowered, overpriced tablets and phones. Just as Windows 95 ultimately drove Apple to the brink of collapse, Android has the potential to take out Apple as the market is flooded with roughly $100 tablets and phones running a better OS.

    Apple lovers seem to fail to understand that firms won’t just die because Apple needs them to. Google, Microsoft, Samsung, HTC and others are going to put up a fight to survive and make profit.

  16. I think that gets the prize for biggest internet strawman of all time.

    While I don’t think that gets the prize… it certainly ranks well in the category of “most obscure non-reference to another post in an internet thread”.

  17. Ignoring all of the Apple-watching going on in this thread, I would like to point out that the HP firesale treatment of TouchPads does not prove that the breaking point on tablet sales is $100. Instead, it shows that it is not below $100. It could be higher than that, in fact — I would say perhaps as high as $150 — given that the lower end, similar Android tablets run in the $200-$250 range.

  18. For Apple, the history of technology disruptions from below tells us that there is only one recovery path from this situation. […] Apple must cannibalize its own business with a low-cost iPhone…
    Cheaper iPhone 4 rumored – GizMag
    I know these are just rumors at this point, but…

  19. >What about Jonathan Ive?

    What about him? You tell me. The guy may be a design guru but he doesn’t run the company.

    Part of Apple’s strength under Jobs was that the top design obsessive didn’t have to get permission from the alpha wolf because he was the alpha wolf. Now that’s changed. It’s hard to believe it won’t make any difference.

    Maybe if Ive and Cook are more understanding of each other..?

    Instead of Ive going after Cook’s throat at each “design sacrilege” or some such… Cook actually sometimes suggests some design ideas, and Ive is actually willing to at least consider some of them..? Maybe they’ve, like, I don’t know, established some sort of corporate culture or something..?

    Lots of speculative hand-waving here yes; sorry ^-^”

  20. >This tells us something important: it tells us that the first Android tablet with hardware comparable to the TouchPad and a supported software stack that goes below $99 is going to meet even more frenzied demand.
    Or it could mean that a lot of smart speculators saw an opportunity to buy them up and sell them at twice the price after the supplies ran out. OTOH, I would have bought one of them for the £89 they were asking here, had the Dixons website not run out of stock in precisely 29 minutes.

  21. Alex K. Says: “I would like to point out that the HP firesale treatment of TouchPads does not prove that the breaking point on tablet sales is $100. Instead, it shows that it is not below $100. It could be higher than that, in fact — I would say perhaps as high as $150 — given that the lower end, similar Android tablets run in the $200-$250 range.”

    Good point. The other thing is that this whole “magic $99” idea seems to assume that a veritable stampede to the new device is required. But that’s not necessary. All that is necessary is a solid competitor to the iPad at a price that will attract away a significant market share. I suspect the magic number is more like $199 or even $249.

  22. Even if Jobs had the next 6 new things in progress and had pre-recorded videos, they might have a chance. If anyone else had come out with the iPad (no camera, not even a screen for native 720p – still! – takes strange USB to charge and then takes several hours; I could go on…), they would have been laughed at or ignored. They hit home runs, but the lead fades, and is fading more rapidly. Walks and singles add up.

    At best they will be like Mercedes – a profitable premium brand with a small market share.

    They are limiting their ecosystem advantage with the hardware (and/or software) lock-down. I thought about it, and the only advantage to having “one” device, or a small number, is things like cases, mounts, screen protectors and such all fit.

    Of course Google/Android could specify some compliance suggestions – say, a standard 10-inch tablet mechanical format, or a small number of formats. Google has already leapfrogged iOS with their Arduino-based hardware platform. If my phone/player/tablet is a capable display, then why not use that interface instead of building second or third ones? What if automobile manufacturers integrate Android support?…

  23. At best they will be like Mercedes – a profitable premium brand with a small market share.

    The flat comScore graph trend does somewhat suggest that…

  24. Apple must cannibalize its own business with a low-cost iPhone that can get down in the muck and compete with cheap Android phones on price.

    You mean something like this?

    Others in this thread have already noted the rumors that Apple will re-position the iPhone 4 into that price point after the release of the iPhone 5. Perhaps Apple has already come around to your point of view.

    1. >Others in this thread have already noted the rumors that Apple will re-position the iPhone 4 into that price point after the release of the iPhone 5.

      Not strong enough. The iPhone5 would have to be able to price-compete with low-end Androids too, otherwise it would become irrelevant post-disruption.

  25. Its strategic position rests on premises that are now in serious doubt, and it is on the wrong end of a serious example of what Clayton Christensen has called “disruption from below”.

    Or perhaps you’ve consistently misread Apple’s strategic position. Hint: it’s not based on majority market share. Apple has never had majority market share. Apple doesn’t need majority market share to continue to be highly successful, although I expect Apple to maintain fairly high share in tablets. The tablet market doesn’t look anything like the smartphone market in 2010 because there are no weak incumbent players of note to steal share from. As many analysts have stated, currently there’s an iPad market, not a tablet market. By 2012 that might change a little.

    The near-term threat of an Apple market-share collapse to the 10% range or even lower is, in my judgment, quite significant

    You’ve been predicting that for a while now.

    There are days when I feel that the irrational desire to see Apple fail in smartphones and tablets stems from the fact that Apple’s proprietary-source model for OS X completely eclipsed Linux on the desktop, and Android is viewed as comeuppance for Apple’s success with closed, cathedral-model source.

    What is sad is that the Android model isn’t that much more open than iOS. The key pieces of the Google ecosystem (mail, maps, search, etc.) that make Android attractive as a platform are closed.

    Apple has been rumored to have a cheaper iPhone for a while now. It strikes me as unlikely that they’ll have much more than the 8GB iPhone 4 this year, and an iPhone nano leads to platform fragmentation.

    If you want to see Apple collapse you’ll have to wait 3 years to see if Apple loses steam in a post-Jobs era. And his stepping down is most likely part of a planned transition and not that he has 6 months to live or something morbid. As someone else stated, this is the BEST time for him to step down: to allow Cook to be the CEO during the launch of the iPhone 5 and iPad 3 and for the next big thing they have planned, to show that a post-Jobs Apple is as innovative and design-centered as when he was CEO.

  26. Not strong enough. The iPhone5 would have to be able to price-compete with low-end Androids too, otherwise it would become irrelevant post-disruption.

    That is an insane assertion even if you accept the premise that a disruption is inevitable. Apple can have a high-tier iPhone 5 and a low-tier iPhone 4 derivative just like Samsung, HTC, etc. have high-end and low-end phones with different capabilities. If the $188 cost to build the iPhone 4 can be driven to a point where a $299 no-contract price can be achieved, that’s low enough.

    I suspect it’ll still range above $400 this go-around though. The retina display is the cost driver unless they drop the iPhone 4 to 3GS resolution. Apps would still work at the 320×480 resolution and keep further fragmentation at bay.

    1. >Apple can have a high-tier iPhone 5 and a low-tier iPhone 4 derivative just like Samsung, HTC, etc. have high-end and low-end phones with different capabilities.

      Samsung/HTC’s high-end phones are going to be forced to lower prices as well. As with every other maturing technology subject to Moore’s Law, the price band for mass-market products will be narrower as well as having a lower bottom.

      One of your (many) errors is believing that the normal consequences of market competition don’t apply here.

  27. Nigel,

    And his stepping down is most likely part of a planned transition and not that he has 6 months to live or something morbid.

    Let’s be real here. The guy looks sick as hell — even more emaciated than a couple months ago when everyone was like “gosh, look how skeletal Steve looks — what’s going to happen to Apple when he’s gone?” His illness had something to do with this. Even if he had access to a secret cure that will make him fit as a fiddle, his retirement now sets up the perfect opportunity to “come again in glory” once his treatments are over with.

    But, realistically, my guess is he hasn’t got long, and is checking out of the Apple day-to-day to ensure a smooth transition for Apple and to live his last however-long-he-has in peace out of the public eye.

  28. I think a major point is being missed here. Apple has for a while been quietly building itself as a content company. The iPhone exists to serve up Apple content – music, movies and apps.

    The tablet extends this, and so does/will Apple TV. They may be willing to sell cheap (hardware) at some point, if it means more sell-through of their content. I see content to Apple as similar to ads to Google.

  29. Not strong enough. The iPhone5 would have to be able to price-compete with low-end Androids too, otherwise it would become irrelevant post-disruption.

    That’s true as far as it goes — a low-end disruptive technology makes top-of-the-line models irrelevant to all but a small market segment. However, it seems to me that what Apple needs to survive the disruption is to always offer some model smartphone (not necessarily its latest model) that is both price- and feature-competitive with the low-end Androids.

  30. Please, don’t bury Steve before his time. His illness is more or less directly related to Apple’s success. As in: the same processes which made Apple so successful are now threatening to kill Steve. And if he manages to actually retire – he may live for a long time yet.

    It’s all related to OODA cycles and the fact that Steve is a control freak. If you compare Apple’s OODA cycle with the OODA cycles of its competitors you’ll see that the difference is striking: 1-1.5 years long vs 3-4 months long. Apple’s competitors should be killing Apple in a hurry! Yet it does not happen. Why? Well, the speed of the OODA cycle is important, but the quality of the individual steps is important too. If the short cycles of HTC and/or Samsung cannot kill Apple, it means that the quality of decisions is much higher at Apple. And the changes in direction are more abrupt (because there are fewer of them, yet they achieve the same end result). This is only possible if you exert extreme pressure on your underlings – basically, if you are a control freak (this is a necessary condition, not a sufficient one). And such changes are EXTREMELY nerve-wracking. Few people can survive in such a condition for long. Yet Steve maintained it for years! No wonder his health suffered – the good question is “how is he still alive?”.

    And now for the trouble with Apple: the whole company was built around Steve’s ability to observe and orient the whole company waaay in advance of its competitors – without that, the whole house of cards collapses. The big question here: are there people in Apple who have enough foresight and enough balls to continue making such bold moves – and do they have enough foresight to move in the right direction? We’ll see in the next year or two, but I sincerely doubt it.

    P.S. There is another alternative: Steve may decide to continue making the large decisions and cede only day-to-day operations to Tim. This WILL BE suicide, but in this case Apple will stay “on top” for a few more years. I sincerely hope Steve will not choose this option, but then, I’m not Steve…

  31. @esr

    “I’ve said before that I think Apple looks just like sustaining incumbents often do just before they undergo catastrophic disruption from below and their market share falls off a cliff.”
    Doubling down again, eh? How many quarters of being wrong about Apple’s market share tanking before you would admit you were/are wrong?

    “playing technological catch-up with Android handsets that have 4G capability, faster processors, and more features.”
    I’ve used a lot of the new Android phones. The only wins I really see are notifications and 4G. I really like the iOS 5 notifications, so that’s being addressed soon. 4G is really fast on Verizon. *BUT* the battery life is terrible. A dead Android phone is infinitely slower than an iPhone that still has some juice left in it.

    As far as the other stuff goes, from a consumer perspective specs don’t matter. Why should they? The iPhone 4 feels fast and smooth – certainly more than any Android phone I’ve ever used, despite the processor speeds. And it’s still prettier and better designed than every single Android phone yet created. It doesn’t feel/act old. Mobile devices are more like consumer electronics in this regard than PCs. Do people pay attention to RAM and processor specs on their TV or Blu-ray player? I would contend they matter *some*, but not much. 4G matters, storage matters.

  32. @esr
    “This tells us something important: it tells us that the first Android tablet with hardware comparable to the TouchPad and a supported software stack that goes below $99 is going to meet even more frenzied demand. If Apple doesn’t get to that price-performance point sooner than Android, it’s going to bleed tablet market share like someone slashed an artery.”

    “This means we can refine my prediction. Whoever first ships a $99 device with Android 4.0 and a capacitative screen is going to make a major killing.”

    Have you seen the BOM for an iPad 2?
    http://www.isuppli.com/PublishingImages/Press%20Releases/2011-03-12_iPad2_BOM.png

    The display alone costs $127

    And the BOM costs went UP for iPad 2 over iPad 1. I think you overestimate how quickly the cost of the components will drop.
    The iPad is actually priced quite low. That’s what you keep missing, I think.

  33. That’s true as far as it goes — a low-end disruptive technology makes top-of-the-line models irrelevant to all but a small market segment. However, it seems to me that what Apple needs to survive the disruption is to always offer some model smartphone (not necessarily its latest model) that is both price- and feature-competitive with the low-end Androids.

    I think Apple is already there. The iPhone 3GS is heavily discounted. If the process continues, the iPhone 4 will eventually take its place. Apple won’t have to spend precious development attention on cut-rate products.

    Actually, the “spaceship” seems like a more ominous sign than anything going on in Android-land. There’s a longstanding business rule of thumb that any time a company builds a grand new headquarters building, it’s a good time to short the stock.

    It’s ominous to me, too.

  34. @Dominic Amann

    “I think a major point is being missed here. Apple has for a while been quietly building itself as a content company. The iPhone exists to serve up apple content – music, movies and apps.”

    This is way, way off.

    from http://seekingalpha.com/article/280344-apple-management-discusses-q3-2011-results-earnings-call-transcript
    “The iTunes store generated strong results with revenue of almost $1.4 billion. iTunes revenue was up 36% year-over-year, thanks primarily to continued strong sales of music, video and apps. With more than 225 million accounts, iTunes is the #1 music retailer in the world and customers have downloaded more than 15 billion songs today.”

    $1.4 billion is nice. But that’s just revenue. Pales in comparison to iOS device sales.

  35. >Ignoring all of the Apple-watching going on in this thread, I would like to point out that the HP firesale treatment of TouchPads does
    >not prove that the breaking point on tablet sales is $100. Instead, it shows that it is not below $100. It could be higher than that,
    >in fact — I would say perhaps as high as $150 — given that the lower end, similar Android tablets run in the $200-$250 range.

    I agree. I would even offer that the fact that $99 for decent hardware set off such a frenzy shows that people already know the breaking point really is higher – they instinctively knew that $99 was a *steal* only made possible by HP’s incompetence.

    Look at ebay right now, where the scalpers (who were able to get in early and buy in bulk) are offloading at a substantial profit. People are consistently bidding scalped TouchPads up over $200.

  36. But we all know that Android’s real purpose is to allow Google to target more ads to the mobile platform. Forgive me for not bursting into flames in excitement over that fact (whether it is open source or not (and how open is Android anyway)). Whether this is the great success story of open source software is up to you to decide.

  37. One thought occurred to me recently.

    The smartphone market is not like the PC market, in that attracting third-party support is not decisive.

    There is very little add-on hardware for smartphones (none, for the vast majority of users).

    Network compatibility is pretty much guaranteed – everyone builds to non-proprietary standards.

    The only important third-party component is app software – and as long as there is plenty of it, a minority vendor remains viable. If the market is large enough, there will be enough developers for it. The Mac survived as a platform despite its market share being way down.

    What does this mean for Apple? That Apple (and RIM, now that I think about it) may hold substantial market share for quite a long time – till the next wave hits and replaces smartphones. Apple’s share of this enormous and still-expanding market could be large enough to sustain their revenue position – maybe not at current levels, but not that much less.

  38. Samsung/HTC’s high-end phones are going to be forced to lower prices as well. As with every other maturing technology subject to Moore’s Law, the price band for mass-market products will be narrower as well as having a lower bottom.

    One of your (many) errors is believing that the normal consequences of market competition don’t apply here.

    So the phone market (pricing) as a whole isn’t “mature”? I disagree; consumer price points dropped rapidly through the ’90s but have largely stabilized around $0-$100 on contract. Hey look, the iPhone 3GS is $49 on contract through AT&T. Even in countries that don’t subsidize phones, carriers often will offer discounted contracts with unlocked phones, so the TCO for a phone isn’t largely in the cost of the handset but in the monthly recurring costs.

    Handset costs have a major impact in the pre-paid market. But even there, Apple can command a higher price and still make decent sales because the platform is a great user experience. This is also why carriers are currently willing to pay a premium for the iPhone vs an Android phone… the value of the iPhone exceeds the value of the handset itself.

    Moore’s law doesn’t apply to user experiences or ecosystems. Apple has a rich ecosystem that supports their iPhone experience. HTC, LG, etc don’t. They rely on ecosystems produced by others (Google, Amazon, Verizon, etc) without the same level of integration with the handset and across product categories (phone, tablet, computer).

    You assume that the horizontal market always dominates the vertical, given the experience of the PC era. Counterexamples in the tech world include video game consoles and the iPod. Also, prior computing eras were dominated by vertically integrated companies. It appears that the pendulum is swinging back in that direction (vertical).

    Some folks undervalue the importance of user experience just as some folks overvalue it. What Apple has done is provide a highly usable ecosystem paired with reasonably priced premium products that, thus far, a large number of folks are willing to pay for.

  39. Here’s the full text of Jobs’ Resignation Letter.

    It sounds like Jobs is still going to ‘be around’ the office shepherding some projects farther down in the pipeline, based on subtext. It also sounds like he knows his time is short.

    Wild, outside the box speculation:

    What happens if Apple releases a BSD-licensed fork of iOS, as they did with Darwin, but keeps the parts that drive the UI proprietary? They have the cash reserves to weather the disruption this would cause, and they’d be able to demonstrate reduced development costs, while keeping the “Apple Experience” locked in nicely…

    Under Jobs? No way in hell.

    Under Cook, who is, deep in his heart of hearts, a “pare down costs to boost margins” guy? I can see it. While it lets competitors make “iOS compatible devices” to eat the low end of the market, that’s a part of the market that Apple doesn’t want anyway…and the payoff is getting an Apple army to fight the Android army, while still retaining the crown jewels of UI development.

    1. >What happens if Apple releases a BSD-licensed fork of iOS, as they did with Darwin, but keeps the parts that drive the UI proprietary?

      This is good outside-the-box thinking, and I agree Cook would be more likely to entertain the idea. But I don’t think it would actually work.

      The problem is that the proprietary “special sauce” in iOS includes the entire GUI. You can’t build an image usable in a phone without it. Which would leave potential partners no better off than before – they’d have to license those bits at whatever price Apple chose, meaning it’s just another proprietary platform with all the power in the proprietary vendor’s hands. This would be a non-starter against Android.

  40. it tells us that the first Android tablet with hardware comparable to the TouchPad and a supported software stack that goes below $99 is going to meet even more frenzied demand.

    Yeah, and likewise the first BMW M6 that goes below $20k will outsell the Camry.

    That doesn’t tell us anything meaningful, though, does it?

    “Products can be very appealing when priced far below cost” is not exactly an earthshaking announcement, as Phil suggested above.

    On the other hand, let’s not read anything much else into even that fact – I would have bought one just as a toy at $99, because that’s “impulse buy” territory, and I’m still a geek – but it would mean nothing about Android vs. iOS. (I remain firmly in the iOS camp as an end-user. Android can’t even compete in anything I’ve seen, because I care about UX.)

    It’s not “Android tablets below $99” that have demand. It’s any tablet that isn’t obviously a pile of steaming feces – at least until the geek market is saturated.

    (On the topic of a “$100 capacitive-screen A4.0 tablet by 1Q2012”, I ain’t seeing it as real likely, especially not likely in a way that anyone would want to buy it and not immediately return it.

    DealExtreme will sell you an A2.3 device with a 7″ capacitive screen for $140 now, admittedly.

    But that machine has a 1GHz ARM11 core and 4 gigs of storage, 256MB of RAM, is reported to weigh a ton, and has what I can only assume is dubious build quality, compared to anything that isn’t $140 from Shenzhen. (And it’s a 7″ screen, not a 10″ like the TouchPad and iPad.)

    Dropping nearly a third of the price on that product might happen in 6 months. Combining that with a modern-enough CPU (in the Cortex family) and enough storage to compete, and in a package someone won’t pick up and then immediately put down as a failure?

    Very tricky.

    I think they could meet your technical requirement by Q1, but only because it’s so lax and unrelated to actual user requirements.

    I don’t know about you, but I think people want 16GB of storage, minimum, and enough RAM to do things, and a CPU that won’t feel slow as molasses – and those aren’t happening that soon in that segment.)

  42. > Android can’t even compete in anything I’ve seen, because I care about UX.

    Funny. iOS can’t even compete in anything I’ve seen in Android, because _I_ care about UX.

    I hate using my work-provided iPhone 4 vs my personal CM7 HTC EVO. It feels clunky, the UI gets too much in my way, and navigating through apps is an exercise in patience. Painful. End-users care about UX, so obviously every iOS user will switch to Android when they come to their senses. Hm, wait…

  43. @jsk “I hate using my work-provided iPhone 4 vs my personal CM7 HTC EVO. It feels clunky, the UI gets too much in my way, and navigating through apps is an exercise in patience. Painful. End-users care about UX, so obviously every iOS user will switch to Android when they come to their senses. Hm, wait…”

    I’m sure there are some people who prefer a Toyota Camry to a BMW M6 too. Count me in the M6 camp though.

  44. Glen Raphael:

    > Actually, the “spaceship” seems like a more ominous sign than anything going on in Android-land. There’s a longstanding business rule of thumb that any time a company builds a grand new headquarters building, it’s a good time to short the stock.

    Nokia built theirs in 1997 and did pretty well for another ten years after that. Granted, the building is not as big in scope as Apple’s spaceship, and Nokia’s organization is (was) infamously distributed across many locations compared to Apple’s.

  45. I think esr & co. underestimate Apple’s capacity to compete with Android.

    I suspect that aside from a sliver of geeks, most Android phones aren’t bought because they are open, but because they are cheap and available from their carrier of choice. They are many people’s first smartphones, and while no doubt many prefer them to iPhones, there is little to prevent non-geeks from switching to an iPhone later on. (No, it won’t be because of their investment in apps, because many [most?] Android users never buy any.) Many non-geeks might get annoyed at the large and growing problem of Android malware, for instance. I think there’s a once-Android-always-Android assumption here.

    Apple is also far less susceptible to disruption from below because they have become masters of industrial design, supply-chain management, and economies of scale. When Apple can get parts cheaper than competitors and make tens of millions of each product, it’s much harder for competitors to produce equivalent hardware more cheaply, or better hardware for the same price.

    Also, everyone around here seems to assume that Apple will never be more open than it is now. True, it will never be entirely open, but some increased openness could quell many of the complaints from all but the zealots.

    In short, I don’t think iOS will fall to 10% marketshare in smartphones for years, if ever, and Android tablets will gain share and success more slowly than predicted here.

    1. >I suspect that aside from a sliver of geeks, most Android phones aren’t bought because they are open, but because they are cheap and available from their carrier of choice.

      There’s plenty of evidence that a lot of Androids are bought because they are Androids. Underestimate the brand’s strength among non-geeks at your peril.

      My own experience with this is anecdotal, but it’s supported by the way Android phones are marketed. Handset vendor and carrier ads tout “Android” as though they believe many consumers have developed the same fierce brand loyalty as the Guy, the Gamer-Girl, and the Hottie. It might be that this was originally a ploy intended as a self-fulfilling prophecy; if so, it worked.

      It is probably not irrelevant that, even separately from Android, Google has an extremely positive brand image.

  46. Eric – what HP has inadvertently done with the “fire sale” on the TouchPad is show that the market is willing to pay $150 for a capacitive active-matrix touch-screen tablet with wi-fi.

    Unfortunately for the market, there’s no way to make one at that price point without going broke.

    Until the cost of those screens comes down (which isn’t going to be any time soon) you can forget the $99 Android tablet that will finally lead to mass adoption.

    What I’ve seen is that the 32GB TouchPad has a bill of materials of $318; see http://www.isuppli.com/Teardowns/News/Pages/HP-TouchPad-Carries-$318-Bill-of-Materials.aspx The breakdown is very interesting.

    While in general I agree with Brian, the details are a bit off: the firesale shows the market is *more* than willing to pay $150 (the 32GB firesale price was $150, and it went just as fast as the 16GB at $99). We can draw further data by seeing where ebay prices for scalped TouchPads end up; right now they look to be $250 or more.

    So to get mass (dare I say enthusiastic?) adoption of a quality (hardware) tablet without iPad cachet would seem to require selling at a bit of a loss right now – but the difference isn’t all that much.

    1. >So to get mass (dare I say enthusiastic?) adoption of a quality (hardware) tablet without iPad cachet would seem to require selling at a bit of a loss right now – but the difference isn’t all that much.

      Indeed. Good point about the $150 TouchPads also selling out. This pulls the probable timing of the Android breakout a good bit closer – I have no trouble at all imagining $150 Ice Cream tablets with capacitive touchscreens being out for Christmas. This calls for an update to the OP.

  48. Unfortunately for the market, there’s no way to make one at that price point without going broke.

    Not yet, but it’s not far off. The Nook Color could be made for under $250 several months ago (based on B&N’s complete indifference to the popularity of rooting it), and is still reasonably decent hardware by today’s standards. In particular it has the same IPS display as the iPad, just smaller.

  49. “I have no trouble at all imagining $150 Ice Cream tablets with capacitive touchscreens being out for Christmas.”

    Christmas of what year?

  50. I’m sure there are some people who prefer a Toyota Camry to a BMW M6 too. Count me in the M6 camp though.

    In related news, a new study reveals that Apple products cause brain damage…film at 11.

  51. I hate using my work-provided iPhone 4 vs my personal CM7 HTC EVO.

    How’s that battery life working out? In any case, rooting the phone and loading CM7 is a non-starter for anything but the geek market and an indicator that your UX preferences are not the norm for the non-geek.

    Also tweaking CM7 to be somewhat bulletproof for the non-geek (aka wife) takes a bit of effort. On the Nook anyway. I wouldn’t say that the result is a particularly awesome UX and I’ll probably take it back to a rooted stock with an overclock.

    WebOS is a far superior UX to Android IMHO. I got one of those $150 TouchPads and while a bit rough in some places I do much prefer it over Android. Too bad it’s effectively dead. I have to get WP7 Mango and give that a whirl.

  52. The Nook Color could be made for under $250 several months ago (based on B&N’s complete indifference to the popularity of rooting it), and is still reasonably decent hardware by today’s standards. In particular it has the same IPS display as the iPad, just smaller.

    Of course the NC is made for under $250, but probably not by all that much. There are tradeoffs to get to that price point even if I think it’s a very nice little tablet. $150 by THIS Christmas? Highly unlikely.

    I won’t say impossible because Amazon might be willing to take a loss to attempt to steal the platform from under Google but very doubtful. Perhaps that’s why Google refuses to release Honeycomb source and I doubt you’ll see Ice Cream Sandwich code out in October even if the 1st product launches then. Keeping any 2011 Amazon tablet at 2.x reduces the risk of an Amazon takeover.

    1. >$150 by THIS Christmas? Highly unlikely.

      Don’t write the possibility off too fast. Remember that the BOMs you’re seeing now are pegged to parts prices as they were about 90 days ago. Given a conservative estimate of component prices dropping 8% per month (flash is probably coming down faster), that’s enough time for them to have fallen by about a factor of 2 by Christmas.
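
      A minimal sketch of that arithmetic, assuming a flat 8% monthly decline compounding from the 90-day-old BOM baseline through Christmas (roughly seven to eight months in all; the figures are illustrative):

          # Compound component-price decline at 8% per month, measured from
          # BOMs that reflect parts prices as of ~90 days ago.
          monthly_decline = 0.08
          for months in (4, 7, 8):  # Christmas from today's prices vs. the old BOM baseline
              remaining = (1 - monthly_decline) ** months
              print(f"{months} months: {remaining:.0%} of baseline prices")
          # 7-8 months at 8%/month leaves parts at roughly 51-56% of the
          # old BOM -- about the "factor of 2" drop claimed above.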

  53. @nigel:

    You’re forgetting that stock ROMs (especially Samsung’s stock ROMs) will be using more of CyanogenMod as time goes on. CM7 itself may stay obscure, but the improvements in it are going to spread widely.

  54. @ravi anything to improve TouchWiz is a good thing. Sorta. UI customization by some vendors is a source of bugs/frustration for some devs. Depends on your app whether or not it is sensitive to Blur, Sense, TouchWiz, etc. I’d say the vast majority are fine with only some minor tweaking required, but those cases where it interferes are a real pain.

  55. @nigel:

    On pricing, the Nook Color has been spotted as low as $200 new / $170 refurbished, so $150 is not that far off. We’re also coming up on the one-year anniversary of the release, so a new model seems likely. If B&N decided to focus a redesign on cost instead of features, I think they’d have no trouble hitting $150.

  56. What I find funny is the assumption by Apple enthusiasts that people want a personal status accessory that is exactly the same as everybody else’s.

    That people will finally come to their senses that fashion is meaningless and that they should become uniform and all carry around the exact same model of the exact same brand of phone. And that single model would be the iPhone X of the moment.

    There are over 5 billion cell phone users, out of some 7 billion humans. And they all want to carry an iPhone to show off to other iPhone users. Or so they say.

    Would be a first in human history. Maybe we could not even talk about “Human” anymore, because it really feels alien.

  57. I think perhaps my prior comment was not coherent enough.

    With smartphone hardware costs trending asymptotically towards the cost of 100 grams of sand, plastic, and mostly non-precious metals, the future profit margins are in software. I previously wrote that the industrial age is dying, to be displaced by the software (knowledge) age. Automation is increasing, costs are declining towards material inputs, and thus aggregate profits (and percentage share of the economy) are declining for manufacturing, even if profit margins were maintained (which they are not, because 6 billion agrarians are suddenly competing to join the industrial age in the developing world). There is now even a $1200 3D printer. Wow.

    Assuming the smartphone is becoming a general-purpose personal computer, the software paradigm that provides unbounded degrees-of-freedom can in theory gain use cases at an exponential rate over a bounded platform.

    Even if Apple competes on the low-price end, I predict their waterfall implosion will be driven by some aspect of “web3.0” that diminishes their captive high rents on content and cloud services, because this will cut off their ability to subsidize software control (and the low-end hardware) – i.e. the subsidy for not leveraging community capital via unbounded open source. Such a paradigm shift may also threaten Google’s captive high rents on ad services, but Google leverages open source to minimize the subsidy. I envision that Google will lose control over the platform once the high rate of market growth slows and vendors compete for a static market-size pie. That will be a desirable outcome at that stage of maturity.

    The high-level conceptual battle right now is not between hardware or software platforms, features, etc. It is a battle between unbounded and bounded degrees-of-freedom. The future belongs to freedom and the inability to collect high rents by capturing markets in lower degrees-of-freedom. So I would bet against all large corporations (eventually), and bet heavily on small, nimble software companies.

  58. @Shelby
    “The future profit margins are in software.”

    The marginal cost of software – the energy needed for copying (~kT/bit; a worked version of this number appears below) – is essentially zero.
    http://arxiv.org/abs/1004.4732

    So the profit margins of software will go down to zero too. The real margin is in the one thing that has a fixed marginal cost: human time. That is, the real profits are in services, especially in services with an information balance.

    Google is in the service industry. The profit margin is not in the search software, but in keeping the software running and improving it. As someone wrote more than 10 years ago (Eric? Bruce Perens?), the value in the software is the discounted value of future upgrades.

    The future is services.
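
    For scale, a minimal sketch of that ~kT/bit figure, using the Landauer bound of k_B·T·ln 2 per bit at room temperature (the gigabyte example is mine, not from the paper):

        # Thermodynamic floor on the energy cost of copying bits
        # (Landauer limit: k_B * T * ln 2 joules per bit).
        import math

        k_B = 1.380649e-23  # Boltzmann constant, J/K
        T = 300.0           # room temperature, K

        joules_per_bit = k_B * T * math.log(2)  # ~2.87e-21 J
        joules_per_GB = joules_per_bit * 8e9    # ~2.3e-11 J for a gigabyte

        print(f"{joules_per_bit:.2e} J/bit")
        print(f"{joules_per_GB:.2e} J/GB")  # utterly negligible, as claimed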

  59. @Winter
    I agree that the future profit margins belong to the owners of human knowledge (as distinguished from mindless repetitive labor that adds no knowledge), i.e. the individual programmers. Services are trending asymptotically towards (i.e. approaching but never reaching) full automation, meaning that programming will move continually more high-level forever. Software is never static.

    Thus, services are software. Knowledge is software.

    I have written down (see “What Would I Gain?” -> “Improved Open Source Economic Model”, and scroll horizontally) what I think are the key theoretical concepts required to break down the compositional barriers (lack of degrees-of-freedom) so the individual can take ownership of his capital. I have emailed with Robert Harper about this. Afaics, once this is in place, large companies will not be able to take large rent shares. We are on the cusp of the end of the large corporation and the rise of the individual programmer, hopefully millions or billions of them. Else, massive unemployment.

  60. > Foremost among Apple’s problems is Android.

    Nope. Apple’s biggest problems are: 1) making enough units to meet demand (they’re still delaying rollout in additional countries while ramp-up continues); 2) finding enough engineering talent to cover all the bases; and 3) finding enough office space (they’re not building that new campus for the fun of it, they need the space; they’re not planning on giving up any of the buildings they currently occupy all over Cupertino that aren’t on the old HP site).

  61. >are there people in Apple who have enough foresight and enough balls to continue to do such bold moves – and do they have enough foresight to move in the right direction?

    Yes, there are. Apple’s SVPs are very aware of how Apple got where they are, and what they’ll have to do to keep growing. This isn’t like Gates handing off Microsoft to Ballmer and the cast of a thousand poseurs. Someone like that “J Allard” character would never have made it to senior manager at Apple since SJ’s return.

  62. @Shelby
    “Thus, services is software. Knowledge is software.”

    I would like to separate Software, as bits that can be distributed at almost zero cost, from Services, which claim the time of some human.

    Services will always cost money, at least the amount needed to sustain a person for the time she spends on the task (plus maintenance of the person, pension, etc.).

    It is true that the tasks are fleeting. However, the principle is not. So I see a bright future in software as a service (search, social sites, weather forecasts, storage), which always includes a component of human action to keep systems up and running. There never was a future in software as selling bits. There was a market that was kept profitable by criminal abuse of monopolies and by publishers.

    @Shelby
    “We are on the cusp of age of the end of the large corporation and the rise of the individual programmers, hopefully millions or billions of them.”

    If you write “Consultant” and see software not as a product, but as part of a service, I could agree. But I would prefer to think of it as “ICT services”.

  63. @Winter
    Agreed. The bits are never static. They continually require new knowledge to further refine and adapt them. It is not the bits that are valuable, but the knowledge of the bits: how to fix bugs, improve the bits, interoperate with new bits, and compose bits. And this process never stops, because the trend to maximum entropy (possibilities) never ceases (2nd Law of Thermo). What makes software unique among (and fundamental to) all other engineering disciplines is that software is the encoding, in bits, of the knowledge of the other disciplines – a continual process.

    But actually it is not an encoding in bits. It is an encoding in continually higher-level denotational semantics. The key epiphany is how we improve that process, and the tipping point where it impacts the aggregation granularity of capital formation in the economic model of that process. If you understand language design, the links I provided (and the references cited in them) might be interesting (or the start of a debate).

  64. @Shelby
    “If you understand language design”

    Not particularly well. Artificial languages are not my field, and I will probably mostly be offline during the coming week, so a debate would not be very convenient now.

    But for the rest, I would use different words, but the concepts are the same, so we see the future going in the same direction, more or less. At least in this field. ;-)

  65. Don’t write the possibility off too fast. Remember that the BOMs you’re seeing now are pegged to parts prices as they were about 90 days ago. Given a conservative estimate of component prices dropping 8% per month (flash is probably coming down faster) that’s enough time for them to have fallen by about a factor of 2 by Christmas.

    I haven’t been tracking flash prices in the last year or so, but I do recall that we first hit $1/GB (contract pricing) in 2008 and then didn’t see those prices again until late 2010. Flash prices were expected to drop 35%+ this year, but despite soft demand, the earthquake plus new production slipping a quarter has kept pricing up a bit.

    We saw a nice dip in May when Apple and others reduced contract buys.

    http://www.eetasia.com/ART_8800644460_499486_NT_a67f038b.HTM

    But contract prices fell more slowly in August than in July. MLC pricing only dropped 1-2% according to DRAMeXchange.

    Consumer SSD pricing hasn’t yet hit that $1/GB mark we had all been expecting to happen in 2009. We’re still looking at $2-3/GB consumer SSD pricing, with the expectation we’ll see $1.50/GB by Christmas.

    An 8% drop per month for component pricing may or may not be “conservative” if you average the price drop over the entire lifetime and include the huge drop in price once you hit the mass market, but no industry sustains 8% month-to-month after that happens. What happens is that companies stop producing those components when they become economically unattractive to produce, and retool to produce components with higher profit, reducing output on older components… sometimes to zero.

    There’s a price floor.
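
    For what it’s worth, the arithmetic both sides are invoking (assuming a steady monthly drop): a price falling 8% per month retains $0.92^n$ of its value after $n$ months. Over the roughly seven months from those 90-day-old BOM prices to Christmas, $0.92^{7} \approx 0.56$, which is indeed close to a factor of 2; the exact halving time at that rate is $\ln 2 / \ln(1/0.92) \approx 8.3$ months. But if the drops slow to the 1-2% seen in August, the halving time balloons to $\ln 2 / \ln(1/0.98) \approx 34$ months.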

  66. So the profit margins of software will go down to zero too. The real margin is in the one thing that has a fixed marginal cost, human time. That is, the real profits are in services, especially in services with an information balance.

    Right, because software takes zero time to write…not.

    It is very true that we have vastly increased productivity through higher levels of abstraction (at the same rate as hardware advances, IMHO), but the problems we solve are also far more complex (or fully featured, however you wish to characterize them) than before.

    The idea that individual programmers will supplant large development teams is a delightful fantasy for some. The reality is that while power tools might allow me to build a house as an individual, they don’t allow me to build a skyscraper as an individual. When the time comes that I can build a skyscraper using power tools as an individual, society will be building single-structure mega-arcologies if not Dyson spheres.

    The leap in capability is matched step by step by the leap in complexity and scale of requirements.

    As an aside, as much as we’d like to think we’re a fast-paced industry, it still takes around 10 years for a new concept to percolate from inception to mainstream. There’s a scad of potential next mainstream languages: Haskell, Scala, Spec#, Erlang, OCaml, etc.

    It’s a shame that Google didn’t elect to grab one of these for Android and instead decided to leverage/steal (depends on your perspective which word you choose) Java from Sun.

  67. @nigel
    “Right, because software takes zero time to write…not.”

    There has been an understanding in economics that the equilibrium price of a product is its marginal cost. That is actually the equilibrium price of software in a free market: the FLOSS world “sells” software at the marginal cost of supplying a copy.

    But software is a service industry, so people are willing to pay coders to sit down and produce new code. Code which will be traded at marginal cost within a very short time.

    Which was my point in the first place.

  68. @esr: “Don’t write the possibility off too fast. Remember that the BOMs you’re seeing now are pegged to parts prices as they were about 90 days ago. Given a conservative estimate of component prices dropping 8% per month (flash is probably coming down faster) that’s enough time for them to have fallen by about a factor of 2 by Christmas.”

    by “conservative” do you mean “pulled out of thin air”?

  69. @Winter “there are over 5 Billion cell phone users, out of some 7 Billion humans. And they all want to carry an iPhone to show off to other iPhone users. They say.”

    see http://en.wikipedia.org/wiki/Straw_man

    Apple advocates don’t think the iPhone is going to dominate the market. They will be a major player, but one of several. And there will be multiple models eventually, just like the iPod and Macs.

    It’s the Android advocates that seem hell-bent on world domination. ESR seems to think that the iPhone is destined for <10% market share. It happened with Windows, so it must happen with phones? That seems to be the flawed logic. The markets are vastly different. Compatibility is much less important these days, as all the real value is on the internet. You just build simple mobile apps that leverage these services. The cost to develop for multiple platforms is lower than it was back in the PC days. And office-type products don't map to mobile.

  70. @phil
    “Happened with Windows, so must happen with phones? That seems to be the flawed logic. ”

    Not quite.

    In the PC wars, we had Apple as a single-supplier, “walled garden” PC maker that made everything from the hardware to the application programs, targeting the high-end, cool market. And you had MS, which sold software to everyone who wanted to buy, targeting the low-end, consumer/business market. Windows ran on (almost) everybody’s hardware.

    You only have to substitute Android, and we are back again. So why should it go in the other direction now? What is the difference?

    @nigel Larger software teams accomplish less due to the Mythical Man-Month. My conjecture is that individual developers will become the large team, sans the compositional gridlock of the MMM, with the individual contributions composing organically in a free market. I realize it has been only a dream for a long time. On the technical side, there is at least one critical compositional error that afaik all these languages have made, including the new ones you mentioned: they conflated compile-time interface and implementation. The unityped (dynamic) languages, e.g. Python, Ruby, have no compile-time type information.

    If we define complexity to be the loss of degrees-of-freedom, I disagree that the complexity rises. Each higher-level denotational semantics unifies complexity and increases the degrees-of-freedom. For example, category theory has enabled us to lift all morphism functions, i.e. Type1 -> Type2, to functions on all functors, i.e. Functor[Type1] -> Functor[Type2], so we don’t have to write a separate function for each functor, e.g. List[Int], List[Cats], HashTable[String]. Perhaps complexity has been rising because of the languages we use. We haven’t had a new mainstream typed language since Java, which is arguably C++ with garbage collection. Another example: before assemblers, we had the complexity of manually adjusting the cascade of memory offsets in the entire program every time we changed a line of code.
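
    For the curious, a minimal Scala sketch of that lifting (illustrative only, not any particular library’s API):

      // A Functor abstracts over "mappable" containers. Write `map` once
      // per container, and ANY function A => B lifts to F[A] => F[B].
      trait Functor[F[_]] {
        def map[A, B](fa: F[A])(f: A => B): F[B]
      }

      implicit val listFunctor: Functor[List] = new Functor[List] {
        def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
      }

      implicit val optionFunctor: Functor[Option] = new Functor[Option] {
        def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
      }

      // `lift` is written exactly once, yet works for every Functor:
      def lift[F[_], A, B](f: A => B)(implicit F: Functor[F]): F[A] => F[B] =
        fa => F.map(fa)(f)

      val doubled = lift[List, Int, Int](_ * 2)
      println(doubled(List(1, 2, 3))) // List(2, 4, 6)

    One lift, no separate version per container type: that is the degrees-of-freedom gain in miniature.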

    Indeed it can consume a decade or more for a language to gain widespread adoption, but that isn’t always the case, e.g. PHP3 launched in 1997 and was widespread by 1999. Afaik, a JVM language such as Scala can compile programs for Android.

    @Winter, agreed. It is my hope that someday we won’t need to pay for all the complexity bloat and MMM losses. We will get closer to paying the marginal cost of creating knowledge, instead of the marginal costs of captive-market strategies, code refactoring, “repeating myself”, etc. I bet a lot boils down to the fundamentals of the language we use to express our knowledge. Should that be surprising?

    @phil Captive markets grow as far as their subsidy can sustain them, then they implode, because of the exponential quality of entropy and the Second Law of Thermodynamics (or as otherwise stated in Coase’s Theorem). Apple’s subsidy might be stable for as long as their content and cloud revenues are not threatened; perhaps they can even give the phones away for free or at negative cost. That is why I think the big threat to Apple will come from open web apps, not from the Android hardware directly. The Android hardware is creating the huge market for open apps. I guess many are counting on HTML5, but the problem is that its design is static and by committee, thus not benefiting from network effects and real-time free-market adaptation. I would like something faster-moving for “Web3.0” to attack Apple’s captive market.

  72. phil,

    I think a big part of Android fanboyism is fosstards wishing very hard for Android to be the next Windows, so developers would be forced to acknowledge and target it. Left to their own devices, free of Stallmanite coercion tactics, hackers tend to prefer the Apple stuff.

    Developers targeted the iPhone because they wanted to. Before the App Store debuted, the iPhone had been a neat, innovative little proof-of-concept device; once the store opened, it took the developer community by storm. It took significantly more market share and heavy promotion for devs to sit up and really take notice of Android. In part this was because Apple singlehandedly did what Android and OpenMoko and all the other fosstard pie-in-the-sky projects couldn’t do: broke the carrier lock on app distribution. But really it’s about something else. It’s about a great API that gives you extensive access to a great piece of hardware.

    Android fanboys want Android to “win” so that developers will have to target it. iOS makes developers want to target it simply by existing; it’s that freakin’ awesome of a platform. (Recall that Tim Berners-Lee chose its direct ancestor on which to first implement the Web.)

    Not yet, but it’s not far off. The Nook Color could be made for under $250 several months ago (based on B&N’s complete indifference to the popularity of rooting it), and is still reasonably decent hardware by today’s standards. In particular it has the same IPS display as the iPad, just smaller.

    The Nook Color screen is a beautiful piece of shit. It’s beautiful because of the LCD components; it’s a piece of shit because of the touch components. The capacitive touch screen appears to have a far lower sensor density than the iPad, meaning that it’s way less accurate at registering where a touch occurs. Sometimes it goes crazy and registers touches all over the place, at the far opposite side of the screen from where you actually touched.

    So yes, severe compromises were made in order to get the price that far down. Do not expect a $150 tablet that’s anywhere near worth using, by Christmas or any other time in 2011 or 2012 at least.

    The cheapo TouchPads sold out because people were getting an actually far more expensive item at a fire-sale discount. It doesn’t generalize. The Cruz Tablet is going for $180 at Radio Shack these days and it’s not selling. It’s not selling because it’s shit.

  73. So if the economy is going from manufacturing to software/service, just what will you sit on to write your software? Where will it come from? Who will make it? Will you code it yourself and have it appear out of thin air?

    I don’t think we’ll ever get away from material things. I think the talk about the service economy is apologia for the fact that government regulation and the culture (We don’t want dirty factories in our back yard!) have driven it offshore.

  74. About the Nook Color screen:

    Sometimes it goes crazy and registers touches all over the place, at the far opposite side of the screen from where you actually touched.

    That sounds more like a software problem to me.

    @Shelby: “@phil captive markets grow as far as their subsidy can sustain them, then they implode, because of the exponential quality of entropy and the Second Law of Thermodynamics (or otherwise stated in Coase’s Theorem). Apple’s subsidy might be stable for as long as their content and cloud revenues are not threatened, perhaps they can even give the phones away for free or negative cost. That is why I think the big threat to Apple will come from open web apps, not from the Android hardware directly. The Android hardware is creating the huge market for open apps. I guess many are counting on HTML5, but the problem its design is a static and designed by committee, thus not benefiting from network effects and real-time free market adaptation. I would like something faster moving for “Web3.0” to attack Apple’s captive market.”

    Are you for real with this Second Law of Thermo BS? It’s a law of *THERMODYNAMICS*. You going to apply quantum physics next?

    Mobile Safari is the best mobile browser out there anyway, so HTML5 can still be a win for Apple. Long term, the mobile web will displace some of the app market. It’s gone too far to the app side now, I think, but apps have inherent advantages that will be hard to replicate. Apps just feel smoother and more responsive. I know Android fanboys don’t care about “feel”, as it doesn’t show up on a feature grid.

  76. @Winter

    “Not quite.

    In the PC wars, we had Apple as a single-supplier, “walled garden” PC maker that made everything from the hardware to the application programs, targeting the high-end, cool market. And you had MS, which sold software to everyone who wanted to buy, targeting the low-end, consumer/business market. Windows ran on (almost) everybody’s hardware.

    You only have to substitute Android, and we are back again. So why should it go in the other direction now? What is the difference?”

    here are a few:
    1) for the PC world the public wasn’t on the internet, and when it emerged the winner had already been decided. Smartphones are largely a way to leverage internet services while mobile
    2) compatibility was everything. it was a large undertaking to write applications – writing for multiple platforms was onerous (many of the wildly popular apps are available for both platforms, showing it’s not that hard)
    3) apple has more apps than android anyway; that never happened with the mac
    4) macs were priced way above windows machines. iphones are price competitive and all the devices are priced much lower than PCs/macs were in the 80s/90s
    5) macs never got a significant market share. iOS has one
    6) apple is a much healthier/bigger/wealthier company than they were in the 80s/90s
    7) windows showed that having a wild-west approach to installing software leads to a giant malware-ridden, virus-infected mess
    8) MS Office doesn’t matter on mobile
    9) apple has an integrated ecosystem (itunes w/ video/music, apple tv, iPad). Android doesn’t have anything close to this
    10) apple has extremely high brand awareness and an extremely highly favorable public reputation

    I think #1 is the biggest difference though.

  77. @Bob: So if the economy is going from manufacturing to software/service, just what will you sit on to write your software? Where will it come from? Who will make it? Will you code it yourself and have it appear out of thin air?

    No one is saying that manufacturing will *disappear*. But increases in automation and productivity will mean that fewer and fewer people need to be employed in manufacturing to maintain (or increase!) output. Similar to what happened with agriculture. See http://www.ers.usda.gov/publications/eib3/eib3.htm
    In 1900 41% of the workforce was employed in agriculture.
    In 2000 1.9% of the workforce was employed in agriculture, and output is much greater.

    A Luddite (or Conservative, at the time) would look at those numbers and think “dear maker, all those lost jobs! people will starve in the streets”. Someone with a little faith in human ingenuity would think, “what could you do with all that talent, 40% of the labor force freed from inefficient back-breaking drudgery”… the next step was manufacturing.

    In the latter half of the 20th Century a similar thing has been happening in manufacturing, fewer workers but far greater output per worker. Replacing entire machine shops with a CNC machine really *is* progress.

    IMO, you just need to avoid any reductio ad absurdum situations – thinking you can get away with going to 0%. Because you can’t. (I REALLY need to find time to read Christensen.) See http://theintentionalleader.wordpress.com/2010/08/19/be-careful-what-you-wish-for/ for an interesting story.

    Oh and making sure that the dole doesn’t become the new heart of your economy.

  78. In 2000 1.9% of the workforce was employed in agriculture, and output is much greater.

    And far more processed, more toxic, and less healthy.

  79. There has been an understanding in economics that the equilibrium price of a product is the marginal cost.

    That’s a silly “understanding”. The marginal cost of a good or service is generally much less than the average cost of producing it: “The first chip off a production line costs millions, the second costs $0.25.” A business where the current market price has already fallen below the expected average production cost for new producers will no longer attract them.

    A more accurate statement is that the long-term equilibrium price of a commodity tends toward the second-lowest per-producer average cost of producing it. Why the second-lowest? Because the lowest-cost producer can price his product close to the second-lowest-cost producer’s average costs per unit and still be profitable while driving the competitors to either find their own cost savings or leave the market. He may even deliberately price it high enough to keep that second-lowest-cost producer around, although crippled, because he’ll make more money at the higher prices and not risk having an antitrust action filed against him for monopolizing the market. (Anti-trust laws do not exist to protect consumers; they are to protect competitors at the expense of consumers.)

    (Analogously, one might note that the price that an item can earn at auction is one bid increment above the second-highest bidder’s top price. That the buyer may be willing to pay even more is unimportant. All that matters is he is willing to pay more than #2.)

    But if the price of a commodity were exactly the same as the average cost to produce it, the producers would not earn any profit, so they’d find another line of business. It has to be enough higher than the cost of production to give investors a better profit than they’d find in that other line.

  80. >And far more processed, more toxic, and less healthy.

    Not sure what that’s supposed to prove, aside from demonstrating that you have an overly rosy view of the past untroubled by reality. Food being processed is essentially a lifestyle choice that doesn’t have much connection with basic production. Do a search for ‘swill milk’ and you will find that toxicity is not new; in fact, our current food supply is remarkable for its purity and safety.

    And my point still stands that, thanks to automation and efficiency improvements, output has gone way up while the labor required has gone way down, and literally millions have been freed from endless backbreaking, soul-killing physical labor.

  81. > iOS makes developers want to target it simply by existing; it’s that freakin’ awesome of a platform.

    Bingo. There’s a reason why there were so many apps for the iPhone so soon after the SDK was available. If Android had a better development environment than iOS, I would have switched. As Wil Shipley has stated on occasion, I would rather stick forks in my eyes than try to develop code in C++ or Java.

  82. I would rather stick forks in my eyes than try to develop code in C++ or Java.

    No argument with C++, but Java isn’t that bad if you avoid the enterprisey tentacles that often infest it, which Android mostly does. It has some solid advantages over ObjC like garbage collection, type safety, no header files, a better (or at least more comprehensive) standard library, and autoboxing for primitive types.

    Although I’m still hoping pypy gets ARM and Android support.

  83. That is actually the equilibrium price of software in a free market, the FLOSS world “sell” software at the marginal cost of supplying a copy.

    But, software is a service industry, so people are willing to pay coders to sit down and produce new code. Code which will be traded for the marginal costs in a very short time.

    Which was my point in the first place.

    Except that the majority of software is not provided by the FLOSS world, nor can the FLOSS world produce significant code without the largess of large corporations. Your assertion is the equivalent of stating that the marginal cost of a car is the gas to drive it to its destination, based on the observation that some companies and folks donate cars to charities.

    There is a set of business cases where it makes sense for companies to pay for software development and give it away for free. Where this is true, the subsidized FOSS project produces quite a bit of code (like Linux, where 75% of contributions come from paid devs). Where it isn’t, you end up with 2 active core devs on a major open-source project like GIMP, and improvements are glacial. In comparison, Photoshop is several million lines of code, provides far more functions, and is a pro-grade tool that folks pay a lot of money for, and for good reason.

    Software is sometimes a service industry and sometimes a product industry. Both exist and both make money.

    1. >Except that the majority of software is not provided by the FLOSS world nor can the FLOSS world produce significant code without the largess of large corporations.

      You are so astonishingly full of shit. Notably about the second clause.

  84. Brian 2,

    Garbage collection is not an advantage. Not in this environment.

    Dalvik is also hampered by the memory/performance overhead that a VM necessarily entails. If you have a space-efficient VM, it will be slow. If you have a JIT, you incur the space overhead of keeping your JITted code around in memory.

    Apple made the right decision in making apps native code.

    Larger software teams accomplish less due to the Mythical Man-Month. My conjecture is that individual developers will become the large team, sans the compositional gridlock of the MMM, with the individual contributions composing organically in a free market.

    If that’s what you got out of the MMM, you need to read it again. I also get the distinct impression you’ve never been on a large software development team, or don’t understand just how big a million lines of code actually is.

    There’s simply stuff that takes a lot of folks to build. The stuff gets bigger as technology gets better but that’s part of the deal. Humans like bigger stuff.

    The solution for “compositional gridlock” has been known since before MMM. Decomposition of a larger problem into smaller problems solved by smaller and sometimes specialized teams. This is how we sent guys to the moon and pharaohs got pyramids.

  86. @ Jeff Read:

    Garbage collection is not an advantage. Not in this environment.

    GC is a major advantage if we’re talking “developer-friendly”. It is also a major advantage in high-level app development. Very rare are the situations where having no GC pays off. Imperative languages like D take the right approach on GC: the default is to have a GC, but manual memory management can be done if required.

    Dalvik is also hampered by the memory/performance overhead that a VM necessarily entails. If you have a space-efficient VM, it will be slow. If you have a JIT, you incur the space overhead of keeping your JITted code around in memory.

    There is more to having a VM than memory/performance overheads (memory is getting bigger/cheaper every passing day, to the point where this is *almost* irrelevant). A key advantage of VMs is when the VM acts as a substrate on which several powerful languages are built. The languages can still interoperate with one another very easily while of course using the right tool/language for the job. Both the JVM and the CLR have numerous languages that can easily interoperate with one another. Dalvik can become just like the JVM and CLR in terms of languages built on top of it.
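
    For instance, a trivial Scala sketch of the substrate point (any JVM language could stand in):

      // Scala calling a plain Java class on the same VM: no FFI,
      // no bindings; the VM is the shared substrate.
      import java.util.concurrent.ConcurrentHashMap

      val counts = new ConcurrentHashMap[String, Int]()
      counts.put("answer", 42)
      println(counts.get("answer")) // 42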

    Apple made the right decision in making apps native code.

    Yes and no. As long as you can port Dalvik to a CPU (any CPU), you have the Android apps running. If your goal is to run on anything and everything, the VM approach does make sense.

    I would have preferred a design like MeeGo (but with a developer-friendly language like D for the UI) or, at the other extreme, something like WebOS (from what I learned about it from esr and others) to ultimately rule. Having said that, Android as-is will still take over the world.

    @phil Your list of 10 reasons illuminates why I think open web apps will be the waterfall exponential demise of iOS. Because then it won’t matter which OS is running them, so Apple won’t be able to collect the subsidy it depends on to fund its closed-source development model. Note I am assuming open web apps will proliferate faster than Apple can approve apps. The world doesn’t want to wait for approval to publish each of a million new app features per day. I am not going to bore readers with an explanation of Coase’s Theorem, nor derive it here from the 1856 statement of the 2nd Law of Thermo that the universe trends to maximum independent possibilities (entropy). There are sufficient resources on the web for that.

    @Jeff Read “And far more processed, more toxic, and less healthy.” You are conflating economy-of-scale and automation with processed foods. Remember my point in the prior blog, that uncracked whole grain can be stored for 25 years, thus economy-of-scale is compatible with whole food. Perhaps soon we will have a robot to maintain our own private vegetable gardens. Don’t throw the baby out with the bath water. I understand the political frustration, and the solution is (using technology) to individually come out of the socialism being caused by the end of the industrial age, as it is dying (literally, for the people who don’t exit to the software/technology/knowledge age).

    @The Monster in the short-run the price can be lower than the cost, due to debt, as in China now. As this accelerates volumes, the cost declines. This expands for as long as volume can expand faster than the usury acceleration of the interest.

    @nigel I agree that the open-source gift/subsidy economic model is not sufficient to create a million programmer jobs. And I agree that breaking the large down into compositional modules is the solution. As I said, the large team becomes a free market of individuals (or small teams), who get paid as individuals (or small teams). Hopefully you will read my prior links.

    @Jeff Read Regarding GC and VMs, society will not go backwards to placing more complexity on the programmer, because of the exponential cost on knowledge formation. It is less expensive to solve it with more resources and/or better-tuned GC algorithms. The iOS development platform will not scale (cost-efficiently) to a million new software features per day.

    Well, the GC problem is a good example of where Apple will need to push very hard – and it’s not clear if it’ll be able to without Steve’s charisma.

    GC does not make sense in mobile today. CPUs are underpowered, the delays GC introduces are highly visible, etc. Thus Apple’s decision looks rational and Google’s decision looks bad. Today.

    But times, they are changing. Emacs had a joke definition: “Eight Megabytes And Constantly Swapping”. Yet today, when it’s compared to many other systems (like IDEs), it’s often used as the blazingly-fast, responsive example. What happened? Moore’s law happened. Today eight megs is something you have directly in the CPU, and full RAM is 100 times that.

    When you have a system which is 10 times more powerful than it minimally needs to be for the task, GC starts to make sense: yes, it still introduces delays, but when the system is idle 90% of the time and is powerful enough to do a lot of work in a few milliseconds… GC works just fine.

    Well, the future of mobile is like that: we’ll have four-core 2GHz CPUs soon enough, and eight-core 3GHz CPUs not all that long after that. But you’ll only ever be able to use 10-15% of their power on a sustained basis (or else the juice will run out in a hurry). This means GC will make sense, and it’ll probably make sense to add it to iOS. But… it’s hard to add GC: this is yet another redesign (bigger than the multitasking introduction in iOS 4.0).

    Apple was historically quite good at pushing (and punishing) developers: today more than 70% of apps in Apple’s store support multitasking, for example. But as time goes on and newer and newer capabilities are offered, it becomes harder and harder to convince developers to rewrite perfectly good programs just to satisfy Apple’s craving for beauty. Remember riots like Adobe’s Photoshop CS4 (which was 64-bit only on Windows, because the Mac version still used Carbon for many years even after OS 9’s retirement)? Expect more and more riots like that in iOS space.

    This is where it becomes hard. In the past, Apple lost a lot of developers when it introduced such bold moves, while Microsoft (with its “compatibility is sacred” mantra) was tangled up in insanely long release cycles (but kept the majority of the market). And pretty soon Apple will need to choose a side. Google has more time because the most invasive decisions (multitasking’s design, GC, VM, etc.) were chosen years in advance: basically, Android was designed from the start as a “platform for year 2015”… and then trimmed to fit the mediocre hardware of 2008. The only big changes Google did not foresee are related to tablets: because a smartphone is a small device, it does make sense to always cover the whole screen with a single activity – but tablets are bigger, and this decision looks silly there. The Fragments API was designed in response to these changes – but it still often feels like an alien add-on.

  89. > GC does not make sense in mobile.

    What does make sense on a mobile device is Apple’s new Automatic Reference Counting (ARC) system in the latest Clang/LLVM toolchain. I’ve always carefully followed the memory management rules in my Obj-C code, and I had about one line per project that I had to edit to make my code all work with ARC. People who were sloppy about it would have a bit more work to do to update their code for ARC, but it’s still a surprisingly small amount of work.
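
    For contrast with a tracing GC, here is a toy model of the retain/release bookkeeping that ARC automates at compile time (a sketch in Scala purely for illustration; nothing here is Apple’s implementation):

      // Deterministic reference counting: the object is freed at the exact
      // moment its last owner releases it. No collector, no GC pauses.
      final class Ref[T](val value: T, free: T => Unit) {
        private var count = 1                 // creation implies one owner
        def retain(): Ref[T] = { count += 1; this }
        def release(): Unit = {
          count -= 1
          if (count == 0) free(value)         // immediate and predictable
        }
      }

      val buf = new Ref(new Array[Byte](1024), (_: Array[Byte]) => println("freed"))
      val alias = buf.retain()  // a second owner appears
      buf.release()             // count is 1: still alive
      alias.release()           // count is 0: "freed" prints right here

    The price is that the retain/release pairing must be exact and reference cycles still leak; the benefit is that there is never a collector pause on an underpowered CPU.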

  90. in the short-run the price can be lower than the cost, due to debt, as in China now.

    You’re using The People’s Republic of China as an example of how a free market operates? Where the government now allows “capitalism” but massively interferes in the market, creating artificial costs and/or prices to the players? When a producer’s prices are subsidized by government, its prices are higher than the downstream costs. When a producer’s costs are subsidized, its costs are lower than the upstream prices. Either way allows the producer to “sell below cost” without actually having done so.

    A physical product (as opposed to a service or intangible product), once produced, becomes Inventory. Excess inventory can indeed be sold below cost (“fire sale”, like the HP tabs), but a company that finds itself selling at a loss will thereby be disciplined to do a better job of producing in the future, or find better places to invest its dwindling assets. Potential competitors are discouraged from making the necessary investment unless they have some reason to believe they can reduce costs, get consumers to pay more, or both. Both of these factors produce fewer suppliers, moving the supply curve so the equilibrium slides up the demand curve to higher price and lower quantity.

  91. You are so astonishingly full of shit. Notably about the second clause.

    Then it would be easy to simply provide a counterexample. Name more than a couple of million-SLOC+ open source projects that have no (or very, very minimal) corporate sponsorship and were developed entirely by individual devs not being paid by anyone. CPAN? That’s the kind of large FOSS project that can be built largely by individual contributors, because it’s lots of little things as opposed to one cohesive thing.

    Linux? No. IBM, HP, Red Hat, etc.
    OpenOffice/LibreOffice? No. Sun.
    mySQL? No. Innobase, Sleepycat, etc.
    Firefox? No. Mozilla.
    Android? No. Google.
    Apache? No. Many individual corporate devs assigned and quite a few large platinum corporate sponsors.
    Xorg? No. The original XFree86 core came from X11R5, plus contributions from SGCS.
    PostgreSQL? No. Illustra, Great Bridge, Pervasive and others provided full-time devs and code contributions.
    BSD? Berkeley, AT&T, etc.

    These projects either started as a released corporate code base or saw corporate investment in the form of money or full-time devs… mostly because it made business sense to do so (and sometimes, in hindsight, not so much… like Sun).

    GIMP. Hmmm…that could be one. Depends on how you count XCF.

    A million+ lines of code is a lot of code to manage, much less write. There’s a reason software engineering exists as a practice. There’s a lot of scut work that isn’t an itch many devs want to scratch. Managing that many individual contributors to produce a million+ lines is “problematic”.

    Linus is a once-in-a-generation kind of guy, like Steve Jobs. I’m not sure that even he could do THAT well on a mammoth project consisting only of individual contributors not being paid to work on it by any company or entity (government, university, Shuttleworth, etc.).

    1. >Then it would be easy to simply provide a counterexample.

      Emacs should do nicely. And most of the projects you cite reached mega-SLOC scale well before they had corporate sponsors. I know this because I am significantly responsible for the fact that they have corporate sponsors.

      >Managing that many individual contributors to produce a million+ lines is “problematic”.

      Yes. Yes it is. But we’ve been doing it in open-source land for decades. Your Johnny-come-lately ignorance is appalling.

  92. @nigel
    I consider the arguments of people like Adam Smith about the relation between marginal costs and equilibrium prices much more convincing than your just-so story. And Smith at least had some real evidence to back his analysis.

    And why should a company offloading its production costs onto a FOSS community with some in-kind subsidies disqualify FOSS? Linux, Apache and the like are bona fide FOSS projects that produce the backbone of modern ICT infrastructure. And they are paid for by the up-front savings and productivity increases.

    You are just repeating MS FUD by demanding that FOSS should be limited to the lone basement programmer. Divide and conquer strategies.

  93. @The Monster
    My understanding is the debt driving China is being created in the USA.

    Humans have been using debt in all economies for apparently at least 5000 years. The effect of debt is to exponentially accelerate volumes and economies-of-scale, but this overshoots demand eventually, and waterfall collapses back to the trend of maximum entropy. Debt must grow exponentially, because of the compounding interest. Thus it always overshoots at the end and collapses. This is all perfectly natural. My understanding is the universe uses exponentially increasing local order (“closed thermodynamic systems” where the closed barrier’s energy/cost subsidy will eventually collapse) to drive the overall trend towards maximum independent possibilities (entropy).

    We see that phenomenon with the iPhone’s unsustainable subsidy and GC-less platform kick-starting the smartphone era.

    China is symbiotic, because they peg the yuan and thus, due to the export surplus, force their money supply to increase along with ours. It is a global racket to enrich those who hold capital assets and impoverish labor and innovation. This is why capitalists typically support the “take more of the pie” rather than the “increase the size of the pie” outcome, which is what we can see now with Obama using the AGW fraud to shut down coal plants by unconstitutional EPA decree (expanded regulatory power enabled by POTUS executive order without a Congressional vote), to increase profit margins for those with exemptions (e.g. GE) by eliminating competitors. This may cause utility prices to rise in the USA. Food prices are higher after the government’s ethanol subsidy. We are told these things are for our benefit.

    I believe the global racket exists because stored-asset capital is always depleting, and capitalists try to fight nature and thermodynamics (this is probably a reason so many of their mass-media deceptions involve bogus science). The reason capital is depleting is that the only way to increase your stored-asset capital is to invest it “at risk” (not fixed income, which is the debt model), which thus enriches the people you invested in (and society in the aggregate) more than yourself; thus your relative net worth declines, even though your nominal wealth increases. This becomes more certain as net worth grows very large. That is why I wrote that the only sustainable large business model is to take 10% of the increase that your stored-asset capital generates via passive investment. The point is not to become relatively richer than everyone else (sociopathy), but to enrich society optimally while increasing your nominal wealth. But unfortunately the people who run the world apparently don’t think that way, or maybe it is because the people of the world don’t think that way, and love debt and fixed income instead of “at risk” investment.

    We are noticing much unconstitutional activity lately, such as the above and the new “SuperCongress” of 12. If you like conspiracy theories, one says that the USA Constitution was voided in 1861 or thereabouts and we’ve run by executive order ever since, and that federal law says the “United States” is a “federal corporation”. I am not saying it is true.

    I don’t view this as depressing; rather, the industrial, passive-capital-asset-intensive age is dying, and the capitalists and the socialism that depend on them are desperate. We come out of that morass with our non-passive technological capital and prosper.

    It’s hard to find large (millions of lines) FOSS projects without corporate involvement not because they cannot exist without corporations, but because they cannot exist without interest: it’s hard to imagine a *useless* FOSS project with millions of lines of code (while in the proprietary world it’s the norm: there are A LOT of failed mega-projects) – and if it’s *useful*, then of course it’ll be useful for companies too… unless they cannot participate because someone declared such activity “questionable”. Then FOSS projects live just fine without corporate endorsement (think MAME: that’s close to 3 MLOCs), but such cases are rare.

    FOSS projects do not need corporations to thrive, but they usually don’t try to shun them: why should they? It’s possible to develop KHTML in parallel with WebKit out of spite (just as it was done for years before the “supercapable” Apple decided that it could not write a browser from scratch and should borrow a FOSS one), but what exactly would it prove?

  95. @Jeff Read “Brian 2,

    Garbage collection is not an advantage. Not in this environment.

    Dalvik is also hampered by the memory/performance overhead that a VM necessarily entails. If you have a space-efficient VM, it will be slow. If you have a JIT, you incur the space overhead of keeping your JITted code around in memory.

    Apple made the right decision in making apps native code.”

    This “overhead” doesn’t matter for 90% of your apps. Just look at the most popular apps – Facebook, Foursquare, etc. They are simple and not processor/memory intensive. And Android has C available for games/intensive apps.

    Objective-C is archaic. Try anything involving strings. I thought I’d left header files behind 15 years ago. At least with ARC, they are making memory management easier/less error-prone. I’m sure you’ve seen your apps crash on iOS; most likely that’s due to raw pointer issues. I say all this as someone who doesn’t really like Java, but language-wise, Java is way ahead. And Java is not exactly a cutting-edge language; I’d say Java is getting long in the tooth.

  96. @shelby
    Seeing the world as a big conspiracy clouds your judgement. It is too much honor for the conspirators, who are never that smart and that well organized, and it hides much simpler causes such as stupidity, information you lack, or simple self-interest. You should also always include the option that they are right and you are wrong.

    Anyhow, at the moment China is building its industry by selling to the US for borrowed money. Money it does not expect to see back anytime soon, if at all. Nothing nefarious, simply a way out of poverty. If the US does not want to save money and wants to live in debt, why should the Chinese object?

    @Winter The length of what we can write here will lead to simplistic summaries. I think you will find that we would agree mostly, if we had a long discussion.

    I wrote symbiotic: self-interested at many levels and facets, not top-down controlled. The founding fathers couldn’t even create a static top-down Constitution that could survive the Iron Law of Political Economics. The bell curve of knowledge and the power-law distribution of passive capital are neither right nor wrong. Some may want to anticipate the effects of natural swings of price, supply, and demand distortion. Someone pointed out to me recently that the poor have a higher standard of living than kings of yore. I can, for example, appreciate Apple while also anticipating changes. I seek to continually find the future that has fewer limitations.

    Government regulation and the size of government as a percentage of national income are increasing almost everywhere in the world. It is not right or wrong; it just is. What is causing it? I presented a theory that it is the death of the industrial age and passive capital seeking to increase market share to compensate. And there are effects to anticipate.

    Perhaps in China the share of government is decreasing as a percentage, but skyrocketing nominally, as it had shrunk the economy to nothing in attaining near-complete control.

    When I was growing up, I had a lemonade stand, I mowed lawns for $10, I did carpentry and handy work, etc. I didn’t need to pay tax or be licensed. There was no parasite on my labor to demotivate my entrepreneurship.

    My understanding is that China was impoverished by its turn to communism. Apparently it was China’s perceived weakness, as a result of England addicting China to opium in order to get its silver back, that caused young Chinese to advocate the overthrow of the Qing Dynasty and the creation of a republic. Perhaps that is not the complete story.

    The physical world has its politics of integration. I seek more room to stretch my arms. I guess the technology world also has its politics of integration, as we observe here in this blog. I was hoping we could find a technology that removed the politics from integration. Politics is caused by lack of degrees-of-freedom, i.e. resource constraints. Politics promises to remove them. Okay, I better stop there and get back to work.

  98. Emacs should do nicely.

    Emacs doesn’t feel like a million lines of code. Eclipse and NetBeans feel substantial enough to be a million lines.

    Checking on it, in 2005 emacs was under 200 KSLOC. I doubt it has quintupled in size since then.

    http://books.google.com/books?id=SeFvCMTW_YgC&pg=PA402&lpg=PA402&dq=emacs+sloc&source=bl&ots=iMwlKmFVQj&sig=HFrmhlLXYaAd2Av9_-nSq0_wqyQ&hl=en&ei=ZDlZTtXRONGrsAKuubDODA&sa=X&oi=book_result&ct=result&resnum=4&ved=0CC8Q6AEwAw#v=onepage&q=emacs%20sloc&f=false

    Amusingly, VIM is larger. PostgreSQL is a little smaller than I expected as well, but that was 6 years ago and it’s seen quite a bit of improvement since. If it’s not in the high 6 digits, I’d be surprised.

    But, nope, emacs isn’t an example of a million+ SLOC FOSS project.

    “Managing that many individual contributors to produce a million+ lines is “problematic”.”

    Yes. Yes it is. But we’ve been doing it in open-source land for decades. Your Johnny-come-lately ignorance is appalling.

    You have maybe one example, and that’s run by Linus. Linux is not exactly a pure example of this, given the large chunks of donated code and effort, but pretty close, given that it (the kernel at least) is driven by someone with the moral authority typical of FOSS projects (as opposed to authority granted by virtue of a paycheck). On the other hand, like I said, Linus is a once-in-a-generation kinda guy. That’s not a very repeatable process… kinda like a business model based on having Steve Jobs.

    I guess you can count Theo, but while I like Theo, it’s not quite the same, I don’t think. Even then, the BSDs were able to inherit a bit of Unix code.

    In contrast, there are lots of examples of million+ SLOC proprietary products, and the major FOSS projects in the same league have heavy corporate involvement/investment. I make this observation not to diss FOSS, but to counter the meme that software is of so little value that its marginal cost is effectively zero.

    I find that when I feel the need to insult folks who don’t agree with me, I’m on rather weak footing.

  99. @khim

    It’s hard to find large (millions of lines) FOSS projects without corporate involvement not because they can not exist without corporations, but because they can not exist without interest: it’s hard to imagine *useless* FOSS project with millions of lines of code

    That’s a very valid point. On the other hand it strikes me as very difficult to get to that next level without someone with deep pockets funding it. As I said, there’s a large amount of scut work that many devs aren’t inclined to do without being paid.

    This limits the maximum scope of FOSS projects without large financial commitment… and while the maximum size of problem that an individual developer (or small team) can solve is ever increasing, the scale of the problems software is asked to solve increases at the same pace.

    Then FOSS projects live just fine without corporate endorcement (think MAME: that’s close to 3 MLOCs), but, as I’ve said, such cases are rare.

    MAME is a good example. A quick Google search shows about a million SLOC for MAME 0.93u1, half a million of it drivers. That strikes me as a good kind of project for massive volunteer effort: core code around 200-300 KSLOC and a lot of potential individual contributions in the form of drivers and other stuff.

    FOSS projects do not need corporations to thrive

    Thrive? Heck no. There are many FOSS projects that thrive with a couple of active devs. FOSS projects don’t need corporations just to thrive. But I argue they can’t break out of that small-team (max 10 full-time-dev equivalents or so) mode without them. There are a couple of exceptions, but I think that’s just what they are: exceptions.

    It’s possible to develop KHTML in parallel with WebKit out of spite (just like it was done for years before “supercapable” Apple decided that it can not write browser from scratch and should borrow FOSS one), but what exactly will it prove?

    I would argue that WebKit/KHTML is a good example for my position. WebKit only has the impact it has today because Apple invested in KHTML and all the scut work required to get it to that next level.

    As much of a fan as I am of FreeBSD, it’s not at the same level as other OSes (Linux, OS X, Windows, etc.) because of lesser corporate investment. That’s despite the fact that the dominant desktop Unix (OS X) was built atop its userland. Then again, it does what it does very, very well.

  100. >>>> [Name more than a couple million SLOC+ open source project has no (or very very minimal) corporate sponsorship developed entirely by individual devs not being paid by anyone? ]
    >>> [You’re full of shit.]
    >> Then it would be easy to simply provide a counterexample.
    > Emacs should do nicely.

    Emacs? As in the whole Unipress vs. GNU Emacs affair? Even if you discount that the early versions of GNU Emacs contained significant amounts of Gosling/Unipress code (specifically, the display code), you’d still have to account for the MacArthur grant, Lucid, etc.

    Shall we forget that the Free Software Foundation (corporate non-profit) was founded in October 1985, initially to raise funds to help develop GNU?

    BTW, wasn’t Emacs used as an example of the “Cathedral” development style in CatB?

  101. @Shelby

    Debt must grow exponentially, because of the compounding interest.

    Only if it’s not being serviced.

    If someone goes into debt to invest in a productive asset, they amortize the loan within the life of the asset so that the increased marginal profitability provided by that asset will cover the principal and interest, and leave some additional actual profit. If they do not expect to earn that additional profit, there is no reason to take out the loan in the first place.
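
    For concreteness, the standard amortization formula: a loan of principal $P$ at per-period rate $r$, repaid over $n$ periods, is fully retired by the level payment

    $$A = \frac{P\,r}{1-(1+r)^{-n}}$$

    For example, $P = 10{,}000$ at 1% per month over 36 months gives $A \approx 332$ per month, and the balance reaches exactly zero at the end. Serviced debt does not compound without bound; only unserviced debt does.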

    Now, I understand that there is a difference between what people expect and what they get. Sometimes people make less than they thought they would. They then have less money and available credit with which to make further mistakes. Sometimes lenders miscalculate, and the debtor is unable to repay all of that compounding interest. They then have less money to loan out on further mistakes. Those who do not make those mistakes have more money to invest. The market does a wonderful job of promotion and relegation, when it’s allowed to do so (not when regulatory agencies act as you describe).

    And investment need not be driven by debt. It can also work with equity. One of the few things about Shari’a law that I don’t find completely reprehensible is the use of equity instead of debt: The bank is an investor in the business who only profits to the extent that the business profits. [I do not support making debt/interest illegal, but I do support the right of banks that voluntarily limit their activities to the equity model to do so.]

  102. As an aside, even as a former emacs fan, emacs feels like a bloated piece of software where vim doesn’t. What the heck is vim spending 200 KSLOC on anyway?

    Okay, Eclipse Indigo is 46 million LOC. I guess VIM is svelte in comparison. Then again, anything looks svelte in comparison to Eclipse, the Emacs of the modern age.

    I wonder how big IntelliJ and Visual Studio are.

  103. @nigel:

    TeX/LaTeX wasn’t corporate-sponsored, yet it outshines everything else in typesetting. Math software like Scilab, R, Maxima, and Sage runs to millions of lines of code, and I find it more useful than commercial software (e.g. Matlab).

    TeX isn’t a million lines (LaTeX is 34 KSLOC), and while wildly successful in technical (STEM) typesetting, it is not widely used in other fields, since it’s geared toward mathematics.

    Scilab was jumpstarted by INRIA and ENPC (government). Maxima by MIT. Google, Microsoft and the NSF (among others) have funded Sage. R came from the University of Auckland, and the existing foundation has a reasonably large list of supporting institutions and companies.

    Lots of great stuff comes out of academia but there’s usually some source of funding somewhere, not to mention great slave labor in the form of grad students. :)

    Khim’s point applies, but I still argue that without someone paying the bills you don’t get the time/effort to reach this level/size (even if some of these projects are probably lower than 1 MSLOC… R is 600 KSLOC… still very, very substantial).

    1. >TeX isn’t a million lines (LaTeX is 34 KSLOC), and while wildly successful in technical (STEM) typesetting, it is not widely used in other fields, since it’s geared toward mathematics.

      Blah, blah, blah. People keep giving you examples of large open-source systems developed without corporate largesse, and your response is to no-true-Scotsman at them and push an ever-expanding definition of disqualifying funding sources. Your ignorance of history is pardonable, but your prejudices are vicious. You’re a troll and a fool, and I will both cease feeding you and advise others to do likewise.

  105. Not only are you dismissive of all the examples, you don’t seem to know what your point is. At first I thought it was that FOSS couldn’t work without a larger corporation helping it out. Now you seem to think even a group of hackers that form themselves into a corporation to promote their favorite project doesn’t count. There would seem to be a significant difference between an already existing company “seizing” (and I use that term reluctantly, as I don’t really think that is what they are doing) a project and pouring money in to help their bottom line, and a “grassroots” formation of a company to support something.

    (Eric, I’m sorry, but I thought this point should be made. Just because a group of hackers decides to form an organizational structure called a corporation doesn’t make the result any less FOSS.)

  106. @nigel
    I agree with esr’s Inverse Commons thesis. Apparently there is ample evidence of the success of open source. The involvement of corporations is anticipated by that model, and thus, if anything, provides more evidence of the model’s success. The “gift or reputation” component of the model is in harmony with the strategic benefit to corporations. I also concur with esr’s stated reasons for doubting how an exchange economy could work for open source.

    However as a refinement, I also think the lack of an exchange economy in that model means that mostly only entities who need a commons software improvement are motivated to participate. I know this is true for myself. To broaden the impact of open source, and motivate people to contribute for the income directly correlated to the market value of their contribution, I have in theory devised a way to enable the open source exchange economy. Notably it doesn’t require copyright policing, nor anyone to assign a monetary value to a module, nor micro-payments, nor must it be one centralized marketplace. It is all annealed by the market. Relative value is calculated by relative module use, and relative price in an indirect way. Nominal price is never calculated. And for the vast majority of users and certainly all non-profit ones, it remains a “gift or reputation” economy.

    It is my hope that this can drive millions or billions of people to become programmers. I might be wrong about this though, and I remain committed to the Inverse Commons as well. Please note that my theory is that adding more fine-grained relative value information to a market can make it more efficient (assuming there are no bottlenecks), because there would be more stored information and degrees-of-freedom annealed by the free market. Relative price is information. So my model is not so much about exchange of fiat currency, but about measuring this information. My “pie in the sky” dream is that knowledge, with software modules as the proxy, becomes fungible money and thus a currency. Note that gaming currencies became so widespread that China outlawed their convertibility to yuan.

    @The Monster
    The opposite actually. Aggregate debt is growing nominally by the aggregate interest rate, while it is serviced. Aggregate debt can shrink during implosive defaults, but not if the defaults are transferred to government debt, as is the case in western world today.

    I understand your argument that real debt isn’t growing if production increases faster than debt (a/k/a positive marginal-utility-of-debt), but you discount the damage due to supply and demand distortion. In fact, the western world is now in negative marginal-utility-of-debt, i.e. the more public debt we add, the faster real GDP shrinks.

    This is explained by the disadvantage of a guaranteed rate of return compared with equity: the investor has less incentive to make sure the money is invested wisely by the borrower, i.e. passive investment. No amount of regulation can make it so. The growth of passively invested debt causes mutually escalating debt-induced supply and demand. When that implodes (due to the distortion of the information in supply and demand that the debt caused), the capitalist lender demands the government enforce the socialization of his losses. Thus the fixed interest rate (usury) model is an enslavement of labor and innovation to passive stored capital, and is the cause of the boom and bust cycle. Equity is in theory a far superior model. But the problem with equity is the attempt to guarantee rate of return via captive markets (a/k/a monopolies or oligopolies), i.e. again stored capital wants to be passive. The basic problem is stored capital, i.e. the concept that what we did in the past has a value independent of our continued involvement. I am trying to end the value of passive stored capital with my work on an exchange open source economy, and I think it is declining anyway with the decline of the industrial (capital-intensive) age.

  107. Apple decided that it can not write browser from scratch and should borrow FOSS one

    By “borrow”, I can only infer that you mean “vastly improve and add to it, and give the code away as the licenses required”. Look at what KHTML had a decade ago, look at what WebKit has today, and realize that it was Apple’s funding and participation that made today’s best open-source HTML engine possible.

    1. >Look at what KHTML had a decade ago, look at what WebKit has today, and realize that it was Apple’s funding and participation that made today’s best open-source HTML engine possible.

      Credit where credit is due; Some Guy is right about this. WebKit wasn’t Apple’s idea, but it’s one of the clearest cases in which corporate funding of open source worked out well and probably represents Apple’s most important contribution to open source.

  108. Objective C is archaic. Try anything involving Strings.

    Such as?

    I’ve got the same regex capabilities that I have in Python, I have a full rich text system that even includes right-to-left and vertical layout for Chinese, and I’ve even got text-to-speech built in. What do you find lacking in Apple’s development environment w/r/t handling strings?

  109. @Some Guy

    Of course you can do anything with Objective C. It’s just extremely verbose. I’ll just show a crazy simple example:

    Objective C:
    NSString *test = [myString stringByAppendingString:@" is just a test"];

    Java:
    String test = myString + " is just a test";

    You tell me, which is better? If you were designing a language, which syntax is more like what you would end up with?

  110. The question was “can the FLOSS world produce significant code without the largess of large corporations” – and the KHTML/WebKit story shows: obviously it can. KHTML was a quite capable browser before Apple’s involvement.

    WebKit is ALSO a good counterexample to “there’s a large amount of scut work that many devs aren’t inclined to do without being paid”. The very first thing Apple did with KHTML was make changes that broke its cross-platform nature and tied it to MacOS. Only later, after a huge hoopla, did other people clean up the code and make it usable on other platforms (when Apple wanted to port Safari to Windows, it ported some proprietary MacOS libraries and used them instead of making a native Windows port). The same can be observed in many other cases (the Android kernel, for example).

    Companies are very, very bad at doing “scut work”. There is no money in it. What they do instead is “shiny work”: they add some gloss and then sue everyone who tries to backport it.

    You can compare the problem of writing software with the problem of growing food. Can people grow good food without “the largess of large corporations”? Well, sure. Can they stuff it with toxic flavour enhancers and preservatives to sell in shiny, attractive packages? Probably not. But is that necessarily a bad thing?

    1. >Companies are very, very bad at doing “scut work”. There is no money in it. What they do instead is “shiny work”: they add some gloss and then sue everyone who tries to backport it.

      That’s a little harsh. It is true that companies don’t have good incentives to work on cross-platform infrastructure. It is also true that the hacker culture has been much better at this, historically. But since the late 1990s, when I broadcast the idea that corporations could use open-source projects for cost- and risk-pooling, a lot of Fortune 500 companies have gotten pretty good about backing that sort of effort. Good enough that fools like “nigel” have forgotten, or never knew, that it was ever any other way…

  111. @Shelby
    >Aggregate debt is growing nominally by the aggregate interest rate, while it is serviced.
    I don’t think we’re talking about the same thing at all here. While someone is making regular payments on a mortgage, auto loan, etc., the debt is shrinking with each payment. It is unclear to me what you mean by “aggregate debt”.

    >The basic problem is stored capital, i.e. the concept that what we did in the past has a value independent of our continued involvement.

    With that, you’ve shown your argument to be absolutely insane. Declaring “stored capital” a problem leads inexorably to what Ric Locke calls “eating the seed corn”, a wonderful way to capture the essence of what “capital” is, why it’s important, and why people destroy it.

    When someone diverts some of their time and effort from producing goods or services for consumption/enjoyment, to producing something not to be consumed, but to be used in producing some other good/service, they are investing in capital. (In Ric’s example, they don’t eat all the corn crop they harvest; they save some of it to plant in future seasons.) In order for doing so to make sense, they have to believe they will continue to enjoy the benefit of that capital investment. That means they have to own what they’ve produced just like they have to own the consumer goods/services they produce, or they’ll lack any incentive to do so.

    If I build a tool, until it breaks or wears out, that tool continues to have value whether I’m still “involved” with it or not. I can then rent that tool to someone with the skill to use it. I am no longer “involved” with it, but it has value to him, some of which he gladly gives me in the form of that rent. Or I can sell the tool outright. Perhaps I use the money from selling the tool to buy or rent machine tools to make me more efficient at making tools for craftsmen. Those machine tools are valuable to me despite the fact that their maker is no longer “involved” in them. My payment to him reflects that value.

    Machine tools are a very, very important milestone. While our near kinsmen primates will use primitive tools (like sharpening a stick to dig edible grubs and bugs out of nooks and crannies), and everyone knows how beavers build dams, I am not aware of any other species that makes tools that are not used for some immediate benefit, but to make other tools. Machine tools are capital². The value they hold is incredibly high, precisely because the people who make them are rarely involved with them after doing so (the exception is when the machine tool maker builds himself better machine tools with which to make machine tools better/faster/more efficiently), but instead get them into the hands of the people who use them. One machine-tool manufacturer can make many tool-makers more productive, who in turn make many, many people who use those tools more productive.

  112. @Eric sorry to get under your skin but hey those are my views. As far as the examples go emacs simply isn’t big enough. Neither is TeX.

    The challenge was relatively simple: an example of FOSS in the same range as huge commercial products like Photoshop, MS Office, or OS X that wasn’t funded by some large entity and given away. There are many examples of the latter, which I listed: Eclipse, Firefox, Android, OpenOffice. I still haven’t seen a project of the same caliber as these that isn’t corporate- or government-sponsored in some significant way.

    Perhaps you disliked my use of the word significant. Okay, my choice of word was bad…but the challenge remains. The assertion was that individual devs will supplant large teams funded by large corporations and that the marginal cost of software is effectively zero. I disagreed and here we are.

    You’re pissed at me for challenging an extremist position.

  113. nigel, your expectations are unrealistic. Any useful software is going to attract corporate contributions. Thus, you’re saying “Why is there no useless software which is totally written by volunteers?”. Eric is calling you stupid for asking such a question, and … I have to agree with him.

  114. @nigel
    An example? Git, Debian?

    Git might not be big, but big is not beautiful. MS Word is much bigger than OO.o, and not out of necessity.

  115. @nigel
    I think “individual devs will supplant large teams funded by large corporations”, and it will be because the marginal cost of software is not zero. In The Magic Cauldron, my understanding is that esr argues that software is not free and has costs that in most cases could never be recovered by selling it. The Use-Value and Indirect Sale-Value economic models presented in The Magic Cauldron seem to acknowledge that open source will be funded by corporations. I think there can be an exchange model which can enable individual devs to function more autonomously, but if it is achieved, it will be because software is not free and has use-value cases that can be worked on independently in compositional modules.

    @The Monster
    Evidence says that the total (a/k/a aggregate) debt of the fiat system increases at the compounded prevalent interest rate. For my linked graphs, note that M3 is debt in a fiat system, because all fiat money is borrowed into existence in a fractional reserve financial system.

    Apparently the reason for this debt spiral is that even while some pay down debt, that debt elevated demand, which escalates supply and debt, which then escalates demand, which then escalates supply and debt, recursively until the resources feeding the system become too costly; then it implodes. Also there are at least two other reasons. When money is borrowed into existence from a bank’s 1-10% reserve ratio and is deposited, it increases the bank’s reserves, thus increasing the amount that can be loaned into existence. Perhaps more importantly, the money to pay off the debt has to be created (since the entire economy is run on debt money), and thus must grow at the compounded interest rate on the aggregate in the economy, as the evidence shows. So raising interest rates to slow down the economy actually increases the rate at which the debt grows.
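
    As a toy sketch of that re-deposit mechanism (purely hypothetical numbers, 10% reserve ratio):

    // Each loan that is re-deposited supports another round of lending,
    // a geometric series that sums toward deposit / reserveRatio.
    double reserveRatio = 0.10, deposit = 100.0, totalMoney = 0.0;
    for (double d = deposit; d > 0.01; d *= (1 - reserveRatio))
        totalMoney += d; // approaches $1,000 from an initial $100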

    I did not criticize storage of sufficient inventories. Physical inventories are becoming a smaller component of the global economy, and I bet at an exponential rate. I criticized passive capital, meaning where our past effort extracts captive-market rents on the future efforts of others, simply for doing nothing but carelessly (i.e. passively, with guaranteed return) loaning digits which represent that past effort (or guaranteeing ROI with monopolies, collusion with the government, etc.). Contrast this against, say, offering some product and the ongoing effort to support that product, i.e. active investment in any venture where your past experience is being applied most efficiently towards active current production, which would include equity investments based on your active expert knowledge of what the market needs most. What you wrote about Ric does not disagree with my thesis. For example, as I understand esr’s thesis about use versus sale value in The Magic Cauldron, it says open source program code can’t be rented unless there is ongoing value added, i.e. the value is in the service, not the static bits. He mentions the bargain bin in software stores for unsupported software. Machine tools are critically important, but not the raw material inputs, and not so much the machine itself, rather the knowledge of the design, operation, and future adaptation and improvement.

  116. @Shelby

    > Evidence says that the total (a/k/a aggregate) debt of the fiat system increases at the compounded prevalent interest rate. For my linked graphs, note that M3 is debt in a fiat system, because all fiat money is borrowed into existence in a fractional reserve financial system.

    You have demonstrated an interesting correlation, but not the direction nor mechanism of causation.

    I might observe that many people with well-developed muscles frequently go to a gym, and conclude that the muscles cause them to go to the gym. I might similarly determine that basketball skill makes people grow taller and their skin get darker, or that becoming bald increases cancer risk.

  117. > String test = myString + " is just a test";
    >
    > You tell me, which is better? If you were designing a language, which syntax is more like what you would end up with?

    The Thinking Man knows that each time you append something via ‘+’ (String.concat()) in Java, a new String is created, the old stuff is copied, the new stuff is appended, and the old String is thrown away. The bigger the String gets the longer it takes – there is more to copy and more garbage is produced.

    The Thinking Man also knows that in Objective C, @””-strings are of class NSConstantString, and thus act like atoms in lisp; they hang around. That is, if you use @”cow” in two separate places in your code, they will be referencing the very same object. (Try that in the disease-ridden whoredom that is Java.)

    The Thinking Man wonders if you know anything about Objective-C, or if you just looked up something you could find to be angry about. The Thinking Man would avoid the issues with @””-strings via something like:

    NSMutableString *myString = [NSMutableString stringWithString:@"Hello"];
    [myString appendString:@" is just a test"];

    The Thinking Man openly wonders why all of the ‘programmers’ under 30 (raised on scripting languages) think that the ‘+’ operator is a natural stand-in for concatenation. Yes, it was the classic intro to C++, woo! BFD!

    The Thinking Man also wonders if anyone (other than, perhaps esr) remembers Icon, where the syntax for string concatenation is: s1 || s2

    The Thinking Man openly challenges Russell Nelson to be nice. Calling people ‘stupid’ seems to be his forte.

  118. I’ve seen a lot of different string concatenation syntax over the years, and I don’t much care for any overloaded single character operators. If it were up to me, I’d pick an English word that describes what’s happening as obviously as possible. If it’s longer than some people would prefer, well that’s what code completion in the IDE is for.

    One thing I would like to see in Clang/LLVM is a compiler switch or a pragma that tells it to treat all the constant strings in a given file as objects, since I hardly ever have occasion to use plain C strings anymore. E.g.:

    id helloWorld = ["hello" appendString:"world"];

    and I’d have -appendString: and -appendWords: methods that differ by including or not including an implicit space.

  119. @Russell

    Sure, and I agreed with khim on that point: successful projects will attract sponsors.

    But it is equally stupid to assert that proprietary software products are doomed when the only examples of FOSS projects in the same class (complexity and size) are sponsored by large corporations because it makes sense in specific business cases. Unless you also assume that giving away software IP makes sense in ALL business cases. That’s simply untrue.

  120. @Eric I respect you enough that I prefer not to have you frothing at what I write.

    Do you prefer that I just leave? You have my email just send a note.

    1. >Do you prefer that I just leave? You have my email just send a note.

      I don’t even censor people for being villains, let alone fools.

  121. Companies are very, very bad at doing “scut work”. There is no money in it. What they do instead is “shiny work”: they add some gloss and then sue everyone who tries to backport it.

    What he meant was the work it takes to turn a great software idea or implementation into a polished, end user product. This involves a lot of things: market research, usability testing, UI design, documentation, localization, etc. that most devs can’t be arsed to do. The pure OSS answer to this problem is to “crowdsource” these tasks, but this ends up producing very mixed results.

    Take GIMP. As OSS projects go it is a paragon of focus on the end user experience. The developers implemented an entire toolkit from scratch to avoid putting the burden of clunky old proprietary Motif on everybody. Yet still, it’s 15 years behind the state of the art (Adobe Photoshop). You can whitewash it all you want; GIMP is 15 years obsolete. And the reason why is that Adobe did the “scut work” of making sure that Photoshop is as easy and polished an experience as a professional press artist or designer could possibly ask for. This “scut work” can become heavily streamlined when money is involved. The crypto-communist Stallman can ramble all he wants otherwise; money is a powerful motivator.

  122. @Jeff Read:

    What he meant was the work it takes to turn a great software idea or implementation into a polished, end user product. This involves a lot of things: market research, usability testing, UI design, documentation, localization, etc. that most devs can’t be arsed to do. The pure OSS answer to this problem is to “crowdsource” these tasks, but this ends up producing very mixed results.

    OSS almost always has a better UI. There is a lot of BS in market research, usability testing and UI design. The most obvious example is the ribbon in MS Office, which was supposedly the result of such market research and usability testing. There were companies selling all kinds of hacks and remedies to retro-fit Office 2007 to make it look like Office 2003.

    The core of MS word is still the same buggy junk I used (and swore off) > 15 years ago. Instead of fixing their buggy code, M$ spent untold resources on ‘market research’, and ‘usability testing’. The end result is a shiny but unusable glitter at the surface, and the same buggy core beneath the surface.

    In many ways that is the problem with corporate software: Too much focus on fluff, glitter. Too little focus on the design beneath the surface. And once something “works” (whatever the criteria for making that statement is in a corporate environment) it becomes impossible to touch that part that works or change the design later on. You can only “append” to it. That is why the bugs and arbitrary behavior in M$ word live on!

  124. @nigel:
    Btw, I worked briefly on Corel Painter in the mid-1990s, when it was Fractal Design Painter, and Steve Guttman came to us from being VP of Marketing for Adobe Photoshop (he is now a VP at Microsoft, and Mark Zimmer is now writing patents for Apple). I escaped from that mentality and software model, under which my creative abilities were highly under-utilized because we had to give way to the founder heroes (and I took advantage of it too). I appreciated the learning experience and the opportunity to work with people with 160+ IQs (Tom Hedges purportedly could memorize a whole page of a phone book by glancing at it), but I also see the waste (captive enslavement of those who need a salary) of resources in that non-optimal allocation model. I have not worked for a company since.

    With a compositional model, I assert proprietary software is doomed to the margins. Open source increases cooperation. No one can make the cost of software zero. Open source is a liberty model, not a free-cost model. My understanding is that Richard Stallman and the FSF are against OSI’s replacement of the word “free” with “open source”. My understanding is that this is because the FSF requires that the license disallow charging for derivative software so that the freedom-of-use is not subverted by forking, but perhaps this is in tension with the reality of the nonzero cost of software and the types of business models that derivatives might want to explore. I may not fully understand the rift, or maybe there is no significant rift, yet apparently there is some misunderstanding outside the community of what is “free” in open source.

    If we have technology such that software modules are compositional without refactoring, I think this tension in derivative software will diminish, because then any derivative module (which is a composition of modules) is completely orthogonal code, and thus may have a separate license from the modules it reuses without impacting the freedom-of-use of the reused component modules, because the reused modules will not have been forked nor modified. Thus I propose that with a compositional computer language, individual modules can be created orthogonally by individual devs and small teams, and thus the importance of corporations will fade.

    @Jeff Read and @uma:
    In my “pie in the sky” vision, corporations can still exist to slap compositional glitter onto core modules created by compositional open source. And they can try to sell it to people, but since the cost of producing such compositions will decline so much (because the core modules have been created by others and their cost amortized over greater reuse), the opportunities to create captive markets will fade also. In a very abstract theoretical sense, this is about degrees-of-freedom, fitness, and resonance (in a maximally generalized, abstract definition of those terms).

    The cost of creating the core modules is not zero, and so I envision an exchange economy to amortize the costs of software in such a compositional open source model. But first we need the compositional technology, which is sort of a Holy Grail, so skepticism is expected. I am skeptical, and thus curious to build it to find out if it works. However, if there is someone who can save me time by pointing out why it can’t work, that would be much appreciated. Which is perhaps why I mentioned it to the great thinkers here. Also to learn how to become a positive member of this community.

    @The Monster:
    I don’t see how the conclusion would be different, whether the growth of total debt causes the interest rate to be correlated, or vice versa. I.e. Transposing cause and effect makes no difference. Even if the correlated total debt and interest rate are not causing the other, the conclusion remains that total debt grows at the prevalent interest rate compounded. And I don’t think you are refuting that an increase in debt, increases demand and supply (of some items) in the economy. Recently it was a housing bubble. Loans pull demand forward, and starve the future of demand.

    @The Thinking Man:
    The problem with + for string concatenation is only when there is automatic (implicit) conversion of string to other types which use the same operator at the same precedence level, i.e. integers. This causes an ambiguity in the grammar. Eliminate this implicit conversion, and + is fine.
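
    For example, in Java (a minimal sketch of the surprise):

    String a = "total: " + 1 + 2;   // "total: 12" (left-to-right concatenation)
    String b = "total: " + (1 + 2); // "total: 3"  (arithmetic happens first)
    String c = 1 + 2 + " total";    // "3 total"   (addition happens first here)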

    I read that Objective C does not support mixin multiple inheritance, thus it cannot scale a compositional open source model. I don’t have time to fully analyze Objective C for all of its weaknesses, but it probably doesn’t have a bottom type, higher kinds, etc. All are critical things for the wide-area compositional scale. Thus I assume Objective C is not worth my time to analyze further. I know of only 3 languages that are anywhere close to achieving the compositional scale: Haskell, Scala, and Standard ML. Those are arguably obtuse, and still lack at least one critical feature. I realize this could spark an intense debate. Is this blog the correct place?

  125. @Winter so Transaction Cost Theory defines the natural boundary and size of the corporation. They mention Coase’s Theorem. Thanks.

  126. @Shelby:

    The only thing truly compositional, modular, and scalable (what you call the “holy grail”) involves transitioning to functional programming languages. I don’t see that transition happening soon.

    It will be hard getting people to transition from C++ to something like D or Go, let alone FPLs.

  127. @The Thinking Man
    “The Thinking Man knows that each time you append something via ‘+’ (String.concat()) in Java, a new String is created, the old stuff is copied, the new stuff is appended, and the old String is thrown away. The bigger the String gets the longer it takes – there is more to copy and more garbage is produced.”

    The Thinking Man is wrong. The Java compiler has optimized this into StringBuilder appends for many years now. And even if it did what you suggest, the difference would be negligible unless you did it many times.
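
    For example, this is (roughly) what javac has emitted for ‘+’ concatenation since Java 5:

    // What you write:
    String test = myString + " is just a test";
    // Approximately what the compiler emits:
    String test2 = new StringBuilder(myString).append(" is just a test").toString();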

    Clean/simple/easy-to-read code leads to easier-to-maintain, less bug-prone software. And Java code is by its nature going to be easier to read/cleaner/simpler than equivalent Objective C code. God, I can’t believe I’m defending Java – I really don’t like it :)

    Languages are designed to provide higher-level abstractions. And Objective C fails miserably at this compared to newer languages. I still like it better than C++ of course.

  128. In many ways that is the problem with corporate software: Too much focus on fluff, glitter. Too little focus on the design beneath the surface. And once something “works” (whatever the criteria for making that statement is in a corporate environment) it becomes impossible to touch that part that works or change the design later on. You can only “append” to it. That is why the bugs and arbitrary behavior in M$ word live on!

    Agreed, and that’s part of why Nigel’s statement has truth in it, but it’s truth with very little force. See, in order for a project such as a word processor or image editor to even get to the mega-LOC scale, bloat must almost necessarily be involved. Hence, the mega-LOC scale projects — the bloated ones — are almost universally going to have significant corporate input, because corporations tend to be prime bloat producers. People who are savvy about open source tend to do well — even thrive — with focused tools that compose along clean interface boundaries. The design of focused tools (of which the “Unix Way” is but one extreme form; consider also the development of GIMP plug-ins and Emacs modes) is conducive to complexity management and bloat reduction.

    So in effect, Nigel is saying that software isn’t serious unless it’s bloated. Which I don’t think is what he intended.

  129. @uma:
    I agree, if you meant not only FP but immutable (i.e. pure, referentially transparent) FP. It also must have higher-kinded, compile-time typing, and this can be mostly inferred, with unified higher-level category theory models hidden behind the scenes to eliminate compositional tsuris, without boggling the mind of the average programmer.

    I understand, because I initially struggled to learn Haskell, Scala, and Standard ML. If we make PFP easier, more fun, more readable, less verbose, and less buggy than imperative programming, perhaps we can get a waterfall transition. Note PFP is just declarative programming (including recursion), and declarative languages can be easy to use, e.g. HTML (although HTML is not Turing complete, i.e. no recursion). This is premature to share, as I don’t have a working compiler or a simple tutorial, only the simple proposed grammar (SLK syntax, LL(k), k = 2) and some example code. I found many confusing things in Haskell and Scala to simplify or entirely eliminate, including the IO Monad, that lazy nightmare, Scala’s complex collection libraries, Scala’s implicits, Scala’s mixing of imperative and PFP, Java & Scala type parameter variance annotations (unnecessary in a pure language), etc.

    The only thing truly compositional, modular, and scalable (what you call the “holy grail”) involves transitioning to functional programming languages. I don’t see that transition happening soon.

    Might want to look into Ada. It has a bad reputation, but with real type and module systems and no header files per se, modularity and well-specified interfaces are the path of least resistance when coding in Ada. Ada code tends to be more reliable because you have to be explicit about dangerous stuff like side effects and aliased variables (quite unlike C-family languages).

  131. @Jeff Read: an imperative language will never be wide scale compositional, because side-effects are anti-compositional.

    Stylistically speaking, any language that has loop statements, e.g. ‘for’ or ‘while’ is already lost imho, because it encourages the anti-compositional mode of programming. Iteration should be done through tail-recursion.
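
    A minimal sketch of the contrast, in Java for familiarity (caveat: the JVM does not eliminate tail calls, so the recursive form can overflow the stack on large inputs):

    // Loop style: iteration via mutable state.
    static int sumLoop(int[] xs) {
        int total = 0;
        for (int x : xs) total += x;
        return total;
    }

    // Tail-recursive style: the accumulator parameter replaces mutation.
    static int sumRec(int[] xs, int i, int acc) {
        return i == xs.length ? acc : sumRec(xs, i + 1, acc + xs[i]);
    }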

  132. @Shelby
    “Stylistically speaking, any language that has loop statements, e.g. ‘for’ or ‘while’ is already lost imho, because it encourages the anti-compositional mode of programming. Iteration should be done through tail-recursion.”

    Wow. At least you didn’t try to apply thermodynamic theory to this.

    I want to see someone go on a rant against state and variables now.

    And where are the assembler advocates?

  133. Shelby,

    But what about bloody I/O? In order to do that and remain purely functional, you must come up with some way of describing the feeding of the dog rather than actually feeding the dog. There are several — monads get the most press — but they are all confusing.

    And let’s not forget that functional languages need garbage collectors, and garbage collectors are an inherent performance killer.

    Ada is an imperative language, but it gives you control over the dangerous effects of imperativity. It is useful for applications where purely functional languages are a poor fit (embedded or real-time systems, performance-critical areas of code, code that must run under extreme memory pressure (a lot of server code is like this), etc.).

  134. My god I am begging the people in this thread who are slagging languages to at least use them first. Then I will next beg them to go take a 30 second tour through the history of programming languages and contrast it with Moore’s Law.

    A GC is not a bad decision in the phone space, especially not when we are seeing the same type of mad dash on the hardware side that we saw starting in the ’90s with the Pentium. There were still people complaining about the use of garbage collectors when Java did what Smalltalk and Lisp did 20 years earlier and included a garbage collector. After surmounting years of FUD about the supposedly crippling effects of such a decision, a quick trip over to the programming languages shootout places Java 5th, behind Ada, C, C++ and ATS. Obviously having a GC is not the sort of crippling performance barrier it was made out to be, and there is no reason to believe that this won’t play out in phones as well. And the reason why you would want one is simple: it’s more productive. As simple as Apple’s reference counting makes memory management, it doesn’t make it easier than garbage collection, and especially if you want to embrace things like closures and higher-order functional goodness, a GC is mandatory. This will become especially important as we move to more of a mixed-mode type of programming, combining traditional object-oriented modelling with stateless functional programming where we can get away with it.

    Second, both Java and Objective-C are great languages that share a lot of history but decided to represent that history completely differently. The reason Objective-C has those great big method names is because it decided to run with its Smalltalk heritage, and present message-sends (which generally cause methods to be called) the same way Smalltalk did. The selector syntax seems verbose, but it is actually pretty awesome, especially when aided with code completion. The canonical example is the Point class, where you can construct it with x,y coordinates or an angle and distance for polar coordinates. In a non-Smalltalk language this is harder to represent (what do Point(1,1) and Point(3.14, 1) represent?) but in Smalltalk/Objective-C you would write Point x: 1 y: 1 or Point radian: 3.14 distance: 1. There is no ambiguity in what the parameters you are passing in stand for.
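
    In a C-syntax language you end up faking the keyword syntax with named factory methods; a hypothetical Java sketch:

    class Point {
        final double x, y;
        private Point(double x, double y) { this.x = x; this.y = y; }
        // Named factories stand in for the Smalltalk keyword selectors:
        static Point xy(double x, double y) { return new Point(x, y); }
        static Point polar(double angle, double distance) {
            return new Point(distance * Math.cos(angle), distance * Math.sin(angle));
        }
    }
    // Point.xy(1, 1) vs. Point.polar(3.14, 1): readable, but bolted on.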

    Java decided to take Smalltalk’s ideas (mostly) and then cover them with C syntax. However, it can still intern strings and does (try reading the javadoc, or at least doing a 5-second Google search, before slagging the language), and it can optimize string concatenation to appends, as it has done for several versions, in the exact same way that Smalltalk might decide to inline method invocations. As long as you get the correct end result, the language is free to optimize however it wants, and does.

    Really, learning Smalltalk is super enlightening, and I recommend it in the same way that people recommend Lisp, and for the same reasons. Maybe if more people did that, they would think first and decide not to slag something without having any real experience with it.

  135. I should also note that a quick trip over to ohloh.net shows emacs as containing a little over 2.5 million lines of code as of right now, which kind of beats nigel’s ridiculous goal-post shifting.

  136. I want to see someone go on a rant against state and variables now.

    All the evil of bad programming is ultimately traced to variables. Good programming just uses the stack. Like Forth.

    j/k

    1. >All the evil of bad programming is ultimately traced to variables. Good programming just uses the stack. Like Forth.

      /me slaps SPQR with a large fish. Seasoned with garum, of course.

  137. @Eric It’s not censorship but civility. I wouldn’t want to go to your physical home to do nothing more than piss you off so likewise I wouldn’t want to go to your virtual home to do nothing more than piss you off. It’s too easy to forget there is a human on the other side of the screen.

    So the easiest way to resolve this is for you to believe me when I say I’m not here in bad faith nor is it my goal to goad you into anger OR I simply stop. Either way it pains me when you delve into name calling as it doesn’t reflect well on anyone.

    1. >I wouldn’t want to go to your physical home to do nothing more than piss you off so likewise I wouldn’t want to go to your virtual home to do nothing more than piss you off.

      Try not talking out of your ass, then. You’ve tried to handwave all examples of large-scale open-source projects produced without “corporate largesse” out of existence, which is both profoundly ahistorical and insulting to those of us who toiled for years on them, never even dreaming of a future in which the likes of IBM and Apple and HP would drop millions of dollars on us. What, did you think “ESR” came out of nowhere? I thought of Emacs first because I was there – Ohloh says 2.5 million lines but it was already a cool million in the early 1990s when I was writing LISP modes for it like crazy. Most old-school hackers could tell similar stories about other large projects.

      We wrote the infrastructure you rely on today in basements and dorm rooms and CS labs, stealing time from day jobs or not having lives at night, precisely because there was no fucking “largesse” to be had. I spent a half-decade of my life selling open source to Wall Street before the X Consortium became an example rather than a one-off – and I certainly didn’t do it so some ignorant jerk could try to tell me that we couldn’t write or maintain large codebases without showers of corporate gold.

  138. @jmg no goal post shuffling. I should have checked ohloh rather than just used the first source I found. Interesting the size of the discrepancy and the massive jump in 2010. I wonder what that was.

  139. @nigel, I think that a likely candidate is the merge of the CEDET (Collection of Emacs Development Environment Tools) code which was developed by Eric Ludlam in his spare time apart from his job working on Matlab.

  140. @Shelby:

    I was referring to immutable when talking about FPs.

    I understand, because I initially struggled to learn Haskell, Scala

    Haskell – same here. Scala – don’t know much about it, but imo not worth learning. Putting functional and imperative under the same roof is not a very good idea imo. It results in more confusion than good. The right approach is to use mixed-language development sitting on top of something like LLVM, or some other VM, so that the languages can interoperate while keeping the cleanliness and purity of every paradigm. That is why I am a fan of Clojure -> because Clojure embodies this approach to programming, and draws the lines where they should be drawn.

    If we make PFP easier, more fun, more readable, less verbose, and less buggy than imperative programming, perhaps we can get a waterfall transition.

    It is less buggy and far less verbose already. The imperative/FP ratio is already at 5x to 10x LOC. How much shorter do you want your programs to be? What FPs are lacking is mainly tools, though this is being remedied somewhat in the case of Haskell (there is an IDE for Haskell written in Haskell). As for a waterfall transition, I am not as hopeful as you are, although I do think there is a good future for Haskell, F#, Erlang and Clojure.

    Where FPLs seem somewhat lacking to me is in how they deal with the question of time – real time. While the availability of an absolute time reference (unlike distributed systems) makes it feasible for the compiler to easily reason about such questions as concurrency, parallelism, etc., that absolute time reference is not available to the program as a whole. The programmer cannot exactly control how the many pieces of the program behave vs. the time axis. While in most systems this is not of major importance as long as things happen correctly and fast enough, in quite a few systems it is.

  141. @nigel:

    no goal post shuffling. I should have checked ohloh rather than just used the first source I found.

    Your posts were not credible when it comes to LOC for TeX/LaTeX. If LaTeX were that small a system, why does it occupy 500MB on my hard disk? Oh wait, let me guess: you only checked the LaTeX executable itself, dismissing the countless packages written on top of LaTeX that do all kinds of magnificent things -> all entirely written by volunteers and people in basements, and all part of the same installation of TeX/LaTeX on any/every system.

    What is even worse is that you seem to think that unless we lump all these packages into one giant monolithic application with flashy icons and make every single feature in the packages accessible to users from toolbars/ribbons/docks with apple-dock-genie-like animation, it somehow doesn’t count as an application.

  142. The canonical example is the Point class

    Actually, in Cocoa, we use a struct, not a class, to hold points. Obj-C is C plus Smalltalk-style messaging. We still use the C types where they make sense.

  143. Here’s a little something to think about.

    John Gruber recently posted that iTunes brought Apple $1.4 billion in revenue last quarter, noting how “it’s insignificant in the grand scheme of Apple’s income”. Indeed, Apple brought in $13.3 billion in revenue on the iPhone alone last quarter.

    Which brings us to Google. Estimates are that Google could make up to $2 billion in revenue on Android… for all of next year.

    Google’s development costs are probably lower because they only have to develop the operating system and not the hardware, and their marketing costs are probably lower, but still, they’re just not pulling in anything near what Apple is.

    Your takeaway from this should be, of course, that Google is winning and Apple is losing.

  144. Apologies for the length, I am packing several replies in one comment.

    @john j Herbert:
    Apple’s gross appstore revenue for 2011 is projected to be $2–4 billion. And total appstore annual gross revenue is projected to rise to $27 billion in 2 years. While hardware prices and margins will decline, the confluence thus perhaps lends some support to my thought that the future waterfall decline threat to Apple is an open app Web3.0 cloud. Perhaps Apple’s strategy is to starve the Android hardware manufacturers of margins, as the margins shift to the appstore and eventually total smartphone hardware volume growth decelerates and the debt-driven expansion starves future hardware demand. I note the battle with Amazon over the name “Appstore”. I broadly sense that Apple may be trying to create a captive internet, but I haven’t investigated this in detail.

    @jmg:
    My understanding is that Smalltalk is anti-compositional, i.e. anti-modular, because it doesn’t have a sufficient typing system[1], e.g. subtyping, higher kinds, and diamond multiple inheritance. You can correct my assumption that an object messaging fiction doesn’t remove that necessity.

    @uma:
    Agreed that interleaving FP and imperative adds complexity to the shared syntax for no gain if purity (a/k/a referential transparency) is the goal (and this apparently applies to Clojure too), because in functionally reactive programming (i.e. interfacing IO with PFP), the impure functions will be at the top level and call the pure functions[2], a simple concept which doesn’t require the mind-boggling complication of Haskell’s IO monad fiction.
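
    A minimal sketch of that shape (hypothetical Java, names mine):

    // Pure core: no side effects, freely composable and testable.
    static String greet(String name) { return "Hello, " + name + "!"; }

    // Impure shell: all I/O stays at the top level and calls into the pure core.
    public static void main(String[] args) {
        String name = new java.util.Scanner(System.in).nextLine(); // effect
        System.out.println(greet(name));                           // effect
    }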

    Clojure is not more pure than Scala, and is only “configurably safe“. A Lisp with all those nested parentheses requires a familiarity adjustment (in addition to the digestion of PFP) for the legions coming from a “C/C++/Java/PHP-like” history. I doubt Clojure has the necessary type system for higher-order compositionality[1].

    My point was that we need all of those advantages in one language for it to hopefully gain waterfall adoption. The “easier” and thus “more fun” seem to be lacking in the only pure FP language with the necessary type system[1], Haskell. And Haskell can’t do diamond multiple inheritance, which is a very serious flaw pointed out by Robert Harper and is apparently why there are multiple versions of functions in the prelude. All the other non-dependently-typed languages have side-effects which are not enforced by the compiler, or don’t have the necessary type system. Dependently-typed languages, e.g. Agda, Epigram, and Coq, are said to be too complex for the average programmer, which esr noted in his blog about Haskell.

    I agree the IDE tools are lacking, but HTML demonstrated that a text editor is sufficient. I disagree that any of those other languages can become the next mainstream language, regardless of how good their tools become, because they don’t solve the compositional challenge[1], so what is the motivation for the vested interests to leave the imperative world, then? I think a language that solves the compositional challenge will “force” adoption, because its collective community power in theory grows like a Reed’s Law snowball, i.e. it will eat everything due to the extremely high cost of software and the amortization of cost in a more granular compositional body of modules.

    @phil:
    The prior paragraph derives abstractly from thermodynamics, i.e. economics. State and variables exist in PFP. What PFP does is make the state changes orthogonal to each other[3].

    @uma:
    The time indeterminism in Haskell is due to lazy evaluation, which isn’t desirable. See the “lazy nightmare” link in my prior post for the rationale. Orthogonal to the indeterminism issue: where finely-tuned imperative control of time is necessary, which btw is always a coinductive phenomenon[2] and thus anti-compositional, this goes in the top-level impure functions in Copute.

    @Jeff Read:
    IO in PFP requires the compositional way of thinking about the real world[2], i.e. a coinductive type. The practical example[2] is easy for the average programmer to grasp. It is just a generalization of the inversion-of-control principle. This stuff isn’t difficult; it was just difficult to realize it isn’t difficult. Once you have that “a-ha” and it comes clear in the mind, it is like “why wasn’t I always doing it this way”.

    Sections of my site (scroll horizontally):
    [1] Skeptical? -> Expression Problem, Higher-Level, and State-of-the-Art Languages.
    [2] Skeptical? -> Purity -> Real World Programs.
    [3] Skeptical? -> Purity -> Real World Programs -> Pure Functional Reactive -> Declarative vs. Imperative.

    1. >Turns out Clayton Christensen has a pretty good handle on why Apple hasn’t been subject to disruption under Steve Jobs

      I agree with this analysis as far as it goes, but find it very curious what Clayton doesn’t notice. That is, that Apple isn’t disrupting its own phone business.

  145. @John J Herbert
    Apple is indeed raking in money. But is it more than the combined income of all Android producers? Google is just one player.

    Apple is also in a position to skim off all profits in the market. This captive market will of necessity be smaller than an open market, where profits stay with the producers to a greater extent.

    So is Apple winning the marketshare game? Obviously not.

  146. Here is an “interesting” analysis that is germane to the topic:

    “Everyone knows the true reason why Steve Jobs resigned. Apple cannot compete with dedicated handheld gaming devices. What Apple fans are not familiar with is that Steve Jobs was an employee of Atari and received his business training from Atari’s main investor. The Apple II was designed around the video game ‘Break-Out’ which Wozniak essentially designed. Video games created the personal computer, not the other way around. In the same way, handheld video games created the handheld computers, not the other way around. And despite Steve Jobs’s origins in gaming, his success in media only comes from old media such as movies (Pixar) to music (iPod). Steve Jobs has had the same exact effect on video games as Bill Gates: none. Aside from providing computer hardware that can also play video games, the nature of video games has been unchanged and uninfluenced by Jobs.

    “It is wise for Steve Jobs to exit his role in the company now when it is clear that he has failed for the second time (the first time was in home game consoles, the second was handheld game consoles) to take over gaming. Steve Jobs has failed to surpass his original employer, Nolan Bushnell, in influence over video games.

    “Meanwhile, dedicated game hardware continues along merrily oblivious to what goes on in Apple.”

  147. @Winter: Yes, Apple’s profit share is almost certainly more than the combined profit share of all Android producers. See here.

  148. @Some Guy, sorry I was unclear in my statement. I meant that Point was the canonical example I had seen in tutorials describing the selector syntax used in Smalltalk, and so of course my statement was Smalltalk-centric. I was just trying to demonstrate the keyword message syntax that both Smalltalk and Objective-C share, and why it can have benefits that outweigh its initial verbosity.

    @Shelby, I can’t really correct your assumption because I don’t know what it is. You have an amazing amount of technical jargon in your posts, but by your own admission you don’t have a functioning compiler/interpreter, no tutorial which simply gets across the advantages of the paradigm you are proposing and, most importantly, no body of code demonstrating sophisticated functionality that shows considerable benefits over another body replicating that functionality but developed in a different language/paradigm. In this sense it reminds me of the strict FP advocates who tempt me to read their blog posts promising to reveal the revelatory power of their approach, then show me a few numeric calculation functions, and then leave me to apparently fill in the enormous chasm between a succinct Fibonacci implementation and a fully developed user application.

  149. @uma If you can’t see the difference between the really large projects I cited and the projects put forth then we have very little basis for comparison.

  150. @jmg, very good point, theory is cheap and those who talk aren’t building. Thanks.

    Someone had threadjacked about the Gibson raid, and now the CEO claims that the 2008 amendment to the “Lacey Act” makes every domestic citizen a potential criminal if they sell any product that was originally manufactured in a way that violates any law of any foreign country.

  151. Apple cannot compete with dedicated handheld gaming devices.

    Tell that to Nintendo’s investors. Sales of Nintendo DS-class systems (including the 3DS) and games are flagging, putting huge dents in Nintendo’s profits — all because of the iPhone. (Note: not cellphones or smartphones — the iPhone. There is simply no other player in the AAA cellphone gaming market.) Their investors are begging Nintendo to start releasing Super Mario content on the App Store or something.

  152. @esr I suspect that the researchers were counting only the core emacs code…there’s a reasonably close match between the SLOC count they provide and the C code count on Ohloh. If I defend that folks will claim I’m changing goal posts but folks should really stop to consider how much of the Lisp code is core emacs functionality (quite a bit) and how much was for hangman and news readers (also quite a bit). Emacs wasn’t just an editor but an application platform.

    I don’t know how old you think I am but I was in the same dorm rooms and basements and so forth in the 80s. Don’t make it out to be some grand sacrifice and not having lives because of some great cause. We did it because it was fun.

    And none of this was “large scale” even for the large projects. My “day job” while I was in school at the time was working on a 70-person team to develop code for a large science data center, so I had a good feel for the difference in scale and coordination required even then. At night I’d be computer “first aid” staff in one of the labs, and hacking. You guys were a little older than me, being in grad school, but I have my own recollections of that era with one foot in the hacking culture and one foot in the formal software development culture.

    And there was corporate largess everywhere, from donated PCs, mainframes, etc. to internships, grants, etc. If you were hacking in any sort of academic environment, it was there. That’s ignoring that universities have pretty deep pockets of their own.

    1. >If I defend that folks will claim I’m changing goal posts

      And they’ll be right, too.

      >Don’t make it out to be some grand sacrifice and not having lives because of some great cause. We did it because it was fun.

      True, but orthogonal to any claim you’ve been making.

      >And none of this was “large scale” even for the large projects.

      Right, not if you carefully define away any counterexamples as you’ve been doing.

      >And there was corporate largess everywhere, from donated PCs, mainframes, etc. to internships, grants, etc.

      I never saw any of it. Nor, to my knowledge, did any of the hackers I worked with. Perhaps you were exceptionally lucky; if so, bully for you but it has distorted your view of the conditions most of us had to cope with.

  153. @jmg Thanks. That big jump makes sense when you merge in a large library.

    Aside from this whole “what is significant and what counts as code/corporate contribution” debacle I started, I’ve always grimaced when I use ohloh data.

    Especially the “how much this code cost to build” figure. For the (now defunct) project I’m most familiar with, I have a reasonably good estimate of how much it actually did cost to build, and the two numbers are fairly far apart. There’s also a bunch of extraneous (aka dead-end) code in our repository that I simply wouldn’t count as part of the core.

  154. @esr

    My primary claim is that very large software projects require rigor and structure that is most easily achieved by paying folks to do what is needed; big software product companies are well suited to do this and hence are unlikely to ever disappear.

    Everything else is typical of what happens in most internet debates…a swirl into ever weirder and more extreme positions and the defending of minutiae. My bad, I concede all of your points regarding emacs, etc. My apologies for picking words that were unintentionally inflammatory.

    But I don’t believe I am being extreme when I say that I believe Eclipse/OpenOffice/etc are of the same scale as the largest commercial IDEs/MS Office/etc and very unlikely to have been developed as a pure grass-roots effort. I think the difference between the widespread adoption of Eclipse/OpenOffice/Firefox/etc vs commercial equivalents and say GIMP vs Photoshop is due to the (sometimes massive) investment in these open source projects by companies for their own strategic reasons. Strategic reasons that aren’t always valid across the spectrum of software business cases. Hence the lack of anyone sinking significant resources into GIMP at this time.

  155. @esr

    Regarding goal posts:

    The examples I consistently provided were of the large single-program type. The lines-of-code count is one major aspect of the goalpost, and while I’ve tried to clarify what I meant by it, the goalpost itself hasn’t moved: programs like MS Office, Photoshop, etc.

    I would assert that a large single program with 1M+ lines of code is much more difficult to engineer and manage than a medium-sized program of 200K lines of code plus 800K worth of individual add-ons/plug-ins. The difference in complexity is not linear. I feel I’m on pretty solid ground with that assertion, which is why core application size is part of the goalpost.

  156. I would assert that a large single program with 1M+ lines of code is much more difficult to engineer and manage than a medium-sized program of 200K lines of code plus 800K worth of individual add-ons/plug-ins.

    Well, yes. That’s why the smart method of large-scale development is to do the latter, not the former. Your point seems to devolve into, “The only way bad, non-modular design can flourish is to be sheltered in the hothouse environment of the corporation.” That is true, without actually being an indictment of open source.

  157. @Nigel
    It may be true that there are cases where the transactional costs of uncoordinated software development leave captive markets for the corporation. That isn’t an indictment of the open source model of cooperation in the inverse commons, which amortizes costs and risks, but imo rather an orthogonal indictment of the technology we currently have for software development.

    Hypothetically, if a huge software project could be optimally refactored such that it had the least possible repetition, and if I were correct to assert that mathematically this requires the components (e.g. functions and classes) to be maximally orthogonal, then what would happen to your assertion that only big software companies will ever be able to cooperate to create huge projects?

    In the theory of the firm that Winter shared, the reason the corporation exists is that there is a transactional cost (or risk cost) to uncoordinated cooperation. So what is the nature of that transactional cost in software? Afaics, it is precisely what causes the Mythical Man-Month: getting all devs on the same wavelength, because the code they write all has interdependencies. But if there is maximal orthogonality of code, then the communication reduces to the public interfaces between code.

    Also, with higher-level models such as Applicative, you automatically lift any function over any number of parameters of unlifted types, T -> A -> B -> C …, to higher-kinded versions of those types. That is, without writing infinite special-case boilerplate you get for free all functions of type, e.g., List[T] -> List[A] -> List[B] -> List[C], and likewise for any other type that inherits from Applicative, not just List (see the sketch below). This is the sort of reuse and orthogonality that could radically reduce the transaction costs of uncoordinated development.

    With a huge preexisting library of orthogonal modules, a small team of the future could possibly whip up large compositions at an exponentially faster rate. We have a huge body of code out there today, but my understanding is that it is often difficult to reuse, because it takes more time to learn what it does and to extricate and refactor the needed parts. I have not read esr’s book on Unix philosophy and culture, but I think I understand intuitively that it has a lot to do with orthogonal design, e.g. pipes in the shell with many small utility commands (programs). Although it might seem that code for different genres of programs is totally unrelated, I am finding in my research that maximal orthogonality produces more generalized code.
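    To make the lifting claim concrete, here is a minimal Scala sketch (toy definitions of my own for illustration; a real codebase would use a library such as Scalaz rather than hand-rolling this):

    // minimal Applicative: pure injects a value, ap applies a lifted function
    trait Applicative[F[_]] {
      def pure[A](a: A): F[A]
      def ap[A, B](ff: F[A => B])(fa: F[A]): F[B]

      // lifts an ordinary curried two-argument function to F-wrapped values;
      // the same mechanical pattern extends to any arity, with no per-function boilerplate
      def lift2[A, B, C](f: A => B => C): F[A] => F[B] => F[C] =
        fa => fb => ap(ap(pure(f))(fa))(fb)
    }

    // one instance; any type constructor with an Applicative instance gets lift2 for free
    object ListApplicative extends Applicative[List] {
      def pure[A](a: A): List[A] = List(a)
      def ap[A, B](ff: List[A => B])(fa: List[A]): List[B] =
        for (f <- ff; a <- fa) yield f(a)
    }

    val add: Int => Int => Int = a => b => a + b
    val sums = ListApplicative.lift2(add)(List(1, 2))(List(10, 20))
    // sums == List(11, 21, 12, 22)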

    I can rightly be accused of over-claiming without a working body of code (so I had better shut up), and on the other extreme you wrote we can “never” progress. I hope I can change your mind someday.

  158. I would assert that a large single program with 1M+ lines of code is much more difficult to engineer and manage than a medium-sized program of 200K lines of code plus 800K worth of individual add-ons/plug-ins. The difference in complexity is not linear. I feel I’m on pretty solid ground with that assertion, which is why core application size is part of the goalpost.

    Which pretty much confirms my suspicion regarding your central point. Regardless of what you intend to say, what you are actually saying is that the open source model is not conducive to producing big, bloated balls of mud, and that corporate development is.

    That sounds like an indictment of corporate development, not of open source. It may also serve as a warning to open source project maintainers who accept large contributions from the corporate sphere. (Some say this has already happened to the Linux kernel…)

    The tone of your postings also seems to imply that an application is not worthy of being considered serious if it consists of a small engine controlled with scripts and add-ons, rather than a big ball of C++ mud (and, being C++ mud it has a good amount of toxic waste mixed in with it). Ever think that the “serious applications” from big companies look like that because big companies are in the main not smart enough to consider there are other ways of doing software?

  159. I think the difference between the widespread adoption of Eclipse/OpenOffice/Firefox/etc vs commercial equivalents and say GIMP vs Photoshop is due to the (sometimes massive) investment in these open source projects by companies for their own strategic reasons.

    The point I found amusing is that Linux didn’t count because Red Hat makes money off it, when it was a massive project long before Red Hat existed.

    That and my first thought was “who thinks ‘lines of code’ is actually a meaningful metric”? I mean really… move into this century please.

  160. Something completely different.

    Pressure seems to be mounting on Google to sell the Motorola phone division. Can Cyanogen start a fork of Android when Google starts to compete directly with the Asians? Or is he just being kept around to pressure Google?

  161. Tell that to Nintendo’s investors. Sales of Nintendo DS-class systems (including the 3DS) and games are flagging, putting huge dents in Nintendo’s profits — all because of the iPhone. (Note: not cellphones or smartphones — the iPhone. There is simply no other player in the AAA cellphone gaming market.) Their investors are begging Nintendo to start releasing Super Mario content on the App Store or something.

    For a long time the 3DS was getting its clock cleaned by . . . the DS and the PSP. Maybe Nintendo’s failure has nothing to do with the iPhone and everything to do with . . . releasing a crappy product?

  162. @Jeff Read
    That research correlated SLOC with fault-proneness complexity. It is possible (I expect likely) that given the same complexity of application, bloated code may have more faults than dense code.

    A metric that correlates with application complexity (as a proxy for relative price in price × market quantity = value) is a requirement in the exchange model for open source that I am proposing. I will probably investigate a metric of lambda-term count and denotational-semantics (i.e. userland type system) complexity.
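    To sketch what such a metric might look like (entirely my own toy illustration; the actual metric is an open research question): count the nodes of a program’s lambda terms, which is insensitive to surface verbosity in a way raw SLOC is not.

    // toy lambda-calculus AST; the metric counts term nodes
    sealed trait Term
    case class Var(name: String) extends Term
    case class Lam(param: String, body: Term) extends Term
    case class App(f: Term, arg: Term) extends Term

    def termCount(t: Term): Int = t match {
      case Var(_)    => 1
      case Lam(_, b) => 1 + termCount(b)
      case App(f, a) => 1 + termCount(f) + termCount(a)
    }

    // e.g. termCount(Lam("x", App(Var("x"), Var("x")))) == 4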

  163. The point is that a metric for fault-proneness complexity may not be correlated with application complexity, i.e. the effort and difficulty of the problem solved.

    1. >ATT HTC LTE tablet for $700 on contract! Apple must be terrified!

      You really have to wonder what these idiots were thinking.

  164. >You really have to wonder what these idiots were thinking.

    Saw that article. Had the same reaction – all I could think was that they’re hoping to fleece some overeager, foolhardy, LTE-craving early adopters.

  165. you stole my name!

    I’ve been here a lot longer than you. I can do that. I (and Jeff, Shenpen, Ken, Pete, and Dean) kept this blog going for two years while esr was away.

  166. >>ATT HTC LTE tablet for $700 on contract! Apple must be terrified!

    >You really have to wonder what these idiots were thinking.

    They’re thinking that’s the price they have to charge if they’re going to make any money. They can’t undercut Apple because they don’t have Apple’s volume and supply chain. So they’re offering a product with one or more unique high-end features for a high price, expecting relatively low sales volume selling to the early adopters and then maybe if it catches on they can drop the price later as costs come down.

    In tablets there is no “Apple Tax”. Yeah, Apple’s making a profit, but they’re selling more cheaply than anybody else can afford to while producing a remotely similar level of quality.

    I take this as more evidence for my earlier contention that development cycles are (for *new* products, not tiny tweaks on existing ones) still on the order of 2 years. Yes, this thing is an obvious miss, but remember back when Apple was about to release a tablet and everybody expected it to be priced at around $999? iPad 1 came out, what, 16 months ago? April 2010. Suppose it *had* come out at $999 instead of $499. Against that, this thing at $700 would have been plausibly competitive. So the simplest explanation is that they started development on this a solid year and a half ago, guessing at what the competitive landscape would look like now…and guessed wrong.

  167. Lines of code turns out to be the only accurate metric of code complexity.

    What little I can read of that article implies the case study is about code within the same project, and thus (one would hope) with the same coding style. The value of metrics such as Cyclomatic Complexity and “Change Risk Analysis and Prediction” is that they are more meaningful between projects with different styles and (in some cases) different languages. And considering the context, we’re likely talking about both of those differentiating factors.

  168. Also a cursory evaluation of the papers that cite the El Emam et al. paper show that it’s not without either controversy or alternate results. At least one citing paper gets better results out of export coupling.

    To clarify, I don’t mean to say that LOC is a completely useless metric. But defining project size (and value) by LOC seems somewhat backward to me when one project is written in C (or C++) and one is written in Lisp, the coding styles are going to be very distinct, and the opinions on what should go into a software product (“what needs to be there” vs “everything including the kitchen sink and all of its plumbing”) are like night and day.
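    To put a toy example behind that (my own illustration, not from the paper): the same behavior can honestly be counted at one line or at ten purely as a matter of idiom, which is exactly the problem with comparing raw SLOC across languages and styles. A sketch in Scala:

    // identical functionality, roughly an order of magnitude apart in SLOC
    def anyNegative(xs: List[Int]): Boolean = xs.exists(_ < 0)

    def anyNegativeVerbose(xs: List[Int]): Boolean = {
      var found = false
      for (x <- xs) {
        if (x < 0) {
          found = true
        }
      }
      found
    }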

  169. Which pretty much confirms my suspicion regarding your central point. Regardless of what you intend to say, what you are actually saying is that the open source model is not conducive to producing big, bloated balls of mud, and that corporate development is.

    You could characterize it that way. On the other hand there are millions of folks that prefer those big bloated balls of mud because they provide functionality that open source products rarely do in a cohesive (or well documented) manner.

    /shrug

    There’s bloat of course. But there’s a lot of functionality packaged in a way that normal users can actually use. Since we agree that open source is not conducive to producing these large projects then great, there’s no argument. The market will decide whether big bloated balls of mud are useful or not. Thus far these products have been highly successful, and the companies that produce them are successful as well. The odds that they will disappear seem highly unlikely. Any software silver bullet that improves the productivity of a small team will typically improve the productivity of a large team too.

    I am not a big fan of C++, and while I wouldn’t characterize it as toxic, I find it far less pleasant than Java or C# for development. I’m not driven by ideology but by pragmatism. Which is why I really like and support open source but realize that it has some inherent weaknesses. Likewise the same for closed proprietary source development/products.

    Terrible, I know. Unbelievers are typically better tolerated than those considered heretics…

  170. @JonCB

    I use SLOC as a convenience since it’s the most accessible metric. It provides context when I say “large project” so it is understood to be of very large size. 1M SLOC is probably now on the very low end of the “large project” scale these days. I could have described it as greater than 18,000 function points (by the usual backfiring rule of thumb of roughly 55 SLOC per function point for Java-class languages, 1M SLOC works out to about 18,000 FP), but that doesn’t resonate all that well. Perhaps that would have been preferred…pray tell, how many function points is emacs or Eclipse? Tell me without taking SLOC and applying the rule-of-thumb conversions to FP for the respective languages used.

    Cyclomatic complexity is one of those often-cited metrics that managers like to apply (thou shalt have no complexity above 10) but one with its own set of limitations. A nice indicator of potential trouble, but not one with which to judge relative sizes of projects. That’s not what it’s for.

    To borrow from Churchill: SLOC is the worst form of metric, except for all the others that have been tried.

    PS Linux didn’t count because the debate was whether mega-projects are feasible without corporate investment. For Linux there are strategic reasons that these companies invested millions into it. As I said, these strategic reasons (even if nothing more than the founder hating Gates) do not exist for every software product.

  171. Linux didn’t count because the debate was whether mega-projects are feasible without corporate investment. For Linux there are strategic reasons that these companies invested millions into it.

    The question though is which came first, the millions of dollars invested in it or the millions of SLOC invested in it. How big was Linux by the time it managed to attract such corporate attention?

    Also note that tex (at least one distribution of it anyway) is about 900k SLOC; saying latex only has “34k SLOC” doesn’t mean much, as latex is a format for tex and not a “stand-alone” project in the sense you seem interested in. Of course I’m sure that’s still not enough to “count” but I thought I’d mention it.

  172. Also note that tex (at least one distribution of it anyway) is about 900k SLOC; saying latex only has “34k SLOC” doesn’t mean much, as latex is a format for tex and not a “stand-alone” project in the sense you seem interested in. Of course I’m sure that’s still not enough to “count” but I thought I’d mention it.

    Doesn’t count. It’s not monolithic enough, or popular enough, or something else. (Nigel seems to have evolved beyond moving goalposts, to projecting holograms of goalposts and declaring the kick not good because it was kicked through the hologram.)

  173. You could characterize it that way. On the other hand there are millions of folks that prefer those big bloated balls of mud because they provide functionality that open source products rarely do in a cohesive (or well documented) manner.

    On the gripping hand there’s buttloads of popular proprietary software that’s written as a small core with scripts and plug-ins supplying the end-user UI. You can start with, oh, just about every AAA game released in the past five years or so.

  174. @jeff I believe that you would be incorrect regarding AAA games. The Unreal 3 engine is around a couple million lines of code if I recall correctly. Gamebryo wasn’t small when I looked at it, and neither was Torque, which is an older AAA engine. And AAA games have art assets no open source game can match. They’re just too expensive to produce.

    AAA games are like the worst example of a commercial niche that open source could fill. Now I’m sure to get beat up over that too and have to defend why a game like Deus Ex or Mass Effect is bigger or more whatever than AssaultCube or Sauerbraten.

    Frankly, for games I think the indie model (closed cathedral development using cheap or free-as-in-beer toolsets) is better than the open source model for small/single game devs. I’ve always liked the GarageGames model and am glad they survived.

  175. @Nigel It is not a coincidence that you mention the Unreal 3 engine, because it was Tim Sweeney’s “The Next Mainstream Programming Language” that sparked my interest to fundamentally improve uncoordinated development. As I dug into what the features would need to be (which Sweeney did not entirely enumerate), there were no languages left standing that had them. Haskell, SML, and Scala were the finalists.

  176. FP, like all things, has advantages and disadvantages. There are many powerful things you can do with FP-based languages, and they are highly parallelizable. But the strength of no side effects is also a weakness in other scenarios.

    What’s interesting is where we are 5 years later, despite the arguments made by Sweeney. The coming crisis in 2009…ah, not so much. I’m aware of some F# samples using XNA, but it strikes me as pretty much the same situation as in 2006…

  177. I am becoming more confident there are no mainstream use cases where FP has disadvantages. And the advantages are beyond orders-of-magnitude, with the power of Reed’s Law for composition. Imperative programming isn’t generally compositional, and that is fundamental.

    I recently explained the O(1) solution to the n-queens using an immutable random access array, which provides evidence that immutable is not likely to be log n slower in mainstream use cases. Today I published a tweak to Traversable so that lazy is not more efficient than eager for FP composition on combinations of mapping and folding.

    As best as I can tell, no one other than me is thinking about the separation of interface and implementation in FP. If anyone knows of anyone, I want to know who they are. So that could be the reason no progress has been made. I had a huge learning curve to climb, starting circa 2008, and I didn’t really decide until 2011 that I had no choice but to create a language.

    Also, I could not have achieved my work without Scala. There is no way I could reproduce all that effort by myself in any reasonable time-frame, so initially Copute will compile to Scala and let the Scala compiler do as much of the heavy lifting as possible. My gratitude to the Scala folks is beyond words. I understand that Odersky’s model was to throw in everything including the kitchen sink, because it was a research project to demonstrate that generality. My goal is different. Again, too much talk on my part, but I am excited.

  178. And there’s the point that Scala wasn’t well known and fully capable until roughly 2008, maybe 2006 sans some key features.

    Some progress has been achieved, but not so mainstream yet. Twitter is deploying Scala and other boutique JVM languages throughout its server side.

  179. The Thinking Man wonders if a $99 tablet might also freeze the market for high-end Android phones that are priced higher than $99.

  180. The LA Times reports that tablet manufacturers are getting the message from the TouchPad frenzy, with Lenovo saying it will debut a $199 tablet at the end of September.

    Interestingly, the article suggests that consumer price expectations for tablets may be being bounded above by the price of e-readers ($250 range).

  181. I wish the Lenovo were coming to the States, but I recall reading it wasn’t.

    I like the 7″ form factor for ebooks and movie watching. As a tablet it’s a little too confining, but I still like the Nook Color.

    The 10″ tablet size is just more usable for me even if less portable.

  182. @phil> The thinking man is wrong. The java compiler has optimized this to string “append” for many years now.

    Only for simple expressions, Phil.

    Do something only as complex as:

    String s = "";
    for (int i = 0; i < 10; i++) {
        s = s + Integer.toString(i);  // allocates a new StringBuilder and a new String every pass
    }

    and you get the behavior that generates a lot of garbage.

    StringBuffer exists for a reason.
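    For comparison, a minimal sketch of the explicit-buffer alternative (written here in Scala, whose default StringBuilder wraps Java’s, so the allocation behavior is the same):

    // one buffer mutated in place; a single String copy when toString is called
    val sb = new StringBuilder
    for (i <- 0 until 10)
      sb.append(i)
    val s = sb.toString  // "0123456789"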

  183. > The LA Times reports that tablet manufacturers are getting the message from the TouchPad frenzy, with Lenovo saying it will debut a $199 tablet at the end of September.

    It might work! The A1 (which is the 7″ one that will sell for $199) is nearly identical to the Galaxy Tab 7″, and Samsung has sold 20,000 Galaxy Tabs!

    http://www.guardian.co.uk/technology/2011/sep/02/samsung-galaxy-tab

    ‘course, that article claims a much higher price for the Lenovo K1 (10″, like the iPad) tablet:

    Lenovo announced its own entry into the Android tablet market at the IFA technology show in Berlin on Thursday, with the IdeaPad K1. Asked whether the K1, which will launch in mid-September with a starting price of £349, should be much cheaper than the market-leading iPad 2, priced from £399, Barrow told The Guardian that “at that price, any [manufacturer] would be giving money away”.

  184. @The Thinking Man:
    The original complaint wasn’t about the semantics, but rather the verbosity of:

    NSString *test = [myString stringByAppendingString:@" is just a test"];

    The distinction you’ve raised is not a justification for the extra verbosity of Objective C, but rather the raison d’être for immutability, i.e. purity, a/k/a referential transparency. In a pure language, no object can be mutated, which eliminates the spaghetti nature of cross-dependencies between functions: an impure function which takes a string and appends to it by mutating its input has modified its caller’s data, and that dependency spreads out into the program like spaghetti.

    In a pure language, the string is represented as an immutable list, so the ‘+’ operator for strings becomes the construction operator for a list, which eliminates the wasteful copying and the other problems you raised:

    val test: List[String] = myString :: List(" is just a test")

    Pure FP isn’t generally slower; in the above case it is faster except for concatenation of very small strings (which the ‘+’ operator could automatically optimize into a copy of only the small portions), because it forces us to use algorithms that make more sense from an overall compositional perspective.
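    One step left implicit above (my own addition, using the test value from the snippet): the deferred copy is paid exactly once, when the chunk list is finally materialized, which is why the cons representation wins for repeated concatenation. Rope-like structures generalize the same idea.

    // single O(n) copy, performed once at the end rather than per concatenation
    val whole: String = test.mkString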

    Thanks for providing a clear example of why pure FP is what we should all be using.

  185. Didn’t read all the comments so someone else might’ve said this already, but most of Motorola’s patents that apply to all cellphones were submitted to standards bodies and thus must be licensed at FRAND rates to all implementors. Thus, those patents are pretty useless for lawsuits, as one can only argue about how much is fair and reasonable. Do some research.

  186. “Apple, and its fans, had promised the world that the moment in February that Apple went multicarrier would be when it began to regain ground against the upstart Android.”

    Apple promised this?

  187. “Because Apple reports units sold rather than shipped, that 20% has to be discounted by the return rate on Android tablets – but the return rate would have to be ridiculously high (enough to make front-page technology-press news) in order to drive actual Android share down to a figure that shouldn’t worry Apple.”

    A headline like “Samsung Galaxy Tab sold just 20,000 out of 1m shipped”, perhaps?

  188. I’d love to know why market share seems so important to you when Apple makes more than half of the profit of the smartphone industry. Barely half of the handset makers are currently profitable, and the ones who are (HTC, Samsung, RIM) aren’t exactly obtaining margins anywhere close to Apple’s – RIM, the only one with reasonable margins, has seen them decrease significantly. Rather than insisting that their market share is increasing and calling the end of the festivities for Apple, you should be concerned that *despite* its higher market share Android is far from generating as much money for the companies that support it (Google included, most likely) as iOS provides to Apple.

    In this context I fail to see what exactly Apple has to fear: they have better access to manufacturer discounts, can pre-reserve raw materials far ahead of time and anywhere in the pipeline, can dedicate more resources to streamlining manufacturing, design and assembly processes, and can reduce costs far better than their competition. Android can grow its market share as high as it can; as long as its vendors extract less profit per unit sold and per vendor, they will be fighting an uphill battle. You win an economic competition by making profits, and Android isn’t winning it (so far – I’m not saying it can’t change).

    Unless you can demonstrate that the Android phone manufacturers are bound to soon make technological advances that Apple is incapable of, which will finally give them an edge in profit-making, I fail to understand why Apple should be worried.

  189. Actually I have to correct my own statement:

    It was in Q2 2010 that Apple was making half of the profits of the smartphone market:
    http://www.asymco.com/2010/08/17/androids-pursuit-of-the-biggest-losers/

    In Q2 2011, Apple had increased this share to two-thirds:
    http://www.asymco.com/2011/07/29/apple-captured-two-thirds-of-available-mobile-phone-profits-in-q2/

    Given the huge number of iPhone 4s sold this last quarter (despite the feature-wise superiority of Android competitors), I wouldn’t be surprised if this share actually increased in Q3, but we’ll know about that only in October.

  190. esr, do you really believe that SJobs does all the work at Apple? That’s Microsoft-style marketing; I cannot believe that you really believe that.

    Android is not Linux, and it is not open source. Where is the Android 3.0 source code? Do you really look forward to a mobile future where an antivirus is mandatory?

    As soon as MS relaunches itself with Nokia and HTC buys WebOS, the whole market-share picture will change. But Apple’s continuous YoY growth will keep going, because the total number of smartphone users keeps growing.

    There is a place for Apple, Android, WebOS and MS. I don’t think we need another monopoly (Android) like in the PC era (Windows).

    And no, having more Android phones out there does not help Linux at all.

  191. Reading this article a year after it was published, you can feel the bullshit oozing from it. All the predictions were wrong. It’s not the raw power, it’s the quality of the product. And Apple is still unmatched.
