The Smartphone Wars: Tightening the OODA Loop

An excellent article on the future of smartphones puts hard numbers to a trend I’ve been watching for two years. In so doing, it points out one of the fundamental competitive drivers in the smartphone market. More than that, it displays a powerful if sometimes less than obvious advantage of open-source software, and implicitly relates that to a seminal concept in military theory called the OODA (observe/orient/decide/act) loop.

The single most interesting feature of the article is a graph of how the average lifetime of Android handset products has been falling over time: from 3 years as recently as 2007, it’s now down to 6-9 months. Unstated in the article is that to sustain that pace, development cycles actually have to be shorter than the product lifetime. If you look at the pace of announcements from (for example) HTC, it’s clear the development time of the more nimble Android players is now approaching 90 days.

The article also talks about the “Quadroid” standard, how intense competition in handset design has been fueled by a combination of inexpensive phone chipsets from Qualcomm and the Android OS. The effect of this combination is to drastically reduce both engineering expenses and time to market. It is, fundamentally, why development cycles can drop to around 90 days and product lifetimes to 6-9 months.

There was a fighter pilot named John Boyd who became the most important strategic theorist writing in English during the 20th century. He began with E/M (energy-maneuverability theory), which became the basis on which modern fighter aircraft are designed and modern fighter tactics taught. He ended his career as one of the architects of the winning “left-hook” strategy in the 1991 Gulf War. Connecting these was a general theory of military effectiveness (and more generally, organizational effectiveness) centered around what he called the OODA loop.

OODA theory is worth reading about in more depth for anyone interested in…well, any kind of competitive dynamics, actually – military, commercial, individual, anything. The Wikipedia article is a good start. Stripped to its essentials, the theory is that competing entities have to go through repeated iterations of observing conditions, relating observation to a generative theory, deciding what option to pursue, and acting. Victory tends to go to the competitor who can run this cycle the fastest.
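
To make that claim concrete, here is a minimal toy model of my own (an illustration only, not Boyd’s formalism): a consumer preference drifts at random, and each competitor can re-aim its product only at the start of its own development cycle. The shorter-cycle player tracks the moving target more closely.

```python
import random

def simulate(cycle_a=90, cycle_b=365, days=3 * 365, drift=0.02, seed=42):
    """Toy OODA illustration: a market preference drifts as a random walk;
    each competitor can only re-aim its product once per cycle.
    Returns each player's average gap from the preference (lower is better)."""
    rng = random.Random(seed)
    target = 0.0
    product_a = product_b = 0.0
    gap_a = gap_b = 0.0
    for day in range(days):
        target += rng.gauss(0, drift)      # consumer preference wanders
        if day % cycle_a == 0:
            product_a = target             # fast player re-aims every cycle_a days
        if day % cycle_b == 0:
            product_b = target             # slow player re-aims every cycle_b days
        gap_a += abs(target - product_a)
        gap_b += abs(target - product_b)
    return gap_a / days, gap_b / days

if __name__ == "__main__":
    fast, slow = simulate()
    print(f"average miss, 90-day cycle:  {fast:.3f}")
    print(f"average miss, 365-day cycle: {slow:.3f}")
```

Nothing in the model is smart; the fast player does better purely by reacting more often, which is the whole point.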

OODA theory was originally generalized from the observation that in fighter design, maneuverability (especially a shorter turning radius) beats straight-line speed. When you get inside your opponent’s OODA loop, either physically or conceptually, you can attack him from unexpected angles. You can be where he isn’t. You have the initiative; he is reduced to reacting, often with too little and too late.

Now let’s look at the smartphone market, and consider Android vs. single vendors like Apple, RIM, or (now) NoWin (Microsoft/Nokia). Your single vendor has a product development cycle on the close order of a year – exemplified by the fact that the NoWin alliance announced just a week ago won’t commit to shipping a phone before 2012. Apple iPhone releases have been issuing on a predictable once-a-year schedule.

The Android army, and some of its individual members, now has a development cycle approaching 90 days. It gets three or four cycles around the OODA loop to each one executed by Apple, RIM, or NoWin. This is, in particular, why the Android army has been winning the race to exploit new networking standards such as HSDPA and LTE. More generally, it means the Android army can exploit technology changes and the availability of new components such as SoCs (System-On-Chip ASICs) at a pace single vendors can’t match. Most generally, it means the Android army has a faster reaction to consumer demand.

No OODA theorist would be surprised at the result; Android is clobbering the crap out of its less nimble competition – not just on price but on features as well. Anyone who knew OODA theory would have predicted this result as soon as it became clear that Android’s OODA loop was going to be shorter.

Of course the Android army has had other things going for it as well. Lower engineering costs are an obvious one. Another is greater collective financial mass than any single vendor, implying the ability to spend more on development and marketing. All these tie together with tighter OODA in a way that has open-source software at its base. Open-source software is the key.

Because Android is open source, developing and shipping handsets becomes less expensive. Because Android is open source, handset makers and carriers can join the Android army confident that they’re not setting themselves up for extortion by Google or any other single software source. Because Android is open source, Qualcomm and Nvidia can front the enormous investment to make SoCs specialized for Android in confidence that they’ll have a market of more than one customer.

All these drivers increase competition and shorten time to market; Android’s OODA loop tightens, while Apple and RIM and NoWin stumble around trying to respond and fighting last year’s war against products designed last quarter. And come up short, as in Apple’s anticlimactic iPhone 4V launch last week.

There is no possibility that the single vendors can win such a battle; Android’s OODA will inexorably tighten into a noose around their necks. I said this confidently in late 2008 when Android first shipped, and it wasn’t wishful thinking; the consequences of open source were as obvious then as they are now to anyone who was paying attention.

I’m repeating the point now only partly as a study in the competitive dynamics of the smartphone market. Really, there’s nothing special about smartphones in this respect; the general effect of open source tightening the OODA loop operates elsewhere as well, with predictable consequences. We actually should have learned this a quarter century ago from the way that TCP/IP surpassed and almost completely destroyed proprietary networking protocols.

The story doesn’t end in 2011. Android SoCs aren’t generally deployed yet. When they start shipping later this year in third-generation Android phones, the parts count on a minimal smartphone is going to drop to a single chip, a capacitive display, a speaker/mic, a couple of microswitches, and the PCB to mount them on. Qualcomm is already predicting retail unit costs of $75 or less, and it’s going to be less once the chip development costs are amortized out.

The key to this possibility is that most of what used to be hardware costs in a smartphone have been ephemeralized – converted into information complexity inside the SoC. The combination of ephemeralization and open source is the fundamental of the fundamentals here. (Not entirely by coincidence, every single one of the billions of these Androids will have substantial pieces of my software inside it.)

It’s anybody’s guess what the lower limit of the OODA loop is. Not so long ago, 90-day product development cycles would have been considered an insane dream – but the rise of 3D printing and commodity SoCs might very well cut time to prototype much further. I can now easily imagine a handset designer with good CAD tools and a 3D printer producing a testable phone in a week.

What is certain is that single vendors going it alone will not be able to match this pace. They won’t have the SoCs and they won’t be able to match the number of engineer-hours going into the software. Their development and production costs will be higher. Consequently, they’ll be more risk-averse and unable to make decisions on a fast turnaround even if their technology permitted it; their OODA loop will never be able to tighten as much as Android’s.

All these problems – all of them – are predictable consequences of having a business plan stuck to a huge heavy blob of closed-source software. That is the weight that will ultimately drag down and kill Android’s competitors.

228 comments

  1. Other predictions can be made from here on.

    Phones are fashion items. People, young women especially, will carry them around constantly and visibly, so they want them to be fashionable. Many others have special needs, professional or otherwise: many older people want BIGGER phones, not smaller ones, and children want phones that work for children.

    Obviously, that will not work for $600 single-vendor, one-size-for-all productions. But for a buy-your-parts-and-skin-them production line, the sky is the limit.

    I expect fashion shops to appear that simply solder together custom made phones. And other shops that buy the prints, parts, and screens and build special, small run, but still cheap fashion phones.

    In the end, you will get a free phone with a personal remote control with your TV set. One that will automatically connect to your router at home to make VoIP calls.

    And we will finally get our real personal computer in a pocket that we simply hook up to a screen and keyboard. And the phone will be the mouse ;-)

    1. >I expect fashion shops to appear that simply solder together custom made phones. And other shops that buy the prints, parts, and screens and build special, small run, but still cheap fashion phones.

      Even there you may not be thinking boldly enough. 3D printing is a perfect match to the business needs of these “fashion” shops, and it scales down pretty well. Want a smartphone case with your monogram embossed on it? Yeah, sure, we can do that while you wait…

  2. @esr
    “Even there you may not be thinking boldly enough.”

    I know; every prediction I have made in my life was too conservative. I can now predict Android earrings with a Bluetooth wristwatch. And I would expect someone to point me to a web page with order information.

    I wanted to break that streak, so some time ago I started predicting that MS would begin to fall apart during 2012. And now, only weeks later, I am becoming afraid that was too conservative again.

  3. @esr
    “This is, in particular, why the Android army has been winning the race to exploit new networking standards such as HSDPA and LTE.”

    This sentence brought a question to mind:
    How does this change given that (in the US, at least) these 90-day shops are constrained to work with (against) carriers whose OODA is measured in *decades*?

    1. >How does this change given that (in the US, at least) these 90-day shops are constrained to work with (against) carriers whose OODA is measured in *decades*?

      Not any more it isn’t. The carriers are down to half-decade timescales now; look at the accelerating pace of 2G vs. 3G vs. HSDPA vs. LTE deployment.

      But a half-decade is still a lot compared to a quarter, and you’ve still raised a good question which I don’t intend to evade. The right answer is to question your assumption: who says those shops are going to stay carrier-constrained? We already know that smartphones can be turned into mesh-networking devices. Now, consider this…

      When SoC-smartphone prices drop below $75, odds are it will kill the carrier-contract system. Because in that price range, who needs a carrier subsidy on their phone? The market’s going to shift. Cell carriers are going to lose the little control they have left over handset software, including their ability to suppress VOIP and SIP and mesh networking.

      OK, now watch what happens. VOIP + SIP + fiber to homes + adaptive mesh = who needs the cell carriers? Smartphones are going to become intelligent routing devices that find the least-cost network links for you. That might be a cell carrier or it might not. You won’t care, any more than you care now whether your phone’s browser is using WiFi or a cell tower.

      Because the smartphone’s capabilities are ephemeralized, this is all just software. The rest of the consequences are left as a simple exercise for the reader.
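
      Here is a tiny sketch of one piece of that exercise, with everything in it (the link names, prices, and latency numbers) invented for illustration: rank whatever bearers the phone can currently see and pick the cheapest acceptable one.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Link:
          name: str
          usable: bool        # can the phone currently associate with it?
          cost_per_mb: float  # what this bearer costs the user
          latency_ms: float

      def pick_link(links, max_latency_ms=300.0):
          """Pick the cheapest usable link that meets the latency bound;
          if none qualifies, fall back to the lowest-latency usable link."""
          usable = [l for l in links if l.usable]
          if not usable:
              return None
          acceptable = [l for l in usable if l.latency_ms <= max_latency_ms]
          pool = acceptable or usable
          return min(pool, key=lambda l: (l.cost_per_mb, l.latency_ms))

      # Hypothetical snapshot of what one phone might see at one moment.
      snapshot = [
          Link("home WiFi + VoIP",  usable=True, cost_per_mb=0.00, latency_ms=40),
          Link("neighborhood mesh", usable=True, cost_per_mb=0.00, latency_ms=180),
          Link("cell carrier",      usable=True, cost_per_mb=0.02, latency_ms=90),
      ]
      best = pick_link(snapshot)
      print(best.name if best else "no link available")
      ```

      When the free links drop out, the cell carrier wins by default; it becomes just another bearer competing on price and latency, which is the point of the paragraph above.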

  4. @Michael:

    How does this change given that (in the US, at least) these 90-day shops are constrained to work with (against) carriers whose OODA is measured in *decades*?

    I think it actually tightens the loop even more. If you know that the foundation is not shifting, and you know that your competitor knows the foundation is not shifting, then you work harder to beat the competitor to market, for the simple reason that any network effects achieved by being first won’t be immediately dissipated by the standards changing underneath you.

    But even though the carriers’ reaction time is typically measured in decades, they’re going to have to get on board with the fact that phone subsidies will no longer be needed. My prediction is that this will take awhile. For some amount of time, the carriers will be literally (no, not figuratively) giving away phones down at the WalMart, complete with first and last months free on a two year contract. This will work until everybody’s heard about the abuses that a friend of a friend suffered trying to get out of a contract after he got a bad phone. Virgin’s already showed people that you can buy a pretty darn good phone for $39.95 with no commitment. Soon, the only reason anybody will enter into any sort of cell contract is for cheaper/higher bandwidth. The phone will cease to be part of the equation. The interesting thing to see will be exactly how quickly this transition happens.

  5. @Patrick Maupin
    > literally (no, not figuratively) giving away phones down at the WalMart, complete with first and last months free on a two year contract. … Virgin’s already showed people that you can buy a pretty darn good phone for $39.95 with no commitment.

    I like it.

    What you’re describing is basically throw-away smartphones. I can switch carriers anytime I want (finally!) because the phone device is no longer a consideration.

    Perhaps we finally found a way to get competition in the US market. Took us long enough. Perdition on the monopolies and the horse they rode in on.

  6. @Michael

    In Europe you can take your phone with you when you switch providers. There are phone+subscription plans, but providers have always been obliged to accept any phone with their SIMs. I often see kids borrowing phones and switching SIMs to make calls.

    Rather, I see truly personal phones, with swappable SIM cards, coming.

  7. NoWin actually isn’t even in a cycle yet, since it’s their first time with the OS. Might well be the last, too.

  8. Comparing Nokia and RIM to Android’s development cycle has merit. Apple’s is a bit different — they actively WANT once-a-year releases. It

    A) gives consumers a predictable schedule, and maximizes the time that a user can have the “Best iPhone”
    B) allows them to do combined hardware releases/software upgrades
    C) improves their abilities to plan part purchases potentially a year in advance, and make damn sure they can ship enough units.

    Which are things the other companies just don’t do. I’m not saying it’s the better way, but it is the better way for Apple’s long game. Apple knows that there is a certain percentage of the market that will only ever buy a phone with an Apple logo, and it will continue to sell those people high-margin products year after year. Android is taking over, but Apple will stabilize at 10-20% of the market, as they have for PCs. Why? Customer loyalty, premium branding, etc. And, frankly, that’s the only part of the market Apple is interested in, because it’s high-margin.

    What really has to stop is the Android manufacturers, particularly Motorola and Samsung, trying to be Apple. For instance, the Wifi version of the Xoom is going to be priced at $600, and it’s more comparable to what’s going to be a ~$500 iPad 2. The iPad already has huge market penetration/mindshare, and Moto ain’t going to win if they aren’t competitive on price. I hope all the Android manufacturers get the message that they actually have to start bringing their prices DOWN now that Apple is available on Verizon. There’s no reason that, say, the HTC Aria should be more expensive than the iPhone 3GS.

    1. >Apple’s is a bit different — they actively WANT once-a-year releases.

      Then they want death, and will get it.

  9. I expect fashion shops to appear that simply solder together custom made phones.

    That’s like opening a boutique that manufactures custom-made Prada-like but not-quite-Prada bags. The iPhone brand is powerful enough to sway consumer decisions in the Veblen-good space.

    That said, I see more and more attractive women (who have perhaps a more utilitarian view of which cellphone to buy) carrying Android phones. Never to my 18-year-old geeky self did it occur that I might see a cute non-geek female carrying a Linux computer as a personal accessory.

    Because the smartphone’s capabilities are ephemeralized, this is all just software. The rest of the consequences are left as a simple exercise for the reader.

    It already is “all just software” for the carriers. Cellphone towers are basically a bunch of antennas, transceivers, and DSPs. Recently some engineers at Alcatel-Lucent figured out how to compress the analog stuff into a cube maybe 10 or 15 cm on a side, and the DSPs are outsourced to some datacenter, connected to the cube via a fiber link. This will probably have the effect of shrinking cell size and thereby increasing effective bandwidth; depressing the marginal costs of expanding cell coverage to very minimal levels; and making high-bandwidth service close enough to free that the consumer doesn’t even notice the costs; it may even be rolled into rent the way my water bill is today. Well, in Europe and Asia, anyway. No way U.S. carriers are going to give up their tiered-pricing partner-favoring tethering-surcharge data plans.

    As for the mesh-routing stuff, there are considerable technical and political hurdles. Aside from the non-viability of WiFi mesh networking, the established nationwide monopoly telcos are not going to go down without a bitter political fight. Net neutrality is already in the process of being scuppered by the Republicans, and I doubt the Obama administration is going to stand fast against the favoritism which has shaped the political aspects of U.S. network deployment.

  10. @Aaron
    “What really has to stop is the Android manufacturers, particularly Motorola and Samsung, trying to be Apple.”

    If they can make money that way, they should do it. If they don’t, they go broke and disappear.

    There is a very long line of shops waiting to take their place. That is the real strength of the OSS/SoC combo. This is a real cut-throat competitive market. Simple natural selection will do the job of coming up with the best strategy.

  11. All these drivers increase competition and shorten time to market; Android’s OODA loop tightens, while Apple and RIM and NoWin stumble around trying to respond and fighting last year’s war against products designed last quarter. And come up short, as in Apple’s anticlimactic iPhone V launch last week.

    The problem with this is that Apple is already thinking in specifics about what kind of devices people will use three years from now. And it’s not just idle, speculative thought, either: one thing I learned recently is that Apple has the power to fuck up everybody else’s supply chain. Ever wonder why decent Android tablets are all more expensive than the iPad and have smaller screens? Because probably two or more years ago, Apple negotiated deals with the suppliers that got them all the 10″ screens. Including Samsung, who want to get into the tablet market themselves. This can only come from thinking many steps ahead of your competition.

    RIM and NoWin are going to lose. But Apple is willing to sacrifice the present; everybody else is already playing on their terms. They are looking ahead to the future, to the next game they’re going to completely change.

    1. >one thing I learned recently is that Apple has the power to fuck up everybody else’s supply chain.

      They won’t keep it, even assuming they have it now. Because what they’ve done is pump up demand, which attracts capital investment, which means more widgets will be produced at lower cost and made available to companies that aren’t Apple. Apple can stall the tablet market for a quarter or maybe two (at large cost to itself in carried inventory), but the net effect will be that the price trajectory for displays drops faster than it would have without Apple’s capital push.

  12. @Winter

    I don’t think they can. In the average person’s mind, phones fall into one of the following types:

    1) dumb phone
    2) Blackberry (i.e., keyboard, small screen, shitty)
    3) iPhone (i.e., high-end, premium, fancy phone)
    4) Droid (i.e., knockoff made by Not Apple)

    Android doesn’t have a high-end brand value. Geeks like me are willing to spend more bucks on something like the Moto Atrix over the equal-priced iPhone 4 16GB, but most are not.

    1. >I’m not nearly as sure that all those people predicting Intel’s imminent demise are correct, though.

      I, on the other hand, am quite sure they’re wrong. Intel’s strategy is to trade on its financial mass to be a major silicon supplier to whoever wins the upper layers of the stack. It’s a good strategy; nobody else has the expertise and the capital to whistle up billion-dollar chip fabs without having to make a fingernail-chewing, company-goes-under-if-we’re-wrong sort of bet.

      Hmmm…I betcha Intel’s got a skunkworks somewhere working on an Atom-based smartphone SoC. They’d have to be crazy not to.

  13. I thought that Intel criticizing the choice of a Microsoft platform was a stretch, but I’ve been engrossed with other things and haven’t been paying enough attention to Microsoft’s machinations lately. WP7 seems to be ARM-only, and the next version of Windows is CPU-agnostic, so it was Microsoft that actually fired the first volley:


    Windows Phone 7 makes high-performance ARM processors compulsory…


    Next Version of Windows Will Run on System on a Chip (SoC) Architectures from Intel, AMD and ARM

    Or maybe it was Intel that fired the first volley, since they worked with Google on Google TV, and definitely have their toe in the Android water. In any case, it’s every man for himself now.

    So cheap ARM cellphones and set-top entertainment boxes are finally cracking the Wintel monopoly. Intel used to benefit from closed-source. Now, to the extent ARM is no longer locked out of Windows, closed-source is Intel-agnostic, and to the extent Intel is locked out of WP7, closed-source is the enemy.

    Microsoft will soon be out of heavy-hitting Windows partners. Microsoft has always relied on others to do a lot of the work of making devices work smoothly with the OS, but a lot of that attention will soon be focused elsewhere.

    In any case, the coming separation of OS and software stack from the underlying CPU is as welcome and miraculous as the coming separation of the cellphone and carrier. Don’t let anybody tell you otherwise — although in theory this could have happened with closed-source, in actuality, this tectonic shift required, in fact was practically mandated by, the rise of open source — specifically everything that goes below, into, and above Android.

    Like Winter, I don’t think Microsoft’s long-term chances are very good.

    I’m not nearly as sure that all those people predicting Intel’s imminent demise are correct, though. The theory seems to be that Intel sucks because it requires extra decoding logic, blah, blah, blah — but this might miss the point completely. On a modern Intel or AMD die, the instruction decoder is minuscule. The virtual register set is huge. The software can be optimized, must be optimized, relatively independently of the hardware because the hardware ISA changes very slowly, and the amount of pre-supported hardware is huge. This strategy is not as necessary with open source, where everything can be recompiled for every architecture, but the decode overhead is not nearly as much of a performance or die-size hit as ARM people make it out to be. Intel and AMD make great CPUs, and it’s all about execution speed, cache sizes and algorithms, branch prediction, and parallelism.

    So, where power trumps raw performance, Intel and AMD have work to do (which I think they’re both pursuing), but where raw performance trumps power, if Intel and AMD keep innovating, it might take a while for the ARM train to catch up to them.

  14. @Jeff Read:

    > And it’s not just idle, speculative thought, either: one thing I learned recently is that Apple has the power to fuck up everybody else’s supply chain.

    That was obvious seven months ago.

    Now that you and everybody else know about it, it’s well on its way to being fixed. In fact, if people ramped up too much for iPhone V, there might be a glut of certain components about now.

    Another thing is, most companies that get in bed with Apple get just as squashed as they used to with IBM or Microsoft. And the word on that has to be getting out.

    Look at it this way — if some company ramps up to make stuff for Apple, and Apple pulls the plug (which happens), then that company is probably making parts that are good for the Android ecosystem. And since the Android ecosystem moves very quickly (some Asian companies will see what’s available now and pull together whatever they can get for a BOM to just build 10K units the same) the system as a whole will learn to react to Apple and compensate quite nicely.

    The supply world is now used to a high and accelerating rate of production of those things used to build cellphones and tablets. It’s also used to surges in demand caused by Apple. The only reason we haven’t seen cyclical behavior in, e.g. the flash memory market since Apple started shipping iPads/iPhones is just that the demand acceleration was that sudden. But the market is cyclical, and Apple doesn’t control it, and the market will adapt to Apple’s ability to create instantaneous demand, and eventually over-adapt, and then there will be cycles where Android phones are really cheap.

    1. >(some Asian companies will see what’s available now and pull together whatever they can get for a BOM to just build 10K units the same)

      You’re underemphasizing a key point. From spot surplus to product surge is now down to 3-4 months max, and that’s assuming they have to spin up an entire new design to exploit the surplus. For variations on an existing reference design, less. This means the Asians can, almost certainly, surf the components market faster than Apple can plan to screw them up.

  15. >Then they want death, and will get it.

    I think you underestimate the number of people who don’t want their products (and their support availability) on a 3-month cycle. There’s a reason why almost every Linux vendor now offers a long-term support solution. That isn’t to say that the iPhone will take that position, but when my company buys 50+ Android phones running 2.2 and using custom-designed software, I want to know that in 6 months I can buy another 50, and in a year, when they start breaking, someone will still support them and their software.

    Nothing is more obnoxious to a business guy than asking a question about OSS Project X version 2.2 and being told “Why are you still using 2.2, lolz? Get 3.0!”

    Yes eventually you always have to upgrade, but I don’t know any business that likes upgrading every 3 months.

  16. Nothing is more obnoxious to a business guy than asking a question about OSS Project X version 2.2 and being told “Why are you still using 2.2, lolz? Get 3.0!”

    One of the things I like about Android’s devkit is that getting a tool chain, SDK, and emulator image for any version of Android from 1.5 to the new hotness is easy — a matter of a few clicks and waiting for some downloads. If you’re a business person looking to make an Android app and concerned with marketing to people with older handsets who aren’t running Jellybean, Kransekake, Lollipop, or whatever the next big thing is — the latest Android devkit will still support your efforts.

    By contrast, I don’t know what the support cycle for iOS versions is like but I do know Apple encourages its users to upgrade to the latest OS X, and practically requires it if you’re developing for Apple platforms. Remember the days when your old Mac was supposed to last forever? Well, that ship has sailed. I got burned when the C++ compiler on Mac OS 10.3 suddenly stopped working. They pushed out a backward compatible build of the OS 10.4 libstdc++ as a mandatory security update, and it was subtly incompatible with 10.3’s g++. The message was pretty clear: you should have been standing in line outside the Apple Store for a copy of Tiger if you were really, really serious about developing with us.

    This is why I’ll never, ever be an Apple fanboy.

  17. I think it depends on how effective the observation and orientation parts of the cycle are.

    If your time is reduced, you do act faster, but you logically have less time to observe and decide. Hence, you’d expect as the cycle length approaches zero, the decisions become more and more hasty, and less effective.

    There is an equilibrium, and open source merely helps reach that equilibrium faster – it cannot move it. Quite a nice analogy to enzyme kinetics, if I may posit one.

    1. >If your time is reduced, you do act faster, but you logically have less time to observe and decide. Hence, you’d expect as the cycle length approaches zero, the decisions become more and more hasty, and less effective.

      You’re right. But in many contexts, error resulting from the hastiness of decision matters less than the ability to make one and start the loop again. This is likely to be true if the benefit of success is high and the cost of failure is low. You end up operating in a “Ready, fire, aim!” mode where you use each failure to rapid-correct the next try.

      In military and martial contexts, “Ready, fire, aim!” works poorly for hand-to-hand combat but quite well for a firefight. What’s happening, with ephemeralization lowering the capital cost of design changes, is that fast cycles of “Ready, fire, aim!” are becoming an effective way to address fluctuating consumer demands.

      You’re right that this doesn’t mean the OODA loop can shrink without limit. But the OO part can be quite error-prone without compromising your effectiveness if you’re iterating fast enough. In fact, one of Boyd’s original arguments for the centrality of OODA in tactics is exactly that the fighter has to contend with incomplete knowledge, observation error, and epistemic uncertainty.
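
      For anyone who wants to poke at this trade-off numerically, here is a throwaway sketch of my own (the parameters are invented, and it is not Boyd’s math): shorter cycles buy more correction steps but noisier observations plus a per-iteration cost, so the total has an interior optimum, and where it sits depends on how noisy quick observations are and how much a failed iteration costs.

      ```python
      import math, random

      def campaign_cost(cycle_days, horizon_days=360, obs_noise=30.0,
                        cost_per_try=1.0, seed=7):
          """Toy model: start 100 units away from where the market wants you.
          Each cycle you correct most of the remaining error, but your estimate
          of the target gets noisier as the observation window shrinks, and
          every iteration costs something.  Lower total is better."""
          rng = random.Random(seed)
          distance = 100.0
          tries = horizon_days // cycle_days
          spent = 0.0
          for _ in range(tries):
              # less time to observe -> noisier fix on the target
              noise = rng.gauss(0, obs_noise / math.sqrt(cycle_days))
              distance = abs(distance * 0.3 + noise)   # partial correction each cycle
              spent += cost_per_try
          return distance + spent

      if __name__ == "__main__":
          for days in (7, 30, 90, 180, 360):
              print(f"{days:3d}-day cycle: total cost {campaign_cost(days):6.1f}")
      ```

      Turning down cost_per_try is the “low cost of failure” regime described above; turning up obs_noise is the fog-of-war case, where hasty observation starts to bite.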

  18. @esr “OK, now watch what happens. VOIP + SIP + fiber to homes + adaptive mesh = who needs the cell carriers?”

    That day is much sooner than many of us think…

    see: http://www.villagetelco.org/about/mesh-potato/

    “The Mesh Potato is a new device for providing low-cost telephony and Internet in areas where alternative access either doesn’t exist or is too expensive. It is a marriage of a low-cost wireless access point (AP) capable of running a mesh networking protocol with an Analog Telephony Adapter (ATA). Wireless APs such as the Meraki or OpenMesh (Accton Mini-router) APs are rapidly gaining in popularity due to their low cost, relative robustness, and ease of installation. Adding the ability to plug an ordinary telephone into a device like an OpenMesh AP opens up very interesting possibilities. …”


    If I were a carrier, I’d be busier than all get-out finding a value-add that only a carrier can provide – because the disruption has NOT YET BEGUN.

    (Maybe all the sales offices could start offering drop-off tech support? Health care? Pizza?)

    1. >What if the patent wars successfully knock Android back or out?

      Again, the right answer is “In which jurisdiction?”

      The places where the sales potential of super-low-cost Android devices is highest happen to be those where patent law is weak or nonexistent. Europe doesn’t have software patents. You can’t use U.S. patent law to interdict sales of a product made in Asia and sold in the Middle East or Africa. People who think Android can be stopped or even slowed down much by patent attacks have an insufficiently global perspective.

  19. > What if the patent wars successfully knock Android back or out?

    Seems unlikely in a global context. It’s hard to imagine any widespread blocking patents on cellphones at this stage that aren’t already in (or won’t be in) some sort of RAND patent pool.

  20. Anonymous: What happens is Dalvik gets replaced with another run-time, likely built off of C++ or Go.

    Google probably throws the litigation budget into a line item, and keeps it in court long enough to make sure that whatever replacement runtime it uses not only outperforms Dalvik, but gets enough traction to slowly eat Oracle’s Java business model.

  21. >In fact, one of Boyd’s original arguments for the centrality of OODA in tactics is exactly that the fighter has to contend with incomplete knowledge, observation error, and epistemic uncertainty.

    That matters a lot on the ground, but what about at a high tactical level? The market is not won entirely by looking to the next phone model, and then the next. Each individual battle is not where the winning strategy can be elaborated. Iterating a product is only one part of success.

    I think the manufacturers aren’t the soldiers. They’re more like the superpowers. They have to play industry chess, as well as OODA in the product-trenches.

    But, maybe Android makes them all soldiers. I am unsure. In that case the metaphor would show cracks, because they definitely aren’t all allies.

    At any rate, OODA is interesting, but it can only be carried so far as it applies to product development and open-source software. There’s not much more to say, except that it’s not worth betting a war on OODA alone.

  22. in fighter design, maneuverability (especially a shorter turning radius) beats straight-line speed

    Well, not quite. In a so-called dogfight, with the two fighters circling around a common center, the one with the shorter turning radius will catch up with the other eventually (assuming equal speed) and can then shoot it down. But if the other one has better straight-line speed, it can just stop circling and fly away, which would be a draw. If you want a guaranteed win you need better maneuverability (to win the dogfight) AND better speed (so he cannot get away).
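
    For what it’s worth, the standard relations for a sustained level turn make this trade-off explicit (v is airspeed, n the load factor the aircraft can hold, g gravitational acceleration):

    $$ r = \frac{v^2}{g\sqrt{n^2 - 1}}, \qquad \omega = \frac{v}{r} = \frac{g\sqrt{n^2 - 1}}{v} $$

    At a given sustainable load factor, the slower aircraft turns on a smaller radius and at a higher rate, which is why it wins the circling fight; the faster aircraft’s real option is the one described above, refusing the circle and extending away. Boyd’s E/M theory is essentially about managing the energy trade among speed, altitude, and sustained turn performance.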

    1. >Detailed discussion of long and short OODAs for people in urgent situations.

      Yes. A parallel distinction among martial artists is whether one performs better in solo kata (drills) or in partner drills and sparring.

      I perform better in partner drills and sparring; I’m a short-OODA person, in her terms. Still…it is quite possible to get inside my OODA loop, hand-to-hand, because I’m not fast. In fact, it’s highly recommended that you get inside my OODA loop; otherwise you are likely to lose, lose, lose. :-)

      1. >In fact, it’s highly recommended that you get inside my OODA loop; otherwise you are likely to lose, lose, lose. :-)

        Note: I suppose I should add something to this, because there’s a general principle at work that actually applies back into things like competition in the smartphone market.

        I’m over 50 and have cerebral palsy. I’m not a natural athlete. I’m strong, but not physically fast or agile. Yet, as a hand-to-hand fighter and swordsman, I have not infrequently gone up against fast, young, capable fighters and won. There’s half-serious folklore about an age and treachery bonus; I am proof that this is not entirely a joke. So…how do I do it?

        Some of it, to be honest, is my bulk muscle. It closes off options my opponents might otherwise have, and importantly (referring back to an earlier theme in this discussion) it raises the cost of errors. My opponents look at me and know, at a gut level, that if they slip even once, I will close and destroy them. This complicates their decision-making – makes them a little hesitant, even if they have a substantial skill edge on me and know it.

        Much more of it is that, though I am physically relatively slow, I am mentally very quick. I can observe, orient (adjust my model of the opponent’s intentions and capabilities), decide, and begin an action while most fighters (including many more skilled than I) are still trying to get through “orient”. If I can manipulate the battle so the advantages of my mental quickness are maximized and the advantages of the opponent’s physical speed and skill are minimized, I can often win. (One manipulation that often has this effect is to decrease the engagement range – I like to fight close-in for a reason.)

        The lessons for commercial competition should be obvious. Large, successful companies are like a muscular man with slow reflexes – well equipped with sheer power to exploit opponents’ errors but tending to only be able to act relatively slowly due to internal friction costs and inertia. If the A part of the loop is necessarily slow, you have to compensate by making the OOD part really fast.

        Or, if the DA part of the loop is necessarily slow, the OO part has to be even faster. The point is, it’s the sum around the whole loop that matters.

  23. >The message was pretty clear: you should have been standing in line outside the Apple Store for a copy of Tiger if you were really, really serious about developing with us.

    Apple always has been more willing to push “breaking” changes in OS upgrades, going all the way back to the original Macintosh. One thing you can give Microsoft credit for is that they have (usually) been sticklers for backward compatibility; you can still run even ancient MS-DOS binaries on Windows 7, thanks to the work of engineers like Raymond Chen. Open-source software kind of falls in between those two extremes, and can be a mixed bag; one thing that works in its favor is that old releases are almost never lost, so, if you really need to work with an outdated version of the Linux kernel or the C library or some such, you can, though it’s usually not recommended.

  24. Open-source software kind of falls in between those two extremes, and can be a mixed bag; one thing that works in its favor is that old releases are almost never lost, so, if you really need to work with an outdated version of the Linux kernel or the C library or some such, you can, though it’s usually not recommended.

    That’s for closed source on top of open source. For a full open source stack, the right answer is usually to upgrade all components to latest, perhaps doing a bit of development as necessary. Much less a mixed bag that way.

  25. Re: mesh potato

    As someone that has been working on mesh networking for about 14 years, I can assure you that we are a VERY long way out from successfully figuring out how to do it well.

    It has enormous potential, but the devil is in the details. I’ve worked on mesh networks for cars, personal networks (the classic broadcast your business card thing), and most recently a babel based ipv6 wireless network in South America.

    One item, recently figured out, was the effect of bufferbloat on a dense mesh running 802.11s. The results weren’t pretty.

    Other problems – authentication, routing, resource sharing, etc – remain, with largely theoretical half-solutions.

    The OODA loop here may still be measured in decades, at least partially because there are lots of fundamental patents held by multiple organizations that have zero interest in getting out of the middle.

  26. OK, now watch what happens. VOIP + SIP + fiber to homes + adaptive mesh = who needs the cell carriers?

    People who live in small towns of 10K or so where the next town of any size at all is 20 miles away and the next bigger town is 50 miles away. Like, say…me, at home.

    Because the smartphone’s capabilities are ephemeralized, this is all just software. The rest of the consequences are left as a simple exercise for the reader.

    Yeah, it’s just a SMOP.

  27. I got burned when the C++ compiler on Mac OS 10.3 suddenly stopped working. They pushed out a backward compatible build of the OS 10.4 libstdc++ as a mandatory security update, and it was subtly incompatible with 10.3’s g++. The message was pretty clear: you should have been standing in line outside the Apple Store for a copy of Tiger if you were really, really serious about developing with us.

    I’m going to skip the “if you’re developing in C++, you deserve to lose”…

    Not long ago, I had to make a decision to stop supporting 10.4 for a package I’m involved in. The problem was that developing for OS X 10.4 requires gcc 4.0. 4.0 broke when compiling the OpenJPEG 1.4 JPEG2000 encoding engine. Build with 4.2, as is used for 10.5 and up? Works perfectly. (Well, as perfectly as OpenJPEG gets, which isn’t as much as you’d like.)

    Sometimes there’s a good reason for breaking old software.

  28. What’s happening, with ephemeralization lowering the capital cost of design changes, is that fast cycles of “Ready, fire, aim!” are becoming an effective way to address fluctuating consumer demands.

    You just have to make sure that that first “fire” isn’t straight into your own ass.

    I predict that the first mass-market application of this tech – and a wildly successful one – will be for real-time glucose monitoring in Type I diabetics.

    s/Type I//

    1. >You just have to make sure that that first “fire” isn’t straight into your own ass.

      That’s called “high cost of failure”. If you’re thinking about OODA and business strategy at this level, one of your pressing questions is “How do I minimize my own cost of failure?” If you’re trying to out-OODA the other guy, this is often a more important question than “How do I maximize the gains from success?”, because the former feeds through to an increase in your tactical options for future rounds in a way the latter may not.

  29. More to the point:

    OODA may be the best tactic for 2 fighter pilots trying to kill each other in the next 1-2 minutes. Smartphones are a business, so it’s about making money. Apple is doing well at this:

    Sorry, I don’t know how to paste in links, but this is from “the Mac Observer, Jan 31st”:

    Apple’s position in the smartphone market continues to improve, and according to Canaccord Genuity analyst Mike Walkley, the company now controls some 41 percent of the market’s profits. In comparison, Apple held about 31 percent of smartphone market profits at the end of 2009.

    You write: It’s anybody’s guess what the lower limit of the OODA loop is. Not so long ago, 90-day product development cycles would have been considered an insane dream – but the rise of 3D printing and commodity SoCs might very well cut time to prototype much further. I can now easily imagine a handset designer with good CAD tools and a 3D printer producing a testable phone in a week.

    Possibly, but could you sell enough of them to make a profit, competing against all the other guys who did the same thing just last week as well?

    This from hardware.info:

    17-02-2011 12:14, source: Boy Genius Report
    The recent resurgence of “iPhone nano” rumors has Wall Street analysts working overtime. They already love Apple, which has basically been printing money lately, and rumors of Cupertino cooking up a smaller, cheaper iPhone could certainly send Apple’s mobile profits even further skyward. In fact, one analyst thinks a cheap version of Apple’s iOS-powered smartphone could expand its addressable market by a whopping 600%, therefore potentially giving iPhone profits a 250% boost.

    I don’t know if this will work but so far Apple seems to have been doing very well profit wise with relatively slow development (a new phone once or twice a year instead of every week).

    I have been reading your blogs on Smartphones for the last 6? months and you always talk about market share, never mentioning profits. You might argue that once you have enough market share the profits will come, but Apple shareholders (me not one of them) would believe that as long as you are making most of the money all you have to do is wait for the others to fail.

    1. >I have been reading your blogs on Smartphones for the last 6? months and you always talk about market share, never mentioning profits.

      That’s because I don’t much care about who makes the profits. To me, these wars are all about who will control the next generation of computing infrastructure – especially, whether it will be open and hackable or closed and controlling. Market share will do far more to decide this than the profits to any individual player.

  30. I’m going to skip the “if you’re developing in C++, you deserve to lose”…

    I wasn’t. However, SDL’s internals are written in C++.

    Sometimes there’s a good reason for breaking old software.

    This is not a case of me being butthurt because some new thing I downloaded conformed to an API or required a compiler that wasn’t available on my platform.

    This is a case of the same software compiling and running fine one day and build-breaking with indecipherable template errors the next.

    Clearly and unequivocally a dick move on Apple’s part.

  31. N.b.: They weren’t template errors, they were link errors because the funny mangled name symbols don’t match or something. Beyond a certain level of abstrusity all C++ problems start looking alike to me…

  32. > These wars are all about who will control the next generation of computing infrastructure – especially, whether it will be open and hackable or closed and controlling. Market share will do far more to decide this than the profits to any individual player.

    I assume by this statement…that you assume: the future of computing industry platforms is still a monoculture, where one participant will ultimately dominate all others.

    I bring it up because it’s worth enumerating. Plenty of people say “this is just like the PC industry”, plenty of people say “mobile computing will be nothing like the PC industry.” I am unsure myself, but then I wasn’t really an active participant in the First Age, so to speak. [Or the Second Age, although my description of 80s-2000s as the First Age of computing is a dead giveaway that I’m pretty much less than 30 years old.]

    1. >I assume by this statement…that you assume: the future of computing industry platforms is still a monoculture, where one participant will ultimately dominate all others.

      Yes. Software polycultures are unstable. The reason is fundamental; they’re in zero-sum competition for user and developer mindshare, and they have network effects with superlinear value functions. This kind of competition tends to produce runaway leaders and monopolies that are stable until a major technology break.
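
      A toy illustration of the runaway (made-up numbers, not a market model): if each platform’s pull on new mindshare grows faster than linearly with its current share, any small lead compounds toward monopoly, while with merely linear value the split just persists.

      ```python
      def reallocate(shares, alpha=1.5, steps=40):
          """Each step, mindshare is redistributed in proportion to
          value ~ share**alpha.  alpha > 1 models superlinear network
          effects; alpha == 1 models linear (no runaway) value."""
          for _ in range(steps):
              values = [s ** alpha for s in shares]
              total = sum(values)
              shares = [v / total for v in values]
          return [round(s, 4) for s in shares]

      print(reallocate([0.55, 0.45]))             # superlinear: the lead runs away toward [1, 0]
      print(reallocate([0.55, 0.45], alpha=1.0))  # linear: the 55/45 split just sits there
      ```

      In a model like this, the only thing that resets the board is a shock from outside the loop, which is the “major technology break” in the paragraph above.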

      1. >[Or the Second Age, although my description of 80s-2000s as the First Age of computing is a dead giveaway that I’m pretty much less than 30 years old.]

        Some of us remember computing before 1980. Infant. :-)

  33. One corollary of ever-accelerating development cycles for phones is that companies in China and Taiwan have an advantage, because their supply chain is shorter. When you can go down the street to the touchscreen factory and pick up a fresh batch for your new round of prototypes, you can potentially save a few days over having to order it and have it flown halfway across the world. When the development cycle is a year, this doesn’t make much difference. When it’s 90 days, these little delays start adding up pretty significantly. The Chinese manufacturers can move faster and make phones more cheaply because they’ve got all that infrastructure right there in the same city.

  34. eric: How does your analysis account for the general failure of Linux on the desktop?

    Here’s an alternate take: the cellphone industry used to be at a really crappy local maximum. Apple found a higher peak some distance away, so now the swarm is in a mad rush to find a good spot near where Apple planted their last flag. The race is to find the best local maximum in that immediate vicinity. Meanwhile Apple is looking to find one even higher. Relevant quote: “You don’t cross a chasm in two steps.” Taking the longer-term view is what let Apple leapfrog the competition before; it seems likely they can do it at least a couple more times. In this view, Android’s success is basically parasitic on Apple’s; Android mostly plays catch-up.

    Why can Apple take bigger steps? Apple makes a careful, considered decision, picks the next winner, and invests a half-billion dollars at a time in whatever technology most needs improving to build the next new device. That kind of investment shapes the market and makes new advances possible. Proving the tech is possible, making it practical and generating user demand for it is the hard part; selling clones using whatever leftover capacity and demand exists is relatively easy by comparison.

    When people say Apple “bought up” all the high-performance 10-inch touch screens, it’s not like suppliers were planning to make zillions of multitouch screens for “pad” devices *before* Apple came along – it’s Apple’s demand that *created* the supply for many specific parts that Apple now seems to be (briefly) monopolizing. Anyone else could have done the same, had they been similarly willing to make a bet-the-company sized investment. Since they weren’t willing, the first step in their OODA loop has been “Observe…what Apple does and how customers react to it”. Which doesn’t make for a very tight loop.

    In short: a fast cycle helps explore the little hills after a slower cycle helps make the big world-changing leaps.

    The good news for Apple bashers is that sooner or later Apple is bound to make a mistake and *lose* one of their huge bets. They’ll back the wrong horse, miss a feature, saturate the market, or just run out of ideas.

    The good news for Apple fans is there’s no evidence of that having happened *yet*.

    1. >How does your analysis account for the general failure of Linux on the desktop?

      Microsoft had the space locked up too tight. We had a faster OODA, but one of the circumstances that can negate that is if you don’t have room to move. Our only consolation is that we won essentially everything else, from supercomputers down to smartphones. Might be Microsoft’s going to have to die of its own inertia and rot before we get the desktop too.

      It could have gone the same way in smartphones if Apple had been unchallenged for a few years longer. *shudder* Good thing for everyone it didn’t…

      Your stuff about Apple strikes me as mostly mythologizing invented by Apple’s PR department. In reality, the “vision” fails as often as it succeeds (Newton and iTV, for example) and the moves that have saved Apple are incremental as often as “revolutionary”. And now, it’s Apple playing catch-up with Android.

    1. >Firearm aimed at foot, trigger pulled.

      The article is sloppy and wrong. Microsoft didn’t ban open source, it banned the GPL. Non-reciprocal licenses such as BSD are unaffected.

      Still, I agree it was a dumb move.

  35. Actually, I think it only banned version 3 of the GPL. But still, it’s a great way to make me feel personally unwelcome on *any* of their platforms.

  36. On-topic:
    If Open-Source has the effect of parallelizing the OODA loop across a cluster supercomputer, can it properly be called a loop anymore? I think a better term would be “band,” or “cylinder,” which is like a loop but with an extra dimension.

  37. @esr:

    > Software polycultures are unstable.

    Arguably, open source makes software polycultures even less stable (for obvious command and control reasons), but actually increases the stability of underlying (non-open source) hardware polycultures.

    The up and coming dominant CPU platform is ARM. ARM has traditionally been chip-vendor agnostic, and content to seek only a (relatively) small rent. But now they’re getting a bit big for their britches, and even investing in chip vendor startups.

    Nobody likes to see their supplier start to compete with them, so that will add some to the instability. Post ARM, huge stability will be had if and when somebody with some muscle comes up with a vendor-neutral, non-patent-encumbered ISA, or, possibly it will be ARM — after somebody has the balls to go head-to-head with them in a patent fight (which PicoTurbo was apparently winning until they got paid enough to shut up).

  38. @Max E.:

    Actually, I think it only banned version 3 of the GPL. But still, it’s a great way to make me feel personally unwelcome on *any* of their platforms.

    I would be interested in what anyone thinks of my reasoning here.

    Basically, if the “Automatic Licensing of Downstream Recipients” is the real patent clause, and the “Patents” clause is a decoy, anybody with valuable software patents (regardless of what you think of the morality or constitutionality of such) would be a fool to distribute any GPL v3 software.

  39. I adopted the AGPLv3 recently because I feel the GPL has led to the current SaaS architecture. Sort of a retaliatory strike. If it means I can’t work in the US under the current IP regime, so be it.

  40. The strategy I described for Apple is much more about economies of scale than it is about “the vision thing”. Let’s try another analogy: professional magicians. Getting to *be* David Copperfield is really hard, but once you *are* him, maintaining that position for a while is comparatively easy. There are lots of magicians around who regularly invent cool new effects. If Joe The Magician invents some effect that would be *really expensive* to properly develop and make use of – say, an invisible-wire flying rig – his best option is to go find Copperfield and work for or sell stuff to him. The Copperfield brand guarantees lots of asses in seats that can pay for a show, so long as the show is competently produced and regularly surprises with expensive “wow” setpiece effects. At that point Copperfield doesn’t have to innovate in-house, he just has to find good suppliers and fund them sufficiently to maintain a steady supply chain.

    Copperfield is the early adopter for big-budget magical effects. Eventually those same effects will get cheaper and simpler and show up in lower-rent shows or even get sold to amateurs. But that doesn’t mean Copperfield is “playing catch-up”. It means the rest of the market is catching up to him.

    Seriously, in what sense is Apple “playing catch-up”? Both in tablets and phones it seems like it’s still very much their market to lose. For instance, the article you reference charts the lag time in *months* between the release of a new Android OS version and phones appearing in the market based on it; meanwhile when Apple releases a new OS version people get it on their phones the next *day*. Why does anybody – much less most of the market – need to wait *months* to see the benefit of a new Android release?

    One example of local maximum-seeking is the last quote in the article: “This will keep going until phones become just thin slabs with a touch screen”. Me, I’m still hoping to see the eventual triumphant return of the stylus. Multitouch interfaces do many things well but suck for sketching, outlining, and data entry generally.

    1. >Seriously, in what sense is Apple “playing catch-up”?

      At least two different senses. One: uptake of 4G/LTE. Two: recall the long analytical comment comparing UIs a couple of days back? Its major theme was that Apple may have less animation lag but the interface design is stale, crowded and claustrophobic compared to Android.

      >Both in tablets and phones it seems like it’s still very much their market to lose.

      Nonsense. Android has pulled ahead in smartphone market share, and that is very unlikely to reverse while it offers more features and better price-performance.

  41. @Maupin:
    Okay, that actually makes sense. But it doesn’t change how I feel about it, and it’s still liable to alienate a lot of people. From a PR POV, it definitely “sends the wrong message.”

  42. > Why does anybody – much less most of the market – need to wait *months* to see the benefit of a new Android release?

    You miss the point. So long as those releases take less time than Apple’s, the consumer won’t notice. And at an average of 6-some-odd months from official release to hitting-your-phone for Froyo, if that small of a delay can be maintained, then you’re still looking at 2 release cycles to the consumer per Apple release, and that’s a lot of feature add. As I see it, it allows Google to effectively double-up; they can use the half-year release to implement counters to anything Apple does, and the year release (for illustrative purposes, roughly matching Apple’s) for pushing out any new innovation on their side.

    That seems pretty hard to compete with, to me. And that’s just features from upstream, to say nothing of the new hardware and software modifications from the manufacturers.
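
    A rough sketch of that arithmetic, under assumed round numbers (a new Android release every 6 months with a ~6-month rollout lag, versus a yearly iOS release with near-immediate availability); the figures are illustrative, not measured:

    ```python
    # Hypothetical cadence model: when do feature drops actually reach consumers?
    android_release_interval = 6   # months between upstream Android releases (assumed)
    android_rollout_lag = 6        # months from release until it reaches handsets (assumed)
    apple_release_interval = 12    # months between iOS releases, ~zero rollout lag (assumed)

    horizon = 36  # months
    android_drops = [r + android_rollout_lag
                     for r in range(0, horizon, android_release_interval)]
    apple_drops = list(range(0, horizon, apple_release_interval))

    print("Android feature drops (month):", android_drops)   # [6, 12, 18, 24, 30, 36]
    print("Apple feature drops (month):  ", apple_drops)      # [0, 12, 24]
    print("Android drops per Apple drop: ", len(android_drops) / len(apple_drops))  # 2.0
    ```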

  43. @Max E.:

    I agree MS is sending the wrong message. IF this is the reason they hate GPLv3, they should man up and tell everybody. If, OTOH, they just can’t stand Stallman’s guts, they should say that too…

    In other words, MS being MS, no matter what message they send, if it doesn’t have a plausible explanation to go with it, I will put it in the bucket labeled “yet more MS FUD”.

    Occasionally, MS sends a message I like — like the one they just sent to GeoHot. But this is so seldom that most of their messages go straight into the FUD bucket with minimal examination.

  44. In fact, I think this whole episode lends some credibility to the FSF’s strategy with the GPL3. Microsoft can’t do anything about it without making itself look bad. I still don’t agree with the FSF’s methods here, but they’re beginning to make more sense now.

  45. > So long as those releases take less time than Apple’s, the consumer won’t notice.

    I guess I’m not willing to grant that “so long as” clause. iOS gets a lot of updates. From 1.0.0 to 4.3 there’ve been ~32 iOS software releases in 44 months. The first few were tiny single-bug-fix tuning things, but over time the scope of the changes has increased; as iOS sells more, Apple gets more feedback and has more money to spend on improving it – a virtuous cycle. They follow the “put all your eggs in one basket and watch that basket” strategy. The relative stability of the hardware platform reduces the maintenance burden on updates, making it easier to roll out big changes.

    According to Wikipedia, while Apple has had 32 updates – roughly one every 41 days since June 2007 – Android has had 8 updates, or roughly one every 108 days since September 2008. The Apple users could all update immediately; the Android users in nearly all cases had to wait months, and often had to buy a new phone, to see a benefit. How does the math on that lead to the conclusion that Android is cycling faster than iOS at all, much less twice as fast?
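
    A quick back-of-the-envelope check of those intervals, using the release counts cited above (taken as approximate) and the platforms’ public launch dates:

    ```python
    from datetime import date

    # Counts as cited above (approximate, from Wikipedia as of early 2011).
    ios_releases, ios_start = 32, date(2007, 6, 29)          # iPhone OS 1.0 ships
    android_releases, android_start = 8, date(2008, 9, 23)   # Android 1.0 ships
    as_of = date(2011, 2, 18)

    print("iOS:     one release every",
          (as_of - ios_start).days // ios_releases, "days")          # ~41
    print("Android: one release every",
          (as_of - android_start).days // android_releases, "days")  # ~109
    ```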

  46. @Glen Raphael:

    What you say may be true now. But remember that Apple had to negotiate long and hard to be able to bypass the carriers and push updates. Once phones are no longer subsidized and you can buy them straight from WalMart with no plan, and not from the carrier, the manufacturer that makes it easy to slipstream updates like Apple does will win. This may be as simple on the manufacturer’s part as not making rooting the phone a hassle and letting CyanogenMod take its course.

  47. esr Says:
    February 18th, 2011 at 6:43 pm

    > >How does your analysis account for the general failure of Linux on the desktop?

    esr says:
    > Microsoft had the space locked up too tight. We had a faster OODA, but one of the circumstances that can negate that is if you don’t have room to move.

    No, the problem was, and is, lack of user friendliness. Unix, and open source, has a tradition of RTFM, or worse, read the source. Unix code, and open source code, tends to be by hackers, for hackers. When I use Linux, I am always dropping into the command line. On Windows, I rarely use the command line.

    When I talk UI with a Windows engineer, he knows what I am talking about. When I talk UI with a Unix engineer, he does not know what I am talking about, gets irritated, does not follow, thinks I am stupid.

    When I first got started on unix, the make utility treated tabs and spaces in makefiles as semantically different! I mean, duh!

    That is why open source lost – because there were a bunch of engineers who could fail to see this as a serious bug.

    1. >That is why open source lost – because there were a bunch of engineers who could fail to see [poor UI] as a serious bug.

      I know this is an easy conclusion to jump to, but I think it’s wrong. And I say that as someone who did more than my share of public yelling about Linux’s UI problems – I’m not underweighting them. We went from “awful” to “not great, but probably good enough” – if not for the Office/Exchange lock-in, the Microsoft tax, and all the headaches around volume preinstalls. Those were the real blockers.

  48. @Max E:

    > I still don’t agree with the FSF’s methods here, but they’re beginning to make more sense now.

    I don’t agree with them either. To me, the fact that GPL v2-only can’t be linked with GPL v3 is a clear indication of the problem. Historically, other licenses have been rewritten multiple times specifically so they can interoperate with the GPL (for example the Python license), but the FSF’s goals are so overarching they can’t even do that themselves.

  49. > When I talk UI with a Windows engineer, he knows what I am talking about.

    Just as a point of contention, couldn’t this be because Microsoft got to more-or-less dictate the popular paradigm and what could be considered “user-friendly”? Well, following after Apple/NeXT’s work, of course.

  50. @James A. Donald:

    > When I use Linux, I am always dropping into the command line. On Windows, I rarely use the command line.

    Yeah, I have trouble getting work done on Windows, too.

    But seriously, Linux has come a very long way. There are several polished distros that just work out of the box for a lot of stuff. Unfortunately, there are some gaping holes left. One of them happens to be patented codecs and closed-source drivers that don’t ship by default. But, for example, if Google manages to change the default A/V encoding and container format on the web, some of these problems will mostly disappear rather quickly.

    Other problems have to do with the fact that Linux is too open. You can’t get iTunes or Netflix, for example. But Android (again, Google!) is going to make Linux too tasty to ignore.

    > When I first got started on unix, the make utility treated tabs and spaces in makefiles as semantically different!

    As far as I know, that’s still standard.

    > That is why open source lost – because there were a bunch of engineers who could fail to see this as a serious bug.

    Sorry, but it’s really a non sequitur to point to a tool that, by definition, you have to be a developer to care about, and extrapolate from there to the programs that Joe Average user plays with.

  51. > That is why open source lost – because there were a bunch of engineers who could fail to see this as a serious bug.

    > Sorry, but it’s really a non sequitur to point to a tool that, by definition, you have to be a developer to care about, and extrapolate from there to the programs that Joe Average user plays with.

    Let me cite an example that trips up everyone, then: case sensitivity. The average person doesn’t see upper and lower case as being fundamentally different, and so trips himself up all the time in case-sensitive Unix systems. (Even to a native German speaker, the number of times the case of a single letter is the only way to disambiguate is vanishingly small.)

    Case sensitivity is the single biggest user interface botch ever inflicted on the average user. That’s one of the things OS X got right: you can largely ignore case. Unix geeks usually don’t see this as a problem because they’ve been forced to train themselves otherwise, and have lost sight of the effect it has on those who aren’t Unix geeks.
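
    A minimal sketch of the difference, for anyone who hasn’t hit it: the same lookup succeeds on a case-insensitive filesystem (OS X’s default HFS+, Windows NTFS) and fails on a typical case-sensitive Linux filesystem.

    ```python
    import os
    import tempfile

    # Create "Report.txt", then look for "report.txt".
    d = tempfile.mkdtemp()
    open(os.path.join(d, "Report.txt"), "w").close()

    # True on a case-insensitive filesystem, False on a case-sensitive one.
    print(os.path.exists(os.path.join(d, "report.txt")))
    ```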

    > Case sensitivity is the single biggest user interface botch ever inflicted on the average user. That’s one of the things OS X got right: you can largely ignore case. Unix geeks usually don’t see this as a problem because they’ve been forced to train themselves otherwise, and have lost sight of the effect it has on those who aren’t Unix geeks.

    First of all, I agree that, for some higher-than-average-but-not-exceptional users, you might have a point.

    But my wife is probably much less skilled than the average user, and she never uses the command line, and, to my knowledge, she’s never had a case sensitivity issue. So I’m curious how an average user comes up against this.

  53. >One thing you can give Microsoft credit for is that they have (usually) been sticklers for backward compatibility

    I wouldn’t give MS credit for that. More like, the customers that MS sells to simply won’t update if the new version doesn’t run the stuff they’ve got.

  54. Patrick Maupin Says:
    > But seriously, Linux has come a very long way. There are several polished distros that just work out of the box for a lot of stuff.

    I have a computer running a current distro of Linux sitting on my desk, beside my Windows computer. When I use Ubuntu, I am usually using the command line, or editing configuration files. Do you think your girlfriend is going to edit configuration files?

    Just look over the shoulder of anyone running a current Linux distro. They are all male, and usually in the command line.

    1. >I have a computer running a current distro of linux sitting on my desk, beside my windows computer. When I use Ubuntu, I am usually using the command line, or editing configuration files.

      I see your problem. You’re projecting your own usage pattern all over end users. Don’t do that.

      My octogenarian mother used Linux quite happily without ever dropping to a command line. She doesn’t any more, but that’s for reasons not relevant to this discussion. The UI wasn’t the problem.

  55. > Just look over the shoulder of anyone running a current Linux distro. They are all male, and usually in the command line.

    [citation needed]

  56. esr Says:
    > We went from “awful” to “not great, but probably good enough”

    Is Linux as user friendly as Windows? You yourself answer that it is not.

    Ordinary users have a hard enough time with Windows. Microsoft got religion about user friendliness, and proceeded to do its best to inculcate that in every engineer who worked on Windows, with some success. I just don’t see the equivalent in Linux. Microsoft saw user friendliness as its biggest problem, and struggled mightily with that problem, and continued to struggle until Bill Gates retired. If open source were as serious about user friendliness as Bill Gates was, then it could catch up.

  57. @James Donald
    > Just look over the shoulder of anyone running a current Linux distro. They are all male, and usually in the command line.

    Did you read “In the Beginning Was the Command Line”?

    Very entertaining and informative. Just recently I had to clean up a data set: some 7000 files, and a few hundred data files with text records and metadata. Naming and records were very inconsistent.

    All the files had to be renamed using a fixed format, and the text records had to be normalized, split, and merged for direct import into an RDBMS. Easy with sed and grep.
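
    For readers who don’t live in the shell, here is a hypothetical Python sketch of the same kind of cleanup; the directory name and the normalization rule (lowercase, underscores, .txt extension) are made up for illustration, not the scheme described above:

    ```python
    import re
    from pathlib import Path

    # Rename every file in a (hypothetical) "dataset" directory to a fixed format:
    # lowercase, runs of non-alphanumerics collapsed to underscores, .txt extension.
    for path in Path("dataset").iterdir():
        if not path.is_file():
            continue
        clean = re.sub(r"[^a-z0-9]+", "_", path.stem.lower()).strip("_") + ".txt"
        if clean != path.name:
            path.rename(path.with_name(clean))
    ```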

  58. Esr,

    This is silly nonsense… don’t compare a simple airplane dogfight to a smartphone platform war. It’s just not the damn same thing. Anyway, have you forgotten about the nuclear option that both Apple and Microsoft have? One big nuclear bomb and several smaller ones could wipe Google out in a few blows.

    How about the story of a small, slick, agile, fast black boxer losing his war to a slow Mexican warrior with pure punching power, chin and heart? Now what about that small but fast guy against a big heavyweight puncher?
    How will his skill compare to a big heavy hitter’s? There are a lot of heavy hitters around. As they say, once you get hit, all tactics are thrown out.

    Monoculture? This is really nonsense. A monoculture will never rule. Sure, there will be overlap, but once it becomes that big it will fracture. And even before it gets that big, it will fracture first, and many times too.

    For all this hype and talk, the English language has never achieved a full monoculture in this world, and it never will. What about driving? Left-hand and right-hand steering both still exist to this day! What about the future? Maybe when we all start flying, I guess. And how many years have we been driving? So, as with cars, smartphones will exist with many OSes.

    So what about 2011?

    Google Android will probably ship around 120-130 million units. If they achieve 170 million, it will be with the help of a cheap SoC system as the reference design.

    Apple will ship around 85-95 million iPhones. This is without an iPhone Nano/Mini at the $200 price point in the mix; most probably the smaller iPhone will come next year. This year we will see a cheaper iPhone model in the form of an updated iPhone 3GS that may not be sold in the US.

    Nokia will still sell 100 million Symbian smartphones. But since they are Symbian and open source, they will probably be just glorified dumbphones pretending to be smartphones. They might sell 10-20 million Windows Phone 7 smartphones, depending on how well Microsoft’s platform does.

    BlackBerry will sell around 60-70 million smartphones. They are the CrackBerry, and no, QNX will probably not do well this year, but it is still young and will be the future of RIM.

    Samsung will probably sell around 30-35 million smartphones: 18-20 million will be Google Android and 10-12 million will be Bada OS. HTC will probably see around the same figure. HTC will also introduce a new fork of Android, deboning it of Google’s services and replacing them with HTC’s and its partners’ services. We’ll see a 60-40 split between Google’s Android OS and the various other OSes.

    Motorola will reach near-bankruptcy levels, as Apple’s iPhones kill most of its (and other makers’) Android sales on Verizon. Motorola will also fail to pull AT&T market share away from Apple. Motorola’s loss of revenue will hit harder given all the expensive campaigning and investment in its tablet project, named XOOM.

    Honeycomb will also be a complete failure, as it has the worst UI, worse than any open-source Linux distro. In an attempt to put a landscape desktop UI on a tablet, it forces the tablet to be usable only on a desk or your lap, making this variant of tablet less useful and more awkward than a simple netbook. What? Netbooks are cheaper, simpler and more useful? Yeah… when you compare them to Honeycomb. I wonder who will be seen standing up while holding that thing in landscape?

    With the help of the various skin makers, the Honeycomb UI will be a nightmare worse than the worst KDE/GNOME/Ubuntu UIs combined. Nobody in their right mind will want to develop anything good and valuable in that environment. Google, still confused, will then promote web apps and Chrome OS on Intel x86, hoping they catch on. But through all this, Google will still be a huge failure in the tablet space.

    Google will probably not learn the painful lesson that the tablet market is not the same as the smartphone market. The phone market, even the smartphone market, is one where practically unlimited numbers of people will find and come to the phone, while the tablet space was created by Apple and the people will not come to it; the tablet must come to the limited few. The tablet space will resemble the iPod/Walkman space, where Apple will finally have 50-60% market share.

    1. @kkman:
      >Anyway, have you forgotten about the nuclear option that both Apple and Microsoft have? One big nuclear bomb and several smaller ones could wipe Google out in a few blows

      Your posts contain entirely too much vague, directionless raving. I think you are a nutcase and fear I am probably going to have to ban you, but I would like to be wrong about that. You are now under a once-per-day frequency limit. 500 words maximum per post. Excess material will be deleted.

  59. @kk

    In an earlier comment I showed that, at current rates, 250M Android phones will have been sold by the start of Q1 2012:
    http://esr.ibiblio.org/?p=2961#comment-297134

    I have not seen arguments why these trends will suddenly stop.

    Right- vs. left-hand driving is 5:1. GSM vs. the rest is 10:1. Windows vs. the rest on the desktop is 5:1 or more.

    That is what we expect in the mobile world: 10:1 for Android.
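
    As a rough sketch of the kind of projection being argued here (the specific inputs are my assumptions, not the linked calculation): take an assumed ~300k activations per day in early 2011 and compound it at the ~383% yearly growth rate discussed in this thread.

    ```python
    # Hypothetical inputs: ~300k activations/day in Feb 2011, growing at a
    # yearly factor of ~3.83 (reading "383% yearly growth" as a x3.83 multiplier).
    daily_rate = 300_000
    yearly_factor = 3.83
    monthly_factor = yearly_factor ** (1 / 12)

    total = 0.0
    for month in range(12):
        total += daily_rate * 30.4   # average days per month
        daily_rate *= monthly_factor

    # New activations over the next 12 months (~218M with these inputs);
    # add the existing installed base for a cumulative figure.
    print(f"~{total / 1e6:.0f}M new activations over 12 months")
    ```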

  60. You say 200 million for Android in 2011?
    That is a bit high. I say 150-175 million for 2011.

    First, you assume the Jul-Dec half is equal to the Jan-Jun half.
    You need to re-analyze with data from Jan-Jul; the Jan-Jul half is usually much weaker than Jul-Dec.

    Second, for Android to see continued huge growth it will need a much cheaper model with a cheaper reference design. This is most unlikely to happen this year, but we may see it starting in the second half of next year. Android needs at least 1 GHz to run in an acceptable fashion.

    Apple, RIM, Nokia, Samsung and HP will force Android out of their space in Europe and America, while Android hasn’t seen much improvement in the high-end space since Froyo. Android will then rely on the cheap Indian and Chinese makers to help sell its phones in places and situations where the more refined brands have trouble selling.

  61. @kk
    “First, you assume the Jul-Dec half is equal to the Jan-Jun half.”

    We are talking about a worldwide growth in activations of 383% on a year-by-year basis. The whole formula is based on average daily activations in July and February. Why should December sales matter in this? I can only see the US festive season as a small blip on the growth. Anyhow, I never included the sales during the fall; that would only mean that many more were sold than I estimate.

    @kk
    “Second, for Android to see continued huge growth it will need a much cheaper model with a cheaper reference design.”

    The trends of the last year show that you are completely wrong. Android realized 383% yearly growth at current prices. Why should that not continue in the coming months?

    @kk
    “Android needs at least 1 GHz to run in an acceptable fashion.”

    50 million were sold running on unacceptable hardware? Demand still increasing? Again, the consumers seem to differ.

    @kk
    “Apple, RIM, Nokia, Samsung and HP will force Android out of their space in Europe and America, while Android hasn’t seen much improvement in the high-end space since Froyo.”

    So that is why Android grows only 283% per year. I see. In short, Android is selling like crazy, but that will all be over when some as-yet-unseen products enter the market at some unspecified time in the future. These companies were unable to stop Android in 2010, and I do not see why they would succeed in 2011. Please enlighten me.

    @kk
    “Android will then rely on the cheap Indian and Chinese makers to help sell its phones in places and situations where the more refined brands have trouble selling.”

    But ALL phones are made in these countries. All of them. Actually, the Chinese produce the majority of ALL electronics in the world. And when did cheap and plentiful not win against expensive and scarce?

    Somehow, I have the feeling that you have something against Android that clouds your rationality. There are many ways in which Android can flounder. But you did not raise any of them.

    1. >“This is what we hoped to do, this is why it failed and why we lost. But no worries, we are currently winning the next consumer platform.”

      There’s a specific reason I don’t want to do this yet. Some of the things I’d say might be interpreted as excuses to stop working on problems that are still real.

  62. I think it is worth noting here that Google does not have the same priorities as the rest of the open source community.

    Android, while massively successful, is not necessarily the wave that lifts all FOSS ships. I’d be willing to bet it’s far from it.

  63. Has anyone here given much thought to the OODA loops of cable/DSL-based internet (wired, I suppose, though for now that probably still includes 802.11) versus the cell carriers? It seems hard to escape the conclusion that a showdown is coming between them, in no small part because of smartphones.

    Speaking personally, the main reason I don’t have a smartphone is that I can’t justify paying for monthly internet twice. I can’t be alone in this either. My first thought is that cell carriers have the advantage because they’ve already figured out “remote” access, and it seems just a matter of time (6G? meaning perhaps another 10 years at 5-year loops) and bandwidth before they could compete for my business (against cable, that is) as my pipeline at home.

    But with technologies like mesh networking and recently freed up frequency ranges that might improve reception range, I wonder if 802.11 (or some other technology that replaces it) could make up the difference and keep cable/dsl access viable against cell carriers. Or perhaps ubiquity of access points, though it seems unlikely because rural people would be hosed. Or perhaps cable/DSL providers will come up with a way to penetrate the mobile game (I haven’t seen any evidence of this).

    The overall point being, as has been noted here many times, it’s all data and software. Somewhere along the way, someone is going to figure out how to be the best bithauler around, meaning TV, internet, phone, everything. I’m wondering who the front runners currently are and who has the advantages going forward.

  64. @Patrick:

    > What you say may be true now. But remember that Apple had to negotiate long and hard to be able to bypass the carriers and push updates. Once phones are no longer subsidized and you can buy them straight from WalMart with no plan, and not from the carrier, the manufacturer that makes it easy to slipstream updates like Apple does will win. This may be as simple on the manufacturer’s part as not making rooting the phone a hassle and letting CyanogenMod take its course.

    Subsidies aside, with regard to updates this is wishful thinking on your part. Nothing in the current behavior of market participants suggests they will do this anytime soon.

    The case studies of Motorola being unable to push updates for its lower-end phones like the CLIQ, and Samsung trying to prevent the rollout of Froyo to the Galaxy S series as a bargaining tactic for future sales with carriers, suggest it is far more likely that many times they will either

    1. Be unable to support slipstreamed updates over their entire product line as a matter of practicality for large lines, or
    2. They will find some Machiavellian or economically expedient reason to refuse to do so.

  65. @twilightomni:

    > Android, while massively successful, is not necessarily the wave that lifts all FOSS ships. I’d be willing to bet it’s far from it.

    At the end of the day, each FOSS project should have the chance to live or die on its merits. In the past, this was very difficult in some markets because of the mindset of potential users (how can it be any good if it’s free? what’s the catch? etc.) Granted, users are rapidly getting used to monetarily free non-open-source apps, but I think that’s partly a natural progression, and partly irrelevant. There’s a difference between a stupid game on a smartphone, and a mission critical business app. Android shows that free can be used for mission-critical.

    At a minimum, Android shows that free can be good. More specifically, Android shows that free Linux can have a good UI. This revelation will not directly translate into desktop Linux adoption, but could migrate there via the tablet, which is just now ramping on Android.

    I can only speak for myself, but unlike, say, Winter, I am an open source proponent, not a free software proponent. Although I think open source is usually better for obvious reasons, I am not religious about it, and I willingly admit that in certain markets and circumstances, closed source is the right answer.

    But something I am religious about is the openness of the platform, and it is exceedingly important that Apple not be able to wall off the dominant next generation computing platform. Android is, to date, performing marvelously in the role of spoiler for Apple’s world domination plans, and I, for one, think it’s awesome.

    (about my comment about slipstreaming updates)

    > Subsidies aside, with regard to updates this is wishful thinking on your part. Nothing in the current behavior of market participants suggests they will do this anytime soon.

    My post was in the context of a discussion about whether Apple or Android wins on update frequency. There is no wishful thinking on my part. I think, once the carriers are out of the loop, Android will win hands-down on this issue if it needs to. If the current behavior of market participants indicates this is not a problem, who am I to argue with them? If everybody’s selling all the phones they can make, the current right thing to worry about is how to make more phones to cope with the people who would buy your phone now if they could. Worry about increasing its attractiveness later.

    But the whole argument is rather tiresome. People will say “oh, but the customer doesn’t want a lot of releases — he wants stability, which is what Apple gives him” or “but Apple is updating faster than google can update every phone manufacturer” depending on which way the argument is blowing. At the end of the day, Android will win — unlike Apple, nobody forces me to upgrade my older Linux boxes if I don’t want to, yet I can run last night’s code on other boxes if I do want to. And if there is a severe security update, somebody will probably push it out to my 4 year old box almost as quickly as it gets pushed to last night’s box.

    Everybody talks about how bad the Linux desktop UI is, but the point is, Linux does have customers, and supports those customers quite well for most of the things those customers care about. And one area where it shines, where a closed source OS can’t possibly compete, is in getting the update frequency right for each individual user.

  66. I’ve cited this blog piece on Wikipedia’s article on the OODA loop. Good show! The smart phone market shall certainly be seeing its own analogues to the browser wars of yore. Me, I just want to future-proof my own smart phone within the ambit of my own purposes, savvy? And too, there are niche speciality needs. Consider the severely handicapped. I know a woman who has no fingers (or lower legs); she’ll be using an Android phone if she’ll be using a smart phone at all.

    1. >I’ve cited this blog piece on Wikipedia’s article on the OODA loop. Good show!

      Thanks. You may want to fix the link; it seems to point into the article rather than to the top of it.

  67. An interesting side question: what about a single vendor who is functionally using an internal bazaar-type development method to speed their development cycle? I’m thinking of HP here and webOS, with a single OS being developed by multiple groups for multiple devices (initially phones and printers, but HP is also looking at stretching it to their PC division and more). I suspect you’d find them falling somewhere in between Apple/RIM and Android in their dev cycle, but probably closer to Android given printer product cycles. The biggest problem for webOS has been the disruption due to the acquisition of Palm and the problems with Palm itself (insufficient resources, mostly).

    And just a note on Boyd: he’s probably the most overrated strategic thinker of the 20th century, because while he was right about the OODA concept and its applicability at theater and even tactical levels, he was damned near completely wrong about its applications in fighter design, the area he originally developed the theory for. (Boyd’s emphasis on small, maneuverable fighters like the F-5 or MiG-21 was correct versus the relatively limited ’50s-era heavy fighters, but the most successful fighters of the ’70s and later would be the medium and heavy designs he opposed, like the F-15 and F-14 along with the later Su-27 family and the F-22. All of the smaller designs proved to be compromises, even the F-16, which pretty much was his baby and which only realized its promise in the fighter/bomber role he opposed.) He also strongly opposed the high-tech systemology approach in fighter design as excessively complex, while that very approach would be a large part of what made modern aircraft so effective.

    1. >he was damned near completely wrong about its applications in fighter design

      No, he wasn’t. Not according to my on-call expert on fighter design (yes, I have one, though he acquired this expertise indirectly). Boyd’s E/M theory was correct and is still the basic conceptual framework of fighter tactics. Boyd’s F-16 and “light” fighters were correct designs under a technological constraint which eased a few years later when power plants with a much higher thrust-to-weight ratio became available, leading to heavier but still highly maneuverable designs like the F22 and F35.

  68. esr –

    Since you tend to invite this sort of feedback, I thought I’d offer it. I’d never heard of the OODA loop before, and found it a fascinating concept. I tend to, I think mostly for aesthetic reasons, avoid military topics, so I had missed that. (I could go on a long rant here about how I think politics are mostly about aesthetics, but will simply note that I’m not instinctually a fellow traveler; in general, I’m much more of an urban elitist of the sort that the Sarah Palins of the world like to imagine there are many more of than there are).

    You suggested an entire new avenue to explore. I haven’t thought about some of this stuff since I read _Art of War_ in high school. And as a strange bonus that I think says something about the nature of memory – I was an exchange student in Germany for a year, and read _Art of War_ right after coming back. My German, which has been getting progressively rustier as I lack non-contrived reasons to use it, started popping up again. I had about a week of decompression when I got back to stop thinking in German, and for maybe a year and a half thereafter, German synonyms would mentally pop up about as easily as English. I guess reading military strategy is wrapped around that somehow, and I’m getting echoes of it now.

    Brains are funny things.

  69. ESR,

    perhaps any advice about what the customer is to do in such circumstances? I am certainly not buying a smartphone every 90 days. So my problem is that waiting out the next 90 days pays off, because I get something better – I was on the verge of getting an HTC Desire some months ago, but I think now, in Feb 2011, it would be unwise; let’s wait a bit for something better. But 90 days later I will face the same problem, and another 90 days later the same problem… there is the conflicting desire to get _something_ already, and to get something better with a short wait.

    To put it more formally, what is the rational customer behaviour when the time you intend to use a purchased product for outstrips the time it takes to put a better one on the market by about 10 : 1?

    I had the same dilemma in the nineties when there was a similar breakneck race about PC hardware, but back then I had a simple heuristic: does it run the games I really want to play? If yes, don’t upgrade.

    1. >To put it more formally, what is the rational customer behaviour when the time you intend to use a purchased product for outstrips the time it takes to put a better one on the market by about 10 : 1?

      The method I use is to pick a set of features I want and a strike price. When the product reaches that strike price, I buy. Yes, I know it will be cheaper three months later, but I want the features. By promising myself I will buy and not beat myself up about that afterwards, I free up my thinking time for other things and avoid stress.

  70. > To put it more formally, what is the rational customer behaviour when the time you intend to use a purchased product for outstrips the time it takes to put a better one on the market by about 10 : 1?

    Is it good enough?

    I think that’s the only sane answer. There are always effort-maximization critiques, like the late-start genome sequencing project, that show alternate answers. But a phone is not a directed, single-goal project. You want a little computer in your pocket to do the things that a little computer in your pocket can do. Also, presumably, occasionally making or accepting calls. So, you minimax on what is available when you need to upgrade.

    The other side of a very short development cycle is that most changes in the landscape are incremental. Apple created this market, and now everyone else is getting close to parity in UI, with a huge cost advantage. Don’t look to the Android market for big leaps. This means you can even guess wrong, and replace a crappy phone at what is relatively speaking a small cost, especially if you sell it to some other sucker on Craigslist.

    I’ll illustrate with myself as an example. My primary phone is an iPhone 3G. I expect to buy whatever Apple offers up in June. This is because I have a very important relationship developing applications for it. I have all of the other variants, and several Android phones, as well. (No, not all of them are subscribed, but the total of my cell phone bills is absurd.) My main phone is a crappy old one because it is good enough, and the switching costs are just high enough to make me not get around to it. So, my short-term incentives run against what I see as the long-term outcome. I think there’s nothing inherently wrong with this. The market I’m servicing is pretty much immune to wider market concerns, short of the world switching off GSM. I’m coming up to speed on Android, and have ideas. But my bread and butter are iOS at the moment, so I live in it – I believe you have to, to do app dev well.

    It is also perfectly rational to come to the alternate viewpoint, should you not be in the pretty strange place I am – if you simply consume mobile services, or use it as an enabler of your business unrelated to it, it is likely that the carrier is more of a concern than the particular computer. I guess the one thing I would throw out as a decision fulcrum is, can you root it? I am a firm believer in the notion that one should be in control of one’s own devices.

  71. > To put it more formally, what is the rational customer behaviour

    It’s a problem common to all pioneers. Coping strategies include purchasing a new phone periodically and passing the old phone to a less picky or less independent family member or selling the old phone.

    > I had the same dilemma in the nineties when there was a similar breakneck race about PC hardware

    There was a book in the ’80s about that problem entitled, I think, _Computer Wimp_:

    http://www.amazon.com/Computer-Wimp-John-Bear/dp/0898151015/ref=sr_1_1?ie=UTF8&qid=1298145488&sr=8-1

    You can get it used for $0.01 to $0.89.

  72. As stated by Boyd, in the “Orient” box, there is much filtering of the information via our culture, genetics, ability to analyze and synthesize, and previous experience.

    Since the OODA Loop was designed to describe a single decision maker, the results of applying OODA in business are mixed (at best), as most business and technical decisions have a team of people observing and orienting, each bringing their own cultural traditions, genetics, experience, and other information.

    Many businesses that attempt to apply OODA get stuck in the D stage, and the OODA Loop is reduced to the stuttering sound of “OO-OO-OO”. Getting stuck means that there are no decisions and thus no actions. In reality, a decision has been made to do nothing. Time keeps moving, and resources are used. In Boyd’s war-fighter scenario, the enemy gets the upper hand. In business, the competition keeps progressing in its OODA Loops while you keep using your resources and adding no value.

    This is also often described as “analysis paralysis”.

    Competitive advantage comes from quickness over the entire “loop” as with each iteration the changes are smaller (as they are modifications to an understood situation) and can be more easily managed, and you stay ahead of the competition.

    I submit that Android’s OODA loop is not shorter, and that Qualcomm, in-particular, is a growing problem for Android.

    Here are HTC’s Android smartphones, based on what chipset they run. You’re going to detect a theme very quickly.

    Qualcomm MSM7225, 528 MHz
    Wildfire

    Qualcomm MSM7227, 600 MHz
    Legend
    T-Mobile myTouch 3G Slide/Espresso
    Aria

    Qualcomm MSM7230, 800 MHz
    Desire Z
    T-Mobile G2

    Qualcomm QSD8250, 1 GHz
    Desire
    Google Nexus One

    Qualcomm QSD8255, 1 GHz
    Desire HD
    T-Mobile myTouch 4G/Glacier

    Qualcomm QSD8650, 1 GHz
    Evo 4G
    DROID Incredible

    Here is HTC’s announced line for 2011, again by chipset. Phones announced at Mobile World Congress are marked with an asterisk, so they’re hot off the PowerPoint projectors.

    Qualcomm MSM7227, 600 MHz
    Chacha*
    Salsa*
    Wildfire S*

    Qualcomm MSM7630, 800 MHz
    Evo Shift 4G
    Merge (Verizon should have launched late 2010 but pulled very suddenly)

    Qualcomm QSD8255, 1 GHz
    Inspire 4G
    Flyer* (tablet, running at 1.4 GHz)
    Incredible S*
    Desire S*

    Qualcomm QSD8655, 1 GHz
    Thunderbolt

    So the phones announced by HTC at MWC last week are using the same chipsets as last year. These aren’t smartphones that launched this month, these are phones that *announced* this month.

    Not a single dual core SoC in the mix.

    Why is the company that builds the most Android phones announcing phones with second generation Snapdragon chipsets? Why are they announcing a 7 inch tablet (with a single core) for a rumored thousand dollars, and expecting it to compete against the dual core XOOM or G-Slate, or the single core Galaxy Tab that’s available right now, all in the $700 range?

    HTC has hitched their wagon to Qualcomm’s vehicles, and they’re now being slowly dragged into a ditch. NVIDIA was first to market with their dual core Tegra 2, and several manufacturers announced dual core smartphones using them. Motorola’s Atrix 4G and DROID Bionic will be seeing double, as will their XOOM tablet. LG is using the same chipset with their Optimus 2X and the Optimus Pad/T-Mobile G-Slate tablets. LG really hedged their bets by also going dual core with the Optimus 3D smartphone, only this time they used Texas Instruments OMAP 4430. Samsung used their own Exynos 4210 (Orion) dual core processor for their just-announced Galaxy S II and Galaxy Tab 10.1 tablet, although they admitted they will use the Tegra 2 in some regions.

    HTC does have a dual core smartphone in its future: the T-Mobile Pyramid. It’s going to run on Qualcomm’s dual core MSM8260 chipset. It wasn’t announced at MWC, even though it was being discussed as far back as September 2010.

    But Qualcomm is late late late with advanced chipsets for Android. This is really bad news for HTC.

    > All these problems – all of them – are predictable consequences of having a business plan stuck to a huge heavy blob of closed-source software. That is the weight that will ultimately drag down and kill Android’s competitors.

    This would be true if Google actually ran Android as a community-developed open source project. They don’t.

    1. >I submit that Android’s OODA loop is not shorter, and that Qualcomm, in-particular, is a growing problem for Android.

      One handset maker stumbles and you think this is evidence of a problem for Android as a whole. You really don’t get it at all, do you?

      I see you cribbed much of your post from this story on Android Headlines. Not impressive; you should at least attribute your sources.

  73. > This would be true if Google actually ran Android as a community-developed open source project. They don’t.

    Not in the usual sense of having a central repo that people can get commit access to. They do work/communicate with the CyanogenMod folks to some degree, and I can’t help but think that there is an amount of ideas spreading back upstream in that regard.

    It looks to me like they want to control their own repo, but if someone out there builds a custom ROM and does some cool things with it in a good way, they’ll take notice.

  74. > I see you cribbed much of your post from this story on Android Headlines.

    Heh. I was confused when I first saw his post, about how the Xoom, G-Slate, and Galaxy Tab beating up on HTC’s offerings was a problem for Android, but I wasn’t awake enough to realize that he had cribbed a full article and then wrapped his own conclusion around it.

    Interestingly, the article itself was quite reasonable (e.g. it doesn’t at all support his conclusion) and ends with the obligatory Pudd’nhead Wilson / Lazarus Long quote about watching the basket after you put all your eggs in it :-)

  75. Jamie, I’m with you on tending to avoid military topics. In my case, I don’t think the reasons are aesthetic– they’re partly practical (I don’t actually need to know that stuff) and partly emotional. I’m pretty sure that military stuff is fun if you imagine yourself using the weapon, or watching the battle from above, or giving orders. If you imagine yourself on the receiving end of the weaponry, not so much fun. And while your point of view can be somewhat modified as a matter of choice, reflexive point of view matters in what you’re doing for fun.

    However, the military also consists of smart people working on figuring out better ways of doing things. Sometimes they succeed.

    What really convinced me of this was a man directing hucksters (merchants at a big science fiction convention) to a loading dock. His gestures were precise and intuitively clear. I asked, and he said they were normally used for directing tanks.

    And just to mention it because otherwise someone else will, the stuff about Athena (the cleverness of war, in opposition to Mars, who represents the brutality of war) in Cryptonomicon is definitely relevant.

  76. Nancy –

    Thanks for your thoughts. I think I’m using the term ‘aesthetic’ in a somewhat more expansive way – I’m not talking just about being turned off by the use of kinetic force against a foe, and I don’t think of military types as dumb fodder. I was getting at a wider blind spot, I think, that I hadn’t noticed in a while. (I have to remind myself of things.)

    There is an entire mode of discourse around military topics that is pretty foreign to me. For instance, riffing off of the OODA loop, one can jump to control theory, which I’m actually quite handy with, having written real-time feedback systems for some goofy RC planes I used to build (IR sensors for stabilization – which worked surprisingly well, once you were ~20 feet off the ground, until there was a tree or building). I guess it is when jargon that seems fueled by an in-crowd sense of irony meets, for instance, Boyd’s handwaving about quantum dynamics that shares more than a bit with Deepak Chopra that I have to step off. To be clear, metaphors are fine. However, when your metaphors can sell either a war or a homeopathic elixir, I see a problem.

  77. > What’s happening, with ephemeralization lowering the capital cost of design changes, is that fast cycles of “Ready, fire, aim!” are becoming an effective way to address fluctuating consumer demands.

    Umm.. NO.

    That’s how you piss away your shareholders’ equity, not how you make money.

  78. > That’s how you piss away your shareholders’ equity, not how you make money.

    You know, the longer I’ve been in business, the more amazed I am at how incomplete knowledge and bad estimates can balance other incomplete knowledge and bad estimates.

    I’ve been on a couple of very successful projects where, if you had accurately estimated the costs up front and asked for the money from management, they would have laughed you out of the building, and where, if you had accurately estimated the revenue and profit from the project, they would have carted you off to the funny farm.

    Nonetheless, by thinking that an investment of X would produce Y profit, management went ahead, never realizing that it would actually be 5X and 50Y by the time they were done…

  79. James A. Donald Says:
    > Is Linux as user friendly as Windows?

    Yes, it is. It’s just a LOT more picky about who its friends are.

    Even with the “new” command line based on the .NET stuff (which is pretty nice, BTW), doing a lot of the “for file in *; do this $file | grep that | sort -n; done | xargs the_last_bit” kind of work is still difficult.

    And yeah, my Mom doesn’t want to do that stuff. But then my mom doesn’t own a computer. Used to, but it was too much trouble and expense (yeah, she bought a Windows machine. She shoulda bought a Mac. I may get her an iPad if the next iteration includes a camera for Skyping).

    Eric is right that the primary reason that Windows beats Linux on the desktop is the constellation of crap around Office and Exchange, to include Visio and a half dozen other tools.

    Linux Developers *still* have the command line as a last resort–this is why Apple (at first) wasn’t going to ship a terminal app with OSX and in the early versions (IIRC) you had to dig a bit to get to it. This was a good idea in that it forced developers to do things “purely” in the GUI.

    And yeah, I’ve got a Mac, but won’t buy an iPhone until the apps I want don’t have to go through some communitarian gatekeeper.

  80. >> Seriously, in what sense is Apple “playing catch-up”?

    >At least two different senses. One: uptake of 4G/LTE.

    Do you have any other examples along those lines, examples of must-have *hardware* features that Apple lags on? The trouble with this particular one is that the 4G situation feels an awful lot like the 3G situation of a few years back. Which was: when 3G first came out it had very spotty coverage and the phones that used it paid a big cost in battery life; the first iPhone didn’t do 3G. Then some time later, once 3G coverage was more ubiquitous and reliable, Apple came out with an iPhone that did 3G.

    So far, most of the reviews I’ve seen of “4G” phones have emphasized that 4G doesn’t yet seem to be a killer feature for those phones. For instance, consider this review of the Atrix 4G:

    > AT&T and Motorola are billing the Atrix as a 4G device. Hell, it’s got the term in its name! We wish that we could report back that we saw 4G-like speeds on the phone, but it’s actually quite the contrary.
    >
    > In comparison to other handsets we’ve tested on the network in the same spots, the Atrix 4G actually got lower speed rankings on both downstream and upstream tests. In general, we saw an average download speed of around 1.5 Mbps, while uploads were even worse at just about 0.15 Mbps. We did see download speeds spike occasionally into 2.2 Mbps territory, but that wasn’t the norm. During the testing, the phone had four or five bars, and was clearly displaying the HSPA+ icon.
    >
    > The odd thing is that if you compare the device against the iPhone 4 on AT&T’s network – tested in exactly the same locations – you see much different results. On the iPhone, data speeds were consistently in the 2 or 3 Mbps range for downloads, and hovering around 1 Mbps for uploads.
    >
    > We’re not sure what the issue is with data, but we’re not seeing anything resembling 4G on these tests. If AT&T was hoping to impress with its speeds on the HSPA+ network, it still has a lot of work to do.

    Or this MyTouch 4G review:

    > The bad news? As far as we can tell, we’ve rarely strayed from HSDPA. Basically, T-Mobile has a very good “3.5G” network in NY, which is either vastly underutilized or just plain good. […] Still, when buying a “4G” phone you have to be aware to what extent a “4G” network exists to support it, and T-Mobile has a ways to go.

    So: what am I missing?

    > Two: Recall the long analytical comment comparing Uis a couple days back?

    Actually I had missed that exchange – thanks for calling my attention to it! You’re talking about this: http://esr.ibiblio.org/?p=2941#comment-296694

    The “claustrophobic” complaint about the icon grid is interesting in that I had the exact opposite complaint – I chafe at there being too much white space between icons. I’d like the option to fit more stuff on the screen by using a tighter grid! Or to switch to a list rather than grid view. I ultimately solved most of the navigation issues by using the “folders” feature heavily.

    Apple does have a history of removing extraneous options from products to provide a Zen minimalist aesthetic which optimizes for ease of first use rather than meeting the needs of power users. I’ll certainly grant there’s room for improvement in widgets and handling of notifications. What I’d really like to see is the ability to simply swap the launcher with a third-party alternative one. It does grate a bit that we can’t do that; coming from the Palm world (and Newton before it) all these little complaints about not having enough options seem to fairly scream “third party opportunity”!

    Still, I’m not sure it’s fair to compare the flexibility of a stock iPhone to that of a rooted Android. The sort of people who immediately root their Android and install alternative environments presumably would do the same for their iPhone and get similar flexibility. I’d rather see a comparison of stock-versus-stock, or rooted-versus-rooted.

    > bitching about his iPhone […] ah, that’s it, he was complaining that a lot of the apps are things he doesn’t want that aren’t removable.

    Yeah, I have a folder called “useless” on my last screen where I keep those.

  81. > Linux Developers *still* have the command line as a last resort–this is why Apple (at first) wasn’t going to ship a terminal app with OSX and in the early versions (IIRC) you had to dig a bit to get to it. This was a good idea in that it forced developers to do things “purely” in the GUI.

    I have to disagree here. Not on the timeline, but on the orientation. I am back on a Mac *because* of the command line. I know many developers who have said the same thing. I was on a string of first FreeBSD, then, Debian machines for ~11 years because that was the best thing around for what I was doing. I’m on a Mac now, because I get everything that Debian gave me, plus a GUI that isn’t insulting (Win) or broken (anything in the open source world), plus interop with corporate clients, plus graphics software that supports my camera habit.

    For me, being an anything-but-Microsoft guy, the war is mostly over. I don’t see Apple as the Next Big Threat – they’re kicking ass at the moment, but it will settle back down into their traditional niche in time. For a unix dev guy, they’re simply making the best box out there at the moment. I don’t care that it is proprietary, but then, I also worked on Sun boxes a lot back in the day. When I’m pushing 47 (instead of looking at 37), I can totally see me typing on a Lenovo device running something Linuxy again.

    I do believe the stable state of the computing world is open source. I don’t believe that means it is always the right solution, or that I, as a strong proponent of OS, need to be dogmatic. Oh, and command line as last resort? That’s where some of us still spend the vast majority of our day. I write the vast bulk of the iOS apps I maintain in vim.

  82. @Glen Raphael:

    > Still, I’m not sure it’s fair to compare the flexibility of a stock iPhone to that of a rooted Android. The sort of people who immediately root their Android and install alternative environments presumably would do the same for their iPhone and get similar flexibility.

    That may or may not be true. I’d probably root an Android phone, but not an Apple phone.

    > I’d rather see a comparison of stock-versus-stock, or rooted-versus-rooted.

    That ignores the dynamics. Within two years, most Apple phones will still be stock, but the state of most Android phones will depend on whether or not root actually buys the average user anything useful.

  83. @Glen:
    > So far, most of the reviews I’ve seen of “4G” phones have emphasized that 4G doesn’t yet seem to be a killer
    > feature for those phones.

    I think that falls pretty heavily on the 4G implementation. At least here in Austin, Sprint’s WiMAX is zippy fast. But, Austin is also one of Sprint’s large markets for Clear wireless internet, too.

    > The “claustrophobic” complaint about the icon grid is interesting in that I had the exact opposite complaint –
    > I chafe at there being too much white space between icons.

    Which is interesting, since both iOS and Android UIs are 4 icons wide as shipped. iOS gets one extra row, using the dock, and keeps the space between icons _uniform_, which may be an aesthetic choice with some amount of psychological effect.

    Full disclosure: On my systems, I avoid window managers with icons, preferring such as Fluxbox, DWM, Awesome and other minimalist setups. I get cranky when there’s too much informationless clutter in front of my eyes.

    > Still, I’m not sure it’s fair to compare the flexibility of a stock iPhone to that of a rooted Android.
    > The sort of people who immediately root their Android and install alternative environments presumably
    > would do the same for their iPhone and get similar flexibility. I’d rather see a comparison of stock-versus-stock,
    > or rooted-versus-rooted.

    I think I mentioned in that post or the prior one that I was still using the Sense UI that came with my Evo. All the points I made were not against rooted features, and I was very careful not to let that taint the discussion. I did end up flashing to CM7 RC1 (Android 2.3.2) afterwards, though.

    Maybe things have gotten better in the rooted iPhone world, but my old (by loose definitions of ‘old’) iTouch, after rooting, didn’t give me more than a few passing options that improved the experience. I don’t recall if there was a full Springboard replacement at the time, but maybe there is now. I don’t care enough to check anymore.

  84. > both iOS and Android UIs are 4-icons wide as shipped.

    Right, which undoubtedly feels less “claustrophobic” on a slightly larger screen. The as-shipped icons aren’t just too far apart for my taste, they’re also too damn big. I think being used to a stylus-driven UI spoils you for a finger-driven one.

    > I think I mentioned in that post or the prior one that I was still using the Sense UI that came with my Evo.

    My mistake. I’ve never used an Evo so I’m not familiar with the terminology; when you said you installed the Sense UI I didn’t realize that was putting back something that was already there before you rooted it. Just out of curiosity, why *was* rooting it the first thing you absolutely had to do prior to using the thing?

    > I don’t recall if there was a full Springboard replacement at the time, but maybe there is now.

    PogoPlank is/was a full replacement, though I’m not sure it works with the latest OS version yet.

  85. > The as-shipped icons aren’t just too far apart for my taste, they’re also too damn big.

    On Android or iOS? Putting my HTC next to my iPhone4, even with the smaller screen the iOS icons are larger than Android’s. But, Android icon size could easily be a result of a specific skin or release, too.

    Just for fun, here is a side-by-side image I just took (I was unable to take screenshots for my long post, since I was at work and did not have my Canon handy): http://www.projectkutani.com/compare_ios_cm7.jpg Both devices are cranked to 100% brightness, which I generally don’t run on either, for battery conservation purposes. Also, just for the sake of truthfulness, that image is scaled down quite a bit from the raw, otherwise you’d be able to see the striking difference in pixel densities.

    > Just out of curiosity, why *was* rooting it the first thing you absolutely had to do prior to using the thing?

    Well, strictly speaking, it wasn’t the FIRST thing I did. But it was within the first two hours. : )

    I had my eye set on getting CyanogenMod available as soon as possible, which properly enables features that Sprint wants to charge a toll for (wifi tethering and such), as well as gets me to a more recent Android version. It turned out I actually kind of liked the HTC Sense skinning, to a point. It looks good in a KDE4/Enlightenment E17 kind of way. I don’t find it a distasteful alternative to stock Android, but it’s missing a few features I like. Anyway, rooting was the first step there.

    Also, there are a handful of particularly useful apps that require having root on my phone. Essentially, my rooting of the phone is me taking back the control that Sprint doesn’t want me to have. Largely a principle thing, equally a practical thing.

    > PogoPlank is/was a full replacement, though I’m not sure it works with the latest OS version yet.

    Hey, that’s not a bad-looking thing. I can’t see myself enjoying the ergonomics, but I’m glad to see something’s out there.

  86. That’s called “high cost of failure”. If you’re thinking about OODA and business strategy at this level, one of your pressing questions is “How do I minimize my own cost of failure?” If you’re trying to out-OODA the other guy, this is often a more important question than “How do I maximize the gains from success?”, because the former feeds through to an increase in your tactical options for future rounds in a way the latter may not.

    And of course the shockingly obvious point is that one of the largest costs of failure for a hardware design is the sunk cost necessary to get a prototype into the hands of test punters. If R&D can confidently abort a project that reeks of failure after 3 or 4 weekly test-run iterations because the response has been overwhelmingly negative, then the cost of failure is pretty damn minimal.

    Hell, if you got decent info about why it blew nuggets that would probably justify a month of R&D costs in and of itself.

    1. >Hell, if you got decent info about why it blew nuggets that would probably justify a month of R&D costs in and of itself.

      Indeed. Especially since R&D for this product category simply doesn’t have to be that expensive. Your minimal product-development team: one industrial designer, one radio engineer to do things like antenna tuning, one telecomms/networking specialist, one software engineer to do Android port/build/test. Add one secretary or admin assistant as support. Give them five cubicles and access to a closet full of parts, a 3-D printer and a PCB-prototyping rig. Even with generous salaries you’re looking at probably less than $1.5M a year to run the whole kit and caboodle, and getting three or four design cycles out of that.

      If you’re HTC or Huawei or Samsung, you can easily afford to run four or five teams this size in parallel.
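
      A rough back-of-envelope check of that $1.5M figure, as a minimal sketch using purely illustrative fully-loaded salary and overhead numbers (my assumptions, not figures from the comment above):

          # Back-of-envelope annual cost of the five-person team described above.
          # All dollar figures are illustrative guesses, not sourced numbers.
          team = {
              "industrial designer": 180_000,
              "radio engineer": 200_000,
              "telecom/networking specialist": 190_000,
              "Android software engineer": 190_000,
              "admin assistant": 90_000,
          }
          salaries = sum(team.values())        # ~$850k
          parts_and_gear = 250_000             # parts closet, 3-D printer, PCB rig, cubicles
          total = salaries + parts_and_gear    # ~$1.1M, comfortably under $1.5M
          print(total, total // 4)             # annual cost, and cost per ~90-day design cycle

      Even doubling those guesses keeps a single team well inside the reach of any mid-sized handset maker.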

  87. >>he was damned near completely wrong about its applications in fighter design

    >No, he wasn’t. Not according to my on-call expert on fighter design (yes, I have one, though he acquired this expertise indirectly). Boyd’s E/M theory was correct and is still the basic conceptual framework of fighter tactics. Boyd’s F-16 and “light” fighters were correct designs under a technological constraint which eased a few years later when power plants with a much higher thrust-to-weight ratio became available, leading to heavier but still highly maneuverable designs like the F22 and F35.

    Actually, he was wrong in terms of development (his physics was right; his understanding of its applications in fighter design was based on limitations which had been outgrown at just about the same time he acquired influence in the design community). The F22 and F35 engines are further developments of the F-15’s F-100 engine, which was where the heavier but still highly maneuverable designs really started, and the F-15 predated the LWF competition which produced the F-16 (and the F-18 indirectly). The F-15 is in fact more maneuverable than the F-16 or F-18 in the vertical domain (albeit slightly less maneuverable in the horizontal) due to its better thrust/weight ratio (the F-15 has approximately 34,900lbs dry thrust with an empty weight of 28,000lbs; the F-16 has 17,155lbs dry with an empty weight of 18,900lbs). Frankly, the F-15, which Boyd thought to be something of an unfortunate compromise, fits E-M theory better than the F-16 does, despite Boyd pushing the ADF/LWF program.
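
    To make that comparison explicit, here is the arithmetic on the figures quoted above (dry thrust divided by empty weight):

        F-15: 34,900 / 28,000 ≈ 1.25
        F-16: 17,155 / 18,900 ≈ 0.91

    Both ratios drop once fuel and stores come aboard, but the gap between them is the point being made here.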

    Tactically there’s no doubt that Boyd was right, but his only real success on the fighter-development side of things was helping kill the fighter variants of the TFX (F-111) program. Note that both of the programs he had any real influence on (the F-15, to a limited extent, and the F-16) would go on to see their most successful variants be the sort of multi-role aircraft that Boyd opposed, and the better pure fighter of the two was the one he thought was overweight.

  88. Adam:

    At the time that Boyd was involved in fighter development, the F-15A had a worse thrust to weight ratio than the prototypes of the F-16, and the F-14 suffered through three iterations of under-performing engines because the promised engines couldn’t meet the reliability demands or the contracted prices.

    The F-16 was not ‘mildly’ advantaged in a turning battle with an F-15. It was pretty overwhelmingly advantaged until the F-15 started getting revised engines. While what you state is true about the current F-15 and F-16, it was not true prior to the mid-1990s.

    The F-16 became a multi-role fighter largely because its mission became obsolete around the rise of Gorbachev: when you no longer need things to kill Soviet fighter-bombers, those planes need re-tasking, or a bunch of fighter-pilot jobs would, *gasp*, be lost. Given the Air Force’s tendency to see anything but fighter-pilot jobs as ‘well, you’re flying a desk at 50,000 feet’ and as career-enders, anything that expands the mission of a jet that’s fun and prestigious to fly is going to get major institutional pull.

    Given the information Boyd had at the time he was influencing the development of US fighters, and given the hangar-queen issues of the F-15 for its first two revision cycles, he was right to push for high maneuverability along with low maintenance demands and high reliability. Fighter-plane revisions do not follow 90-day development cycles, and he’d rather fight with things that flew than be stuck with empty promises that they’d get fixed sometime.

    (Remember that the B-1 bomber was the perfect procurement plane – it had parts made in all 435 Congressional districts….)

    US fighter craft development exists in the dynamic tension between “Have the best stuff available, so good nobody wants to even have an Air Force to be shot down…” and “We have to keep the birds in the sky long enough to do their mission.” This is further complicated by “If it doesn’t put jobs in my district, I’m calling it waste.”

    For the most part, Boyd opposed the first polarity…in large part because he’d seen what its focus did in Vietnam. He was firmly focused on the second polarity. He strongly felt that a plane in the hangar waiting for a part was a liability, not an asset. The Russian motto on this is “Better is the enemy of good enough.”

    He was utterly and completely oblivious to the third polarity, which is why his influence waned RAPIDLY in the Carter Administration, and then fell off a cliff when Reagan took office. The fact that Boyd was an arrogant, prickly son of a bitch who tended to frame arguments with “I’m a genius and I’m right…” and then dismiss concerns from subordinates and peers didn’t help his cause.

    In a nutshell – technology that Boyd wanted to deprioritize rendered a lot of his concerns moot, and his influence on fighter designs outside of the F-16 evaporated FAST. The F-16 became a multi-role fighter to preserve flight-rated jobs in the Air Force when its primary mission (stopping the WarPac bombers over East Germany) went away.

    So, in part, you’re right and in part you’re wrong, and in many cases Boyd’s contribution to fighter procurement is overstated.

  89. Again, Europeans and first-world Asians get 4G coverage at speeds of 20-100Mb/s as a matter of course. 4G not being a huge win is a largely American problem (though I hear Australian and Brazilian coverage sucks on ice).

    That said, currently I’m cursing out Sprint for going with WiSUX when everybody else has decided upon LTE. Way to confine yourself to being a marginal network with only spotty coverage in densely populated urban centers, guys. It’s even more fucktarded than remaining a CDMA carrier, and both are akin to sticking with DECnet or IPX in 2011 when TCP/IP effectively rules.

  90. In Sprint’s defense, haven’t they rolled out more WiMAX cities than Verizon has LTE? 50 vs. 30 or somesuch? I think Verizon’s December launch was mentioned as covering 30-odd cities, but in the short term, when *everyone* has to roll out new infrastructure, Sprint could be doing worse.

    The catch there is that, from what I understand, WiMAX in practical day-to-day use merely brings Sprint up to the average speed of UMTS 3G, except with worse power-consumption problems.

    So, their rollout is better…but it’s an inferior technology on power use. If WiMAX had better power consumption than LTE or UMTS, Sprint would be a golden boy.

  91. Windows has a command line as the last resort too, but in the case of Windows it really is the last resort. In the case of Linux, it gets resorted to much more quickly.

    And while maybe things have changed, my memory of Unix is that it didn’t just have a command line – it had a command line that was deeply user-hostile even compared to other command lines.

  92. > And while maybe things have changed, my memory of Unix is that it didn’t just have a command line – it had a command line that was deeply user-hostile even compared to other command lines.

    Aw, c’mon — how long did it take Windows to add tab-completion? Last time I seriously played with Windows, it didn’t even have mouse-only cut and paste — you had to select some sort of edit bullshit from the menu on the DOS box.

    I grew up in DOS and then Windows. I used to do amazing things with DOS batch files, and I only started using Linux as an old man, and I have to disagree completely.

  93. @deep lurker
    “–it had a command line that was deeply user-hostile even compared to other command lines.”

    Yes, in the same way that English is “user-hostile”. Powerful tools tend to require insight and contemplation.

    I do not find bash more hostile than, e.g., Python.

  94. “When I use Ubuntu, I am usually using the command line, or editing configuration files. Do you think your girlfriend is going to edit configuration files?

    “Just look over the shoulder of anyone running a current Linux distro. They are all male, and usually in the command line.”

    So? I’ve had a number of issues with Windows boxes that required either editing the registry (which I find much scarier than the command line), editing config or bat files, or other low-level changes. Windows is NOT immune to this.

    When I was trying to set up my dual-boot Linux/Windows 95 system back around 1999, I was able to get Linux installed and working quickly, but driver problems with Windows 95 drove me absolutely crazy.

    In both the Linux and Windows world, non-geeks rely on geeks they know (either hired, personal friends, or corporate IT department) to resolve these issues for them.

  95. “And just a note on Boyd, but he’s probably the most over-rated strategic thinker of the 20th century because while he was right about the OODA concept and its applicability at theater and even tactical levels, he was damned near completely wrong about its applications in fighter design, the area which he originally developed the theory for (Boyd’s emphasis on small, maneuverable fighters like the F-5 or MiG-21 was correct versus the relatively limited ’50s-era heavy fighters but the most successful fighters of the ’70s and later would be the medium & heavy designs he opposed, like the F-15 and F-14 along with the later Su-27 family and the F-22).”

    I believe the OODA concept is valid in the regime where we’re discussing it (technology evolution), but I could make a pretty good case that aerial combat is not a good example of one. Read the book “Fighter Tactics and Strategy” (also titled “The Aces Talk” in some editions, but it’s the same book).

    Highly-skilled fighter pilots in pretty much every era consistently say that speed is more important than maneuverability. The classic idea of the dogfight, with each combatant trying to turn inside the other one and shoot, is largely mythical. The high-scoring aces typically attacked at high speed, took out an opponent, and kept going without sticking around to slug it out with his squadron mates.

    So most successful aerial combat is one-pass, not iterative.

    1. >Ken: and the Air Force is hostile to airplanes designed to assist ground targets, like the A-10 Warthog

      To be fair, this is a rather inevitable consequence of the carve-up that created the post-1947 Air Force out of the USAAF. The Army kept the rotary-wing aircraft and the close-air-support mission, the Air Force got fixed-wing and the air-superiority mission. The A-10 crossed a political line that was locked into place by that institutional design.

  96. One of the things that studies on fighter pilots run into is this:

    We really don’t know what makes for an ace-grade pilot. We know they exist, we know they turn up, and when they turn up, they tend to be dominant in the air combat regime they work within. We know that they have to have good vision, but better than 20/20 isn’t a predictor of being an ace. We know they have to have good kinesthetic abilities, but very few are ‘graceful’ in day to day life. We know strong 3-D visualization abilities are important, but that’s more a case of ‘people who fail in reaching this metric don’t become aces’, not “people who meet this metric become aces”.

    Right now, the thing that the USAF does is it tries to give every pilot a lot of fight hours in training missions, in the hopes that some of that top 2% get shaken out of the mix. The USAF is also pretty careful about identifying those pilots when they spot them, and trying like HELL to keep them in the bird that they first attracted attention in.

    Sometimes, this results in pilots not getting their tickets punched for promotion, and then they get up-and-outed.

  97. Something Eric runs into, with martial arts, is what Iain Banks calls “out of context” problems.

    Eric and I are a good example of dissimilar combat styles.

    Eric has poor reflex speed and at best average precision in body control. Eric can be outrun. Eric has a great mistrust of his sense of balance. Eric relies on being strong enough and durable enough to get close and mangle you…and Eric can get winded and overheat very quickly.

    I know these things because they’re aspects that I ruthlessly exploit when we fence/fight.

    I am blind in one eye – moving to my left tends to hinder my defenses. I have one-in-a-million 3-D visualization abilities, enough so that I can slip into a very non-verbal mode of thought when I’m sparring and see an overlay of vector arrows where someone’s joints are, and tell where they’re going to go. (Sal Sanfratello thinks that he and I both slip into a ‘lizard level’ of the brain and that I’m crediting my 3-D spatial abilities for something that’s more common.)

    I have slightly faster than average reflexes, and a lot of training in ‘no wasted motion’ as a combat style, which has the added benefit of making my parries and counterstrikes seem faster. I can run faster than Eric, and have a longer stride. I also have the genetics (but not the masochism) of a decent endurance athlete. I will never be hellaciously strong, but I can keep sparring at about 80% exertion levels for hours. (At PenguiCon, I was giving fencing demos and lessons for 7 hours straight with breaks for the bathroom and water).

    When Eric and I match up open hand (it’s happened rarely), the outcome is ludicrously one-sided: I either misdirect Eric at long range and push off of him – and then he gets me on his second pass, since I don’t really have the option of running – or I fail in that misdirect and he gets me on the first pass. The only time I’ve ever thrown Eric on the ground, he was trying something clever that I’d seen before; I think he was bored and was trying to make it more challenging for himself.

    When Eric and I match up with blades, the outcome is also one sided, but the other way. I can usually take out his sword arm or his trailing knee before he’s in range to strike me. I can see the evidence of his thought process on his face, and I can usually see his direction of travel in time to do something about it.

    Where this touches on out-of-context problems is this: Eric’s sword context doesn’t deal well with some things – thrusts and lateral movement, tricks to conceal your reach. Mine doesn’t deal well with people closer than about 3-5 feet away.

    The way you win OODA fights is to get outside your opponent’s context mapping.

    1. >Eric can get winded and overheat very quickly.

      It’s actually almost all the latter. I have pretty good wind, but I am more than usually vulnerable to overheating (and the difference between Eric out of breath and Eric overheated is hard to spot unless you’re Eric). It’s one of the penalties attached to bulk muscle, and one reason I wouldn’t dance around a lot even if I were more agile.

      Ken’s account is otherwise largely correct, and his point about getting outside your opponent’s normal functional envelope is sound.

    2. >Mine doesn’t deal well with people closer than about 3-5 feet away.

      I’ll also note that this is a very common weakness even in well-trained martial artists. Dojo culture encourages some predictable kinds of stylization of combat; one of these is a tendency for engagements to be repeated rounds of run-‘n’-gun at a 4-6 foot engagement range. This means that one of the most effective ways to disrupt an opponent’s OODA loop is often to close to inside the range they’re normally comfortable working at. Puts ’em off their game and changes the fight tempo into a region that’s often troublesome for them.

      I’ve won a shitload of fights this way, both blade and empty hand. Before you try exploiting this, however, be aware of what the prerequisites are. You need to be either tough and hard to rattle or preternaturally fast – otherwise you’re likely to take a hit on the way in that will throw you off your game even if it’s not scored as a kill. As Ken notes, I rely on being tough and hard to rattle. I’ve fought people who rely on preternatural speed, and they can easily beat me – but they have zero slack for mistakes.

  98. One other advantage that I have – and it’s an ephemeral one against most fighters – is that I’ve seen and fought in a fairly wide range of sword styles, from SCA power-bat to ARMA to kenjutsu. Am I a master at any of them? No.

    I’m likelier to have a context for whatever sword style I’m facing than most of the people I’ve met. At some point, athleticism and speed and strength come into play, and someone who’s faster than I am (rare) or stronger (more common) or has better endurance (very rare) will find an exploit.

    One of the things I saw at PenguiCon was that I was better at the initial matchup than most of the Aegis fighters were. None of Sal’s students had anything resembling appropriate responses until they started doing heavy beats and trying to close. And then they got kind of perplexed when I stepped outside the ‘fencers only fight in a straight line’ conditioning they’d been taught. Eventually, we’d develop mutual contexts and things would balance out.

    Sal was having a blast because he got to face off against something he hadn’t seen in years and was having to figure it out as he went along. (I don’t think Sal had ever tried schlager fencing in the round, for example.)

    Capitalizing on your initial advantages needs to be done quickly before other players see your tricks. This is something Apple tried to do – and is still trying to do. It may well get them an edge in the tablet market, where Android tablets are offering a worse experience and UI, lower performance, and worse battery life…for $300 more.

    Apple did a very ruthless job of grabbing much of the promised production of 10″ tablet touchscreens through 2012, and there are hints that they’re trying to extend that vertical-integration lead through 2014.

  99. Jeff & others,

    “Never to my 18-year-old geeky self did it occur that I might see a cute non-geek female carrying a Linux computer as a personal accessory.”

    Yes, and this is significant. Linux has difficulties on the desktop because 1) people don’t want to do something as risky-sounding as swapping out a whole OS (it sounds like putting a Porsche engine into a BMW; you just don’t do it if you aren’t a car geek), and 2) they don’t want to get used to different ways of using the computer. People have already invested significant time in learning Windows & MS Office and don’t want to lose that. (Yes, I know users who even moan about the ribbon in the new MS Office. It is easier and better – but it is new and different and requires a bit of relearning, and that alone seems to be a good enough reason to moan.)

    But take a new device that comes with Linux preinstalled. Because it is a new kind of device with a new kind of input (the touchscreen), everybody is okay with learning a bit, because they would have to do that anyway; even a Windows mobile does not (I think, at least) have a Start menu in the bottom left corner. So as long as it is user-friendly enough, people are okay with a bit of learning. Third and perhaps most importantly, mobile devices have forced Linux programmers out of their comfort zone of treating usability as a nice optional extra and seeing nothing wrong with support questions in the forums that can only be answered by “open a console and type this”. Mobile devices have changed all that. There is no console – or at least you are not supposed to use it unless you are a programmer. Everything that does not have a nice and simple GUI pretty much doesn’t exist, by definition, on the mobile.

    I don’t know whether Linus or ESR have ever suspected that the way to “world domination” leads through devices that come with a kind of Linux preinstalled and don’t have a keyboard, thus being ungeek by definition, but it seems to be working.

    A question of personal interest: I am beginning a new job around the end of March. If they offer me a Blackberry, is it worth fighting tooth and nail for something running Android, or are current Blackberries good enough?

    1. >I don’t know whether Linus or ESR have ever suspected that the way to “world domination” leads through devices that come with a kind of Linux preinstalled and don’t have a keyboard, thus being ungeek by definition, but it seems to be working.

      Linus and I anticipated the “preinstalled” part, but not the “keyboardless” part. I won’t say the latter surprised me much, though, and I’m guessing Linus wasn’t astonished either.

  100. ESR,

    thanks for the suggestion.

    Jamie,

    thanks for taking the time to write a long answer. “guess the one thing I would throw out as a decision fulcrum is, can you root it? I am a firm believer in the notion that one should be in control of one’s own devices.” _Theoretically_ I believe in that too; practically, I never got around to seeing whether I can root my PS3, because it just does the three jobs I want it to do well: games, Blu-Ray movies, and music videos from YouTube on the big screen, awesome for a little party at home with friends. Sometimes it is difficult to reconcile a geeky interest with just being a content customer. I remember installing Linux on an HP palmtop around 2005 and ssh-ing into it with a friend over wi-fi, from an ashtray-sized industrial computer which booted Linux from a USB stick, but other than the short-term fun of “we do it because we can”, there wasn’t actually much use in the whole thing. And with all this hacking around we managed to irreparably brick it a few weeks later.

  101. >I have one-in-a-million 3-D visualization abilities, enough so that I can slip into a very non-verbal mode of thought when I’m sparring and see an overlay of vector arrows where someone’s joints are, and tell where they’re going to go.

    Ladies and gentlemen, we have located The One. You will shortly be awaking in a bath of pink fluid with an IV drip.

    1. >Ladies and gentlemen, we have located The One. You will shortly be awaking in a bath of pink fluid with an IV drip.

      He’s not kidding. I test in the topmost percentile for 3D visualization and spatial kinematics, but Ken is so much more capable than me that it’s just silly.

  102. I think what Shenpen said hit the crux of the reason why Linux never significantly made it to the desktop, and why it would never have made it past a geek’s desktop (permanently) even if it had all the programs and features that Windows can provide. Linux started its consumer desktop focus extremely late, and spent all that time trying to match up with Windows, but was never really there until recently. I am too young to know what happened regarding consumer desktops in the 90s, but from what I have read, Linux was never there, and people had become comfortable with Windows.

    Now, another reason for the failure of Linux is attributed to fragmentation (euphemistically put, choices), due to its fast OODA loop. The same thing may happen to Android as different vendors may incorporate different changes at different speeds. Who knows how much this is going to deter developers? Forget developers; even consumers will just prefer to stick with an iPhone and upgrade it once a year or every two years, and thus avoid the stress and pressure of making a choice.

    An old article that I remembered reading related to this: http://kaedrin.com/weblog/archive/001157.html. Maybe it’s just me, but I think that article is pretty insightful, and quite valid for the smartphone wars.

  103. > Third and perhaps most importantly, mobile devices have forced Linux programmers out of their comfort zone of treating usability as a nice optional extra and seeing nothing wrong with support questions in the forums that can only be answered by “open a console and type this”.

    More like, a bunch of people who had usability as a foremost concern (Android Inc. was founded by an ex-Apple guy) saw Linux and thought “We could build on this.”

    In before the Deep Lurker rant about how due to being based on Unix Android is deeply and irreparably “user hostile”.

  104. > The Army kept the rotary-wing aircraft and the close-air-support mission, the Air Force got fixed-wing and the air-superiority mission.

    Eric, I believe that last bit is incorrect. The Air Force also kept all fixed-wing ground support, and refused to let it go, even though they clearly did not want to actually do the job. Some very senior Air Force general or other was famous for insisting that they “never give up a mission.” Thus their repeated stupid attempts to cancel the A-10 and replace it with more F-16s, even though the A-10 worked great. It’d have made a hell of a lot more sense to just give the A-10 – and its budget – to the Marines, but that would require giving up a mission. The latter has apparently been politically untenable to Air Force leadership for quite a long while, although I don’t pretend to really understand why.

  105. >Mine doesn’t deal well with people closer than about 3-5 feet away.
    >I’ll also note that this is a very common weakness even in well-trained martial artists.

    Btw, Peyton Quinn’s book talked about that extensively. Getting very close (chest to chest) was his preferred strategy, despite his being more of a striker than a grappler.

  106. @Kvv
    “Linux started its consumer desktop focus extremely late, and spent all that time trying to match up with Windows, but was never really there until recently.”

    First of all, Linux started only in the early nineties, when MS was already 10 years into the PC business. Only at the end of the 1990s did a usable desktop environment appear. From then on, the Linux desktop was a year or so behind the Windows desktop. That is to say, people were flaunting Linux for features they had applauded MS Windows for only one or two years before. During the naughties, the gap closed fast and the criticism switched to Linux not being Windows, even in areas where it is way beyond MS Windows (installation of new software).

    Second, it is easy to capture 90+% of a market in the exponential growth phase, like MSDOS/Windows 3.1, TCP/IP, MP3, and now Android did, but extremely difficult and slow to unseat an incumbent with a monopoly. The only viable route seems to be to enable something must-have that the incumbent does not match, or to tap into a completely new market with new users. So after MS captured the market, neither MacOS, BeOS, nor Linux could unseat it anymore. And I have yet to meet a person who claims Windows UI is better than MacOS and BeOS (undoubtedly, this comment will have them crawl out of the woodwork).

    This is also the reason Android will simply take over the complete mobile phone and tablet market and then run over the netbook market. Smoking out MS from the Desktop PC market will take a decade or more.

    @Kvv
    “Now, another reason for the failure of Linux is attributed to fragmentation (euphemistically put, choices), due to its fast OODA loop.”

    No, this gets the causation backwards. Only one Linux version would have taken the desktop, if any had. The first Linux distribution to conquer the desktop would have taken over all the others, just as MS Windows took over all the other OSes, and that version would have smothered all other variants. You can see it in action with Ubuntu. Out of all the distributions and business plans, Ubuntu hit the spot and took off. Ubuntu was so extremely successful that it alone accounted for the majority of Linux desktop installs. Fragmentation is not the cause, but the effect, of Linux not becoming the dominant desktop OS.

    @Kvv
    “The same thing may happen to Android as different vendors may incorporate different changes at different speeds.”

    This is FUD. You can simply upgrade your Android release if you are not satisfied. Or you can install CyanogenMod. The differences are smaller than those between W98, NT, Win2000, XP SP1/2/3, W2003, Vista, W7 SP1/2/… (talk about fragmentation).

    Third, the choice argument is very 1984: Freedom is slavery, Choice is bad, …

    Choice increases transaction costs. That is why the markets will weed out costly choices. Which means important choices will be rooted out (package management, UI) and others will be strengthened (skins, colors). Like in cars, you are left with essentially three types of fuel (gasoline, diesel, LPG) and two types of gearbox (automatic and manual H-pattern stick shift), but you have almost unlimited choices in model and color. As I wrote above, the choice still available in Linux is not the cause, but the effect, of it not taking over the markets. It is currently relegated to markets for which the transaction costs are minimal. If it takes off, the number of choices will be pruned down to one per market for the important characteristics.

  107. > Android Inc. was founded by an ex-Apple guy

    I looked Rubin up just now, and a couple of things fell into place. Like why General Magic’s and Danger’s products both had that Mac System 6 feeling about them, and why Android is so clunky. The guy’s time at Apple far predates the massive overhaul of the Mac’s look and feel (and, more importantly, the development environment) that came with Mac OS X.

    I look forward to seeing what the current generation of Apple engineers come up with when they cash out some of their shares and start their own businesses.

  108. > I have yet to meet a person who claims Windows UI is better than MacOS and BeOS

    Heh. I’ve met a few. They tend to be people whose first exposure to a computer was some version of Windows.

  109. Excuse me if I’m not omniscient about such matters, but is it the case that *nix conquered the server on deadline in 2008, and Android is the killer client app? The OS became an app in a new abstraction named “cloud computing”, and the network became the OS?

  110. > Excuse me if I’m not omniscient about such matters, but is it the case that *nix conquered the server on deadline in 2008, and Android is the killer client app? The OS became an app in a new abstraction named “cloud computing”, and the network became the OS?

    No. *nix conquered the server long, long ago. Mission critical enterprise apps such as ERP and CRM and PLM are and will continue to be run on Unix, and, more recently Linux. These Unix and Linux servers are, increasingly, being virtualized and run on VMware, KVM, or other more traditional lightweight quasi-virtualization technologies such as Solaris zones or FreeBSD jails.

    As for Android being the killer client app? In a sense, it certainly is. It’s putting Linux on the majority of new smartphones, now isn’t it?

  111. >>The Army kept the rotary-wing aircraft and the close-air-support mission, the Air Force got fixed-wing and the air-superiority mission.

    >Eric, I believe that last bit is incorrect. The Air Force also kept all fixed-wing ground support, and refused to let it go, even though they clearly did not want to actually do the job. Some very senior Air Force general or other was famous for insisting that they “never give up a mission.” Thus their repeated stupid attempts to cancel the A-10 and replace it with more F-16s, even though the A-10 worked great. It’d have made a hell of a lot more sense to just give the A-10 – and its budget – to the Marines, but that would require giving up a mission. The latter has apparently been politically untenable to Air Force leadership for quite a long while, although I don’t pretend to really understand why.

    Andrew is correct about the AF insisting on keeping *all* fixed-wing aircraft, and their missions. Even ones they hated. It may not make sense to someone who cares about results, but it’s perfectly natural to a bureaucrat (and the AF is nothing if not a nest of careerists). Every mission represents a justification for funding. So giving up a mission means giving up funding, which is NOT GONNA HAPPEN. Once the AF *has* the funding, they know better than any outsiders how to allocate it – they know which missions are actually *important* (yes, that’s sarcasm) and direct funding to them. Which means funding always gets directed toward providing more slots for whichever branch of the service the guys making the decisions came up through (used to be SAC weenies, then it was the Fighter Mafia, etc., etc.).

  112. “Mission critical enterprise apps such as ERP and CRM and PLM are and will continue to be run on Unix, and, more recently Linux.”

    I have been an ERP (and a bit of CRM) consultant since 2002 and I have never seen one. At very large companies, the kind that employ thousands of people, there is *sometimes* some legacy character-based system that has run on pre-Linux Unix since about 1980, with the OS recently swapped out from under it for Red Hat Enterprise Linux. But these apps are going away, because they look horribly outdated. Pretty much every new installation these days is 100% graphical, with charts and everything, and is either SAP, Navision or Great Plains; SAP is fairly platform-independent and the rest are Windows-only. In CRM, with the exception of online SaaS (Salesforce.com), I have never seen a popular one that wasn’t strongly Windows-based (first Pivotal was the popular one, then MS CRM), because Outlook integration is a rather important aspect of them. Salespeople I know want to use their CRM entirely from Outlook and never log into any other kind of software.

  113. >Ken’s account is otherwise largely correct, and his point about getting outside your opponent’s normal functional envelope is sound.

    It really does generalize and apply to everything. I’m no martial artist but I have studied a little, and I understand the importance of building muscle memory and learning/practicing counters so that they can be applied automatically/instinctively. Similar thing in sports: you practice until actions are “automatic”. So the oODa parts of the loop (the inner Orient and Decide steps) are minimized – information doesn’t have to travel all the way up your nervous-system stack to the top layer (the conscious mind) and then back down again. Because that’s slow.

    Same reason people in the field of firearms self-defense always say you will default to the level of your training. In a split-second life-or-death situation you won’t have time to think through what to do, so the best you will manage is to implement whatever automated responses you have pre-programmed (which are your training).

    The OODA loop concept also explains many things about the military art that used to be just ‘rules of thumb’ with no theory behind them. Ex: forces with decentralized command structures that devolve decision-making authority to sufficiently well-trained local commanders using their initiative tend to be much more effective than forces with very centralized command (do a search for ‘Auftragstaktik’). Also why commanders who lead from the front (think images of Rommel or Guderian riding a tank or command vehicle) tend to be so much more effective. Faster flow of information, faster OODA loop.

    Also note that in an OODA-type battle, making faster decisions really is better than making perfect decisions. One of the aims of blitzkrieg tactics is simply to generate data, as in sightings – to overwhelm the enemy decision-makers with reports of ‘tanks seen here’, ‘armored column moving here’, ‘scouts probing here’, etc. Generate enough data, and an opponent trying to make a perfect decision won’t be able to make any decision at all, or will make a decision so obsolete (based on where the forces *were* when those reports were generated, except that the forces are *moving*) as to be worse than useless. Enough hesitation, or enough failure loops, with the incoming data looking more and more dire… and the enemy command simply collapses (witness the breakdowns suffered by the various French high commanders in 1940).

  114. > In before the Deep Lurker rant about how due to being based on Unix Android is deeply and irreparably “user hostile”.

    No, my rant is that Android is user friendly in the only way that a Unix version can ever be – by providing a limited, low-bandwidth UI in a situation where doing so is not all that bad a thing.

  115. > Android is user friendly in the only way that a Unix version can ever be – by providing a limited, low-bandwidth UI in a situation where doing so is not all that bad a thing.

    This may point to a fundamental difference between typical Linux geeks and the rest of the population. There is no question that, in information theory, source redundancy can help to get the message through in noisy environments.

    But quite often, the amount of information to be communicated is tiny compared to the available bandwidth. This naturally leads to the question of what to do with the remainder of the bandwidth. Many people automatically assume the best thing to do is to fill the bandwidth with redundant information — charts, graphs, how-to videos, etc.

    I hate this sort of thing. The original IBM PC manual had pages and pages of illustrations of DIP switch settings, showing you exactly how to set all possible combinations for all possible installed RAM combinations, when a simple rule-based chart or diagram would have sufficed. In contrast, the original CP/M manual was only 15 or so letter-sized pages that actually told you everything you needed to know to use CP/M and to program for it.

    In general, I hate video as a learning medium. There is an incredible amount of bandwidth in the channel, but it’s completely wasted in redundancy to make sure the video goes slow enough to work for a huge audience. It’s like the IBM PC manual — the information density is quite low.

    The CP/M manual was the opposite. The information density was quite high — every word meant something. Redundancy was provided as needed by the reader simply by re-reading passages until they made sense in context. Which was no big deal because the whole thing was so short.

    I’ll grant you that the IBM PC manual would remain more useful in the presence of noise (coffee stains, ripped pages, etc.) than the CP/M manual, simply because of the redundancy, but personally, I would prefer retransmission (get another manual) to deal with this issue, rather than making me, the poor receiver, decode a bunch of extra information.
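
    To put rough numbers on that tradeoff (my gloss, not anything claimed above): Shannon’s classic estimate puts the entropy of English prose at roughly 1 bit per character, against log2(27) ≈ 4.75 bits for a raw 26-letter-plus-space alphabet, so ordinary running text is already something like 75-80% redundant before anyone adds charts or videos. In those terms, “get another manual” is the retransmission strategy, while the IBM-style manual bakes the extra error correction into the source itself.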

    So I think if you’re the kind of person who can deeply focus on something, then when it comes to what you’re deeply focusing on, you might find brevity more important than redundancy. If you’re learning something new, redundancy is sometimes helpful, so that the source material will actually address one of your primary learning modes and also put the new material into context of old material you already know. But even there, the redundancy is best if it’s controlled by the user, rather than being a low-density linear video stream, so the student can spend more time on the parts he’s having trouble with.

    If I’m being distracted by other things, then redundancy might be needed to get the message across. But, then again, if I’m being distracted by other things, it’s usually because I’m not actually interested in the message (a television commercial, for example).

    In software, redundancy can be good, if a program supports ways that typical users might approach it to do a task. But redundancy is also bad for discoverability. How many times have I tried every single menu item and keystroke on a program to verify that it wouldn’t actually do what I wanted to do? How much extra time did that take simply because every thing that it did do was accessible through four separate mechanisms?

    I think this is why a lot of die-hard command line users don’t like GUIs — the bandwidth of the GUI interaction is much higher, but the actual information density can be, and quite often is, a lot lower.

    Why is google’s front page so popular and useful? It’s just a CLI.

  116. > I have been an ERP (and a bit of CRM) consultant since 2002 and I have never seen one.

    No. The clients are Windows, but the database and the server software are all on Unix. In 15 years in IT, I have never, ever, ever seen SAP or BaaN or whatever ERP software (except some junk like Great Plains) where the backend was running on Windows.

  117. @Patrick “In software, redundancy can be good, if a program supports ways that typical users might approach it to do a task. But redundancy is also bad for discoverability. How many times have I tried every single menu item and keystroke on a program to verify that it wouldn’t actually do what I wanted to do? How much extra time did that take simply because every thing that it did do was accessible through four separate mechanisms?”

    Ahem, Perl. Or any language that includes both while and for statements. Extra bonus for stuff like list comprehensions.

    So my contention is that there are ways and ways to present an interface, even showing multiple options. Personally, I find there’s something to be said for a modicum of fluff around the crunch, and also for repeating the same concept in different ways as a learning aid. What can I do, I’m not a genius. I’d say that having zero “useful” redundancy is as limiting for the user as having lots of it: if you don’t understand a concept, and it isn’t repeated in some other way, you lose, just the same as if the concept is buried in the middle of 40 pages or at the two-thirds mark of a video (which, BTW, I can’t stand either).

    “Why is google’s front page so popular and useful? It’s just a CLI.”
    That is doubly wrong. Wrong because what makes google popular and useful are the results (indeed, I mostly use my browser’s search input, not google), and wrong because google’s front page is not a CLI but a GUI: it contains graphical information and options (Midnight Commander is not a CLI either). Three times wrong because the goodness of google is its simplicity. Its GUI is as simple as can be (actually, it has been even simpler; I personally like to have the links to Mail, Reader, etc. at the top). And I’ll redirect you now to http://www.manpagez.com/man/1/ls/, section BUGS.

  118. > Ahem, Perl.

    I hate perl.

    > Or any language that includes both while and for statements. Extra bonus for stuff like list comprehensions.

    Did you miss the part where I said “In software, redundancy can be good, if a program supports ways that typical users might approach it to do a task”? In any case, while and for statements and list comprehensions do have some overlap in functionality but do not (in and of themselves) provide identical functionality. You might as well complain about a carpenter using both dimensional lumber and plywood.

    > I’d say that having 0 “useful” redundancy is as limiting as having lots of it for the user

    I disagree, especially today, when if you can’t understand something as written, you can google/wikipedia your way to facility with the underlying basic concepts in a heartbeat. (I do agree, as originally stated, that for pedagogical purposes, some redundancy can be useful.)

    > Wrong because what makes google popular and useful are the results

    Which are (now, only mostly, but historically all) text. Like you get from typing at a command line.

    > Indeed, I mostly use my browser’s search input, not google

    Another CLI

    > google’s front page is not a CLI, but a GUI

    And yet, 99% of people’s interactions with it involve the text box. The other 1% are navigational clicks to get to a different textbox.

    > Three times wrong because the goodness of google is its simplicity.

    That’s a complete non-sequitur. My whole point is that CLIs are often simpler than point-and-click based systems, and google shares this simplicity with a CLI — you type into it and press enter and it magically does what you want. To argue that google can’t meet your personal definition of a CLI because google’s interface is too simple just means that your personal definition of a CLI includes some sort of necessary complexity beyond typing. That’s fine for you, but my definition of a CLI is more basic than that.

    In any case, if you seriously disagree with my assessment that google is popular because it managed to take the web medium and slap up a textbox and make it work more like a CLI than a GUI, that may point to a fundamental genius in the way that google approached it — it works fine for both people who like GUIs and people who like CLIs.

    >(redirect to ls manpage BUGS section) “To maintain backward compatibility, the relationships between the many options are quite complex.”

    Ah, so your personal definition of a CLI does include some sort of necessary complexity beyond typing. And yet, the amount of collective mental energy devoted to determining what options to pass to ‘ls’ probably pales in comparison to the amount of energy expended in the excel spreadsheet cell formatting dialog.

  119. @Patrick, I thought my point was more evident than that. You say that the success of Google is somehow because it’s a CLI. I say it’s because it’s simple. You appear to be saying that somehow, the simplicity of an interface is greater if the interface is a CLI and not a GUI, and I say that there are different uses for both, and disadvantages for both, and bad implementations for both. You say things ought to be simple and terse. I say things ought to be simple enough, but no simpler, and that there’s a point where terseness is detrimental. You say that somehow GUIs and video are always synonymous with clutter, and I reply with Sturgeon’s Law.

    Also:
    >> I’d say that having 0 “useful” redundancy is as limiting as having lots of it for the user
    > I disagree, especially today, when if you can’t understand something as written, you can google/wikipedia your way to facility with the underlying basic concepts in a heartbeat. (I do agree, as originally stated, that for pedagogical purposes, some redundancy can be useful.)

    What is the ability to search google or wikipedia for a concept in this situation, if not “useful” redundancy? Either your terse manual is useful and clear on its own or it isn’t; you can’t say “it’s the ne plus ultra because there’s this other content that clarifies it”.

    >> Wrong because what makes google popular and useful are the results
    > Which are (now, only mostly, but historically all) text. Like you get from typing at a command line.

    Yes, sorry, graphical interfaces have never provided information as text.

  120. > I thought my point was more evident than that. You say that the success of Google is somehow because it’s a CLI. I say it’s because it’s simple. You appear to be saying that somehow, the simplicity of an interface is greater if the interface is a CLI and not a GUI, and I say that there are different uses for both, and disadvantages for both, and bad implementations for both. You say things ought to be simple and terse. I say things ought to be simple enough, but no simpler, and that there’s a point where terseness is detrimental. You say that somehow GUIs and video are always synonymous with clutter, and I reply with Sturgeon’s Law.

    The whole CLI thing was a side-note on my original theme. I was replying to Deep Lurker’s “by providing a limited, low-bandwidth UI in a situation where doing so is not all that bad a thing.” to show that, in my opinion, it’s actually a good thing most of the time. The overarching theme of my post was that designers will actually come up with counterproductive bad designs (from my perspective) if you give them too much bandwidth, and one of my pet peeves is when those bad designs involve extraneous or redundant information that you have to sift through to get to what you really want.

    In general, GUIs rely on having a lot of bandwidth. In general, CLIs don’t. Google’s front page doesn’t rely on having a lot of bandwidth, but they’ve now done some creative and very useful things when you do have extra bandwidth. I still think part of the genius in google’s design was that it was text based and uncluttered, and yes, I will argue that these are attributes you typically find in CLIs, but often don’t find in GUIs. There is no real reason for this, except that someone who feels compelled to create a hand-holding interface is probably going to default to using a GUI, because the information bandwidth available on a CLI system is just too low for what he wants to do.

    Really, my main point was that some people like/require more information than others. It’s kind of hard to apply Sturgeon’s Law when you have disparate users, and those designs that manage to do this (like google) stand out as sheer genius.

    Also:
    >> I’d say that having 0 “useful” redundancy is as limiting as having lots of it for the user
    > I disagree, especially today, when if you can’t understand something as written, you can google/wikipedia your way to facility with the underlying basic concepts in a heartbeat. (I do agree, as originally stated, that for pedagogical purposes, some redundancy can be useful.)

    What is the ability to search google or wikipedia for a concept in this situation, if not “useful” redundancy? Either your terse manual is useful and clear on its own or not, you can’t say “it’s the ne plus ultra because there’s this other content that clarifies it”.

    Again, my initial post started out “This may point to a fundamental difference between typical Linux geeks and the rest of the population.” You are missing the point that a single manual can simultaneously be useful and clear on its own, and not, depending on who the audience is. I prefer reference manuals to tutorials. This is clearly a preference, and one which (in my experience) is more common in hacker types than in the general population.

    >> Wrong because what makes google popular and useful are the results
    > Which are (now, only mostly, but historically all) text. Like you get from typing at a command line.

    > Yes, sorry, graphical interfaces have never provided information as text.

    You’re both hitting on a side-issue, and getting hung up on trying to make a laundry list of what constitutes a GUI. Rather than play that game, I will submit Wikipedia’s first sentence on what a GUI is, which has probably already had a lot more haggling over its definition than Eric would want to see here:

    In computing a graphical user interface (GUI, sometimes pronounced gooey) is a type of user interface that allows users to interact with electronic devices with images rather than text commands.

  121. @Patrick and @Adriano:

    Did it occur to either one of you that Google is both a GUI and a CLI?

    What does clicking on “advanced search” do? It presents you with a dialog box that then generates a command line for you!

    All warm and fuzzy like a *nix GUI front end to a command line tool…
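
    For instance (an illustrative query of my own, not one from this thread), filling in the Advanced Search fields for an exact phrase, a site restriction, and an excluded word hands you back a plain query string you could just as easily have typed yourself:

        "OODA loop" site:wikipedia.org -fighter

    The dialog is a forms-based front end that assembles the operator syntax for you.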

    You may now resume your regularly scheduled argument.

  122. Lurker, Android’s UI is fundamentally high-bandwidth. It presents a lot of information — in real time — and accepts a lot of real-time information as input. Quite unlike the bash prompt. And while I was originally willing to give your argument about Unix’s “active user hostility” some benefit of the doubt, I am now less inclined to do so, because it doesn’t appear to have a technical basis but rather to be a meme you keep repeating without justification save for your say-so. (This is, I believe, what Eric calls the “60-cycle hum”.) If you can give a sound technical basis why an underlying system capable of supporting the Android UI[0] is incapable of supporting a similar UI of greater sophistication, or if you can delineate the technical features of an operating system that is not “inherently user-hostile”, then we have a starting point for discussion. As it is, your comments on this issue are not conducive to further discussion. Even Miguel de Icaza’s thoughts on the matter (which I don’t really agree with, because slavishly imitating Microsoft’s technology stack is not a good direction for open source) provide a much firmer basis for further debate than I’ve heard from you.

    [0] From a UX perspective, the Android UI is really, really good. Not Apple-good, but certainly better than most Windows software and almost any open-source effort to this point. It’s a worthy benchmark for open-source usability and usability on a Unix base.

  123. @Morgan:

    I think I’ve said everything I’m going to on this: first, that google acts like a CLI; second (after Adriano explained how deeply wrong I was about that), I enumerated some of the ways it was like a CLI and ended with the statement “it works fine for both people who like GUIs and people who like CLIs”; and third (after Adriano got deeply sarcastic), that other people also take my view of what constitutes a GUI. But (again, other than pointing out google’s genius at writing a really good interface that appeals both to GUI-lovers and CLI-lovers), it’s a side-issue.

    @Jeff Read:

    I got the impression that perhaps Lurker wasn’t talking about bandwidth at the same level as you are. For example, the highest-bandwidth interface in the system is often the display, yet it requires the same bandwidth whether displaying a text prompt or cool graphics. Likewise, a lot of the high-bandwidth interactions that Android provides might be below the level of the interface between the user and the application. (I could certainly be wrong about that, though.)

    @Deep Lurker and Adriano:

    One real issue (and I am not original in figuring this out) is that, when presented with additional resources (bandwidth, pixels, memory, whatever), many designers simultaneously cater to the least common denominator (which can make it painful for experienced people to use) and the power user (by including lots of extra menus, etc., which can make it painful for everybody to use). Eventually, the “right” way to use the additional resources will seep into the collective consciousness, and we will have fairly standardized, discoverable, intuitive interfaces with that paradigm. For the most part, that happened a long time ago with CLIs and, to a lesser extent, GUIs, but it is still easy to find counterexamples of good design in both.

    But the worst designs almost uniformly rely on newer stuff. Just as I started seeing ransom-note documents right after the Macintosh came out, people who first learn how to do GUI stuff can go overboard and stuff a bunch of crap in there. Designers often get caught up in being clever with what their design can do, instead of being clever about what it should do. The more resources available, the more ludicrous this cleverness becomes. It’s annoying, distracting and counterproductive. Obviously, a lot of people like beautiful skins on their phones or desktops, but I’m not one of them, at least not if it interferes with the CPU cycles and memory available for doing real work.

    So, to get back to my original point of a few posts ago, and to then expand on it a bit — there is the tension between efficient terseness, and the kind of redundancy that can help a newbie lurch through a process. On a continuum, on a desktop, most hackers will fall more towards the terseness side than will the general population.

    But cell phones change things. First of all, the hacker’s favorite tool — the keyboard — is missing or dysfunctional, or at least sub-optimal. So he’s still looking for efficient ways to do things, but obviously more in a GUI context. But from the other end, the redundant information and wide and deep menus that the general population seems to like on a desktop (or that companies seem to like if only because they reduce support calls) are also missing, dysfunctional, or sub-optimal. There isn’t that much screen real estate, and multiple clicks on a phone are more cumbersome than multiple mouse-clicks or the now-missing keyboard shortcut equivalent, and there is less memory and CPU to try to do really fancy stuff. Finally, most general users are probably doing a much more limited subset of activities on their phone than they do on their desktop. For example, not italicizing things in blog posts.

    So in the cellphone environment, you now have Linux users who are forced to abandon their keyboards and come up with alternative program interaction methods, and you have the general population users, for whom the paradigm they are used to on the desktop would be really slow and painful on a cellphone. They both have to change, in order to use a cellphone at all efficiently.

    I think that the cellphone magnifies the effects of Sturgeon’s Law, relative to the desktop. The range of acceptable UI designs on a cellphone, whether for a Linux hacker or for the general public, is smaller than it is on the desktop. And, percentage-wise, on the cellphone, there is more overlap between what a hacker would consider acceptable and what the public would consider acceptable.

    To take an extreme example of this, consider the lowly wristwatch with a timer function. It does very few things. Some of them have god-awful interfaces, and others are downright intuitive. If you had two examples of this kind of watch and handed them to different people, you would probably get pretty good agreement on which interface was better.

    One practical effect of this is that the stereotypical hacker who doesn’t care about ignorant users will, nonetheless, make a better user interface on a phone program than he would make on a desktop program, if he just follows his practice of making an interface for fellow hackers.

    Another practical effect is that cellphone programs with bad UIs will probably be weeded out or improved much more quickly than their desktop brethren. The amount of user pain associated with bad design decisions is much higher in the cellphone world.

    This is but one of the many reasons why Linux (Android) is quickly becoming the dominant smartphone platform — as Linux hackers migrate to Android, anything they decide to write for themselves will probably be much more appealing to a much wider audience than anything they wrote for themselves on the desktop.

    I certainly expect a knock-on effect from cellphones to tablets, but it will be interesting to see if the lessons learned here (on the part of Linux hackers, about good UI design; on the part of the general public, about Linux and free software generally) actually migrate back to the desktop, leading to more Linux uptake there.

  124. Excuse me if I’m not omniscient about such matters, but is it the case that *nix conquered the server in 2008 on deadline, and Android is the killer client app? The OS became an app in a new abstraction named “cloud computing” and the network became the OS?

    No. *nix conquered the server long, long ago.

    Granted *nix server had majority market share long ago.

    ESR cited that 83/23 (78/22%) for new workloads occurred circa 2007, so perhaps the conquering was 90/10% rule (roughly Pareto squared) complete by 2008, thus meeting ESR’s deadline.

    Is Android the killer app because it paradigm-shifted open source hackers to optimize for hardware without a keyboard – flatmapping the world closer to a state where the programmers are the users and vice versa? Open source for the masses. On deadline for the roughly 10-year cycle for the imminent arrival of a new programming language for these masses:

    1975 he started using “structured programming” techniques in assembly language
    1983 a new era dawned for him as he started doing some C programming
    1994 when he started doing object-oriented programming in C++
    2000, he made the switch to Java

    Can Java be the language on Android, invalidating the 10-year cycle deadline? Will the next language be the virtual machine with a smorgasbord of grammars?

    Tying this to the OODA and martial arts discussion, note that solving a problem by “mapping to a new context” or “passing through an envelope” abstraction is a monad model, hence the mention of flatmap. Could the next programming language achieve monads-for-dummies?

  125. Patrick:

    Speaking as a UI designer, let me begin by saying: Gee, overgeneralize much?

    Consider this claim of yours:

    One practical effect of this is that the stereotypical hacker who doesn’t care about ignorant users will, nonetheless, make a better user interface on a phone program than he would make on a desktop program, if he just follows his practice of making an interface for fellow hackers.

    We have both suffered through far too many UIs that, like stray cats, just happened. They weren’t designed. They were implemented by programmers who didn’t care about the end result, had no idea what an actual user of the software would want to do, and probably figured that the CLI was superior in every way, and it showed.

    We’ve both seen code for software that wasn’t designed. It just happened, and in some cases, looks like it happened by the programmer dropping his drawers and leaving a steaming turd in the memory address space. Indeed, the entire nature of Perl is designed to facilitate undesigned programs. The fatal flaw of programming as a discipline is that it’s possible to make “readable once” code that works…at least when bad UI comes up, it is VISIBLE. (Though, like bad programming practices, bad UI practices can become entrenched as “the way we’ve always done it”).

    I do not attribute Android’s UI (and I’ve used iPhone, WinP7 and Android side by side – Android is in third place in my estimation) to “hackers designing UIs for hackers”. I attribute it to Google having a strong position as a stakeholder in the project, to its commitment to making it “not suck”, and to its offering to do most of the UI lifting through pre-generated libraries, so that hackers don’t have to be designers. That said, Google does not have the ability to really tell its developer base “This way works, use it” and expect them to follow it.

    With very few exceptions, hackers are horrible at UI design. The stereotype of the hacker as a shadow autist is more accurate than most…and without that ability to actually visualize how someone else would use their program, they are completely unaware of how their UI sucks, and utterly bored by and uninterested in trying to optimize their programming for the operating environment of the user’s visual cortex.

    After all, there’s no CLI to that interface stack.

    Now, you do have a germ of truth in this.

    Designing your UI to use the highest data density and the smallest amount of bandwidth leads to better UI design. Good UI designers know this, much the same way that good programmers know that writing clean and elegant code (in the fewest lines possible) results in programs that are generally better and much easier to maintain.

    UI designs also need to be examined every few years to see if they’re still meeting the needs of the userbase – this is far more than mere ‘skinning’ – and to see if more efficient techniques have become possible (think of this as an updated library in programming) or to see if a memetic transmission of a convention has happened among the users, which can be exploited.

    Apple UIs are lovely – and amazingly well thought out. The Windows Phone 7 UI isn’t as lovely as Apple’s, but it is ALSO very well thought out, and is (surprise) not a half-baked copy of the iPhone UI; they chose to present fewer nodes of more accessible (and continually updated) data.

    Android’s UI is getting there, and is largely a re-implementation of the iPhone UI with some different graphical conventions. The hacker culture appears to finally realize that coding to satisfy customers rather than fellow programmers might actually be something to consider; note that the most successful of the Android apps are Android reimplementations of a lot of stuff from Apple’s App Store.

    As to CLIs

    Knowing what to type, and where, in a program or CLI is domain specific knowledge. Every profession has its own domain specific knowledge.

    Why is your domain specific knowledge more privileged than mine? Particularly since I arguably get FAR more benefit out of computers that are aware of my domain specific knowledge than you do out of computers cognizant of yours?

    Seriously – proper PANTONE and CMYK implementation with a GUI? I literally get to multiply my productivity by a factor of 20 to 25 with well-designed UIs. Has anything in computers been a comparable boon to the ability to generate well-designed, well-architected, maintainable code?

    If you have not read it, I strongly recommend that you read The Visual Display of Quantitative Information by Edward Tufte. Judging from your choice of terminology, I think you’ve been exposed to it – but that you haven’t read it recently.

    Much the same way that re-reading Knuth on a regular basis is useful for your profession, re-reading Tufte once a year is useful in mine.

  126. Speaking as a UI designer, let me begin by saying: Gee, overgeneralize much?

    Why, yes, I do. Absolutely. Especially when I’m thinking out loud.

    (Snipped lots of true stuff about UI design.)

    I do not attribute Android’s UI (and I’ve used iPhone, WinP7 and Android side by side – Android is in third place in my estimation) to “hackers designing UIs for hackers”. I attribute it to Google having a strong position as a stakeholder in the project, to its commitment to making it “not suck”, and to its offering to do most of the UI lifting through pre-generated libraries, so that hackers don’t have to be designers. That said, Google does not have the ability to really tell its developer base “This way works, use it” and expect them to follow it.

    Don’t disagree. But then, I think that, at the OS and screen manager level, Linux is close enough to Windows. The question, then, obviously is whether enough designers follow through and do enough good things, which apparently hasn’t happened enough there, either in application land or in OS-utility land. My wife doesn’t use a console when she uses Linux, but I’m the local sysadmin (and not a very good one, at that, but for the amount of effort I expend, things work better than when home was a Windows shop — and I used to be a Windows expert).

    With very few exceptions, hackers are horrible at UI design. The stereotype of the hacker as a shadow autist is more accurate than most…and without that ability to actually visualize how someone else would use their program, they are completely unaware of how their UI sucks, and utterly bored by and uninterested in trying to optimize their programming for the operating environment of the user’s visual cortex.

    Right. But they are also users. And, more to the point, they are mostly sane users. They just don’t think, in the desktop world, like most users. They design the engine, and the UI is an afterthought. But in order to even use their program on a cellphone, they have to think about the UI design a bit more. And then, there’s the “hey, look what I can do!” factor. Much easier to show others your work on your cellphone than your desktop, and get feedback. You’d think they wouldn’t care about that, but if they tell somebody that they write apps for cellphones, somebody’s going to ask them to show and tell, and it’s harder to make lame excuses like not having your laptop there…

    Designing your UI to use the highest data density and the smallest amount of bandwidth leads to better UI design. Good UI designers know this, much the same way that good programmers know that writing clean and elegant code (in the fewest lines possible) results in programs that are generally better and much easier to maintain.

    Sure, but just like, in the early days of the Mac, when it was impossible to write a reasonable program unless you studied enough to learn a lot of new, foreign concepts, so it is now with the cellphone. It’s going to be too difficult to hide crap. If you want to write cellphone programs to stroke your ego, you’re going to have to deliver UI at a higher level.

    UI designs also need to be examined every few years to see if they’re still meeting the needs of the userbase – this is far more than mere ‘skinning’ – and to see if more efficient techniques have become possible (think of this as an updated library in programming) or to see if a memetic transmission of a convention has happened among the users, which can be exploited.

    Makes good sense. Some of that’s probably organic — if you have a popular program, and don’t have the reputation of biting people’s heads off when they suggest something, somebody will probably tell you about some things you can change. In other words, listen to your customers. Obviously, as you point out, independent ongoing research is useful as well.

    Android’s UI is getting there, and is largely a re-implementation of the iPhone UI with some different graphical conventions. The hacker culture appears to finally realize that coding to satisfy customers rather than fellow programmers might actually be something to consider; note that the most successful of the Android apps are Android reimplementations of a lot of stuff from Apple’s App Store.

    Getting started, of course, you do the obvious thing that everybody belittles the Chinese for — copy something that seems good. Now, it will be interesting to see if, in response to “that’s not bad — almost looks like the iPhone” they can do even better. I think we’re at a psychodynamic inflection point. If enough hackers start getting their egos stroked for doing UI design, this could take off. I could be overly optimistic, but I think, even for closet autists, mild praise will work better in the feedback loop than the sort of disdain some of them have been used to getting. When everybody’s a critic, it’s extremely easy to just shut down and say “well, I don’t really do UIs” (that’s my line, BTW) but when people say “that’s great, but could you do this?” with small, incremental suggestions, that’s a whole different ball game.

    Knowing what to type, and where, in a program or CLI is domain specific knowledge. Every profession has its own domain specific knowledge.

    So are GUIs. I mean, sure, Apple and Microsoft have done a pretty good job of making a lot of it computer-specific domain knowledge that’s widely available, but if GUIs hadn’t been ascendant, the same thing would be true of some of the dominant CLIs.

    Why is your domain specific knowledge more privileged than mine? Particularly since I arguably get FAR more benefit out of computers that are aware of my domain specific knowledge than you do out of computers cognizant of yours?

    Is this a rhetorical question? I mean, the obvious answer is that the most privileged domain-specific knowledge is backed up by programming or dollars to pay for programming.

    Seriously – proper PANTONE and CMYK implementation with a GUI? I literally get to multiply my productivity by a factor of 20 to 25 with well-designed UIs. Has anything in computers been a comparable boon to the ability to generate well-designed, well-architected, maintainable code?

    I know what PANTONE and CMYK are, but I’m not sure what (Adobe?) software you’re referring to, or even if you meant “without a GUI”. I think if you read what I wrote carefully, you will see that I haven’t trashed GUIs as unuseful, just a lot of programs that make careless use of them. I even admitted my wife knows nothing about the command line. Personally, I do things like schematic capture and board layout. I do portions of those tasks programmatically, but I do understand the value of a good GUI.

    If you have not read it, I strongly recommend that you read The Visual Display of Quantitative Information by Edward Tufte. Judging from your choice of terminology, I think you’ve been exposed to it – but that you haven’t read it recently.

    Much the same way that re-reading Knuth on a regular basis is useful for your profession, re-reading Tufte once a year is useful in mine.

    Yeah, I have it — got it when it first came out, so that must be over a quarter of a century ago. It’s a nice small book, so I can see what you mean, but there’s no way I would slog all the way through Knuth once a year!

  127. Ken Burnside,

    Seriously – proper PANTONE and CMYK implementation with a GUI?

    The problem is that PANTONE is copyright- and trademark-encumbered and proper CMYK implementation may be patent-encumbered. The correct course of action would be to license them properly and provide them perhaps as downloadable binary modules. The Stallmanites, however, have won the day in the hearts and minds of open-source hackers, who would rather release broken software than software which understands and conforms to the appropriate industry standards. As a result you will have a hard time getting open source to ever be taken seriously in the realm of print media.

  128. I simply blame GNOME choosing GTK instead of GNUstep, eight months after Apple bought NeXT. We could have wound up in a situation where Mac programmers and their UI standards osmosed into the Linux desktop…

  129. Maybe a little extra information on what makes GUIs easy and CLIs powerful?

    A desktop metaphor GUI is a finite state machine running over a semi-static tree-based menu. Added to that are tree-grafting procedures. The original MS-DOS CLI was actually little more than a file launcher (who actually used the pipes in MS-DOS?).

    A *nix shell like Bash (or whatever you like) is first an interpreted Turing-complete programming language. So that aspect is all-powerful, but only useful if you are actually programming. Second, the piping part implements a rewriting grammar. In Chomsky’s hierarchy this is one step up from a tree structure like the tree-based menu. Most people will use it with a sequence of regular expressions (sed, etc.), but with awk/perl/python inserted, you can really do the full string rewriting of context-sensitive grammars. That is very powerful and allows full control over your computer with only a “small” collection of simple tools.
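
    To make the “pipeline as rewriting” point concrete, here is a toy sketch, in Python rather than shell and purely for illustration, of the idea that a pipeline is just a chain of line-rewriting stages, each stage playing the role of a grep or sed:

      # Toy sketch: a pipeline as a chain of rewriting stages over lines of text.
      # Each stage maps an iterator of lines to an iterator of lines, which is
      # roughly what grep/sed/awk do between stdin and stdout.
      import re
      from typing import Callable, Iterator

      Stage = Callable[[Iterator[str]], Iterator[str]]

      def grep(pattern: str) -> Stage:
          rx = re.compile(pattern)
          return lambda lines: (line for line in lines if rx.search(line))

      def sed(pattern: str, repl: str) -> Stage:
          rx = re.compile(pattern)
          return lambda lines: (rx.sub(repl, line) for line in lines)

      def pipeline(lines: Iterator[str], *stages: Stage) -> Iterator[str]:
          for stage in stages:
              lines = stage(lines)
          return lines

      if __name__ == "__main__":
          text = ["error: disk full", "ok: all good", "error: no signal"]
          # Equivalent in spirit to:  ... | grep error | sed 's/error/ERR/'
          for line in pipeline(iter(text), grep("error"), sed("error", "ERR")):
              print(line)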

    So, yes, if you really want to control your computer, a GUI is simply a crutch that won’t do the job. But if all you want is to navigate your computer’s “internals”, a tree-based menu and file system is all you need.

    And people are very comfortable with tree based menu systems. That is “user-friendly” by genetics.

    But it is definitely NOT powerful enough to control your computer. Witness all the very complex tools needed to manage a Windows computer. Under the hood, an MS Windows computer is controlled directly in C, one tool per task. That is cumbersome and inefficient.

    Btw, there are very fine and user-friendly GUIs for Linux. Just run Linux Mint or KDE4. They are just not MS Windows. And we know, different is wrong, choice is bad and freedom is slavery.

  130. “you can really do the full string rewriting of context-sensitive-grammars”

    I was wondering whether you could have a CSG in a GUI to have your cake and eat it, then I realized that a GUI with the Firefox icon on the desktop etc. is essentially an Egyptian-style hieroglyphic alphabet, and there is a reason such alphabets have been replaced by phonetic alphabets (except for the Far East): phonetic, letter-based alphabets are much better at this sort of thing.

    Which means the CLI is here to stay for anything that’s even remotely close to a grammar – but could it at least be deuglified to make it friendlier to non-technical users? Symlink “grep” to “findtext”. Make it so that all command-line params accepted by a program are selectable from a list. Focus on the long version of the parameters in the online tutorials. Actually I think it is much more of a cultural thing than a technological constraint…
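
    The “selectable from a list” idea is not far-fetched; the toy Python sketch below would already do it for any GNU-style tool that lists its long options in its --help output (grep is assumed to be on the PATH here). It is a sketch of the idea, not a finished tool:

      # Toy sketch: scrape a command's long options from its --help output and
      # present them as a numbered menu, so the user picks instead of memorizing.
      # Assumes a GNU-style tool (grep here) that lists options as "--something".
      import re
      import subprocess

      def long_options(command: str) -> list[str]:
          help_text = subprocess.run(
              [command, "--help"], capture_output=True, text=True
          ).stdout
          seen: list[str] = []
          for opt in re.findall(r"--[a-z][a-z-]*", help_text):
              if opt not in seen:
                  seen.append(opt)
          return seen

      if __name__ == "__main__":
          opts = long_options("grep")
          for i, opt in enumerate(opts, 1):
              print(f"{i:3d}. {opt}")
          choice = int(input("Pick an option by number: "))
          print("You chose:", opts[choice - 1])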

  131. From the user experience point of view Google isn’t a CLI, it is an AI.

    In a CLI the smallest things matter and the user must be careful even about case: grep -v and grep -V ain’t the same. To Google I can feed some garbled, silly, stupid, grammatically wrong string like “pizeria around heamarkt, vienna” and it will correct me: “did you mean pizzeria near Heumarkt, Vienna?” and when I click on that link it will show ’em. Google has extremely high, almost human ambiguity tolerance, while most CLIs have zero.
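
    At its crudest, that kind of ambiguity tolerance is just fuzzy matching against a vocabulary. The sketch below (plain Python stdlib, nothing remotely like Google’s actual correction machinery, and the vocabulary is a made-up stand-in) shows how little it takes to get a “did you mean” out of even a CLI-style tool:

      # Minimal "did you mean ...?" sketch using stdlib fuzzy matching.
      # The vocabulary is a stand-in; a real system would learn it from query logs.
      import difflib

      VOCABULARY = ["pizzeria", "near", "Heumarkt", "Vienna", "restaurant"]

      def did_you_mean(query: str) -> str:
          corrected = []
          for word in query.replace(",", " ").split():
              match = difflib.get_close_matches(word, VOCABULARY, n=1, cutoff=0.6)
              corrected.append(match[0] if match else word)
          return " ".join(corrected)

      if __name__ == "__main__":
          print(did_you_mean("pizeria around heamarkt, vienna"))
          # -> "pizzeria around Heumarkt Vienna"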

  132. Surprise, surprise. In the European smartphone market, where customers have more choice, Android’s growth is less dramatic, and Apple’s lead more commanding, than in the US market.

    Or maybe Europeans just like the German-inspired lines of the iPhone a whole lot more…

    1. >In the European smartphone market, where customers have more choice, Android’s growth is less dramatic, and Apple’s lead more commanding, than in the US market.

      First, Apple doesn’t have the “lead” in the U.S. market; Android passed Apple last quarter. As I predicted nearly a year ago.

      Second, web hits are a notoriously bad proxy for market share. Canalys, which collects data on actual retail sales, tells a very different story; in its figures, Android leads worldwide just as it does in the U.S. (a fact Stephen Elop acknowledged in the infamous burning-platform memo). If these web-hit-based figures showing an Apple lead in Europe are actually representative, something quite exceptional must be going on there. It’s more likely something is simply wrong with their assumptions or methodology.

      1. >Second, web hits are a notoriously bad proxy for market share.

        Especially when, as in this case, we are told nothing about the slice of the web from which StatCounter is collecting information. Can you say “selection bias”? Really, these figures might just as well have been pulled out of someone’s ass.

  133. @esr:
    > It’s more likely something is simply wrong with their assumptions or methodology.

    Maybe people who buy Android phones actually like getting work done. : )

  134. @Jeff Read, @esr:

    According to a new comscore report, Apple does, indeed, lead Android in Europe.

    But I think that’s temporary — Android’s curve there simply started later. But despite what Jeff said, comscore’s survey says Android’s growth is “more dramatic”:

    Meanwhile, Symbian’s stronghold on the European smartphone market loosened in 2010 as other platforms gained traction. Symbian saw its smartphone market share decline from 63.0 percent in December 2009 to 47.8 percent in December 2010, while Android consumed the largest portion of Symbian’s market share, growing more than 10 percentage points to 11.9 percent of the smartphone market at the end of 2010. Apple also saw its share grow considerably, from 13.8 percent to 20.0 percent, while RIM grew marginally to 8.6 percent share.

    I don’t have time to read the report right now, so I’ll just toss that out there as a teaser.

  135. @shenpen:

    > From the user experience point of view Google isn’t a CLI, it is an AI.

    The definition of AI does not encompass how one communicates with it, and the definition of a CLI defines how one communicates with a program, not what it does.

  136. Here’s an iPhone V data point.

    I don’t agree with all the author’s conclusions — he probably hasn’t been dissecting the market as thoroughly as we have here, and I’m sure a small amount of math would show that the variance between sales of app #90 and app #110 in Apple’s store is huge. Nonetheless, the raw data in the graph he provides is interesting….

  137. @Jeff Read:

    Surprise, surprise. In the European smartphone market, where customers have more choice, Android’s growth is less dramatic, and Apple’s lead more commanding, than in the US market.

    Uh huh. And based on a survey of all the websites I track via Google Analytics, I’d say nearly 50% of all Internet users were on Linux.

  138. “The definition of AI does not encompass how one communicates with it, and the definition of a CLI defines how one communicates with a program, not what it does.”

    Yes, of course, if we want to nitpick, then it is true. But do you still get what I was driving at? That Google has this near-human level of “just do what I mean, dammit” kind of ambiguity tolerance that most CLIs don’t, and is therefore an exception from the set of not-very-user-friendly CLIs.

  139. @Shenpen:

    Yes, of course, if we want to nitpick, then it is true. But do you still get what I was driving at? That Google has this near-human level of “just do what I mean, dammit” kind of ambiguity tolerance that most CLIs don’t, and is therefore an exception from the set of not-very-user-friendly CLIs.

    See the work of Jef Raskin and his son Aza. A somewhat poor example is the “Ubiquity” Firefox extension, which is based on the ideas that came out of Jef Raskin’s “Network Appliance”.

  140. > First, Apple doesn’t have the “lead” in the U.S. market; Android passed Apple last quarter. As I predicted nearly a year ago.

    http://www.eweek.com/c/a/Mobile-and-Wireless/Verizon-iPhone-to-Nibble-at-Android-Market-Share-Munster-618892/

    Piper Jaffray analyst Gene Munster said the Verizon iPhone will score a palpable hit on Android, stealing 1 million units from the platform through March.

    Munster sees Android’s share falling from the 32 percent plot (IDC numbers) it acquired through December to 28 percent or even as low as 26 percent in March.

    Conversely, Apple’s iPhone share should grow from 16 percent to 20 percent as a result of the new wireless carrier channel. Munster had previously said Verizon could ship 9 million to 15 million iPhone units.

    It ain’t over ’till it’s over….

    1. >It ain’t over ’till it’s over….

      I’m thinking Gene Munster wrote that before the first-day flop of the iPhone V. He might be less…optimistic…now.

    1. >http://www.dailymail.co.uk/sciencetech/article-1359689/Apple-iPhone-knocked-Trails-budget-rivals-UK.html

      Interesting. So in Great Britain, iPhone gets pipped by four cheap Androids and a Blackberry.

      More evidence for the theory that consumers no longer perceive an Android vs. Apple quality difference that justifies Apple’s higher price.

  141. I was thinking this morning that rather than being the instrument of breaking carrier lock-in, this may end up being the source of reinstating it, albeit with much less control than previously.

    Even at $75 there are going to be people (and I think a large segment of people) who aren’t going to want to shell out to upgrade their phone every 3-6 months but at the same time don’t want to miss out on features. If the carriers changed their basic model from a repayments model to a lease-to-own model, they could offer upgrades every 3-6 months. You surrender your old phone when you upgrade (and likely that your plan restarts from day 1) but you get the new hotness for “free” (or at least without having to pay out your old phone).

    I think the benefit to consumers in this model is that this means we’ve confiscated the carrier’s stick and forced them to use the carrot.

    1. >If the carriers changed their basic model from a repayments model to a lease-to-own model, they could offer upgrades every 3-6 months.

      And the carriers get stuck with a bunch of paperweights every six months, instead of pushing those costs off on the customer. Seems doubtful to me they’d go for this.

  142. > And the carriers get stuck with a bunch of paperweights every six months, instead of pushing those costs off on the customer. Seems doubtful to me they’d go for this.

    So if the phone costs $75, the carriers can’t pay it off in 6 months? We’re talking $12.50/mo, Eric. Seems to me this might be a cheap method of keeping acquisition costs down.

    > More evidence for the theory that consumers no longer perceive an Android vs. Apple quality difference that justifies Apple’s higher price.

    I’m not seeing where this is true (in the US). Apple’s iPhone can be had for between $49 (for an 8GB 3GS on AT&T) to $199 (16GB iPhone 4 on Verizon or AT&T) or $299 (32GB iPhone 4 on either Verizon or AT&T). Hell, AT&T has 8GB 3GS refurbs for $19.

    A Droid Pro on VZW will run you $179. A Galaxy S (Samsung Fascinate) will run you $199.99 on VZW, and yes, there are a couple $49 Android phones on VZW, and one that is ‘free’. The Galaxy S isn’t available on T-Mobile (yet), but there are a bunch of “3G Slides” available at $49.

    So sure, you can get a free or low-cost Android phone, but “Apple’s higher price” isn’t justified by the evidence.

  143. (why am I stuck in the moderation queue?)

    The good news in all of this: Unix (finally) won. Apple is a huge force in notebooks, and between Apple and Google, *nix has a chokehold on the smartphone space. If Google ever gets serious about Cr-48-type devices for ChromeOS, I’ll predict the death of the Windows notebook in < 3 years. ChromeOS will take over the notebook space like Android has taken over the smartphone space. OEMs will ship ChromeOS in volume on sub-$200 notebooks with full-sized screens via carrier channels.

    (Unfortunately, you can't do a lot at the Unix-level on these devices.)

    Between Apple on the Mac (with its App Store for Mac) and Google on Chrome OS (there is an app store there, too) and the combination of Google's Marketplace and Apple's iOS app stores, software is going to get a lot less expensive. No more multi-hundred dollar apps. Windows is going to be really, really challenged, and I think Ballmer knows it, thus the NoWin consortium.

  144. And the carriers get stuck with a bunch of paperweights every six months, instead of pushing those costs off on the customer. Seems doubtful to me they’d go for this.

    How would you estimate that cost would compare to how much churn costs them? I would think you could mitigate paperweight costs through pre-owned sales.

    My understanding of the current system is that the only incentive to go on a plan is pricing and widespread churn is likely to bleed their price margins dry anyway. This plan gives a consumer incentive to sign up.

    I wonder how plausible a “tech upgrade” system would be (i.e. “buying back” the paperweights and refitting upgraded gear on them). Sure it’d have the same shell but it could add commercial life to an “old” product at budget price.

    1. >How would you estimate that cost would compare to how much churn costs them?

      Dunno how to estimate that. What I am pretty sure of, though, is that smartphones are headed towards a price point so low that lease-to-own won’t make any sense. If there’s a window for this, it closes in two years at the outside.

  145. > I do things like schematic capture and board layout.

    From what I’ve seen of products like OrCAD and the Mentor Graphics suite, EDA software in particular is in dire need of decent UI designers and QA. I have friends using the Xilinx ISE who tell me that it’s normal to have a multi-second lag between a mouse click and any indication from the app that the event occurred.

  146. From what I’ve seen of products like OrCAD and the Mentor Graphics suite, EDA software in particular is in dire need of decent UI designers and QA.

    Yeah, industry consolidation, with established EDA companies buying up companies once their schematic capture programs capture more than 1% of the market, has been ongoing for a couple of decades. OrCAD was purchased by Cadence, and they keep trying to end-of-life it and push people over to their Allegro Capture or whatever, which is an even bigger steaming pile. I tried to use it once and it kept coredumping the X server.

    My strategy for the last several years is: if it’s a big board, I have a professional (in-house, or contract — I work with several of various degrees of skill) do the schematic capture and the board, but if it’s smaller, I do it myself. The last board I did, about a year and a half ago, I did schematic capture with the FOSS package KiCad, which actually isn’t bad, compared to some of the really expensive offerings. It has the usual sort of FOSS UI rough edges, but the expensive packages really suck in some ways. Schematic capture is one area where open-source packages really are better supported. One of these days, I think Inkscape will grow a domain-specific extension to do schematics, and then you’ll have a very well supported underlying graphics engine for all platforms.

    Xilinx is another story entirely. In a few ways, their software has gotten a lot better lately, but historically, I think any good programmers got to play with fun stuff like mapping Verilog to logic, and the interns were doing the UI.

  147. > What I am pretty sure of, though, us that smartphones are headed towards a price point so low that lease-to-own won’t make any sense.

    Once the transition to SSD is complete, there will be little difference between the COGS of a ‘notebook’ and that of a ‘phone’.

    One has an extra radio (but most of that cost is just transistors), the other has a larger screen (but again, most of that is just transistors).

    You can’t predict one without allowing the other.

  148. @esr:

    > Interesting. So in Great Britain, iPhone gets pipped by four cheap Androids and a Blackberry.

    I have some very dated experience with Europe (primarily England, where I met my wife) vs. US high-tech buying habits for corporations. I dunno whether this translates from corporate buying to personal buying, or whether it translates from mid-eighties to now, but I will mention it anyway, since this data point might fit.

    I used to work for a company which sold protocol converters which let you attach non-IBM terminals (think Wyse, TeleVideo, etc.) and printers to IBM mainframes (via 3270 or 5250).

    All of our US customers purchased the product because they wanted to do something different with it. Like connecting a tektronix vector graphics terminal, or a numerically controlled underwater welding machine.

    All of our European customers purchased the product because they wanted something that worked just like the IBM products but cheaper.

    The US customers were very tolerant of bugs, because we allowed them to do things they couldn’t otherwise do. The European customers were completely intolerant. The whole point was to spend less with zero percent change in functionality.

    So if this holds today, it could explain (a) earlier adoption of Android in the US, and (b) that if Android grew by 10% of the market compared to Apple’s 6% in Europe last year (as per the comscore report), then Android is now recognized as “the same as Apple but cheaper” in Europe, and growth should really accelerate this year.

  149. Patrick,

    “All of our US customers purchased the product because they wanted to do something different with it. Like connecting a tektronix vector graphics terminal, or a numerically controlled underwater welding machine.

    All of our European customers purchased the product because they wanted something that worked just like the IBM products but cheaper.”

    A very interesting observation, and I wonder if it has any logical connection with another observation of mine: in the US it is somehow acceptable to buy thingies A, B, and C from vendors X, Y, and Z, put the three together, put a logo on it, and call the resulting assembly “your product” with a straight face; nobody points out that it is, in fact, not yours, just products from vendors X, Y, and Z assembled by you.

    In Europe we tend to think that, to really call something truly your product, you should build at least most of it from scratch yourself, along with most accessories, e.g. http://software-lab.de/doc/tut.html

    Thus, quite often, over here a programming language comes with its own text editor, database, app server, GUI from scratch, even fonts… in America it is more common for one person to write a programming language and an interpreter, another an editor plug-in for it, a third a mod for Apache, a fourth a library for accessing common databases, a fifth some Tk GUI bindings, and so on…

    At the end of the day I think it is a matter of trust: we have less trust in products that were assembled from different brands, i.e. that do not have a single brand, corporation or person responsible for them, and thus one cannot judge their quality by simply looking at the brand, and one does not know who is responsible if something fails. “Cheaper IBM” I can parse, I am trading reliability for price, but “TektronixIBM” would make me feel kind of insecure – how do I know if it is any good?

  150. Nokia’s ex-head of design Adam Greenfield wrote a lengthy post on his blog a few days ago about the lack of understanding of design at the various upper levels of management at Nokia. It’s nothing new to anyone who’s used Nokia’s products, but the insider view is somewhat interesting. His job as the head of design was given to Marko Ahtisaari, whose merit on matters of design, as far as I can see, is that his father is an ex-president of Finland and the 2008 Nobel Peace Prize recipient. He sold a small travel-related social networking start-up named Dopplr to Nokia in 2009 and Nokia has since done as little as possible with it.

    Greenfield’s comments about the awkward use of SMS killing various user experiences rang a bell with me. I have a Nokia dumbphone with a Series 40 OS that I’m actually quite happy with, but I did laugh when I first noticed the various extremely clunky SMS-based services that Nokia and the operator had put on it. The thing has GPRS/EDGE networking and a minimal version of Opera, which is actually surprisingly useful considering the size of the tiny screen. Things like weather forecasts sent back by SMS seem ludicrous when at the same time they don’t have a weather web page optimized for minimal bandwidth and screen size. Ok, so it’s about extracting a few pennies from fools or people stuck somewhere with an even dumber phone and no web access, but still.

  151. “Some of them have god-awful interfaces, and others are downright intuitive.”

    Every time I see/hear someone use the word “intuitive” to describe an interface, I feel compelled to point out that the only truly intuitive interface is the nipple. That is to say that most babies are literally born able to operate that interface, with no training required. For them (not for the babies that must be taught to feed from their mothers’ breasts) it can truly be called “intuitive”.

    What most people mean when they say that an interface is “intuitive” is not that one need not be trained to use it; they mean that the training required has already been done in some other context. The vaunted “intuitive” Mac OS interface is in reality a consistent interface: Once you’ve learned to run one Mac app, most of what you’d need to learn to run another, you already know from learning the first. A few apps break those rules, but very few break very many of the rules, because the Apple user base recoils at having to learn a whole new way to interact just to run your app.

    As has been discussed above, the cell phone form factor is sufficiently different from desktop/laptop that UIs based on different principles are to be expected. The question, then, is whether those principles are consistent within the cell form factor, or app writers insist that users learn something new to run their app.

  152. @The Monster:

    I was describing wristwatch interfaces, where you can set the time, or use a stopwatch, or whatever, with that statement. Sure, some base knowledge is required, like “if I press a button, something happens” “I can hold buttons for longer” “I can press them multiple times” “the display shows time in hours, minutes, and seconds, using Arabic numerals”. When you parse all those out, a lot of knowledge is required to operate a wristwatch, but most of it is completely independent of the actual wristwatch interface and was acquired in different contexts. That’s way different than your description of the Mac interface.

    I have seen some of these watches, where, if you actually drew out the state diagram, you would realize that no sane person could hold a working mental model of what was going on inside the watch. For these, you always have to refer back to the documentation to figure out how to do anything. Then there are the ones that a 7 year old can figure out with no prompting. If you want to restrict your own use of the word “intuitive” to suckling newborns, that’s fine with me, but here I use the word to mean a combination of “easily discoverable” and “easily memorized.”

  153. TheMonster wrote “Every time I see/hear someone use the word ‘intuitive’ to describe an interface, I feel compelled to point out that the only truly intuitive interface is the nipple.”

    It is true that people confuse ‘intuitive’ and ‘familiar’ much too often. Alas, your claim about nipples goes too far in the opposite direction. There are lots of things that brains are wired to learn more easily. Alas, the specific examples I remember are experimental animals beloved by 1950s psychologists. (E.g., apparently it is much easier to train a pigeon to peck something in order to get food than it is to train it to do other things — jump, blink eyes, turn around in a circle, or just refrain from pecking — in order to get food.) It’s almost guaranteed (from theoretical constraints on how incredibly slowly unbiased learners must learn) that humans must be wired with learning prejudices too. For humans or even monkeys, I don’t know of any super-clean experimental work on how such things are wired in for the complete *learning* process, but it is at least well-established that lots of stuff is wired in for *noticing*. E.g., simultaneity, parallel orientation, and similar shape are all detected very efficiently by various primate visual systems. It would be surprising, indeed astonishing, if that didn’t affect the learnability of relationships involving simultaneity, parallel orientation, or similar shape. (E.g., I’m fairly confident that a primate will learn much faster that similar-shaped buttons behave similarly than a primate will learn that a device with an even number of buttons behaves differently from a device with an odd number of buttons.)

  154. It’s shit like this, Google: http://code.google.com/p/android/issues/detail?id=4991

    tl;dr: having the ‘low space’ warning on your Android phone results in the rejection of SMS messages. Even when there’s actually >10MB of storage available. The bug has been present since 2009 and appears not to have been fixed yet by Google.

    related: http://code.google.com/p/android/issues/detail?id=5669

    The developer side of SMS messaging isn’t pretty either. If you give sendTextMessage a bad phone number, you get a NullPointerException; if you give it too long a message, another NullPointerException. http://developer.android.com/reference/android/telephony/gsm/SmsManager.html
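
    The practical consequence is that every app ends up wrapping the call in its own guard code. Here is a rough sketch of the kind of pre-validation involved, written in Python with a hypothetical send_sms() stand-in rather than the real Android API, just to illustrate the checks an application is forced to do itself:

      # Illustration only: send_sms() is a hypothetical stand-in, NOT the
      # Android SmsManager API. 160 characters is the classic single-part
      # GSM SMS limit for the 7-bit alphabet.
      import re

      GSM_SINGLE_PART_LIMIT = 160

      def send_sms(number: str, text: str) -> None:
          print(f"pretending to send {len(text)} chars to {number}")

      def safe_send(number: str, text: str) -> None:
          if not re.fullmatch(r"\+?[0-9]{3,15}", number):
              raise ValueError(f"not a plausible phone number: {number!r}")
          if len(text) > GSM_SINGLE_PART_LIMIT:
              raise ValueError("message too long for a single SMS; split it first")
          send_sms(number, text)

      if __name__ == "__main__":
          safe_send("+431234567", "Meet at the Heumarkt pizzeria at 8?")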

    Because of the high quality of iOS and its large market reach, Google panicked and Android development got rushed.

    This caused most of the programming resources to go towards developing base features (Android 1.x – 2.x) or implementing tablet features after the iPad arrived on the scene, and very few resources were left for developer support (the emulators are slow, there is a severe lack of an animation framework, etc…) and for bug fixing.

    Once things start to settle down (*), bugs will start to get fixed.

    (*) about the time Apple stops innovating

  155. Totally, totally Off topic.

    I’m looking for VPN ideas in reference to this article: http://www.nytimes.com/2011/02/17/technology/personaltech/17basics.html?_r=2&src=me&ref=general

    I want an inexpensive household VPN with this configuration:

    VPN endpoint 1 ( VPN software ) > Laptop / Desktop CPU > Laptop / Desktop Wifi /NIC > Wifi / Wired router > VPN endpoint 2 ( VPN hardware ) > Cable / Telco modem > Cable / Telco infrastructure > The Internet

    Even better would be this, but I don’t want to pay more on my monthly cable bill (although I would pay more one time for standard VPN software I can run on all Cable / Telco competitors):

    VPN endpoint 1 ( VPN software ) > Laptop / Desktop CPU > Laptop / Desktop Wifi /NIC > Wifi / Wired router > Cable / Telco modem > Cable / Telco infrastructure > VPN endpoint 2 ( VPN hardware ) > The Internet

    I have SureWest, and they don’t offer this service even at extra cost. I have tried my Google / shopping fu on this problem and so far I have not had a solution grab me.

    Yours,
    Tom

  156. I’d say intuitiveness runs on a continuum: from unintuitive interfaces at one end (Mornington Crescent, American health care plans) to completely intuitive interfaces at the other (the nipple). Virtually all software systems fall somewhere in between, but the ones which make use of the cognitive skills we have built-in, or which developed over time in the course of being human, rightly may be called more intuitive than their counterparts which force the user to adapt to the interface.

  157. The Monster:

    “What most people mean when they say that an interface is “intuitive” is not that one need not be trained to use it; they mean that the training required has already been done in some other context. ”

    The training has often been done in contexts outside computers. The idea of a “button” came from everyday appliances like VCRs or washing machines. The idea of a “menu” came from restaurants. Icons? As I wrote before, essentially hieroglyphic alphabets. We are trying hard to build software that resembles anything but software. This is, I guess, OK for a while; Ford’s first tractor had reins, and rifle-equipped soldiers kept spears around in the form of bayonets for centuries. Sooner or later, though, users will have to be educated into using software as software. Not sure if it has to happen anytime soon though.

  158. @Ken Burnside: The F-15 has never had a worse thrust-to-weight ratio than the F-16 during either aircraft’s development. The dry thrust-to-weight ratio for a bare F-16 has always been under 1. The F-16 entered service with the F100-PW-200 series engine, which was a minor update from the F-15A’s F100-PW-100 engine; both had slightly under 15,000lbs dry thrust at full military power, enough to give the F-15A a thrust-to-weight ratio of just over 1 (based on empty weight of the airframe at 28,000lbs. F-16As were approximately 16,000lbs empty). The F100 did have reliability issues in its early versions, but that affected both the F-16 and F-15. The F-14 was a different story as the TF-30 never delivered on reliability or power promises (yet another failed legacy of TFX).
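
    A quick sanity check on those figures, remembering that the F-15 mounts two F100s to the F-16’s one: roughly 2 × 15,000 lbf ≈ 30,000 lbf of dry thrust against a ~28,000 lb empty F-15A gives 30,000 / 28,000 ≈ 1.07, just over 1, while the single-engine F-16A gets about 15,000 / 16,000 ≈ 0.94, under 1.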

    The F-16 was never deployed by the USAF as a primary air combatant; the F-15 always held that role due to its larger range, more powerful radar, larger weapons load, and overall better performance. The F-16 was initially deployed as a replacement for long-obsolete Century series aircraft in the continental US (mostly replacing F-106s) as well as backup for the F-15 fleet; it wasn’t until it got its multi-role capabilities that it really came into its own as anything other than a cheaper alternative to the F-15 for US use.

  159. @Monster:
    “I feel compelled to point out that the only truly intuitive interface is the nipple. That is to say that most babies are literally born able to operate that interface, with no training required.”

    So you don’t have any children then? Sometimes newborns figure the nipple out on their own, sometimes they have to be “taught”. This is especially true if for some reason they are first fed from a bottle, and then given the breast.

    @Shenpen:
    The bayonet wasn’t there because Soldiers were familiar with the spear; it’s because early guns had a REALLY slow rate of fire, and it’s hard to get horses through a prickly hedge of pointy bits. After a time they were kept because they are REALLY fucking useful when your resupply doesn’t work. Like those Tommies in Basra in 2004. (http://www.liveleak.com/view?i=0bd_1249524865) (Basra is a f’ing shithole BTW). The Marines still do bayonet drill in boot camp; it helps build a certain aggressiveness and refusal to quit that is the trademark of the devil dogs. The Army, being about 1/2 pussies (not the Combat Arms fields, the OTHER half), gave up bayonet training in favor of more physical exercise. Which means more running.

    We’re turning into the French. But I digress.

  160. The idea of a “button” came from everyday appliances like VCRs or washing machines.

    The “VCR flashing ’12:00′” cliche is as old as VCRs themselves. I can’t believe you typed that in a discussion of “intuitive interfaces” without collapsing into a fit of hysterical laughter.

    “I feel compelled to point out that the only truly intuitive interface is the nipple. That is to say that most babies are literally born able to operate that interface, with no training required.”

    So you don’t have any children then? Sometimes newborns figure the nipple out on their own, sometimes they have to be “taught”.

    I have two daughters, each of whom now has children. Monsterette 1 breastfeeds, Monsterette 2 uses bottled formula. The Bride of Monster and I have had discussions with them about how well that works out.

    Do you realize that the statement “most babies … no training required” is not in the least bit contradicted by “sometimes they have to be ‘taught’”?

    You might have raised this question: “If most babies need no training, then some do. Is the nipple therefore intuitive to those babies?” Well, of course it isn’t intuitive to those who need training. That’s tautological. Any statement about whether a particular group of people finds something intuitive is statistical in nature; flatly declaring that something is intuitive implicitly includes “to most of _____”. Saying “apples are red” is not invalidated by green and yellow apples. (However, “all apples are red” is flat wrong.)

    Also, a bottle has a nipple too. Taking the case of babies who have become accustomed to one kind of nipple having difficulty adapting to another nipple is not a commentary on whether either kind of nipple is intuitive. It just shows how we can be taught to the point where something seems intuitive when it is in fact the product of our earlier learning, and things that would otherwise not require any training now require us to unlearn something we were previously taught.

  161. Thanks. I’m trying LogMeIn Hamachi since it’s also free and was very easy to understand and set up. (We are talking brain dead simple.) I’m looking into OpenVPN now. How do you tell if your VPN is working? Hack your own network?

    Yours,
    Tom

  162. LogMeIn Hamachi implements this:

    VPN endpoint 1 ( LogMeInHamachi software ) > Laptop / Desktop CPU > Laptop / Desktop Wifi /NIC > Wifi / Wired router > Cable / Telco modem > Cable / Telco infrastructure > The Internet > VPN endpoint 2 ( LogMeInHamachi hardware ) > The Internet

    That’s pretty good for free for non-commercial use. Your Cable / Telco can’t even snoop.

    I’m using it now, and I don’t see any particular performance degradation.

    Yours,
    Tom

  163. How do you tell if your VPN is working? Hack your own network?

    Silly question. Try connecting from outside your LAN. If you can see internal services, it works!

  164. > Silly question. Try connecting from outside your LAN. If you can see internal services, it works!

    Well, that tests the V (Virtual), which, on my home network I’m not worried about, although it is a good test to run. I’m more worried about the P (Private).

    Yours,
    Tom

  165. What would the impact on Android be, especially vs. the iPhone, if Google’s ad revenue were declining?

    Observe Google’s primary Achilles heel: the often unattainable simultaneous maximization of both conversion rate (CR) and click-through rate (CTR), or the fact that paid and unpaid ads are not co-mingled. Eliminating this tension derives from a fundamental epiphany.

  166. I was wrong about LogMeIn Hamachi. It works entirely between the computers I own, independently of my WiFi router. I was hoping to protect the connections between the computers and the WiFi router using the VPN, but it isn’t happening. When I connect my work laptop to the work VPN it takes over all my network connections to the internet and we only reach the internet through the work gateway. That’s the model I was hoping for.

    Yours,
    Tom
