Contemplating the cute brick

Some years ago I predicted that eventually the core of your desktop PC would morph into a physically tiny compute engine that would merge with your smartphone, talking through standard ports and cables to full-sized peripherals like a keyboard and a flatscreen too large to be portable.

More recently I examined the way that compute bricks – small-form-factor fanless PCs running low-power chips – have been encroaching on the territory of traditional tower PCs. Players in this space include Jetway, Logic Supply, Partaker, and Shuttle. Poke a search engine with “fanless PC” to get good hits.

I have a Jetway running production in my basement; it’s my Internet-facing mail- and web-server. There’s a second one I have set up with Devuan that I haven’t assigned a role to yet; I may use it as a backup host.

These compute bricks are a station on the way to my original prediction, because they get consumers used to thinking of their utility machines as small compute nodes attached to human-sized peripheral hardware that may have a longer lifetime than the compute node itself.

At the lowest end of the compute-brick class are little engines like the Raspberry Pi. And right above it is something slightly different – bricks with a fan, active cooling enabling them to run the same chips used in tower PCs.

Of course the first machine in this class was the Apple Mac Mini, but it dead-ended years ago for reasons that aren’t Apple’s fault. It was designed before SSDs were really a thing and has spinning-rust-centric design assumptions in its DNA; thus, it’s larger, noisier, and waaay more expensive than a Jetway-class brick. Apple must never have sold very many of them; we can tell this by the fact that the product went four years between refreshes.

On the other hand, a couple days ago I dropped in a replacement for my wife’s aging tower PC. It’s an Intel NUC, a brick-with-fan, but unlike the Mac Mini it seems to have been designed from the start around the assumption that its mass storage would be SSD. As such, it achieves what the Mac Mini didn’t quite; it opens a new front in the ephemeralization wars.

Perhaps I shouldn’t say “new”, because Intel has been shipping NUCs for about five years. But I didn’t really understand what Intel was doing until I actually eyeballed one – and discovered that, as well as having a case design that is almost absurdly simple, it’s really pretty. Like, high-end stereo equipment pretty.

The comparison nobody but a geek will notice: my Jetways have something like ten case screws each, and then if you need to change out the SSD you have to deal with six more on a set of detachable rails. The NUC gets away with just four much larger case screws, which double as posts for its rubber feet. Inside, the SSD sits in a fixed drive bay that’s positioned so you never have to move it and doubles as a guide so you literally cannot engage the SATA connector incorrectly or off axis.

What an end-user will notice is the dark-silver anodized finish on the body, the contrast with a top that looks like black glass, and the nice rounding on the corners.

I mean, damn – somebody did a brilliant job of industrial design here, combining easy teardown for servicing with being a pleasure to look at. It’s hard to imagine even Apple, notorious for its attention to the surfaces of tech, doing better.

That makes the comparison to Jetway-class compute bricks rather stark. Despite occasional twitches at a “media center” look, they mostly have DNA from industrial-control machines and all the aesthetic appeal of a mining truck.

If you wonder why I’m focusing so much on appearance, it’s because having spent engineering and manufacturing budget on pretty does not quite square with the NUC’s official positioning. Somebody had to defend that nice finish against the usual headwind of “reduce bill of materials to the bone”.

NUCs ship barebones – processor on board but no DRAM or SSD – because Intel has a bunch of non-compete agreements with the PC manufacturers it sells chips to, intended to keep it out of the consumer PC business. The official line is that the NUC is a “development platform” intended to showcase Intel’s newest CPU chips and graphics hardware. A trustworthy source informs me that its other function is to be dogfooded – Intel issues NUCs to its own developers because it wants to avoid the you-don’t-really-know-what’s-in-there effects of ruthless cost-cutting in the PC supply chain.

Neither use case explains why they made it pretty.

Now, I could be overinterpreting this. It could be somebody just slipped the nice details past the beancounters and it doesn’t actually mean anything strategic. But if I’m right…

…here’s what I think Intel is doing. I think it’s positioning itself for the smartphone-as-portable-compute-node scenario I sketched at the beginning of this blog post. Intel’s planners aren’t stupid; they know there’s a low-power revolution underway in which ARM and Atom are likely to shoulder the “classic” Intel/AMD architecture aside. They may not be ready to think about making smartphones themselves yet, but they want something to compete with when compute-node-centered peripheral clusters start to displace tower PCs.

Why am I not talking about laptops as the doom of the PC? Because, as I’ve pointed out repeatedly before, the ergonomics of laptops actually suck pretty badly. The biggest deal is that you can’t put a bigger display on a laptop than a person can comfortably carry; the craptasticity of laptop keyboards is an issue, too. Sure, you can close the lid and plug in better peripherals, but now what you have is a compute node that is stupidly heavy and expensive for its job.

Fundamentally, carrying around a display with you is an unstable hack that made sense in the past when that hardware was rare and expensive, but not when every hotel room has an HDTV and even airline seatbacks are growing displays. OK, maybe a screen that’s tablet or smartphone-sized makes sense to carry as an occasional fallback, but we’re rapidly moving towards a world where a compute node in your pocket and a USB-C cable to local peripherals will usually address both PC and laptop deployment cases better than hardware specialized for either.

I say “usually” because there are special cases at the high end. All the other towers in my house have been replaced by compute bricks, but the Great Beast is still the Great Beast. Tower cases retain advantages when really hotrodding your ride requires component modularity. But that’s a 1% case and going to get rarer.

The NUC gives Intel cred and a bit of early-adopter visibility as a consumer-facing compute-node maker. Which is going to be handy when the PC and laptop markets crater. There Intel will be, smiling a disarming smile, with a NUC-descended compute node in its hand, saying “Psst…and it’s pretty, too.”

The first signs of the PC market cratering are already past us. Everybody knows sales volume is down from the peak years of that technology. The common mistake is to think that the laptops that have been eating its lunch so far are a stable endpoint rather than, themselves, a transitional technology.

The truth is that in late 2018 conventional PCs are like Wile E. Coyote running on air. When I buy one I’m mostly getting metal and bulk. The motherboard is physically designed to host a bunch of expansion slots I’ll never use because the things cards used to do have migrated onto the mobo. The actual working parts don’t take up any more volume than a compute brick. They used to, before thumb drives turfed out DVDs and spinning rust got tiny under laptop pressure only to be mostly displaced by SSDs. But today? The main constraint on reducing the size of a computer is that you need surface for all the ports you want.

I think it’s likely that the last redoubt of the PC will be gaming rigs. We’re already at a place where for 99% of consumers the only real reason to buy a tower case is to put a really bitchen graphics card in it, the kind that has a model name like MegaDoomDestroyer and more fan capacity than the rest of your computer and possibly your next-door neighbor’s computer put together. Your typical home-office user would be, like my wife, better served by a cute brick.

But the MegaDoomDestroyer will pass, too. The polygon arms-race will top out when our displays exceed the highest resolution and frame rate the human retina can handle. We’re already pushing the first and the second is probably no more than two Moore’s Law doublings away. After that all the NRE will go into lowering footprint; on past form we can expect ephemeralization to do its job pretty quickly.

The laptop collapse is further out – harder to see from here. Probably a topic for a future post.

234 comments

    1. >Some of the larger NUC models have room for a 2.5″ laptop hard drive.

      My wife’s is one of those. It’s the i3 spin with the tall case; I put a 500GB SATA SSD in the bay.

      The choice not to go with a faster CPU was deliberate. Her computing needs are relatively light, so I optimized for lower heat and noise.

  1. Raspberry Pi replaced ITX XP computers a few years back when we had to ‘upgrade’ because of the ‘end of life’. Saved a fortune in new Windows licenses, powered from the USB port of the screens they were running, and left big holes where the ITX boxes used to live …
    Currently looking at replacing the ITX boxes driving the CNC gear with a ‘credit card’ solution.

  2. One fly in the ointment:

    I, at least, am WAY more reluctant to plug random peripherals into my computer now that they can “helpfully” auto-install drivers (which often run at a privileged level).

    This is *probably* not a problem with Linux, but it’s a major issue with some others.

    Then there are the various malignant USB “physically toast the machine it’s plugged into” tricks.

    Then there are keyloggers (and analogs for video displays).

    I can see this strategy working for a person going between home and office (in fact, I did just that with a MacBook Air for several years — full-sized monitor, keyboard and mouse at home, and likewise at the office), but there’s no way I’m ever going to feel comfortable connecting my machine to peripherals in some random hotel room.

    1. >I, at least, am WAY more reluctant to plug random peripherals into my computer now that they can “helpfully” auto-install drivers (which often run at a privileged level).

      The correct answer: Don’t use shitty insecure operating systems.

      >Then there are the various malignant USB “physically toast the machine it’s plugged into” tricks.

      I don’t think that’s relevant at all here. If I plug my USB-C cable into the docking hub in my hotel room, I’m using their screen and their full-sized keyboard, but not their USB slots. I have USB slots on my compute hub.

      >Then there are keyloggers (and analogs for video displays).

      That seems to me like the only threat vector you’ve brought up that needs to be taken seriously. But the mitigation measures aren’t complicated, either. If you fear keyloggers, carry a keyboard and plug it directly into your compute node and you’re still winning by having more screen real estate than a laptop display at zero carry weight. As for video loggers, you defeat the worst attacks through those by the standard practice of echoing credentials masked by asterisks.

      1. I don’t fear state actors subverting random hotel room hardware–I fear common thieves subverting random hotel room hardware. Trust between the HID and the host CPU is a serious security problem even when both endpoints and the wires between them are inside monolithic smartphones.

        I’ve never been a fan of full-sized PC keyboards. I grew up on the diversity of 1980’s keyboards, and I prefer the keyboards of smaller laptops over the standard 104-key keyboard. So I’d probably bring my own keyboard along by preference.

        1. @Zygo: I don’t fear state actors subverting random hotel room hardware–I fear common thieves subverting random hotel room hardware.

          Concur. Getting money is a far more common motivation than politics in most places. I’ve interacted elsewhere with the tin foil hat crowd who are convinced MS is in league with the NSA and left back doors so the NSA could snoop on their computers. All I could say was “You wish you were important enough for the NSA to be bothered to snoop on your machine. The folks who can do that have far better things to do. The only way the folks you go on about would notice you exist is as something they scraped off their shoe after stepping in the wrong place. You aren’t important, you don’t matter, and nobody cares what you think or do. Deal with it.”

          I’ve never been a fan of full-sized PC keyboards.

          What you grow up with is what is right and proper to you. I’ve used a number of keyboards. The problem is muscle memory. If I switch, productivity plummets, because keys are in different places on the new KB than the old one, and muscle memory is wrong.

          My current KB on the desktop is a Logitech USB model, and I have a smaller Logitech Portable USB model that travels with me, but my speed still drops when I shift to the Portable. It’s bearable because I don’t do all that much typing when traveling, and most stuff waits till I’m back home.

          >Dennis

            1. I grew up with a squishy atari 130xe keyboard. I’ve recently used one again when repairing my old 130, and I have no nostalgia at all for the keyboard itself. It’s an awful experience. I should probably learn how to use a 40% keyboard at some point, but I’m currently using a full sized Lightstrike keyboard with the number pad on the left.

              1. Slightly OT, I suppose, but I can’t quite understand the modern love affair with 40%, 60% and navless designs. I could just about deal with a TKL, but to take away my arrows, nav cluster, F keys, etc? Yikes, no! If I could get a Sun Type 5 with either Matias Click or Kailh Box Navy switches, I’d do it in a New York minute.

                Are they really more efficient by some metric other than desk real estate? What’s the draw?

                1. >Slightly OT, I suppose, but I can’t quite understand the modern love affair with 40%, 60% and navless designs.

                  I don’t get it either, so I’m hoping someone will offer up a sensible explanation.

                2. Pure action gamers use the WASD keys for navigation so the numeric keypad and navigation keys are wasted on them. They might prefer a smaller keyboard because it’s more lap friendly, and they’re not touch typing so typing reflexes aren’t an issue.

                  Gamers who play things like MMOs that have a text chat element are more likely to prefer a full size keyboard with navigation keys. The WASD keys are not available for navigation because you use them for chat, so the arrow keys are useful. They still may not care about the numeric pad.

                  1. On the MMO side, it’s still WASD. Most games you hit enter to start typing into chat and hit enter again to send your message and automatically toggle back to movement and abilities on your keys.
                    WASD are far from the only keys used continuously for both playing and chatting. I have actions bound to 8 function keys, 13 keys in the number row, tab, and every key that would be touch-typed with the left hand. Some of these keys are re-used with shift and ctrl modifiers as well.
                    The toggle to chat does occasionally mean your teammates will get a message that reads wwwwwwwwwwwwww.

                    1. The toggle to chat does occasionally mean your teammates will get a message that reads wwwwwwwwwwwwww.

                      That’s okay. The Japanese players will just think you’re laughing.

                3. From how it was described to me, the point was that instead of moving your fingers to your function and navigation keys, you use chording to move the function and navigation keys closer to your fingers. So you might have a button that you press that replaces part of your space bar that turns 1-9 0 -= into F1-F12. You might have another button you press that turns part of the rest of the keyboard into arrow keys or a number pad. All to reduce wrist strain. The size is an added benefit since you can toss your keyboard in your backpack and take it with you easier.

                  One thing I was looking at was if I ever built a small PC inside a retro computer case, the Atari 65xe case doesn’t have function keys or a separate navigation block, so a 40% or 60% style layout would let me use an SBC Linux PC in one without having to carve extra key cutouts into the shell.

                  I’ve been tempted by stuff like this in the past, I’ve been trying to find a GOOD version of the touchpoint mouse on a desktop keyboard to keep from moving my arms off the keyboard as much. The problem is the main Lenovo desktop keyboard for sale right now has a terrible port on it with no strain relief, so I don’t want to rely on that keyboard.

                  1. From how it was described to me, the point was that instead of moving your fingers to your function and navigation keys, you use chording to move the function and navigation keys closer to your fingers.

                    Chording is a fast track to RSI. That’s why you have modes and use hjkl to navigate. Doy!

            2. Absolutely! As I’ve said many times… it’s as reliable as a Colt .45 ACP and never gives trouble. Well worth the additional size – it will always have a place on my desk surface no matter how small the box that does the heavy lifting.

          1. I don’t think they’re planning on monitoring my machine. I do think it plausible, however, that having a universal “in” allows them to target whoever they care to more easily, without it having to be a specific effort. In that scenario, they may monitor my machine just because it’s possible and/or someone got bored.

          2. Concur. Getting money is a far more common motivation than politics in most places. I’ve interacted elsewhere with the tin foil hat crowd who are convinced MS is in league with the NSA and left back doors so the NSA could snoop on their computers. All I could say was “You wish you were important enough for the NSA to be bothered to snoop on your machine. The folks who can do that have far better things to do. The only way the folks you go on about would notice you exist is as something they scraped off their shoe after stepping in the wrong place. You aren’t important, you don’t matter, and nobody cares what you think or do. Deal with it.”

            Have you been paying any attention at all post-Snowden? Total surveillance is the goal — and the easier surveillance becomes the less important you have to be in order for the authorities to take an interest in surveilling you. Even if all the authorities want to do themselves is get money — think of all the shiny new SUV squad cars and precinct-breakroom espresso machines street light cameras have paid for over the years, despite having been proven not to be cost-effective at reducing road accidents.

          3. “You wish you were important enough for the NSA to be bothered to snoop on your machine. The folks who can do that have far better things to do.

            This is a very 1980s attitude.

            The NSA measures its compute and storage by the *acre* and sieving through data is what computers DO.

            It’s not whether I am important enough to them to spy on, it’s whether they have the bandwidth to analyze my traffic.

          4. > productivity plummets,

            And that muscle memory is a *bitch*. I used to be able to type 80WPM with an 86-key IBM keyboard. The best I’ve ever managed on the 104-key is 25, and that was probably a fluke.

            And I’ve been using the 104 exclusively for eight years now…

        2. I don’t fear state actors subverting random hotel room hardware–I fear common thieves subverting random hotel room hardware.

          If you go to China and you work for a company with valuable trade secrets, expect your hotel room to get black-bagged. Any serious company will equip their employees with burner laptops for overseas business trips for this reason.

          1. As a physician at a university hospital, I am now forbidden to carry my institutional laptop or cell phone outside the borders of the US. For trips overseas they issue a laptop and cellphone that are wiped on return. The laptop has a locked down image that disables the USB ports and only allows access back to a virtual environment on campus by a VPN.

          2. A buddy took a backpacking trip to Mongolia (details fuzzed a bit).

            This required transit through the People’s Republic of China.

            They noticed that there were parts of the bathroom mirrors that didn’t get fogged…

    2. There is a project that adds USB device whitelisting to Linux. The whitelisting prevents USB devices from emulating additional devices after you plug them in. Overall it’s similar to the way Bluetooth access controls work.

      https://usbguard.github.io/

      It’s not on by default in large part because people without PS/2 keyboards would have a hard time getting their keyboards whitelisted.
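
      To make that concrete, here is a rough sketch of what USBGuard’s rule language looks like (the device IDs and names below are made-up examples, not a recommended policy): each allowed device gets pinned to its vendor:product ID and to the interfaces it presented when whitelisted, so a device that later tries to expose an extra keyboard or storage interface no longer matches its rule and falls through to the daemon’s implicit policy (typically block).

        allow id 046d:c31c name "USB Keyboard" with-interface { 03:01:01 03:00:00 }
        allow id 0781:5583 name "USB Flash Drive" with-interface 08:06:50

      The project’s usbguard generate-policy command will emit rules like these for whatever is currently plugged in, which is the usual way to bootstrap a whitelist without locking out your own keyboard.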

      1. There’s another one available as a kernel module that will let you do things like overwrite the disk if it notices that a specific USB device is not plugged in either on boot or at any time thereafter.

  3. Agreed but with laments. We could do so much more computing with say R and a Linux cluster on the desktop. What we have is high resolution displays and tremendous editing power replacing the wallet full of baby pictures.

    Dr. Pournelle’s pocket device access to any answer known led to more informed decisions in fiction. We are surrounded by less informed decisions. We have pocket devices. Time was it took a very full house to have a terabyte storage, something to brag about, now it’s a terabyte in the pocket.

    Folks aren’t actually doing serious computing with all the cheap readily accessible power. Long ago the Club of Rome included their model in the back of the book.

    I’d like to amuse myself with the models for climate change/global warming.

    It would be interesting to look at medical success and failures in the United States compared with the rest of the world in conjunction with legislative efforts. In my neighborhood just west of Yellowstone Park we lose a lot of young people who kill themselves off road in the summer mostly on sand buggies in the dunes and in the winter from avalanche and exposure. Working Obama Care wouldn’t change that. I can’t readily get the data.

    The prediction is correct because the prediction is the easiest way to meet previously expressed desires. Be nice if the efforts by Bill Gates, our host and so many others to give everybody personal storage and CPU and power had expanded the market to what it might be.

    1. We have devices in our pocket that can access the entirety of human knowledge, and what do we do with them? We look at porn and pictures of cute cats.

      I wish I knew who said that first, but it’s true. The devices don’t make us any smarter.

      1. > We look at porn and pictures of cute cats.

        Vice is *always* the first street use of a new technology, from wood block printing (playing cards, which is to say gambling, which led to the rise of the middle class) to internet porn.

        1. In the mid-1980s I was on a panel to discuss “virtual reality.” I got uninvited when I forecast the first profitable market for VR would be porn.

    2. >What we have is high resolution displays and tremendous editing power replacing the wallet full of baby pictures.

      I’ve learned to be philosophical about this. Technology exists to serve human needs. I don’t get to tell people what their needs are; my job as a technologist is to create possibility, not predetermine what people do with it. Yes, many uses will be trivial because there’s this bell curve – rather than fashing myself about that I prefer to evaluate by the best uses of a technology that wouldn’t have been possible before.

      1. To borrow from WC Fields, “I spent half my terabytes on porn and on kittens with ironic captions, the rest I just wasted.”

      2. Moreover, trivial needs often create a technology that can be used for non-trivial ones.

        For example, graphics cards are mostly used for gaming. But thanks to their existence, we can now run computationally heavy tasks cheaply, like training deep machine learning models.

      3. Agree. Like the prediction that self-driving cars will lead to fewer DUIs and more teen pregnancy. Creators of new technologies aren’t necessarily the ones to predict the paths of least resistance their uses will follow.

    3. … “might be” for what market?

      “A linux cluster supercomputer running R” is something basically nobody wants on their desktop, practically speaking. It’s the nichest-of-the-niche.

      (Oh, some people in the sciences or engineering, sure, but that’s a tiny market … and more likely to have that or something like it available already.)

      Everyone else, if they ever need massive computing capacity, is almost certainly going to be better off just spinning up an AWS or Azure instance to handle it.

      I just checked Amazon – a “c5.18xlarge” instance on EC2 gives you 72 Xeon vCPUs at 3 GHz – for $3.06/hour on-demand, available with per-second, only-as-used billing.

      GPU clusters are also available, competitively, and MS and others also offer comparable solutions.

      It’d take a LOT of local computing to make that not cost effective, especially including power and cooling costs for a “cluster on your desktop”, and depreciation on your own dedicated computanium stores.
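
      To put a rough number on that (a back-of-the-envelope sketch in Python; the $15,000 workstation price is a made-up placeholder, not a quote for any real build):

        # Hours of on-demand c5.18xlarge rental that equal the up-front cost
        # of a hypothetical comparable local workstation.
        ec2_hourly = 3.06       # USD/hour, on-demand price quoted above
        local_box = 15_000.00   # USD, assumed cost of a comparable 72-thread box
        hours = local_box / ec2_hourly
        print(f"{hours:,.0f} hours, or about {hours / 24:.0f} days of continuous use")

      That works out to roughly 4,900 hours – about 200 days of running flat-out before the rental bill catches up with the hardware alone, never mind power, cooling, and depreciation.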

  4. @esr “The main constraint on reducing the size of a computer is that you need surface for all the ports you want.”

    And it’s already within reach where the only connection needed might be a single USB-C port (along with the already ubiquitous WiFi and Bluetooth).

    The price/performance on USB-C hubs/docking stations prolly needs to improve a bit yet. But it could already be done.

    1. When USB-C was announced, I knew it was the Holy Grail needed to make ESR’s convergence work. It’s non-proprietary, and can handle enough bandwidth to drive two 4K monitors and still have plenty left over for data needs. It thus satisfies the requirements for “universal docking stations”.
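
      A quick back-of-the-envelope check on that bandwidth claim (raw pixel rate only – this ignores blanking intervals, protocol overhead, and any compression):

        # Uncompressed pixel rate for two 4K/60 Hz displays over one cable.
        width, height = 3840, 2160   # 4K UHD
        refresh_hz = 60
        bits_per_pixel = 24          # 8-bit RGB
        per_display = width * height * refresh_hz * bits_per_pixel / 1e9
        print(f"one display ~{per_display:.1f} Gbit/s, two ~{2 * per_display:.1f} Gbit/s")

      That comes to about 11.9 and 23.9 Gbit/s respectively, which fits inside the 40 Gbit/s that Thunderbolt 3 runs over a USB-C connector, with room left over for storage and networking.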

      1. >When USB-C was announced, I knew it was the Holy Grail needed to make ESR’s convergence work.

        Yes, it looks that way to me, too.

      2. Work recently got serious about security and is issuing laptops and (reasonable) policies to all employees. I chose the Mac route because the hardware that was on offer is better. I also got a USB hub to go with it. It drives my entire previous desktop setup: two monitors, USB keyboard, USB mouse (I like wires if I haven’t got a portability concern, they tend to work better), headphones, and power for the device. (Running at 1080p, but they could be running at 4k from a technical perspective.) My workflow is: Bring in the laptop, plug in the one cable, get to work.

        90%+ of the time, I don’t really care that the laptop has its own keyboard, mouse, and display. But in this use case, it’s also not that big a deal, and when I do walk away with the device it becomes useful. It is definitely nice that I have an integrated setup away from the desktop environment.

        (I run a heavily customized Ubuntu VM inside Parallels for my actual day-to-day interaction. If you watched me, it would be hours between times that you can tell I’m on a Mac. This ends up working out even better than I had hoped, on the whole, although there was a lot of “no, please send this keystroke to the VM rather than whatever MacOS thinks it should do” configuring before I had everything set up.)

    2. That’s exactly what the Nintendo Switch does. A single USB-C plug in the dock connects to a jack in the console itself, connecting it to power, HDMI, and peripherals.

  5. You’re overlooking a few things here:

    1) Consumer-grade SSDs are still not as reliable as spinning rust, and when they do fail, they fail hard. I’m typing this on a laptop whose SSD just decided to up and die one day, and currently running it off externally attached spinning rust. The same thing happened at the same time to my Nexus tablet. This problem may get worse before it gets better; electronics companies are known for driving costs down by cutting corners on quality and charging a premium for halfway decently made equipment.

    2) Pixel density does not establish a hard upper bound for GPU performance because we use GPUs for so much more than sweet graphics nowadays. MegaDoomDestroyers are super expensive and hard to come by in today’s market because prospectors are buying them all up to do cryptocurrency mining with. If you’re doing compute-intensive work like hacking AI, machine learning, or simulation, you are still best served by a tower case crammed with as much RAM and GPU silicon as will fit. Even sweet graphics still has huge headroom before we reach any sort of limit in demand; the next frontier is real-time raytracing, and Microsoft is already working on a DirectX standard to make it available to developers.

    3) Mobile computer users compute in other places besides “home” and “office”. People are still going to want to camp out in coffee shops, and it’s difficult to do that without a laptop or a hybrid device like the Surface. And, as Doctor Locketopus mentioned, I would be very hesitant to plug my computer into a strange port somewhere, especially with state actors taking such a high interest in harvesting people’s data. Laptop ergonomics may suck, but they’re the best we have for the kind of work we expect to do on something called a “computer” (as opposed to a tablet or phone).

    But yeah, the NUC is useful and adorable, a perfect computer for your mom or anyone whose use cases don’t extend much beyond office software, email, messaging, and basic web browsing. Intel even makes a MegaDoomDestroyer model of NUC so the kids can play Fortnite without stutter. I’ve thought of picking up one or two myself to experiment with.

    The vanguard here is the Nintendo Switch, a cute brick with a screen that’s radically reshaped the notion of what a home game console should look like. Snap it into the docking station (included) and you can play on your living room TV, or take it out and attach the included controller halves to the sides and you have a (largish, chunky) portable system. For general purpose computing, the ergonomics will be much harder to get right, but I think Microsoft is currently the closest with the approach they took for the Surface, which has really disrupted the laptop market. All they need is a box with a slot that you can drop your Surface into to connect it via USB-C to your stationary keyboard/mouse/monitor/etc. setup. Maybe the box will have a more powerful GPU in it as well.

    1. >Consumer-grade SSDs are still not as reliable as spinning rust, and when they do fail, they fail hard.

      Normal engineering problem solvable by free markets via a few tens of millions of dollars of NRE. Spinning rust used to be flaky with horrible failure modes, too.

      >If you’re doing compute-intensive work like hacking AI, machine learning, or simulation, you are still best served by a tower case crammed with as much RAM and GPU silicon as will fit.

      Granted. This is effectively the same 1% case as my Great Beast. Not impressed yet.

      1. @esr: >Consumer-grade SSDs are still not as reliable as spinning rust, and when they do fail, they fail hard.

        Normal engineering problem solvable by free markets via a few tens of millions of dollars of NRE. Spinning rust used to be flaky with horrible failure modes, too.

        Agreed. Back when, the wisdom among the tech crew I hang out with was “If you are going SSD, get the high priced spread and buy Intel, as the stuff most likely to be robust and not die horribly on you.”

        These days, I’m not that fussy. I saw torture tests posted online where even known low priced budget brands required petabytes of sustained writes before they would fail. The SSD in my desktop is a Crucial, but Samsung also makes quality kit. I just got an Inland 120GB SSD as an upgrade for an older machine. It cost me a whopping $30. Since what it will be used for is boot drive where OS and applications live, and it will be read from far more than it will be written to, I expect to replace the entire machine before I will even notice drive wear.

        There are about five outfits that actually make the NAND flash used, but they all have considerable experience. Everyone sources from one of them and puts their own name on the label in an OEM deal. More variance exists in controller circuitry, but that’s improving too.

        I’m sorry to hear of Jeff’s problem, but I don’t consider his experience representative. Someone has to draw the short straw, and he did.

        I’m not losing a moment’s sleep worrying about deploying SSDs.

        >Dennis

        1. >what it will be used for is boot drive where OS and applications live, and it will be read from far more than it will be written to,

          Provided that you’re using an OS where you can make sure your swap and temp files don’t live on the boot drive, yes.

          Even in Linux, it’s hard to get people to understand the rationale of the Filesystem Hierarchy Standard, separating things that change often from those that are only changed when an admin is doing an upgrade or reconfiguration of the system (or just one app on it).

          1. @The Monster:Provided that you’re using an OS where you can make sure your swap and temp files don’t live on the boot drive, yes.

            Or not. The above presumes there’s more than one drive in the system, which won’t be the case here.

            Linux normally uses separate partitions rather than separate drives, and swap tends to be a raw partition where the OS does low level disk access and avoids the overhead of a file system.

            Windows puts the swap in a pagefile that lives on the main drive, though if you have multiple drives (or partitions) you can place it elsewhere. I did that on my old home built desktop, putting the pagefile on a different drive than the boot drive.

            These days I normally don’t bother. SSDs attempt to evenly spread writes over the entire drive. Since they are NAND flash, with any cell accessible in the same amount of time, you don’t worry about stuff like disk fragmentation.

            For the sort of flash used in consumer SSDs, the write limit is about 10,000 per cell before it goes bad, the SSD is over-provisioned, and the firmware tries to transparently migrate data on a failing cell to a good one and mark the one failing as bad and not to be used, similar to bad block tables on spinning platter drives. How long do you think it will take for any individual cell to be written to 10,000 times? (I wouldn’t try to hold my breath waiting…)
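
            A back-of-the-envelope answer, deliberately rosy (perfect wear leveling, no write amplification, and the capacity and daily-write figures are just assumptions for illustration):

              # Idealized SSD wear-out estimate: total writes = capacity x P/E cycles.
              capacity_gb = 480        # assumed consumer SSD size
              pe_cycles = 10_000       # per-cell write limit cited above
              writes_gb_per_day = 20   # assumed desktop workload
              total_writes_tb = capacity_gb * pe_cycles / 1_000
              years = total_writes_tb * 1_000 / writes_gb_per_day / 365
              print(f"~{total_writes_tb:,.0f} TB of writes, ~{years:.0f} years at {writes_gb_per_day} GB/day")

            That’s thousands of terabytes and centuries of writes under these assumptions; even if write amplification and cruder flash eat a couple orders of magnitude of that, a desktop workload is still looking at years to decades.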

            On Windows and Linux, I have enough RAM that swap is almost never touched, and I could probably live without swap or a pagefile, though Windows complains if you try to do that.

            Temp files will still exist, but fundamentally, I’m not concerned. Even if swap and temp files are on the same partition on the boot drive, the uses for the machine still mean the SSD will be read from an order of magnitude more than it will be written to.

            Again, I’m not losing a moment’s sleep worrying about deploying SSDs.

            >Dennis

            1. And you can try to outsmart the system, and move temp files and swap/pagefile to a ramdisk.

      2. Normal engineering problem solvable by free markets via a few tens of million dollars of NRE.

        Your market fundamentalism is misplaced yet again. (“You can’t expect a man to understand a problem when his job depends on him not understanding it.”) All of the NRE in SSDs has gone into making them flakier (to achieve more storage density) and compensating for it with firmware and redundant cells. Someone upthread cited 10,000 writes for a single flash cell — it used to be on the order of 100,000. I suspect that the use cases SSD manufacturers are engineering for do not match my usage patterns, and are more along the lines of “storing family photos, documents, and game save files on a laptop that will be disposed of and replaced in three years”. More reliable SSDs may come along, but they will only be available in bulk to large enterprise customers — kinda like what is true of (standard dry-cell) batteries today.

        1. >Someone upthread cited 10,000 writes for a single flash cell — it used to be on the order of 100,000.

          I believe the lower figure for thumb drives, but not for SATA or M.2 SSDs. If they’d really pulled lifetime that low on the latter there would be a level of buzz about device failures that I’m not hearing.

    2. I work for a company that sells to customers that buy “statistically significant” amounts of storage. Their experience is that SSDs *already* have substantially lower failure rates than Spinning Rust, and many are switching over to all-SSD solutions because, even though the $/MB is higher, the management costs are much lower, because they're failing many fewer devices.

      That's not consumer-grade, but I think it's directionally significant, and I think you're just unlucky.

      1. @Doug:many are switching over to all-ssd solutions because, even though the $/MB is higher, the management costs are much lower, because they’re failing many fewer devices.

        I was fascinated by a contact elsewhere recounting migrating a server he was responsible for to all SSD. It was a database server running one of the noSQL DBMSes. He replaced 16TB of SATA HD with 16TB of 2TB Samsung SSDs.

        Performance got an order of magnitude better. The box screamed through DB queries and updates.

        He (correctly) wasn’t concerned about SSD reliability. His interest was improved performance, and he got it in spades.

        What I found significant is that the cost of SSDs dropped to the point where it was an affordable upgrade. An awful lot of stuff folks have wanted to do on systems didn’t happen not because it wasn’t possible, but because it was too expensive. As hardware gets steadily faster, smaller, and cheaper, a lot of barriers like that are going away, and I think we are seeing the tip of the iceberg of changes that will result because things are no longer too expensive.

        >Dennis

        1. >What I found significant is that the cost of SSDs dropped to the point where it was an affordable upgrade.

          I agree, this is a big deal. Bulky, power-hungry spinning rust (and the cooling requirements that went with it) was what was keeping the tower form factor somewhat interesting even after the incremental value of a motherboard large enough to host expansion slots had mostly vanished.

          But no more. Thus, the rising fortunes of Jetway- and NUC-class small systems.

          1. By my understanding, it was never the spinning rust that had the highest cooling requirements in a PC tower.

            I’ve torn down several brand-name towers. In many of those, the PSU fan, case fan and CPU fan were combined. In the ones that had a second fan, I’d see combined CPU/case & PSU or combined Case/PSU & CPU. Almost never have I seen a fan blowing across the hard drives. These brand name units are cut to the bone; anything engineering can’t convince the bean-counters is necessary for the unit to survive the warranty period isn’t there.

            ISTM that the only time spinning rust starts requiring active cooling is 1) high-performance drives with spindle speeds of 10k or 15k RPM, and/or 2) when you start packing drives tightly together.

            No, the biggest thing that’s been keeping the tower case format alive is the need for GPU cooling, IMO. The GPU will draw anywhere from 2-4x the power draw of the CPU. You need an un-small amount of surface area to dissipate enough of that heat to keep the GPU within safe operating temps.

            1. >No, the biggest thing that’s been keeping the tower case format alive is the need for GPU cooling, IMO.

              Actually you’re right about this. I had forgotten about GPUs because the kind of machine I specify and deal with is for job mixes that don’t involve 3D graphics. But may well involve high-performance drives.

            2. @esr: I’m the chap that mentioned an upper limit of 10,000 writes per cell on current consumer grade SSDs. (And it’s possible I’m relying on older information and they are higher now.)

              Yes, write limits vary depending on what you’re deploying and you can spend a lot more and get higher write limits.

              But my other earlier comment still stands. SSD firmware tries to evenly distribute writes across the entire SSD. How long is it likely to take any individual cell to be written to 10,000 times and become unusable?

              And over provisioning with spare cells is equivalent to spinning platter HDs with a bad block table and unused blocks that can be substituted if a block goes bad and gets marked unusable.

              I’d love to know what Jeff Read’s usage patterns were that the limitations bit him. I’d expect to see drive wear as graceful degradation, with visible evidence being slowly decreasing usable storage space as cells reached limits and were marked unusable. Catastrophic failure rendering the drive unusable I’d expect to have other causes.

              I still don’t see his woes as representative.

              >Dennis

              1. I still don’t see his woes as representative.

                You may not see it, but it happens. A lot.

                Like most of the SSDs deaths that we’ve had, this one was very abrupt; the drive went from perfectly fine to completely unresponsive in at most 50 seconds or so, with no advance warning in SMART or anything else. One moment it was serving read and write IO perfectly happily (from all external evidence, and ZFS wasn’t complaining about read checksums) and the next moment there was no Crucial MX300 at that SAS port any more. Or at least at very close to the next moment.

                As you said, wear is levelled across the entire disk. Meaning that assuming all cells are similar, when one cell goes, failure of the entire disk is imminent. (You see the same thing with RAID arrays; if you use the same make and model of hard drive throughout the array, a single disk’s failure means chances are pretty good you’ll have to replace more than one disk, possibly the entire array.) Maybe the firmware is not coded with that assumption. Maybe it outright lies in its SMART statistics. Maybe it’s just really hard to detect imminent failure before it happens.

                Whatever the case, I would love to replace my spinning rust with cheap, fast smart sand. I know it’ll happen someday. But I run a Slackware box, and much of the software I use, I compile myself. I don’t trust the smart sand that’s available at price points I can afford to last long, nor to fail gracefully, under such a storage-intensive workload (compared to that of the mythical Average User.)

            3. @Jeremy: ISTM that the only time spinning rust starts requiring active cooling is 1) high-performance drives with spindle speeds of 10k or 15k RPM, and/or 2) when you start packing drives tightly together.

              Like in server farms where you get boxes with 10K or 15K spindle speeds, stacked up in 1u form factors on racks?

              At one employer, I could tell the heat build-up was becoming a problem when I didn’t need to put on a sweater when I went into the server room. The heat exhausted by the cooling fans was warming the room. My boss had to get higher power electrical services pulled in and more A/C for the room.

              This is another area I expect drops in SSD prices to be beneficial. Those spinning platter drives with associated power requirements and cooling needs will start becoming history.

              >Dennis

  6. Laptops, NUCs, and phones are all transitional computers. Ideally, you have one device, and you carry it with you. When you need a “real” computer you attach your “device” to a screen and it stops being a tiny touchscreen and provides you an interface which speaks intelligently to a keyboard and mouse – something like Windows/Xfce/Azure – whatever you prefer. You might even carry a keyboard/mouse/screen rig in your car for Starbucks.

    But the big thing in this scenario is that your phone is just a phone, or maybe a module that you attach to your real computing device – the phone should do nothing more than maintain communication to whatever network you’ve selected – your phone should be as secure as you need it to be and not tell whoever wrote an app about everything you do.

    1. Ideally, your data is in the cloud and you can use any device that is handy (trust, of course, would be part of the ideal).

      Until then, I use a laptop, because when I am storm chasing I need the display in my car, and in cafes and in hotels that will modernize in a decade or two, not now. So I have a MacBook Pro for the quality hardware, OS and support, and I paid through the nose for it. I use it for software development when needed, general computing that needs a fair amount of horsepower, and everything else.

      Perhaps a couple of decades ago, when I was younger, a Linux box might have done the trick (were it available in the right form factor). These days I’m too old and too focused on other things and using too many applications to deal with Linux except where I need it, even if I find Linux to be the neatest idea.

  7. @esr: I’ve been predicting for a while that something like you suggest will happen.

    The technology gets steadily smaller, faster, and cheaper. An example is the current hullabaloo about the Internet of Things. We’ve had remote sensors and micro-controllers for some time. What’s different is that the chips involved are now powerful enough to be full multitasking CPUs, able to support a full TCP/IP stack so they can be on the Internet, and cheap enough that they can be deployed as sensors and controllers in an affordable manner.

    I tell folks we aren’t far from having a device that can be your smartphone while traveling, but when you reach your destination, you plug it into a dock with a large monitor, full sized keyboard, mouse, connection to a LAN, NAS storage, and broadband to the Internet, and it becomes your main computing device.

    The other piece to the puzzle is increasingly pervasive cloud computing. Your device is a controller. The data you work with is on data servers, and the work is done by compute servers. The amount you need to have locally is a fraction of former practice.

    Intel’s NUC is interesting, but I’d call it a proof of concept. Because they supply components to manufacturers, they won’t want to be in competition with customers by supplying fully equipped production ready devices. (And if I’m the customer, I probably prefer to add my choice of things like SSDs, so a bare bones box I can populate is a feature.)

    Intel’s big challenge is the CPU. ARM has effectively won in the mobile device space, because the scarce resource is battery life, and ARM CPUs are more power efficient than X86 designs. Intel has tried hard with their ATOM processors, but has not closed the gap, and I recall seeing elsewhere that development of ATOM chips has essentially ceased.

    ARM CPUs don’t have the brute power that the X86 chips have, but for the use cases they fill, they don’t have to. They are powerful enough. ARM also has a shot at the server market now that they have 64 bit designs available. Folks like Google and Facebook constantly building new data centers stuffed to the brim with 1u servers also have power concerns. Those racks full of servers use lots of power, and generate lots of heat. That both drives up power costs to drive them, and drives up the cost of the cooling required and the power to provide the cooling. The use cases for those servers mean that the brute power of X86 CPUs isn’t required, because additional demand is met by adding more servers.

    I think Intel’s challenge in the CPU market is power efficiency. Back when, Intel made ARM chips in their StrongARM division. They sold that off in a corporate reorg years back (to Freescale, if memory serves), so having CPUs that are power efficient enough for current market demand will be a challenge unless they gear up to make ARM CPUs again (and negotiate a new license with ARM, Ltd to allow them to do so), or solve the challenge of getting X86 power efficient enough to compete.

    I still have a desktop at home. I’m not at the point where a NUC works for me and what I do wants local storage. But I concur that gamers drive the high end of the PC market. I don’t see desktops going away any time soon because there is a huge corporate market that will be slow to switch over, because of the costs of doing it. And I’ve never been a big laptop fan. I can’t see using one as my main device simply because of the need for big monitor, full KB and mouse, and the weight is an issue when traveling. I currently have a 10″ Android tablet with portable USB KB as laptop replacement to provide something smaller and much lighter than what I used to lug. If I’m traveling, I’m doing other things besides working on the machine, and the tablet is fine for checking email and quick web searches.

    My phone is deliberately the smallest, cheapest feature phone Samsung makes. All it does is calls and SMS. That’s all I want it to do. Anything else is something else’s job, because the anything else simply needs a larger display than any practical phone can have. But I have encountered a number of folks for whom their phone is their computing device, because it can do what they normally need, and they don’t have to have a PC to function.

    >Dennis

    1. ARM CPUs don’t have the brute power that the X86 chips have,

      That will change in the next rev or two of Apple CPUs. Apple’s A12 chip already surpasses x86 on some benchmarks. Sometime in the early 2020s, x86-based Macs will probably be discontinued, as Apple’s own ARM silicon will be more powerful, more power-efficient, and developed in-house allowing Apple to evolve the hardware and software in tandem.

      1. @Jeff Read: > ARM CPUs don’t have the brute power that the X86 chips have,

        That will change in the next rev or two of Apple CPUs. Apple’s A12 chip already surpasses x86 on some benchmarks.

        I agree. But part of my point was that for the current ARM use cases, they don’t need to be as powerful as X86. They are powerful enough, and get design wins based on die size and power efficiency.

        >Dennis

        1. Given that probably 95% of the processing power of desktop machines is never used, it’s just a vast waste of energy anyway. You may need fast response times, but the lag due to internet messages is the main delay anyway. With faster internet, slower processors would be fine, using possibly a tenth of the power?

          1. Modern CPUs are actually quite good at doing nothing: when they’re not actively running code they enter a variety of low-power idle states, with deeper states using less power but taking longer to wake up from. If you look at the CPUs in some of the high-end smartphones, the single-threaded performance often exceeds that of desktop CPUs — and yet it can sit in your pocket, not getting hot, with decent battery life.
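
            On Linux you can watch this directly through the kernel’s cpuidle sysfs interface; a minimal sketch in Python (assumes a typical modern kernel that exposes /sys/devices/system/cpu/*/cpuidle):

              # List the idle (C-)states available to CPU 0, with entry counts
              # and total time spent in each.
              from pathlib import Path

              base = Path("/sys/devices/system/cpu/cpu0/cpuidle")
              for state in sorted(base.glob("state*")):
                  name = (state / "name").read_text().strip()
                  usage = int((state / "usage").read_text())   # times this state was entered
                  time_us = int((state / "time").read_text())  # microseconds spent in it
                  print(f"{state.name}: {name:<8} entered {usage} times, {time_us / 1e6:.1f} s total")

            On a mostly idle desktop the deepest state dominates the time column, which is exactly the “good at doing nothing” behavior described above.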

        1. Yeah, but where they go, the industry seems to follow. USB-C started coming out in Android phones first, but the peripherals didn’t start materializing until they appeared on Apple laptops. They’ve also seemingly ended the standard wars for wireless charging (finally) by choosing Qi for the iPhones, late to the game. They normalized the “notch” approach to edge-to-edge screens on phones. They popularized wedge-shaped thin-and-light laptops. Et cetera.

          When Apple makes ARM PCs cool, then the rest of the industry will take notice. I’d LOVE something like a Raspberry Pi in its size, standardization, and software support that was as powerful as, say, Apple’s A12X SoC. Heck, why isn’t the Pi 4 out yet, with USB-C and 4GB RAM? The RAM is really the big thing keeping it from being a usable desktop.

          1. Perhaps you mean USB-C with USB 3.1?

            As I understand, that still hasn’t been done because of some sort of per-item licensing cost.

      2. That’s kinda funny.

        Apple started out on 68k architecture, moved to PPC, which they partnered with IBM to design and “control” then moved to X86 when that…didn’t work as well as they wanted.

        Now they are moving to ARM developed in house, so they can control it.

  8. Samsung DEX. Dock and you have a 4k screen, keyboard, mouse, ethernet, etc.
    Note 9, Galaxy Tab S4.
    And they now have a beta to run a full Ubuntu Linux system on Dex.

    1. @tz “And they now have a beta to run a full Ubuntu Linux system on Dex.”

      Just shut up and take my money already.

  9. Apple actually sold a gazillion Minis. It was very regularly refreshed until the management changeover post-Jobs. The new management didn’t understand it, so they ignored it until they could no longer pretend and then did a pathetically bad job of the refresh (essentially stuffing a MacBook Air into it). That particular model sold quite poorly due to the lack of expandability (specifically no DIMM slots) and ULP CPU, but the new refresh going back to the original idea looks to be starting off well.

    Much the same happened to the Air itself and the Mac Pro. The Air has also finally seen a real update by a product team that understood the core market.

    1. >so they ignored it until they could no longer pretend and then did a pathetically bad job of the refresh

      Had the lead in this segment and fumbled it. That happens.

      1. Yeah, the mini sold well, it just didn’t sell as well as the iPhone. Since Jobs died, Apple has given very short shrift to anything that isn’t the iPhone or iPad, and that’s what stagnated the mini (and pretty much every other mac in the lineup too).

        I don’t know if it’s “fumbled” or just “didn’t care”.

      2. It’s practically Apple’s stock in trade. Seen it so many times from them. Including several of their ‘Successes’ like the iPhone where they’ve gone from market dominant to popular niche product.

    2. The new management didn’t understand it,

      Wrong. They understood that it sells in the thousands, while the laptops sell in the millions, and engineering time is a scarce resource. The Mac Mini is barely able to economically justify the attention it gets, and that’s why it gets refreshed so rarely. The Xserve and Xserve RAID products were much loved by their users (including internal users at Apple), but it just wasn’t worth the time and attention to keep them in production.

      1. >They understood that [the Mac Mini] sells in the thousands, while the laptops sell in the millions, and engineering time is a scarce resource.

        I think this is probably right. I’d have made the same call in Apple’s shoes.

  10. There’s definitely a flavor of professional that needs to use certain commercial software packages, where said packages often require a good graphics card, or other very specific qualities. I don’t know the insides of those packages well enough to predict what would happen in the case of the GPU changes you predict.

    One of my current paranoid suspicions is that we are on the edge of a political scare that could change the consensus on IT and information security.

    1. I use a Mac as my main machine, but way too much of both the electrical engineering software and the ham radio software requires Windows, or sometimes Linux, which I run on VMs. Sadly, the Mac is in third place with that sort of stuff, and the open source stuff is too often in one of several sorta-Linux distributions that do not coexist well in one Mac. For open source, Linux is probably king, but for my world, not adequate.

      1. Hams are cheap. Few are willing to pay the price of Apple hardware, so ham radio software doesn’t tend to get developed for Macs. What exists is mostly ports of software that was developed for Linux; since they’re both Unix-based it’s mostly not a difficult port. Software that uses audio ports is the major exception (that includes a lot of SDR software because QSD/QSE based SDRs usually connect to audio ports or present themselves to the computer as audio ports) because the APIs for that are different. Software that enables USB control of radios is another problem spot.

  11. I like the NUC, I have one from several years ago currently sitting with just a power cable and network cable running Ubuntu. It’s been other things in its time.

    But it’s obvious that for most people, for most purposes, the phone now does the job where in the not too distant past it would have been a laptop, and before that some sort of standalone PC.

    It’s interesting, and possibly a sign, that Adobe Photoshop is now on the iPad.

  12. There will always be a need for portable screens. I took my Surface Pro 4 with me to a trade show I was just at; and it’s basically a battery powered display with a NUC-grade computer spread across the back adding a couple millimeters of depth to the assembly. The laptop form factor may compress to that, but it’s not going away. Airlines are doing away with seatback displays in favor of telling the customer to bring their own (because the customer is already bringing their own). I won’t be surprised to see hotel rooms stop supplying TVs soon as well, for the same reason. Why buy those when your customers already have them? Why pay to maintain them, same reason? It’s just a headache they don’t need.

    And when I’m out and about, Panera or Starbucks isn’t going to put a screen and a keyboard in front of every seat on the off chance I need them. There are too many use cases where I’ll still need to bring a KVM with my compute box. May as well strap that compute box to my screen.

    1. There are still times when a big screen is useful, and hotel room stays are often one of them. How else is the couple – or everybody at the orgy – supposed to see the screen comfortably?

      What may go away are hotel-provided TV services other than perhaps their own screen for checking out and the like. I have seen a taste of that; a hotel that has smart TVs with the apps for Netflix, Amazon Prime, and Hulu built in, and an invitation to set up your personal account to watch. The cautious will remove their accounts when they are done, but if you forget the TV is set up to automatically wipe your records when you check out. Unlike those rental cars that have the pairing info for a dozen people’s Bluetooth phones in them…

      1. There is absolutely no fscking way I will enter my Amazon or any other account into hardware I do not control. And I’m not particularly security-conscious.

        Maybe there will be a market for “fleet” tvs with nothing but a USB-C port (and a locked-onto certified cable – I’ve heard the USB-C horror stories). Technically a monitor, but I don’t know how many people will be interested in buying a monitor that big for SoHo use.

        1. I did use my Netflix account there the past two years when I was there for Anime Boston. But I changed the password before and after. And it’s paid for by a low-limit credit card, so even if they somehow managed to get the full number out of Netflix (which normally isn’t possible) they couldn’t do a lot of damage. I don’t think the Netflix app on that TV even allowed access to the account control page; if you needed to do that you had to use some other device like your phone or laptop.

          I agree about not wanting to use my Amazon account at all. Much higher risk, since people can use that to order stuff at my expense.

  13. The RasPi is nice, but it requires frequent reboots if you let the web browser run for a few days with several open tabs. It’s also a pity it can’t run Netflix, which rules it out as an all-around compute brick. Kudos to Stephen Wolfram for packaging his Wolfram Language with the RasPi.

    About portable screens, haven’t they been promising us foldable screens for a while now? Imagine a screen like an ancient scroll, that you roll up, then unroll when you want to use it. We should be getting closer, right? Right?

      1. What machine these days runs without needing regular reboots? Every one of the sites where we have now got W10 running ( 32 bit to talk to legacy kit ;) ) has it configured to reboot overnight. My Linux desktop will only run for a few days before it plays up. My Samsung phone has rebooted overnight … and now wants another reboot to install the latest timezone update! And the Pi3 is no different ( and streams video quite happily so what is the problem with Netflix? )
      A roll up screen to go with the roll up keyboard would be nice …

      1. If your Linux machine consistently needs to be rebooted every few days, you’ve got either flaky hardware, or a shitty distro. Consider something more reliable like Debian, Devuan, or Slackware.

        And the Pi3 is no different ( and streams video quite happily so what is the problem with Netflix? )

        Netflix has proprietary DRM that probably requires use of a TPM or other cryptographic hardware the Raspberry Pi does not supply. Support for DRM is a non-negotiable requirement for any consumer-grade hw/sw that expects to be considered “halfway decent” — that’s why Web standards have been drafted for it.

          1. I’m still using openSUSE, and the problem is not hardware, but crappy and flaky software! KDE is next to useless these days, and Gnome is not much better. I’d LOVE to get back to a simple desktop that simply works, as the reboots are invariably down to the security fixes that every distro pushes out daily these days. It’s probably getting to the point where I should switch off the machine when not using it, but then we get the Windows problem … my laptop invariably takes a long time to boot if it’s been off for a few days because of the updates! THAT is why the sites schedule reboots for early morning, so there is some chance the machines are working when the public start coming in …
            That DRM does not work with some hardware is a fact of life, but there is support on the Pi … it’s back to the software of the distro again! Have to say I’ve not looked recently; all my Netflix stuff is on the local media server and just works on the Pi TV.

          1. I’d LOVE to get back to a simple desktop that simply works

            There’s still XFCE, LXDE, and all the bare window managers without “desktop environments” available in all the major distros. If you use Linux heavily, I recommend (if you haven’t already) that you learn to work without a desktop environment and operate your computer primarily through the shell. Or, failing that, switch to Mac — or to Windows with WSL. Open-source software simply cannot get its shit in one sock when it comes to developing a consistent, easy to use, and pleasant UI.

            Back when GNOME was in the run-up to version 1.0, a lot of people on Slashdot were excited for it. Somehow, my instincts told me “stay the hell away”, and I managed to avoid/get by without GNOME for years hence. I really called that one; GNOME somehow manages to be slower and buggier than the Windows desktop.

            As to the security update problem… Slackware doesn’t update automatically. Debian and Ubuntu allow you to disable unattended-upgrades, so if you know what you are doing and are willing to assume the risks, you can set the computer to update less frequently, or not at all unless you trigger it. But the safest option is to suck it up and accept the automatic updates.
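
            If it is the unattended-upgrades package doing the updating, turning it off is quick; a minimal Debian/Ubuntu sketch (the dropped-in filename is arbitrary):

              sudo dpkg-reconfigure unattended-upgrades    # answer "No" to stop automatic upgrades
              # or switch the periodic job off directly:
              echo 'APT::Periodic::Unattended-Upgrade "0";' | sudo tee /etc/apt/apt.conf.d/99-no-auto-upgrades
              # updates then happen only when you ask for them:
              sudo apt update && sudo apt upgrade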

              1. I was on KDE for a long time, and this IS the desktop machine; all the servers are safely text-only ;) Yes, I should probably try yet another window manager. GNOME is a pain in the arse, slow and second-best, simply because KDE will not even run my 4-screen setup :( Updates only happen when I kick them, but invariably finish with needing a reboot. The question is just what I can use that copes with four 1920×1200 screens and lets me run them as one flat desktop, the way Win98 did out of the box all those years ago.
                Perhaps today it IS better to have a computer per screen and let them do the cross-screen stuff over the network?
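
                For what it’s worth, stitching several outputs into one flat desktop is something xrandr can do regardless of which desktop environment is running; a hedged sketch (the output names DP-0, DP-1, HDMI-0 and DVI-D-0 are placeholders; run plain xrandr to see yours):

                  xrandr --output DP-0    --mode 1920x1200 --pos 0x0 \
                         --output DP-1    --mode 1920x1200 --pos 1920x0 \
                         --output HDMI-0  --mode 1920x1200 --pos 3840x0 \
                         --output DVI-D-0 --mode 1920x1200 --pos 5760x0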

              1. I’m using Xfce and I’m really happy with it. It’s nigh-infinitely tweakable and can easily imitate either a Windows or Mac desktop, or something else completely. My personal setup uses three panels, one set up as a windows-like taskbar, one present but completely invisible*, and one “hidden” on the left-hand side of the screen and containing my most frequently-used applications.

                * The “invisible” taskbar pushes a full-sized application out of the way of some screen real-estate I want kept clear.

            2. >Back when GNOME was in the run-up to version 1.0, a lot of people on Slashdot were excited for it. Somehow, my instincts told me “stay the hell away”, and I managed to avoid/get by without GNOME for years hence. I really called that one; GNOME somehow manages to be slower and buggier than the Windows desktop.

              GNOME was great when I first started using Linux. Then GNOME 3 hit. But MATE (a fork of GNOME 2) is still decent and lightweight enough to run on a Pi 2 or 3.

          2. Do you use Nvidia drivers, or are you on a fairly old kernel? I’ve not had problems with KDE at all on my KUbuntu 18.10 and Radeon RX570. Before I installed the latest kernel 3 days ago, the system was up for 3-4 weeks since the last kernel update. About the only time I need to reboot is when Ubuntu decides to push up another revision release on their version of the kernel. Though, I do really enjoy running a super light WM like Ratpoison, xmonad, or i3.

            KDE won’t handle your 4 screen setup? I’m using 4 1920×1080 screens with two of them rotated portrait here and KDE handles it just fine. My work system has 3 screens, 2 2048×1152’s in portrait and 1 4k screen in landscape on KUbuntu 18.04, with an RX480.

            1. Just moved the machine to Leap 15 from 42.3, and yes, the Nvidia driver is the problem. I did switch to an AMD graphics card but could not get the 4th channel working on that, so I’m back on the Radeon R9 360, which is working fine again. It had been perfectly stable with KDE on SUSE R13 for a long time, and I only changed because I could not get the security fixes once that was end-of-lifed. The fact that the whole system WAS working perfectly and now isn’t is down to do-gooders trying to push their own agenda on what was a perfectly functional framework :(

              1. I know that some older AMD cards won’t support every port on their board at the same time. For example, the 7850 I was using at work before the 480 has trouble using HDMI + 2 DVI at the same time; you can only do 3 monitors reliably if one of them is a DisplayPort monitor. The newer boards like the 480 are 3 DP and 1 HDMI, so that’s not a problem anymore.

                1. That’s also true of some motherboards. I have a couple of AM3 motherboards that have VGA, DVI-D, and HDMI ports, but the two digital ports don’t work at the same time; you can have one analog and one digital monitor.

          3. How do you get Netflix to your local server so you can stream it to your Pi TV?

            And yes, the problem is the Widevine DRM module, which is only provided for x86 architectures. There were some hacks to make it work on the Pi, but every time someone figured out a workaround, Netflix did something to break it within a couple of months.

            1. I just record the interesting stuff on the Linux TV, trim the adverts, and then can watch at leisure later … I’m not bothered about ‘4K’ yet, which is more of a problem.

          4. I’m running openSUSE as well. It’s rock-solid with LXDE, running months at a time on servers. My own machines run KDE, which, unfortunately, gets the gollywobbles somewhere around 7 to 10 days of uptime. Simply restarting KDE will take care of it, though I usually just reboot the whole machine since I’m the only user.

            I shifted between Gnome, Enlightenment, and KDE long ago, running a different one every day, and it didn’t take long before KDE became the default. And Enlightenment is still rocking like it’s 1999, and Gnome is downright crippled by comparison to KDE, which moves right along after you turn off all of the cruft that’s enabled by default.

        2. The DRM doesn’t have to be done in hardware except for 4K streaming, which you’re not going to be doing with a Pi in any case because it doesn’t have a 4K video output. The DRM requires a software component that isn’t installed on the Pi by default, but there are workarounds. You can find information on the web.

          4K Netflix streaming requires a full HDCP 2.2 solution in hardware. That means hardware H.265 decoding and a video card that implements HDCP 2.2. Suitable video options include the integrated graphics in 7th-generation and later Intel processors and Ryzen APUs, NVidia GTX 1000 series and later graphics cards, and AMD RX400 series, RX500 series, and Vega graphics cards. (Systems with those cards will work even with earlier CPUs.) It also requires using either Microsoft Edge or the Netflix app from the Microsoft Store. The Netflix page listing hardware requirements has not been kept up to date and only lists the Intel option.

          There are SBCs other than the Raspberry Pi that have the requisite hardware capabilities for 4K Netflix streaming; for example, the recent flood of products based on the Rockchip RK3399. But don’t hold your breath for Netflix to enable them.

      2. My phone basically only ever rebooted the few times it got a system update and my laptop currently has an uptime of 18 days and it’s only that low because I turned it off for traveling. I’m sure if I were a little bit more relaxed about using suspend to ram while not attached to AC and willing to not update my kernel for that duration I could easily get an uptime of half a year on my laptop.

      3. I’ve got Linux machines that have run for YEARS without rebooting — my web server runs on a debian VM that I get from DigitalOcean. But that’s all server stuff which is much more stable than desktop software.

        But my Mac laptop and Google Pixel 2XL go months between reboots. I typically only have to reboot them for OS upgrades. My wife’s Chromebook only gets rebooted when she lets it run out of power.

        My daughter’s Pi 3 desktop computer does need to reboot once every couple weeks. I think it overheats or something. I’m not overclocking it, and I did install the optional heatsinks that came with the kit, so I don’t know if this is normal, or we just got a crappy one.

      4. My work lapdog (Windows 7) runs a week or two between reboots.

        My old Dell (2007 vintage Precision Workstation) runs weeks to months between reboots.

  14. The laptop (meaning: a 13″+ screen with attached keyboard) will likely not die because it’s as convenient a form-factor as its ‘notebook’ name suggests. That said, the brains of it will/should soon live in your always-physically-present microcomputer you call a smartphone.

    Seen the EOMA68 project? It’s a little further down the path than a NUC.

    1. EOMA68 is interesting; they’re using the same processor stack as the Pinebook and Pine64. (Unfortunately, Allwinner makes kind of garbage processors and doesn’t do a great job of following the GPL. I think I’d prefer better on the processor front.)

      Have you looked at the Miraxess Mirabook? It’s a similar idea, but instead it takes your cell phone and hooks it to a laptop display and keyboard. Combined with the Librem 5 phone, that could basically be a Linux computer that does the same thing as EOMA68. You plug your phone into a USB-C dock for home use, or a laptop shell for portable use, then use it directly like a phone for ultra-portable.

      1. Yeah, but it has the same problem every other docking station has – even at their “pre-order price”, I can buy a whole laptop for that price.

        The reasonable solution would be to buy the laptop and use a terminal emulator to hook to the phone.

  15. I think the biggest thing that is going to slow this progression and keep PC towers and laptops holding out the longest will be the need for power and cooling, cooling especially.

    The ARM chips in modern smartphones and tablets are often plenty of horsepower for most of the computing tasks they’re given, and manage to consume little power (and require little cooling) when doing so. But, they mainly do this by avoiding running at full power whenever they can. Occasionally, they are given tasks that push them hard, and when they do run at their max potential, they get hot, and quickly. A friend of mine would jokingly ask me to play “Find the CPU” on his new tablet when it ran updates; it would take but a few seconds to find where the tablet felt the hottest.

    Mobile processors will throttle their performance to prevent themselves from overheating. Any modern processor will do this, but on a laptop or desktop, that’s more of a failsafe mechanism that should never get used in normal operation, even if you make full use of its capability constantly. At worst, your cooling system will get kinda loud as the fans run at high RPM. On a smartphone handheld computer, you *will* send it into thermal throttling if you try to make it work hard. The form factor is too small to provide adequate cooling to allow it to run at full power for a full duty cycle.
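
    On Linux-based hardware you can watch that throttling happen directly; a rough sketch (the sysfs paths are the common defaults and vary by platform):

      watch -n1 'cat /sys/class/thermal/thermal_zone0/temp /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq'
      # temperature is in millidegrees C; as it climbs toward the trip point, the reported clock drops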

    I imagine the docks you’re envisioning will mitigate that to a degree, probably by providing a heatsink and fan that will pull off additional heat, but I don’t think that will ever fully solve the issue. All that heat has to go *somewhere* and that somewhere is usually going to be atmosphere. And to dump heat to atmosphere, you’re going to have to have surface area.

    I imagine “The Cloud” could play a role here. Need to run a computing task that requires more thermal budget than physics says can fit in your handheld compute node? Rent time on one of MicroGoogAzon’s mainframes datacenters Clouds.

    Another possibility would be non-portable docks that contain the MegaDoomDestroyer / Professional Rendering class hardware and the cooling necessary to run it. There are already whispers in this direction. I’ve seen enclosures meant to accept a high-end video card. Said enclosure then connects to a computer via Thunderbolt.

    Of course, all of this won’t matter if we manage to find some way to make computers far more thermally efficient than they are now. I’m thinking of the orders-of-magnitude difference between vacuum-tube-based computers and integrated circuits. Presently a MegaDoomDestroyer might draw 600 watts of power when running flat out. A handful of those watts become the display pixels of the latest Crysis demo. The rest become waste heat. If a MegaDoomDestroyer were to need only 6W, and actually used most of them, then yeah, it could probably fit into a person’s handheld compute node without worry about power or thermal budgets.

  16. >But the MegaDoomDestroyer will pass, too. The polygon arms-race will top out when our displays exceed the highest resolution and frame rate the human retina can handle. We’re already pushing the first and the second is probably no more than two Moore’s Law doublings away. After that all the NRE will go into lowering footprint; on past form we can expect ephemeralization to do its job pretty quickly.

    Hm. How many more Moore’s Law doublings remain available, though?

    Intel’s cutting-edge release chips were 10nm last I heard, with R&D having a 5nm that was suffering troubles with quantum tunneling and heat dissipation before it could go to mass production. The physical limit is at silicon atom width of 0.2nm, and the practical limit for structures is presumably quite a bit before that, which suggests the end is in sight for chips as we know them.
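
    As a naive upper bound (pretending the node names were real linear dimensions, and assuming each density doubling shrinks linear features by a factor of sqrt(2)), the arithmetic looks like this:

      # doublings from "10nm" down to the ~0.2nm atomic scale, on that naive reading
      echo 'l(10/0.2) / l(sqrt(2))' | bc -l    # prints roughly 11.3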

    1. Intel’s cutting-edge release chips were 10nm last I heard, with R&D having a 5nm that was suffering troubles with quantum tunneling and heat dissipation before it could go to mass production. The physical limit is at silicon atom width of 0.2nm

      The atomic limit is not relevant at this time; nanometer figures haven’t reflected any real measurements on the die since roughly the 90nm era.

      1. Yep. And, the nanometer “nodes” mean different things with different fabs for the same number. Hence Intel 10nm is roughly TSMC 7nm.

  17. >I think it’s likely that the last redoubt of the PC will be gaming rigs.

    Consoles are eating that. As gaming became widely popular, it reached a target audience who doesn’t want to fiddle with settings for optimal looks at 60 fps. I think simulator gaming (racecar, airplane, space) will be the last redoubt. They tend to be the kind of people who find it normal to calculate a realistic field of view setting using trigonometry, for instance.

    Personally, I find that most gaming outside simulators is quickly turning silly and childish, think Fortnite. But it is to be expected, we are living in an age where adults still read comic books and watch anime – in my school you got teased if you did that after 13 years old. I don’t like adults turning to childish tastes, but it is clear that this is the future. So they like videogames that look cartoonish, and that doesn’t even require that much graphics output. The main reason gamers always wanted the latest hardware was to make things look completely realistic, if you look at the progression of the GTA, Elder Scrolls or similar series. If it is not so important for example to have water in the river look and wave and splash around like real water, because it is a cute cartoon game anyway, why care about high-end? These little boxes can probably run them.

    So if Fortnite is the game of the year and people playing are considered gamers, then I think we realism-oriented folks should rename ourselves simmers, simulator players or something. This is quickly becoming two different, separate domains.

    1. Consoles are eating that. As gaming became widely popular, it reached a target audience who doesn’t want to fiddle with settings for optimal looks at 60 fps.

      Ever heard of an expanding market? Rising tide lifts all boats.

      I think simulator gaming (racecar, airplane, space) will be the last redoubt. They tend to be the kind of people who find it normal to calculate a realistic field of view setting using trigonometry, for instance.

      You have a bizarre set of categories here: people who calculate a FOV rather than setting it based on the performance / visibility tradeoff, versus the filthy casuals.

      Personally, I find that most gaming outside simulators is quickly turning silly and childish, think Fortnite.

      First off: never played Fortnite or its kin.

      But why would it be surprising that people gravitate to something with simpler graphics when the trend for many years was towards “gritty”, “realistic” graphics while doing nothing for gameplay? As for gameplay I don’t know enough to say whether Fortnite is shallow or not, but the shallow gameplay problem is caused by the beancounters slavishly following the holiest grail of them all: The Larger Audience. They crank that one a few times though and the fans leave, and without the fans a series dies because they won’t bring in the normies.

      But it is to be expected, we are living in an age where adults still read comic books and watch anime

      Do I get to sneer at all book readers (self-sneer FTW!) because Dick & Jane is so childish? Mediums do not come from the medium factory with DRM that prevents them from being used outside a certain age group.

      – in my school you got teased if you did that after 13 years old.

      Meh, jocks ruin everything. Or rather they did until they realized that they work for us and had an ally in us when the SJWs came knocking.

      Yeah, about that. Why is it that all the supposedly mature people who keep civilization running unilaterally surrendered the moment the cancer showed up, but the supposedly pathetic trash of society fought back hard enough to give the SJWs a bloody nose that is still bleeding? Someone talking loudly about how they protect civilization from moral rot isn’t much different from someone doing the same about all the chicks they bedded.

      Putting aside the cheap shots; your moral grandstanding is boring. If you want to get anywhere in the unpopular opinion virtue game you need to pick a target that is not the single most reviled demographic in the entire universe. (no the irony of grandstanding against grandstanding is not lost on me)

      Do you want C. S. Lewis quotes thrown at you? Because this is how you get C. S. Lewis quotes thrown at you.

      I don’t like adults turning to childish tastes, but it is clear that this is the future. So they like videogames that look cartoonish, and that doesn’t even require that much graphics output.

      Cartoonish != Immature. The Incredibles is anything but immature and is cartoonish.

      The main reason gamers always wanted the latest hardware was to make things look completely realistic, if you look at the progression of the GTA, Elder Scrolls or similar series. If it is not so important for example to have water in the river look and wave and splash around like real water, because it is a cute cartoon game anyway, why care about high-end? These little boxes can probably run them.

      Again: the big producers pushed graphics as hard as they possibly could while sacrificing everything else. The result of that is for “it looks nice” to be as much of an insult to a game as “she has a nice personality” is to a woman.

      For someone who makes the implied claim of being Old Guard you sure don’t seem to be familiar with what has happened over the last couple decades.

      So if Fortnite is the game of the year and people playing are considered gamers, then I think we realism-oriented folks should rename ourselves simmers, simulator players or something. This is quickly becoming two different, separate domains.

      First: we already have those names. “Hardcore” vs “fucking casual” (or “filthy casual”). “PC Master Race” vs “Console Peasant”.

      Second: If you don’t want to be thought a hypocrite then I expect you to not single out one medium for this separation of the Untouchables. So you need to do the same for Books, Movies, TV Shows, Plays, etc.

      To wrap this up, riddle me this: if you are correct about the wider cultural shift, how did Demon’s Souls / Dark Souls 1 / Dark Souls 2 / Dark Souls 3 / Bloodborne / Nioh sell so well? Why is everyone and their dog trying to make a soulsborne-style game?

      1. Ian and Dividualist, alternate theory: Cartoony games are becoming more common (and yes, I think I am seeing this) not so much because audiences are juvenilizing as because such games are cheap and fast to make compared to sims, so a studio can squeeze out more profit per unit time.

          1. >This is in fact true.

            :-)

            I feel an aphorism coming on. Ah, there it is…

            A firm grasp of economics is the Swiss Army knife of analytical thought in the real world.

        1. Realism is expensive, and at a certain point in developing the technology you start running into all sorts of uncanny valley effects, and deliberately making the style abstract again avoids this without needing expensive calibration. At some point we may develop the tools to do realism past the uncanny valley cheaply.

          1. The uncanny valley is why Pixar *still* runs to a cartoony aesthetic, despite the advances in rendering.

            1. The Pixar DVD for the first “Incredibles” made fun of the old “Clutch Cargo” — Cambria — SyncroVox style of animation. And yet the STORIES were memorable!

        2. As anyone who ever played a roguelike game will tell you, good, fair design is everything, and ASCII is just fine if you know how to use it. IMHO the smart thing to do is release a game in ASCII. If people like it, then upgrade to something graphical but with the same rules.

              1. It also worked (more or less) for The Legend of Zelda: Breath of the Wild, whose major mechanics were prototyped in an engine that played in 2D and looked like the original Zelda on NES.

          1. I couldn’t get into that stuff of letters symbolizing monsters, but text used as text, so interactive fiction to MUDs, yes, I like some of them very much, and that is really the cheapest and easiest way to make things happen. I am not even saying “D” is not a good symbol for a dragon, I just think we are more used to “dragon” being the symbol for it, so text as normal readable text. Anyone who enjoys reading books can pick up IF or MUDs, while roguelikes may take more getting used to. I think IF and MUDs have a future, just not the telnet whatever:9999 way but integrated into some app people are using for text-chatting anyway.

        3. To elaborate a bit:

          There is also the factor that – like all the other artistic mediums – it is getting cheaper to produce a given level of graphics. Both in the sense of things like motion capture getting cheaper and also not having to squeeze every last drop of performance out of a top of the line card.

          That opens the door for a rebirth of the middle market which previously got consumed by the likes of EA. This for example was a tiny group of ex-Hitman developers. True, they are “cheating” by only using a few different character models. And the character looks kind of plastic. But aside from that one can’t fault the graphics.

          What small low budget developers can do is only going to get better.

          1. This looks pretty cool. Also, the Czech stuff – Arma 3, Kingdom Come, etc. Mount & Blade, developed by a small Turkish studio, is another good if old example, albeit they are really taking too long with the follow-up, Bannerlord. I hope EA is approaching their well-deserved bankruptcy by combining shameless pay-to-win, gambling, and socjus, a perfect combination to get gamers to boycott them.

            1. This strays off the OP, but is interesting enough to comment on.

              > I hope EA is approaching their well deserved bankruptcy by combining shameless pay-to-win, gambling and socjus, a perfect combination to get gamers boycott them.

              I also hope this, and for more than these reasons.

              I don’t have my ear to the ground, but from what I’ve seen and what I’ve been told, gamers have become more conscious. Studios and distributors are no longer blindly trusted. The generations of gamers are growing older, and growing up jaded and distrusting.

              Indie games (before they sell out) have become quite a valued market.

              Reviews are no longer trusted, since media and reviewer corruption are now open knowledge.

        4. Ian and Dividualist, alternate theory: Cartoony games are becoming more common (and yes, I think I am seeing this) not so much because audiences are juvenilizing as because such games are cheap and fast to make compared to sims, so a studio can squeeze out more profit per unit time.

          There are game-design considerations at work here, too. Deciding to make a game realistic means you have to commit to realism or risk breaking the immersion for the player. It’s not hard to come up against these immersion-breaking seams in modern games: invisible boundaries the player cannot walk across, the famous red flashing “Return to the Combat Area” warnings from Call of Duty games, even things like walls or ledges that are somehow specially marked as “climbable”. Players expect a more abstract world and fewer rules from games with a more abstract, simpler representation. You don’t need to specially mark the blocks in Mario as climbable; the player can easily guess (or figure out through trial and error) that anything that looks like a solid block can be stood on.

          Compare Doom to any modern FPS. Doom wasn’t particularly cartoony, but its level design was abstract, and that allowed the game designers to create varied, freely navigable environments that were large enough to get lost in and admitted an improvisational style of play wherein the player could easily figure out how to use the terrain to their advantage. Modern FPS games, by contrast, often have hard boundaries and situational gameplay that ask the player to guess and follow the play style the game designer intended for that portion of the level. In some of the worst cases, if you stray too far off the rails the designer has set, the game will tell you to “return to the combat area”.

          Or compare the original Tomb Raider, with its abstract, grid-with-heightmap-based level layout, to one of the more modern games, which admittedly lovingly render every strand of Lara’s hair with jaw-dropping fidelity, but also feature levels littered with compass waypoints and obviously-marked interact-here scenery.

          There’s sort of a spectrum here between “game” and “setpiece”. The more realistic, hence setpiece-y, the studio commits to making the game, the less flexibility in terms of gameplay they can realistically implement without either breaking player immersion, or running into budget or hardware limits. In fact, monetary budget and commonly available PC/console hardware aren’t even the only constraints here: there’s also team size and complexity, which as everyone who’s read their Brooks knows, can only reach a certain upper bound before the communication complexity overwhelms the team’s ability to get shit done. The bugginess in AAA titles and the need for day-one patches suggests that we are at or near that complexity threshold for games, and so some studios are looking at building more abstract games with fewer generative rules, such as Minecraft and No Man’s Sky, and just letting them naturally combinatorially explode to create the variety of locales and situations gamers are hungry for. In such abstract gameworlds, glitches can even become interesting features rather than immersion-killing bugs.

          1. And like Ian says about graphics, the tools we have for creating the other content still have available development room for cheaper production at older levels of simplicity.

            If AAA titles are near some fundamental complexity limit, that would seem to indicate that the state of computer science has not made said complexity limit so well understood that designers could more safely avoid it. In which case, we might expect that similar problems are hanging around undiagnosed in other parts of the computer world. Or is concern about that already a well known issue in these parts?

        5. >Cartoony games are becoming more common (and yes, I think I am seeing this) not so much because audiences are juvenilizing as because such games are cheap and fast to make compared to sims, so a studio can squeeze out more profit per unit time.

          Well, then you have the cartoony sims, e.g, Kerbal Space Program. From the cute, big headed little green men that form your astronaut corps, you can tell that this game isn’t rocket sci–

          Oh.

          1. >you can tell that this game isn’t rocket sci–

            KSP is a beautiful thing. I’m tempted to get a Steam account just so I can play it, but I really don’t need that kind of time sink in my life.

        6. I don’t think low production costs have much to do with immense popularity on the players’ / customers’ side. Unless the idea is not simply to spend less money but to spend the same budget on other things the customers / players actually value more. Like the depth Jeff mentioned.

          1. Indeed. Fortnite is a product of Epic, one of the biggest names in gaming, and certainly capable of producing AAA near-photorealistic setpieces (like Gears of War) if they so chose.

      2. Okay, it is actually a good point that the kind of guys whom I considered manchildren because they cared about superhero comics at 20 years old did a better job of pushing back against the SJWs than, say, Linus. It really surprised me. My theory before was that shit attracts flies. That is, something turns to shit first, then attracts SJWs. For example GitHub – Zed Shaw wrote around 2014 that GitHub had that kind of puerile culture where adding people to projects that draw ASCII-art dick pics is considered an excellent joke. So I was thinking SJWs infiltrate mostly those kinds of – childish – places. The whole thing about Linux came as a shock to me, and I had to revise my model.

        About the rest. Beancounters, visuals, gameplay. As far as I can tell the end goal has always been to create perfectly believable, cannot-tell-from-real-life virtual realities. It is just that our computers sucked; we kept upgrading them every year from 1990 to 2014, and every time some new game came out with a new level of realism. Look, Doom looks kinda like 3D! Look, in Duke Nukem, the terrain is actually 3D, you can get inside a submarine! Look, in Quake even the monsters are 3D! Look, in XYZ they don’t look like they’re made of bricks! Look, animated water! Look, even better animated water! I remember the shock of how real water and fire looked in Skyrim. And all this harmed gameplay? Is that why people are still modding it 7 years later?

        It’s not “looks nice”, it is “suspension of disbelief”, or “one step closer to perfectly virtual reality you cannot tell from real life”.

        I don’t consider Dark Souls realistic at all. And my point is really that from 1990 to 2014 it seemed gaming was moving towards the end goal of some kind of cyberpunk wireheading with 5-sense perfect simulated realities.

        1. > suspension of disbelief

          It is suspension of disbelief that makes anything more real than real life, though it can be said that a cute brick which can still enable the suspension of disbelief or general oldschool acceptance can be perfectly adequate as, say, a “console killer”.

          > So I was thinking SJWs infiltrate mostly those kinds of – childish – places.

          Like universities?

          You are mistaken on several levels. Unfortunately, none of this is related to this topic.

        2. Don’t worry. SJWs won’t target things you think are Really Important and mature. The ERP customization community is safe.

          SJWs go after what they — rightly — see to be the pillars of our culture. Once they control these vital areas, they reason, they can either change the culture or failing that, bring the whole thing crashing down to start anew. Things like superheroes and Star Wars are mythical in their significance. Stan Lee knew and capitalized on this; that’s why Spider-Man is often described with Beowulf-style kennings like “the Web-Slinger” and “the Wall-Crawler”. If you still don’t believe me, consider that a superhero myth dominated the entire Western world for 2000 years. Yes, Jesus fills the same psychological role that, say, Superman does. Here is a perfect man. He has all of humanity’s strengths and none of our faults. He is noble to the point of self-sacrifice for those he loves, which encompass all of mankind. Think how much better you could make your corner of the world if you were more like this guy. Every culture has similar heroes. The Greeks had Hercules and Perseus, the Hebrews had Samson and Solomon, the Polynesians had Maui (now the star of a major Disney film).

          If you want to conquer a people thoroughly, attack their heroes and their traditions. The version of Beowulf we read today is rife with Christian influence, because that’s how Christians rolled. They came across a new people and they undermined their myths, their heroes, and their traditions, giving them a new Christian exegesis. That’s what SJWs are doing. They’re recasting everything in terms of white male privilege and marginalization of nonmales and people of color, and applying that interpretation to our most cherished modern myths and stories.

          Today, geeks and “man-children” are the guardians of these modern myths because they’re the ones most in need of them. The world is a rough place out there, getting rougher all the time. These people often don’t have the skills to cope, but they have a vague sense that they can learn those skills from heroes. A hero is like a template for how to function as an adult — how to endure suffering, face adversity, and bring back benefits for yourself and your fellow man. Note that the SJWs are not geeks. Demographically, they skew towards hipsters — normies who wear intellectualism as a sort of fashion statement, the way the “alternative culture” in the 90s wore outréness and “scene kids” in the 2000s wore angst. This adds a “kicking sand in the wimp’s face” wrinkle to what the SJWs do: they take satisfaction in taking geek culture away from geeks and then using it against them to punish them for being geeks and liking these ideas. They twist the ideas into something unrecognizable and evil to discredit them forever. And it’s not new. It’s what Derrida did. “Oh, you like reading the classics? Well, what if I told you the classics could be interpreted, by torturing the language, to support fascism? Now what, motherfucker?” The villain of Incredibles 2 was a brilliant but jaded feminist who sought to punish her brother for having faith in heroes by taking his heroes away and manipulating them to show how foolish and childish he is.

      3. beancounters slavishly following the holiest grail of them all: The Larger Audience. They crank that one a few times though and the fans leave, and without the fans a series dies because they won’t bring in the normies.

        *cough Fallout *cough

    2. Have you ever played Fortnite? It’s quite deep. Small decisions made early on — such as where to land — can affect your success or failure later in the game. The goal of the early game is to gather equipment you’ll need to survive — including weapons, defensive and healing items, as well as “mining” materials such as wood, stone, etc. with which to build defensive structures. These are all readily available in settled, urban areas of the island; not so much in wild areas. The drawback is that if you choose a settled area to land, lots of other players will have done likewise, thrusting you into combat early in the match and possibly ending it for you right then and there. And while you’re in combat, you have to locate your enemies based on where their gunfire is coming from relative to you, defend yourself against them and/or get the drop on them.

      Honestly, calling non-simulation video games shallow and stupid is like calling board games shallow and stupid — which I’m sure Eric will be glad to correct you on. Granted, being a game auteur wannabe may have given me a certain degree of “Matrix vision” that lets me see the game design and the complexities behind the graphics.

      But it is to be expected, we are living in an age where adults still read comics books and watch anime

      You have the likes of Hayao Miyazaki, the late Stan Lee, and John Lasseter to thank for that! These artists decided to make comics and cartoons intended for adults to appreciate. Other creators have since followed suit in an attempt to imitate the greats. Comics and cartoons were never intended solely for children. Even the old shorts with Bugs Bunny, Porky Pig, and the rest were adult-oriented despite being lighthearted; they appealed to adult sensibilities in humor, which fits with their function as apéritifs for the possibly more serious feature film to follow. In the U.S., these media became more juvenile as a result of self-censorship under pressure from “concerned parents” groups between the 1950s and the 1980s or so. The Comics Code Authority laid waste to entire genres of comics including horror, romance, and crime-themed comics.

      1. Glad to see Fortnite being defended here. It really is very good (and I play sim games most of the time). Recent dinner party conversation: “I’m worried because my kids say they want to play Fortnite.” Me: “I tried it to see what it was like. It’s a really good game.” Cue tumbleweeds…

  18. @Cminek I’ve actually done that for temp files (but not swap/pagefile). I spend most of my time in Windows, though Linux is available too.

    On my old homebrew desktop, I had a 32bit machine with 4GB RAM. For technical reasons, Windows could only see/use about 3.2GB of it. I found a freeware RAMdisk that could use the RAM Windows couldn’t access, so I had a 768MB RAMdisk seen as Z:. The first use was placing Firefox’s cache there. That was easy, as there’s a preferences setting where you can specify where the cache lives, and I made the RAMdisk the location.

    The next step was “Can I put the entire Firefox profile on the RAMdisk?” Yes, with a little hacking. The profile lived in a Zip archive on my HD. A boot script extracted it to the RAMdisk. (That was faster than a copy of the uncompressed profile from HD.) Firefox Profile Manager let me create a custom profile and specify where it lived. I pointed it at the RAMdisk. Run FF using that profile, and profile and cache were on the RAMdisk and things were sped up a treat. A shutdown script re-archived the profile on the RAMdisk to HD to capture changes made during a session. At the peak of development, I was creating backups of the archive by date, and I could recover from a prior iteration if I had an “Oops!” moment and shot myself in the foot. Worked just fine. (The scripts were enabled by Group Policy Manager, which requires at least the Pro flavor of Windows. That’s what I had, and that sort of thing is why I won’t use Windows Home flavors.)
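
    For anyone wanting to replicate the idea on Linux, a rough tmpfs-flavored sketch of the same extract-on-login / re-archive-on-logout dance (the paths, archive name, and profile location are placeholders, not my actual setup):

      RAMDIR=/dev/shm/ff-profile
      ARCHIVE=$HOME/ff-profile.tar.gz

      # at login: unpack the saved profile into RAM and run Firefox against it
      mkdir -p "$RAMDIR"
      tar -xzf "$ARCHIVE" -C "$RAMDIR"
      firefox -no-remote -profile "$RAMDIR"

      # at logout: keep one prior generation, then capture the session's changes back to disk
      cp "$ARCHIVE" "$ARCHIVE.bak"
      tar -czf "$ARCHIVE" -C "$RAMDIR" .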

    On the current 64bit desktop, I found an open source Windows RAMdisk which I have installed. I experimented with reproducing my former setup, but I’m running off an SSD, and that’s fast enough I saw no measurable benefit to running from RAMdisk rather than SSD. I still have the RAMdisk enabled and am thinking about what other use I might make of it.

    On Linux, I set FF to place its cache in POSIX shared memory on /dev/shm, which is backed by RAM (or swap) rather than by the disk. (I don’t care about preserving cache. I have fast broadband, and can repopulate the cache as I use the browser.)
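
    The relevant pref can also live in the profile’s user.js; one way to set it (a sketch; the profile directory is a placeholder, real ones have salted names like xxxxxxxx.default):

      mkdir -p /dev/shm/ff-cache
      echo 'user_pref("browser.cache.disk.parent_directory", "/dev/shm/ff-cache");' \
          >> ~/.mozilla/firefox/xxxxxxxx.default/user.js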

    >Dennis

    1. Someone should write a utility which erases all Firefox configuration material and removes Firefox prior to system shutdown. Then it would re-install Firefox as part of the boot process and provide an alternate system of bookmarks.

  19. @esr: >This is in fact true.

    > :-)

    > I feel an aphorism coming on. Ah, there it is…

    > A firm grasp of economics is the Swiss Army knife of analytical thought in the real world.

    I’d settle for any grasp of economics grounded in fact.

    But the sort of games that drive gamers to DIY gaming stations with graphics cards that may cost more than the CPU and huge hi-res monitors that may cost almost as much as the PC remind me a lot of movies.

    Movies are fantastically expensive to make, with long lead times. The studios green light films, have fun raising the money to make them, release them, and cross their fingers they’ll release enough that become $100 million grossers to cover their costs and remain in business. Go long enough without a hit picture and the studio folds.

    Current graphics extravaganzas like Bioshock, or whatever the current game du jour is, are similar efforts. They cost a boatload to make, have long lead times, and the developers pray to $DEITY they’ll have a hit, because the alternative might be going belly up. Games that can be popular and sell well that aren’t “bet the company” releases will be very attractive.

    >Dennis

    1. While all of the Bioshock games are gorgeous, I’d hardly call 2013 “current”. Not even 2014 if we are counting from Burial at Sea.

      And while Infinite certainly drank the money I’m pretty sure that doesn’t apply to the first two.

      1. @Ian Bruene: While all of the Bioshock games are gorgeous, I’d hardly call 2013 “current”.

        I never said it was. I don’t play such things, and have only a vague notion of what is out there.

        But my point stands – games like that are extremely expensive to create and require long lead time. Among other things, you need a raft of talented artists to do the detailed matte painting of backgrounds that’s part of the “gorgeous”, before you even get to characters which are ever closer to looking like live human beings, and game play that requires possibly more than one state of the art graphics card to display it and get FPS that makes the game playable. The process reminds me of nothing so much as making feature films, with the same “bet the company” when you release the product and pray it sells. I assume more current games on that line are the same yet more so.

        The trend to simpler graphics in games is likely also fueled by mobile. I have Android devices, and Google’s Play Store offers me a plethora of new releases every time I connect to get updates. I haven’t looked all that closely, but I don’t expect to see anything close to Bioshock or others of that breed – the mobile hardware doesn’t support it.

        >Dennis

  20. @Erik: Hm. How many more Moore’s Law doublings remain available, though?

    Intel’s cutting-edge release chips were 10nm last I heard, with R&D having a 5nm that was suffering troubles with quantum tunneling and heat dissipation before it could go to mass production.

    This has been biting for a while. An additional side-effect of this is the capital investment required. We are at a point now where a single machine used in a CPU fab may cost $500 million. The CEO of TSMC, one of the big “pure-play” fabs in Taiwan making CPU chips, commented that if he were trying to build something like TSMC today, he couldn’t do it. The costs of building a fab were already astronomical. The cost of updating it with new gear as process geometries shrink is now possibly more than the cost of building it in the first place.

    It’s why very few companies have their own fabs, and even some that do are entering joint-ventures to share costs because the costs are out of the Ionosphere and in low earth orbit.

    Current effort is going wide rather than fast, as everything moves to 64bit with vastly greater address space, and to parallel processing to take advantage of multi-core CPUs whose cores can each issue four or more instructions every clock cycle.

    >Dennis

    1. Buying old fabs or technology might make sense in some cases. For example, microcontroller manufacturers are probably completely fine with a process 1-2 orders of magnitude larger than high-performance options.

      Not everything involving silicon needs 10nm @ GHz.

  21. @Troutwaxer: Someone should write a utility which erases all Firefox configuration material and removes Firefox prior to system shutdown. Then it would re-install Firefox as part of the boot process and provide an alternate system of bookmarks.

    I fail to see why.

    I’ve been using Mozilla code since Mozilla was still the internal name for an effort by Netscape. I started using Firefox when Netscape 7/Mozilla Suite would no longer get attention and Firefox got the nod. In the process, I learned more than a bit about the architecture.

    I have multiple Firefox profiles, customized for different uses. (Customization is mostly a matter of which extensions are installed.) When I create a profile, I run Firefox as “firefox -p”. This runs it in Profile Manager mode. Profile Manager lets me create a new profile, give it a name, and specify where it is created. On the Windows box, profiles all live under \Mozilla\Profiles\Firefox, and new profiles get created under descriptive names there. Shortcuts run specified profiles as “firefox -p profilename”. It’s also possible to run more than one instance of Firefox simultaneously, with “firefox -no-remote -p”, as long as each instance is using a different profile, because the first one to use a profile locks it. I make use of that.

    One thing I normally want is my standard bookmarks. Bookmarks/History are in an SQLite database called places.sqlite. Windows with NTFS5 and Linux both support hard links and symbolic links in the file system. I have a master copy of places.sqlite, and as a test, I symlinked it into test profiles. Each profile was looking at the same places.sqlite database, and my standard bookmarks appeared in all of them. I could even run more than one of them at a time with “firefox -no-remote”. SQLite uses atomic commits, and all instances wouldn’t be trying to update it at the same time – only one would be actively writing to it – so it worked fine.
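
    On the Linux side the whole trick boils down to a couple of commands; a sketch with placeholder profile names and directories (real profile directories have salted names like abc123.work):

      MASTER=~/Mozilla/places-master.sqlite
      for p in abc123.work def456.play; do
          ln -sf "$MASTER" ~/.mozilla/firefox/"$p"/places.sqlite
      done
      # run two profiles side by side; each locks its own profile but they share bookmarks
      firefox -no-remote -p work &
      firefox -no-remote -p play &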

    I see no reason to blow out all profile information and recreate each boot. But having everything under \Mozilla\Profiles\Firefox means I can back up everything simply by archiving the Firefox directory.

    (On the dual-boot machine I had Firefox under Windows and Linux sharing the same profile, since each could see the other’s file system, but that’s another hack of a different nature.)

    >Dennis

  22. 1) I’m reminded of a project which put the whole computer into a PCMCIA form factor (though with a specialized port). I don’t recall its name offhand.

    As you mention, the intention was to have the body of a laptop be reusable. Others were theorizing putting them in racks suitable for on-demand computing like small website web hosting, because they were self-contained.

    2) Dell presently sells external boxes for video cards, making it seem quite possible to shift that entire “gaming beast” into just another brick to dock with. I see this the same way that optical drives became just another USB-connected brick.

    Multiple-video card breakout boards are in use because of cryptocurrency mining, acting like a dock for many video cards. Every other sort of card could work like this.

    3) A new (to me) motherboard form factor is out that openly asks its customers if they *actually* want expansion… they just chop off that third of the board and keep a slot for one video card.

    I realized I gave up a floppy drive in my earlier system, and my next upgrade won’t even have PCI. These things got obsoleted, combined, or moved to a brick.

    Oh, and Canada made internet access a “basic service”, alongside phone service, with a maximum price and minimum bandwidth (suitable for video streaming). Imagine if they pushed for availability of a $50 computer (including keyboard and screen) in the same way. It would absolutely be your one computing unit.

  23. I find this idea of carrying computing power around and relying on “ubiquitous” peripherals to build an adhoc human interface wherever you are to be putting things exactly the wrong way round.

    Mind you, I like the “small box”. I have quite a few Raspberry Pis lying around, and for light server duty I like the SBCs produced by PC Engines a lot. These boxes come with a serial console (who needs graphics in a server?) and an open-source BIOS that does iPXE.

    But when I travel I do not want to use a hotel PC as a screen. And there are no two neighbouring countries in Europe that use the same keyboard layout. There are no screens or keyboards to plug in to when I am on a train or a bus. Or on a bench in the mountains.

    What I want is a portable human interface that uses the one thing that is ubiquitous here (internet access) to access the data and processing I need over the net.
    So a lightweight tablet, with a detachable keyboard and on-board LTE. I believe Purism had something in the pipeline there, but I no longer see it on their site.

  24. Intel also is shipping “Compute Sticks”, flash-drive-sized entire computers. Some can run Linux. I haven’t tried one, but they plug into an HDMI port on a TV or monitor. I could see using one of these on a trip where I have to fly somewhere, and I can literally put this on my keychain, saving precious space in my carry-ons.

    https://www.intel.com/content/www/us/en/products/boards-kits/compute-stick.html

    I just picked up a low-end Intel NUC to use as a 24/7, low-power-consuming, Debian-based home NAS / minidlna server. Using a low-end tower PC would take up more space in my office and burn more electricity; you’re right, this is the new low-to-mid-end PC.
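
    A rough sketch of that kind of setup on Debian, in case anyone wants to try it (the media paths and name are placeholders):

      sudo apt install minidlna
      # then append entries like these to /etc/minidlna.conf and restart the service:
      #   media_dir=V,/srv/media/video
      #   media_dir=A,/srv/media/music
      #   friendly_name=NUC-NAS
      sudo systemctl restart minidlna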

    I could see replacing my media center PC (which is Mini-ITX-based) with a more powerful NUC in the future. I am thinking Intel designed the aesthetics intentionally, to make NUCs attractive as home theater devices that won’t look out of place in your living room.

    1. >Intel also is shipping “Compute Sticks”, flash-drive-sized entire computers.

      Dave Taht mailed me one of those. I’d love to figure out some use for it, but I haven’t yet.

      It would be more interesting if it had an Ethernet port and I could shell into it without losing access to all my tools.

        1. >Would a $10 usb-to-ethernet adapter solve that issue?

          Maybe, but it would be a kluge. I distrust such kluges – they often seem to choose the most inconvenient possible time to fail.

      1. @esr: It would be more interesting if it had an Ethernet port and I could shell into it without losing access to all my tools.

        The Compute Stick is intended to be plugged into the HDMI port on a big monitor or TV, and presto! It’s a full computer. Not having an Ethernet port on the Stick itself doesn’t look like a deal breaker if you have an Ethernet port on whatever you plug it into. It looks like the Stick should be able to detect and use it.

        >Dennis

        1. >if you have an Ethernet port on what you plug it into.

          You mean an Ethernet on the monitor?

          *Blink* I’ve never seen such a thing.

          /me looks around the back of his.

          Nope.

          1. @esr: Sorry, brain fart on my end.

            The Compute Stick seems to assume you will plug it into a big HDMI TV, which probably will have an Ethernet port the Compute Stick can use. No, a monitor is unlikely to have one.

            There are probably kludges around this, but they will be kludges.

            But I remember the days when you had consumer devices like the Commodore VIC-20 and Commodore 64 that explicitly expected you to use your TV as the display. This looks like a case of “what’s old is new again.”

            >Dennis

            1. One of my ongoing “idle thoughts” is the question of why someone like LG or Samsung doesn’t stick a couple gigs of memory in their 40-inch-plus monitors, install something like Xubuntu, and just kill MS. Even on the cheapest TVs it’s obvious they’re already running an operating system, not to mention (gag!) Android on other systems, so why not give everyone something useful?

              1. @Troutwaxer: Probably because they think they’re in the TV business and don’t want to be in the computer business. They aren’t interested in killing MS, and the last thing they want is the support headaches that will come with it. Adding that stuff also adds to the bill of materials, making the set cost more to build and forcing a higher selling price.

                And they wouldn’t kill MS anyway, with Xubuntu or any other flavor of *nix. (I’d use Lxde for that if I did it at all.) Linux is different, and most folks learn just enough about the OS to get the system to do what they need, then stop. They will not learn a new OS if what they are used to is Windows.

                If they were convinced to do it at all, it would be with embedded Win10, not *nix.

                >Dennis

                1. LXDE? It must have really grown since I last used it. (You can get a really beautiful desktop out of Xfce, but for some reason they ship it with the worst imaginable configuration – I’m not sure what’s up with that. Does anyone know the issues or politics involved?)

                  1. @Troutwaxer: I got Lxde because it was the lightest-weight desktop manager I could find, and the target was an ancient netbook with a 1.6GHz Atom CPU and a whopping 1.5GB of RAM. Performance is more or less acceptable.

                    Lxde is sort of like XFCE with only one panel, but my needs were modest. I wasn’t concerned with eye candy, and mostly didn’t care about a beautiful desktop. I just wanted what I used conveniently accessible, and Lxde would let me do that.

                    I have no idea about the internal politics that govern the default configuration shipped with XFCE. But it’s been my experience that open source projects in general suck at UI. As long as I can reconfigure things to my liking, I’ll settle. (And UI is intensely subjective – you might run screaming into the night from what I consider a good setup.)

                    >Dennis

              2. Actually, like the smartphone annoyance, our LG TV’s user interface keeps getting unwelcome changes to how it works and can be a pain at times. Although it has TV and satellite inputs, it’s also hooked up to an OpenVIX-powered box with a 1 Gig hard disk, which provides a much more stable UI and allows checking email during the adverts. I’ve three OpenVIX TV boxes, and OpenWebif allows access from any device, such as the desktop here. The fourth monitor normally has TV on, like it has now ;) Plex is another component of the setup, and it also suffers from unwelcome changes to its UI all too often.

            1. Yes, it exists, but I haven’t run into much in the real world that uses it. I can imagine some real world use cases where it would be handy; for example, a smart TV that was connected to a home theater receiver could get its network connection that way so you would need to run one fewer cable to the TV, or a dongle or stick “set-top” box (Chromecast, Fire TV stick, etc) could get its networking from the TV.

      2. Hmm? The compute sticks I used had a pretty decent wifi connection, no Ethernet required. The Atom CPUs in them were quite lame, as was the speed of the local flash, and video performance was poor. That said, the form factor was “right”, and with a Bluetooth keyboard you could keep one on your car keychain.

  25. Devuan?

    What do you think of Systemd?

    (BTW, ibiblio.org is blocked by the Virgin Active wi-fi).

        1. >I was hoping for more meat.

          Not a battle I care to get into publicly yet. Certainly not until after I see how Devuan rides.

          I’m annoyed that I couldn’t put it on Cathy’s NUC. The kernel was too old to see the NIC in the device.

        2. Stance on systemd has joined religion and politics on the list of things not to discuss at Thanksgiving dinner.

  26. I think it’s a bit early to think people will use alternative screens as a replacement for dedicated computer monitors (whether desktop or laptop), mostly due to placement of said screens, but eventually I can see this happening as screens just become super ubiquitous. Your coffee table, windows, lamp shades, etc. can eventually be a screen. UI paradigms will have to shift again, and I think there’s going to be some sort of gesture based interface at some point to everything.

    I do think nice keyboards will continue to be marginalized and used only by enthusiasts. Look at the way young people use computers these days — I see kids typing on the software keyboards of tablets faster than most adults type on a regular keyboard. Though this thought makes me shudder, I wonder if the only physical keyboards left in 10-15 years will be high-end models used by enthusiasts. I think of it much like the command line vs. GUI — used to be that ALL computer users needed to learn how to use a command shell, whilst now it’s a small minority of specialists that even see the point. Command shells continue to get better, because they’re used by a small, highly-technical minority that demands excellence and needs them to get real work done, but the average Windows or Mac user has either never pulled up a terminal window, or only done so accidentally.

    One other minor quibble — are you aware that the Mac Mini has finally been updated to newer specs after years of not being updated? Still a terrible value, cost-wise, but it’s now up-to-date.

    One thing I’d love to see is specialized compute bricks that come pre-configured for certain server tasks and are easy for mere mortals to set up. Like a tiny Plex server that you just plug into the wall and configure from your phone. Or a print/file server that is configured the same way. Or a home security system server, etc. All built on open technologies, but packaged in a way that is accessible to people who don’t have a nerd in the house.

    1. >Though this thought makes me shudder, I wonder if the only physical keyboards left in 10-15 years will be high-end models used by enthusiasts.

      Not likely. Most of the volume in the mechanical-keyboard market these days isn’t from old-school Model M fans but from gamers. Gamers know, or think they know, that mechanical keyboards give them a slight speed and control advantage over the dome-switch crap. A wave of respect for good keyboards has sort of rippled out from gaming, you can see this at sites like geekhack.org.

      >One thing I’d love to see are specialized compute bricks that come pre-configured for certain server tasks that are easy to set up by mere mortals.

      I don’t think this will happen. The problem isn’t building such things, it’s the after-sale customer support costs. Basically, you can either (a) offer support that’s good enough to be useful to J. Random Dimwit, (b) keep your unit price low enough for them to sell, or (c) make a profit on the sale price. Choose any two.

      1. Gamers know, or think they know, that mechanical keyboards give them a slight speed and control advantage over the dome-switch crap.

        I’m leaning towards “think they know”. The keyboard with the lowest latency on the market today is — wait for it — the Apple Magic Keyboard. Which you would probably be totally fine with, and even prefer, if you trained your fingers not to strike the keys so damn hard! :)

        A wave of respect for good keyboards has sort of rippled out from gaming, you can see this at sites like geekhack.org.

        I’ve noticed this too — if nothing else, modern gaming equipment has put mechanical keyboards within much easier reach of hackers and other discerning keyboarders, allowing them to discover or remember what made the mechanical key action so much better. Back when I bought my Das Keyboard 10 years ago, after an unfortunate experiment with the “gaming equipment” of the era that left my hands sore and cramped, I thought I was going to have to carefully preserve it in order to keep enjoying a keyboard of such quality. Today I have no such worries, as there’s a healthy variety of good boards from brands like “Ducky” and Razer available at retail.

        1. “with the lowest latency on the market today is … the Apple Magic Keyboard. Which you would probably … even prefer,”

          Thanks. I’ll pass.

          Tried using one of those for about 5 minutes when helping a client with his iMac. I don’t know or care about latency, but that keyboard looks like a Chinese-cheapified version of the IBM PCjr. As a torture device it is unrivaled.

          But keyboards are off the scale in terms of “user preference” being the deciding factor. So to each his own.

        2. Eep, the Apple Magic Keyboard? Color me a keyboard snob (and I’ll wear the badge with honor), but foo and botheration! It’s got the build quality of a rubber rat, short-travel squishy keys with less tactility than MX Browns, and it’s a gorram 60% layout to boot!

          If you’re after absolute minimum latency, sure, maybe. I’m no twitch gamer, though – I play games, but not usually ones where I’ll notice latency differences below about 100 ms.

          I can’t speak for ESR here, but what keeps me liking the Model M isn’t so much the high actuation force (which I actually would consider a drawback if not for the excellent keyfeel) as the sharp tactility, the total lack of squish in either the keys or the chassis and the fact that the keyboard stays where you put it. Also, the large size is a fairly good fit for my hands, while I trip over my own fingers on small layouts. Basically the Apple Magic Keyboard lacks everything I like about the Model M, which are all traded off to get a small, compact layout with low actuation force and short key travel, which are features I don’t value very much at all. (In fact, low-force, short-travel keys are a misfeature IMO, but folks with small hands or a very light touch may well disagree.)

          I’ve finally found a keyboard I like better – the Matias Tactile Pro – but even there it’s close. What I like better about the Matias is that the switches are even crunchier than the IBM buckling springs – the tactile bump feels sharper, even though the actuation force is somewhat lower (60 gf vs 75 gf), and the switch, being based on an Alps design, doesn’t have the reset hysteresis characteristic of both IBM buckling springs and Cherry MX Blues. The latter is probably only interesting for gaming, which I’ll admit is why I care about it. Doesn’t come up often, but is occasionally noticeable.

          1. > It’s got the build quality of a rubber rat, short-travel squishy keys with less tactility than MX Browns, and it’s a gorram 60% layout to boot!

            Short-travel squishy keys would be a dealbreaker for me even without all its other problems.

          2. >I can’t speak for ESR here, but what keeps me liking the Model M isn’t so much the high actuation force (which I actually would consider a drawback if not for the excellent keyfeel) as the sharp tactility, the total lack of squish in either the keys or the chassis and the fact that the keyboard stays where you put it.

            On this subject you are hereby authorized to speak for me. :-) “Sharp tactility” is a good way to put it.

          3. “Color me a keyboard snob…”

            My favorite of all time is the original Microsoft Natural keyboard. Yeah, that kinda makes me the opposite of a keyboard snob.

            “– the Matias Tactile Pro”

            Thanks for that. I have put one on my wish list.

            1. >My favorite of all time is the original Microsoft Natural keyboard. Yeah, that kinda makes me the opposite of a keyboard snob.

              Does it? I think that was actually a pretty good design, except for the keyswitches being sucky.

              I’d use something like a Natural with tactile switches. And probably like it a lot.

              1. “I’d use something like a Natural with tactile switches. And probably like it a lot.”

                Me too. Haven’t found one yet.

                The ergonomics of most keyboards are really bad. Especially for anyone with broad shoulders. The Natural was a respectable attempt at a fix.

                1. That has me half-thinking of trying to build a custom one… I have a box of Matias Click switches sitting here that was slated for either a Northgate OmniKey repair or a Sun Type 5 conversion, but…

                  1. >a Northgate OmniKey repair

                    /me salutes.

                    Second best keyboard ever after the Model M. And much harder to find.

              2. I’d prefer something like the Microsoft comfort curve but with tactile switches. Or some other “ergo” design that didn’t do tenting. (No tenting for me, thankyouverymuch.) But yeah…

                1. No one so far has recalled that the layout of all keyboards is wrong too. We should all be using Dvorak instead of QWERTY.

        3. As someone who actually uses one of Apple’s butterfly-switch designs, let me just say… You have got to be kidding me! Jade is absolutely right: Apple keyboards these days have horrible build quality. The ‘J’ key on mine has already popped out, and I can’t get it to stay put! Overall, the keyboard as a whole feels flimsy and fragile. The lack of travel on the newer designs just feels… weird, too. It also hurts to use the keyboard for extended periods–but maybe that’s because I do strike the keys too hard. (Damn rubber domes…) Regardless, I wouldn’t trust that thing for gaming or typing!

          /me sighs

          I yearn for the days of yore when Apple actually made good keyboards, with Alps switches. I miss my old Apple IIGS ADB keyboard–one of the first ever made. That was a keyboard.

      2. Not likely. Most of the volume in the mechanical-keyboard market these days isn’t from old-school Model M fans but from gamers.

        I was including gamers under my definition of “enthusiasts”. I’ve met a lot of gamers over the last two years while exploring the insanity of building a custom keyboard. I was a bit irritated to discover that virtually all of the PCBs out there for custom builds were for tenkeyless or even more minimalist configurations. No respect among gamers for the numpad.

      3. But haven’t they done away with keyboards? Aren’t you just supposed to talk to the box now? That’s something that got disabled on my phone straight away!

      4. Let me say, as a user who is not a programmer (the only programming language I’ve used this century is LaTeX), I go for Apple, who give me (a) and (c). It also helps that I prefer the Mac interface—I had Windows running on my Mac Mini inside an emulator for a while, because the client who wanted me to edit LaTeX manuscripts used software that was written for Windows, and I never got past bare tolerance—but customer support from people with actual knowledge is the biggest selling point. I hope Apple’s strategy remains viable for them for a long time.

    2. I think of it much like the command line vs. GUI — used to be that ALL computer users needed to learn how to use a command shell, whilst now it’s a small minority of specialists that even see the point.

      Used to be the same with editors too. There are stories from the 1970s of secretaries using Lisp to “customize” Multics Emacs without trouble. (Because they were customizing, and not programming, you see, it was a nontechnical task that they weren’t afraid to approach!). Then Emacs became a serious, highly technical editor pretty much for programmers only. Nowadays, you can forge a career as a highly-paid professional developer without ever leaving the warm cocoon of an IntelliJ-derived product to do your coding and testing, and a Web browser to check on CI builds, perform code reviews, and frobnicate JIRA tickets. Emacs is dimly remembered either as some ancient thing that ran on old DEC iron, or the alternative to vim that lost the war in the 90s.

  27. Kind of want to get an Intel NUC just for the skull. It’s a good excuse for setting up a media PC to stream movies and games to.

    On the laptop end, I do think Razer could have a pretty nice triple-screen laptop in five years, somewhere near the five-pound region. If all the screens are at least 18 inches, that would be good enough for a portable development machine. I think the guy next to me on the plane might be pissed if I popped those out next to him.

    1. >Kind of want to get an Intel nuc just for the skull.

      That’s the Hades Canyon tricked-out-for-gamers version. Not what I have.

    2. You know what I miss? Lunchbox form-factor PCs like the old Compaqs. If you’re living in a college dorm or small shitty apartment (and have to move frequently because landlords keep driving the rent out of your reach), those could retain most of the expandability of regular PCs while still being a compact man-portable complete system requiring few to no wires or cables. They’d be great for gamers. Some of the bigger ones could even support multiple displays all in one case.

      Unfortunately the only ones I see nowadays are for sale only to government agencies, military services, and heavy industry, at a cost of tens of thousands of dollars a pop.

      1. Graphical user interfaces killed them. A screen that was big enough for the text-only world of 1990 won’t cut it now.

        Now we have laptops and all-in-one desktops to fill that niche. They work well for most people, but they lack the expansion capacity of the old lunchbox computers, so some edge cases fall through the cracks. And the old lunchboxes don’t look nearly as pretty on your desk as a modern AIO. It would be possible to build a thicker AIO with a slot or two behind the screen, but then it wouldn’t be as thin…

        1. >Now we have laptops and all-in-one desktops to fill that niche. They work well for most people. But they lack the expansion capacity of the old lunchbox computers

          I was unfamiliar with the concept of an “all-in-one desktop” before this. I did some websearches, and….yikes! There’s just one mainboard inside the flatscreen enclosure?

          I don’t like it. I didn’t mind the NUC doing away with expansion-card slots; their time has passed. But at least if the NUC or monitor goes tits-up I’m not stuck with replacing the other half. And the fact that I can yank out the SSD and transplant it into another NUC gives me warm fuzzies.

          1. @esr: I was unfamiliar with the concept of an “all-in-one desktop” before this. I did some websearches, and….yikes! There’s just one mainboard inside the flatscreen enclosure?

            You’re just now aware of them? You need to get out more. :-)

            They are popular choices for the sort of folks that don’t fiddle with the hardware. An old friend of mine got one as an emergency replacement for a failed desktop. It fit neatly on top of her desk, and she plugged in USB keyboard and mouse and away she went. I provided tech support in getting her data off the failed machine. Fortunately, that was simple. She wanted a non-touch screen model, but that was out of stock at the retailer, so…

            I have a Dell AIO at the moment that was a pass-along from a developer neighbor in my area. It decided it couldn’t boot from the boot disk. (The disk came up just fine when I put it into a drive enclosure and accessed it from my desktop, but the AIO decided it wasn’t bootable.)

            I got a 120GB SSD I plan to put Linux on as a replacement boot disk, but it’s not going to live on my desk. It will slide sideways into the shelf below the desk where other systems live, and be accessed over my network via a remote desktop setup. The original 1TB SATA HD is in a USB3 drive enclosure awaiting a use case.

            >Dennis

          2. > I was unfamiliar with the concept of an “all-in-one desktop” before this. I did some websearches, and….yikes! There’s just one mainboard inside the flatscreen enclosure?

            It’s basically a reconfigured laptop.

            Also, perhaps you wondered if it could happen, or when it would… but this is the generation gap.

            1. >It’s basically a reconfigured laptop.

              Sure. But it feels like a worse idea to me because:

              (a) It’s labeled ‘desktop’, which has in my mind connotations of “you’re buying modularity”, and

              (b) the laptops I use support swapping out the 2.5″ slimline hard drive or SSD. That’s important, it means you can rescue your data if anything but the hard drive itself goes south. Looks like “all-in-ones” don’t have that option.

              1. @esr: the laptops I use support swapping out the 2.5″ slimline hard drive or SSD. That’s important, it means you can rescue your data if anything but the hard drive itself goes south. Looks like “all-in-ones” don’t have that option.

                You need to actually look at one in the flesh. You certainly can do it. They are not exactly “designed for serviceability” like my HP desktop which makes it easy to pop the hood and fiddle with the innards, so it might be more of a PITA than a laptop, but it’s doable. The pass-along I have let me get the existing SATA HD out without extreme effort, and you can swap out existing RAM to upgrade to 8GB total. Pop the back cover and you can access anything.

                >Dennis

  28. What operating system will they run?

    Android is just about usable for a hand-held machine that you don’t do real productivity work on, but its architecture is app-centric, and that’s to serve the app stores and place-on-homescreen-as-a-brand use cases, not to serve me as a user.

    This (containerised) architecture means that everything is silo’d vertically and only interoperable by an explicit choice of the app author. This means that Metcalfe’s law never takes hold and it’s not possible to combine tools or do anything that wasn’t foreseen by the original author.

    The current state of smartphone app interoperability is not even up to the quality of where Windows office suites were in the early ’90s when Microsoft started embarking on their effort to make their suite a bit more seamless.

    On the other side of the coin, there is not yet (AFAIK) a Linux distro that really works acceptably on anything other than conventional WIMP-mode peripherals with a big screen.

    Is there currently a once-in-a-generation opportunity for the Open Source community to provide an open architecture style platform that we had on the PC but never got to on mobile? How long do we have?

    1. “On the other side of the coin, there is not yet (AFAIK) a Linux distro that really works acceptably on anything other than conventional WIMP-mode peripherals with a big screen.

      Is there currently a once-in-a-generation opportunity for the Open Source community to provide an open architecture style platform that we had on the PC but never got to on mobile? “

      Debatable. Canonical made a well-funded effort at “convergence” of putting Ubuntu on a phone/tablet with a highly adaptable UI. It had a lot of good ideas, but no market.

      But for those of us who need a *real* computer, plugging my smartphone into a big display buys me nothing. But if that phone was secretly running a real distro…

      1. The problem with Canonical’s effort was that they started with the right idea and then suckered themselves into a convergent interface instead of a dual interface, and, to my understanding, eventually strayed so far from the original idea that they reached the point where they couldn’t even run the same code as normal ARM-architecture desktop Ubuntu in desktop mode anymore.

        My current phone outperforms a Raspberry Pi by a healthy margin. Every single mid-range phone these days should already come with a dock and with a desktop userland preinstalled. The problem is, Google doesn’t want Android to be anything more than a fancy terminal to their Web services with as much local functionality lobotomized as possible, Apple wants to get iDolaters to cough up money for both a computer *and* a phone as separate hardware items, and Microsoft and Canonical, despite wanting to deliver products that function both as a computer and a phone, and despite attempting to do so, are too stupid to actually deliver.

        1. If such a thing is ever to come to pass, the only player I can imagine pulling it off would be Samsung. The money and market penetration are all to be had in the phone space. I can’t think of anyone else with that kind of reach in that market. Apple could do it, of course, but their incentives are against it.

          1. Apple is really the only company that can do it. Look at your laptop. Virtually everything distinctive about it was a design decision made by Apple. Its form factor — till about 2011, virtually all laptops were PowerBook clones; after that, most became MacBook Air clones. The fact that it has a touchpad centered below the keyboard with the case serving as a wrist rest on either side — also a PowerBook innovation. The I/O port and disk drive options available — no one was putting USB into consumer hardware until Apple did it with the Mac. No one was doing USB-C until Apple made it the only port on MacBooks. Floppyless and later CD-ROM-less computers were marketed by Apple first, and then copied by everyone else. Face it — this is Apple’s industry, everybody else just plays in it.

            If Samsung does phone-desktop convergence (like they’re trying to do with DeX), it will be an interesting gimmick that only nerds and hobbyists would get any mileage out of. (Especially if they use — ugh — desktop Linux.) And it will be weak and useless for real work because non-Apple smartphone CPUs are not up to the task of desktop work. If Apple does phone-desktop convergence, it will change the world. And because Apple makes the fastest ARM parts in the world, the iPhone may end up becoming not just the most powerful smartphone, but the most powerful computing device on the market.

            And convergence won’t happen for most folks until Apple decides it will happen.

            1. That’s sad.

              Because no matter how well designed the boot is, having it stomping on your face forever is, well double plus ungood.

              1. Mistress Apple be like “You’ll take that Louboutin on your pig face and you’ll like it, bitch!” And you’d be surprised how many customers are into that sort of thing.

                  1. Well, these days not having an iPhone can actually negatively affect your social life. Search Twitter for the hashtag #greenbubbles. iOS color-codes message bubbles depending on origin: blue for iMessage, green for SMS. Some women will not date a man who texts them in green bubbles.

                    1. >Some women will not date a man who texts them in green bubbles.

                      If I were trolling for dates, I would consider that women with a selection criterion that shallow and trendoid are doing me a favor by excluding me.

  29. The first all-in-ones were the lunchbox computers: the Osborne 1, the Kaypro, and finally the Compaq lunchboxes and their clones. But those were meant to be maintained, and the Compaqs even had expansion slots.

    Apple started the modern trend with the original Macintosh; a packaged device that wasn’t meant to be user-repairable. It was even held together with non-standard screws to make it more challenging to open the box.

    The first iMac in 1998 was a return to an all-in-one package; that was the colorful desktop lump that looked vaguely like an old-school terminal but in brighter colors. It’s not surprising that it was the first major product introduction after Steve Jobs returned to the company; he was always a champion of the “computer as a black box” idea. Next they had the “Luxo lamp” G4 design with the screen mounted on a swivel base. Since then they have done a succession of designs that basically look like fat and heavy monitors; over time they have gotten thinner as packaging techniques have improved and are now not much thicker than a monitor.

    Windows all-in-one systems came much later. But they’re definitely a thing now.

    All-in-one computers can work well, but if anything breaks you’re usually looking at a huge repair bill because most of the components are non-standard. (You can’t just pop down to the local computer store or place an order on Newegg for a motherboard or a power supply; only the original design will fit.) Often the RAM and SSD are soldered to the motherboard so you can’t replace or upgrade them, even if you can figure out how to open the box. They typically use laptop CPUs so they’re slower than standard desktop systems, if they have optical drives they’re the thin and less durable ones made for laptops, and their performance is frequently limited by thermal throttling.

    One other downside of an all-in-one is that you have to replace the display whenever you replace the computer. Monitors have longer useful lifetimes than CPUs do, but with an all-in-one you’re forced to replace both on the same upgrade cycle, increasing the cost of upgrading.

    Like you, I’m not a fan of all-in-ones. I still like my tower boxes for my main desktop systems. I’d consider a NUC for some purposes, especially now that I have changed over to putting all my bulk storage on a pair of NAS boxes rather than having desktop systems with multiple disk drives.

  30. > The polygon arms-race will top out when our displays exceed the highest resolution and frame rate the human retina can handle

    Bah. You fail to understand the need of the young(ish) human male to measure his penis against others.

    1. Just because the polygon arms-race will end doesn’t mean that the dick-waving contests will. The dick-waving contests will simply move to another proxy.

      1. The market drives the technology; as long as hormonal males will pay for bigger numbers, the market will provide them.

          1. I was going to argue that the Hellcat is not a car for teen-age dick-waving due to its sticker price, but then remembered that young(ish) includes 20-somethings as well, some of whom will have finished some college and landed a job with enough salary to afford a Hellcat. At least the Hellcat has better odds of getting its owner laid than an Nvidia GTX 1080 Ti.

            I will argue that the Hellcat exists at least in part because Chrysler engineers knew they could and because they knew it would be awesome. There is something quintessentially American about adding more horsepower to a muscle car that already has enough to kill you twice. It would have been a moral sin not to.

            1. When I lived in Portland, OR, the local Starbucks was flooded on afternoons with young Russians (teens or twenty somethings) whose parents were flush with the spoils of capitalism and who would do donuts in the parking lot with whatever excessively powerful sports car said parents could afford to buy them.

              The Hellcat Redeye is the sort of thing those kids would beg/demand their parents to buy them for Christmas.

              1. When I lived in central Missouri that happened on Friday and/or Saturday nights on Business Loop 70 down in Columbia.

                Only they weren’t Russians, they were Rednecks, and half the time they’d rebuilt the cars themselves.

                1. >Only they weren’t Russians, they were Rednecks, and half the time they’d rebuilt the cars themselves.

                  Yet more evidence, if it were needed, that rednecks > Russians.

            2. I shouldn’t have said “Young(ish)”, because it goes on until the testosterone stops being produced.

              At least in some men.

              At least the Hellcat has better odds of getting its owner laid than an Nvidia GTX 1080 Ti.

              Not really sure that’s the case any more.

              There is something quintessentially American about adding more horsepower to a muscle car that already has enough to kill you twice. It would have been a moral sin not to.

              Which is why someday you’ll have 16K monitors and video cards that will drive them at a 1280 Hz refresh rate.
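
              Back-of-the-envelope, assuming “16K” means 15360×8640 and plain uncompressed 24-bit color (both assumptions, not specs):

                #!/usr/bin/env python3
                """Rough bandwidth estimate for a hypothetical 16K @ 1280 Hz display.
                Assumes 15360x8640 pixels and uncompressed 24-bit color; real links
                add blanking overhead and usually compress."""

                width, height = 15360, 8640   # one common definition of "16K"
                refresh_hz = 1280
                bits_per_pixel = 24           # 8 bits per channel, no HDR

                bps = width * height * refresh_hz * bits_per_pixel
                hdmi_2_1 = 48e9               # HDMI 2.1 tops out around 48 Gbit/s

                print(f"~{bps / 1e12:.1f} Tbit/s raw, about {bps / hdmi_2_1:.0f}x "
                      f"an entire HDMI 2.1 link")

              In other words, on those assumptions the cards would need something like eighty-odd HDMI 2.1 links’ worth of bandwidth before compression, so the dick-waving has plenty of headroom left.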

  31. HP Z2 Mini G4 workstation.

    Up to a 6-core processor (either Xeon or Core i series), 32 GiB of RAM. Two internal drives.
    Gets really pricey as you get up in specs.

  32. Way back in the mid/late 90s, Sun sent a sales team to the company where I worked to do a demo of the JavaStation. The audience was a bunch of tech guys. The sales guy handed off a device to be passed around so everybody could look at it, and it made it nearly halfway around the room before somebody took out a multi-tool and disassembled it.

    Much fun was made of the JavaStation around the water cooler later, some of which was deserved. Java, at the time, was a solution desperately seeking a problem, and baking that premise into hardware was completely daft. Java hasn’t exactly covered itself in glory since, but it does do what it set out to do and has found a home in deeply entrenched enterprise applications. The idea of a network appliance built to a common spec, however, has become an enduring constant today.

    (After all, a Roku device is just a WebTV set-top box connected to YouTube.)

    I spend quite some time at hospitals, and I’ve noticed a lot of Wyse labels on the small-form-factor PCs in exam rooms. Wyse is now a wholly owned brand of Dell, but I’m just old enough to recognize the name as one of the progenitors of the network-computing paradigm.

    A couple of decades ago, I predicted to a geek friend of mine that 1) there would be OS-wide adoption of dual-cursor protocols, and 2) my phone would become my computer. As it turns out, nobody is clamoring for dual pointers even for radical edge cases (VR is excluded here, and pinch-zoom doesn’t count), and my phone is a computer cosplaying as an accessory. For whatever reason, it has never been the interface that made the computer; it’s always been the network.

  33. It’s already to the point where my customers who don’t have heavy computing needs will probably end up getting a Raspberry Pi or similar once I run out of stock of used boards. The Pi 3 has sufficient power and memory for a substantial chunk of the people in the non-tech, non-game crowd.

    Historically the big advantage of the full-size designs was modularity: you don’t have to upgrade expensive parts all at once, and you don’t have to replace the whole thing if one piece burns out. That is still their advantage. The greater modularity also means more competition for any particular component, so once you get past the very lowest end of the power-requirements spectrum you get more computing power per dollar with the larger, more modular systems.

    But these days you don’t need more power unless you’re doing heavy number crunching or playing graphics intensive games (or want to run Windows 10). Many of the people I do maintenance stuff for are running 5-7 year old machines and not pushing them particularly hard. By the time they need replacing they still won’t be likely to need anything more powerful than the cheapest single-board system, so the low-cost and portability of such devices is likely to win.

  34. A couple of things about NUC fans: 1) do NOT try to de-crud one with compressed air – you can spin the fan so fast that you damage it.

    2) On some models you can’t find a replacement – I have a Skull Canyon one, and damaged the fan that way (I didn’t find out that I had probably caused it myself until afterwards, or I wouldn’t have put in a warranty claim!). When I contacted Intel, they said “ship us the entire unit back, but take out any drives or RAM you put in it.” I have no idea what they do, but they don’t put a new fan in them – they sent me an entirely new unit, new in retail packaging.

    1. >A couple of things about NUC fans: 1) do NOT try to de-crud one with compressed air – you can spin the fan so fast that you damage it.

      I appear to have escaped this fate. Since I aired out the fan it’s working and not screaming.
