May 23

The low-down on home routers – how to buy, what to avoid

Ever had the experience of not realizing you’re a subject-matter-expert until someone brings up a topic on a mailing list and you find yourself uttering a pretty comprehensive brain dump about it? This happened to me about home and SOHO routers recently. So I’m repeating the brain dump here. I expect I’ll get some corrections, because at least one of my regulars – I’m thinking of Dave Taht – knows more about this topic than I do. But here goes…

If you’re looking to buy or upgrade a home router, I’ll start with some important negative advice: Don’t go near hardware with a Broadcom chip in it. The current too-weak-to-thrive threshold for router hardware is <4MB flash or <32MB RAM; if you buy less than that, your forward options will be seriously limited. And most importantly: Don’t trust vendor firmware! Always reflash your router with a current version from one of the major open-source firmware stacks.

If your prompt reaction is “I ain’t got time for that!”, then the Romanian, Bulgarian, and Russian cyber-mafias thank you for your contribution to their bot networks and promise they won’t do anything really bad with your router. But they will sell control of it to the highest bidder, all right.

Yes, it’s that bad out there. You’ll understand something of why by the time you finish reading this.

Continue reading

Apr 30

Friends of Armed & Dangerous 2019

Once again I will be at Penguicon and hosting a party for all friends of this blog. It will be this coming Friday evening; the room number is not yet known, but it will be posted at the con.

Those of you who participated in the design of the Great Beast may be interested to know that I expect to receive its successor at Penguicon – a Greater Beast built from a 64-core Threadripper chip. The machine might well be at the party.

UPDATE: room 507, 9pm, Friday

Apr 30

Spotting the wild Fascist

The term “fascist” gets thrown around a lot by people who have no actual clue what Fascism was about. I know what it was about because when I was about 11 or 12 I read Shirer’s The Rise and Fall of the Third Reich and became fascinated by the question that has driven my study of politics and history for all of the fifty years since: how do we prevent the genocidal horrors of the Nazi regime from ever recurring?

In the process of trying to answer this question I have read deeply about Naziism, Italian Fascism, Francoite pseudo-Fascism, Marxism, Irrationalism, and several political tendencies related to these. I know their theory, I know their history, and I know what Fascists believed about themselves. Most of all I think I have a pretty firm grasp on how a revival of Fascism in the 21st century would look. And it’s not beyond the bounds of possibility, either…but if it happens, it’s not going to come from where most people currently throwing around the term “fascist” expect.

Hence, a field guide to spotting the wild Fascist. And avoiding false alarms.

Continue reading

Apr 28

Gun voodoo and intentionality

There’s a recent article about gun violence in Haiti that features the following quotes:

But the anthropological lesson from Haiti is that the truth is more complex. It isn’t just the technological lethality of guns that makes them dangerous: They also exert a power on human agency. They change us. It is both the technology and the symbolism of a gun that can encourage someone to shoot.

[…] There is a lesson to be gleaned from understanding the supernatural potency of guns. We cannot think about guns and people as separate entities, debating gun restrictions on one hand and mental-health policy on the other. The target of intervention must be the gun-person composite. If we are to truly understand and control gun violence, we need to accept that guns have potent technological and psychological effects on people – effects that inspire violent ways of being and acting in the world.

This article has come in for a great deal of mockery from gunfolks since it issued. Representative bits of snark: “Apparently, the ‘magic’ of a professorship can turn you into an imbecile.”, “Gun owners in US- approx 100 million. If this bozo was right, everyone would be dead.”, and a picture of an AR-15 with speech balloons saying “Pick me up…Shoot me at unarmed people…you are powerless.”

I’m probably going to startle a lot of my readers by asserting that the article is not entirely wrong and gunfolks’ dismissal of it is not entirely right. In fact I’m here to argue that almost the entire quoted paragraph is exactly correct, and the last sentence would be correct if it replaced the phrase “violent ways” with “both violent and virtuous ways”.

So keep reading…

Continue reading

Apr 17

Contributor agreements considered harmful

Yesterday I got email from a project asking me to wear my tribal-elder hat, looking for advice on how to re-invent its governance structure. I’m not going to name the project because they haven’t given me permission to air their problems in public, but I need to write about something that came up during the discussion, when my querent said they were thinking about requiring a contributor release form from people submitting code, “the way Apache does”.

“Don’t do it!” I said. Please don’t go the release-form route. It’s bad for the open-source community’s future every time someone does that. In the rest of this post I’ll explain why.

Continue reading

Mar 19

Am I really shipper’s only deployment case?

I released shipper 1.14 just now. It takes advantage of the conventional asciidoc extension – .adoc – that GitHub and GitLab have established, to do a useful little step if it can detect that your project README and NEWS files are asciidoc.

And I wondered, as I usually do when I cut a shipper release: am I really the only user this code has? My other small projects (things like SRC and irkerd) tend to attract user communities that stick with them, but I’ve never seen any sign of that with shipper – no bug reports or RFEs coming in over the transom.

This time, it occurred to me that if I am shipper’s only user, then maybe the typical work practices of the open-source community are rather different than I thought they were. That’s a question worth raising in public, so I’m posting it here to attract some comment.

Continue reading

Mar 08

Declarative is greater than imperative

Sometimes I’m a helpless victim of my urges.

A while back – very late in 2016 – I started work on a program called loccount. This project originally had two purposes.

One is that I wanted a better, faster replacement for David Wheeler’s sloccount tool, which I was using to collect statistics on the amount of virtuous code shrinkage in NTPsec. David is good people and sloccount is a good idea, but internally it’s a slow and messy pile of kludges – so much so that it seems to have exceeded his capacity to maintain; at time of writing in 2019 it hadn’t been updated since 2004. I’d been thinking about writing a better replacement, in Python, for a while.

Then I realized this problem was about perfectly sized to be my learn-Go project. Small enough to be tractable, large enough to not be entirely trivial. And there was the interesting prospect of using channels/goroutines to parallelize the data collection. I got it working well enough for NTP statistics pretty quickly, though I didn’t issue a first public release until a little over a month later (mainly because I wanted to have a good test corpus in place to demonstrate correctness). And the parallelized code was both satisfyingly fast and really pretty. I was quite pleased.
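To make the shape of that parallelization concrete, here’s a minimal sketch – not loccount’s actual code, just the pattern under discussion – that fans per-file work out to goroutines and funnels the counts back through a channel:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"sync"
)

// countLines returns a crude physical line count for one file.
func countLines(path string) int {
	f, err := os.Open(path)
	if err != nil {
		return 0
	}
	defer f.Close()
	n := 0
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		n++
	}
	return n
}

func main() {
	results := make(chan int)
	var wg sync.WaitGroup
	for _, path := range os.Args[1:] {
		wg.Add(1)
		go func(p string) { // one goroutine per file named on the command line
			defer wg.Done()
			results <- countLines(p)
		}(path)
	}
	// Close the results channel once every worker has reported.
	go func() { wg.Wait(); close(results) }()
	total := 0
	for n := range results {
		total += n
	}
	fmt.Println("total physical lines:", total)
}
```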

The only problem was that the implementation, having been a deliberately straight translation of sloccount’s algorithms in order to preserve comparability of the reports, was a bit of a grubby pile internally. Less so than sloccount’s, because it was all in one language, but still. It’s difficult for me to leave that kind of thing alone; the urge to clean it up becomes like a maddening itch.

The rest of this post is about what happened when I succumbed. I got two lessons from this experience: one reinforcement of a thing I already knew, and one who-would-have-thought-it-could-go-this-far surprise. I also learned some interesting things about the landscape of programming languages.

Continue reading

Mar 05

How not to design a wire protocol

A wire protocol is a way to pass data structures or aggregates over a serial channel between different computing environments. At the very lowest level of networking there are bit-level wire protocols to pass around data structures called “bytes”; further up the stack streams of bytes are used to serialize more complex things, starting with numbers and working up to aggregates more conventionally thought of as data structures. The one thing you generally cannot successfully pass over a wire is a memory address, so no pointers.

Designing wire protocols is, like other kinds of engineering, an art that responds to cost gradients. It’s often gotten badly wrong, partly because of clumsy technique but mostly because people have poor intuitions about those cost gradients and optimize for the wrong things. In this post I’m going to write about those cost gradients and how they push towards different regions of the protocol design space.

My authority for writing about this is that I’ve implemented endpoints for nearly two dozen widely varying wire protocols, and designed at least one wire protocol that has to be considered widely deployed and successful by about anybody’s standards. That is the JSON profile used by many location-aware applications to communicate with GPSD and thus deployed on a dizzying number of smartphones and other embedded devices.
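For a concrete sense of what that profile looks like on the wire, here’s a trimmed-down sketch in Go of decoding a single GPSD-style TPV (time-position-velocity) report. The field list is abbreviated and the sample values are invented; the real report class carries many more optional fields:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TPV is a trimmed-down view of a GPSD time-position-velocity report.
// The real class carries many more optional fields than shown here.
type TPV struct {
	Class string  `json:"class"` // report type; "TPV" for position fixes
	Mode  int     `json:"mode"`  // 0/1 = no fix, 2 = 2D fix, 3 = 3D fix
	Time  string  `json:"time"`  // ISO 8601 timestamp
	Lat   float64 `json:"lat"`   // latitude in degrees
	Lon   float64 `json:"lon"`   // longitude in degrees
}

func main() {
	// An abbreviated sample of the kind of JSON object GPSD streams to
	// clients; the values here are made up for illustration.
	wire := `{"class":"TPV","mode":3,"time":"2019-03-05T12:00:00.000Z","lat":40.035,"lon":-75.520}`
	var fix TPV
	if err := json.Unmarshal([]byte(wire), &fix); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", fix)
}
```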

I’m writing about this now because I’m contemplating two wire-protocol redesigns. One is of NTPv4, the packet format used to exchange timestamps among cooperating time-service programs. The other is an unnamed new protocol in IETF draft, deployed in prototype in NTPsec and intended to be used for key exchange among NTP daemons authenticating to each other.

Here’s how not to do it…

Continue reading

Feb 23

Announcing loccount 2.0 – now up to 74 languages

I just released the 2.0 version of loccount.

This is a major release with many new features and upgrades. It’s gone well beyond just being a faster, cleaner, bug-fixed port of David A. Wheeler’s sloccount. The count of supported languages is now up to 74 from sloccount’s 30. But the bigger change is that for 33 of those languages the tool can now deliver a statement count (LLOC = Logical Lines Of Code) as well as a line count (SLOC = Source Lines of Code, ignoring whitespace and comments).

To go with this, the tool can now perform COCOMO II cost and schedule estimation based on LLOC as well as COCOMO I based on SLOC.
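For readers who haven’t met COCOMO, the textbook basic-COCOMO (organic-mode) formulas that this style of estimation descends from look like the sketch below. Treat it as an illustration of the idea, not loccount’s exact arithmetic – the tool’s actual coefficients and model variants may differ:

```go
package main

import (
	"fmt"
	"math"
)

// basicCOCOMO computes effort (person-months) and schedule (months) from a
// size in thousands of lines, using the classic COCOMO 81 "organic mode"
// coefficients. loccount's real COCOMO I/II parameters may differ.
func basicCOCOMO(kloc float64) (effort, schedule float64) {
	effort = 2.4 * math.Pow(kloc, 1.05)
	schedule = 2.5 * math.Pow(effort, 0.38)
	return
}

func main() {
	e, s := basicCOCOMO(10.0) // a hypothetical 10 KLOC project
	fmt.Printf("effort: %.1f person-months, schedule: %.1f months\n", e, s)
}
```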

The manual page includes the following cautions:

Continue reading

Jan 29

RISC-V is doing disruption right

I’ve recently become aware of RISC-V.

Verry innterresting.

Technical introduction here (somewhat out of date; hardware support is broader and deeper now, and I have seen video of a full Linux port running Doom), but the technicalia is mostly not where I’m going with this post.

I’m seeing a setup for a potentially classic disruption from below here. And not mainly because this instruction set is well designed, though it is. Simple, clean, orthogonal – it makes my compiler-jock heart happy; writing a code generator for it would be fun, if I needed to – but there’s already an LLVM back end for it.

And that points at what’s really interesting about RISC-V: whoever is running their product strategy has absorbed the lessons of previous technology disruptions and is running this one like a boss.

Continue reading

Jan 13

A martial artist looks at swordfighting in the movies

I was reminded, earlier today, that one of the interesting side effects of knowing something about hand-to-hand and contact-weapons-based martial arts is how big a difference it makes in how you see movies.

Most people don’t have that knowledge. So today I’m going to write about the quality of sword choreography in movies, and how that has changed over time, from the point of view of someone who is an experienced multi-style martial artist in both sword and empty hand. I think this illuminates a larger story about the place of martial arts in popular Western culture.

Continue reading

Dec 24

Pessimism about parallelism

Massive concurrency and hardware parallelism are sexy topics in the 21st century. There are a couple of good reasons for this and one rather unfortunate one.

Two good reasons are the combination of eye-catching uses of Graphics Processing Units (GPUs) in games and their unexpected secondary uses in deep-learning AI – these exploit massive hardware parallelism internally. The unfortunate reason is that single-processor execution speeds hit a physics wall in about 2006. Current leakage and thermal runaway issues now sharply limit increases in clock frequency, and the classic way out of that bind – lowering voltage – is now bumping up against serious quantum-noise issues.

Hardware manufacturers competing for attention have responded by putting ever more processing cores in each chip they ship and touting the theoretical total throughput of the device. But there have also been rapidly increasing amounts of effort put into pipelining and speculative-execution techniques that use concurrency under the hood, in attempts to make the serial single processors that programmers can see crank instructions more rapidly.

The awkward truth is that many of our less glamorous computing job loads just can’t use visible concurrency very well. There are different reasons for this that have differing consequences for the working programmer, and a lot of confusion abroad about those reasons. In this episode I’m going to draw some distinctions that I hope will help all of us think more clearly.

First, we need to be clear about where harnessing hardware parallelism is easy and why that seems to be the case. We look at computing for graphics, neural nets, signal processing, and Bitcoin mining, and we see a pattern: parallelizing algorithms work best on hardware that is (a) specifically designed to execute them, and (b) can’t do anything else!

We also see that the inputs to the most successful parallel algorithms (sorting, string matching, fast-Fourier transform, matrix operations, image reverse quantization, and the like) all look rather alike. They tend to have a metric structure and an implied distinction between “near” and “far” in the data that allows it to be carved into patches such that coupling between elements far from each other is negligible.

In the terms of an earlier post on semantic locality, parallel methods seem to be applicable mainly when the data has good locality. And they run best on hardware which – like the systolic-array processors at the heart of GPUs – is designed to support only “near” communication, between close-by elements.
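Here’s a toy illustration – a hypothetical sketch, not drawn from any of the workloads above – of what carving data into patches looks like in practice: a parallel sum over contiguous chunks, where no element needs anything from outside its own patch until the final cheap merge:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelSum splits data into contiguous patches, one per worker. Each patch
// is summed independently; no element needs data from outside its patch, so
// the workers never communicate until the final merge.
func parallelSum(data []float64) float64 {
	workers := runtime.NumCPU()
	partial := make([]float64, workers)
	chunk := (len(data) + workers - 1) / workers
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if lo >= len(data) {
			break
		}
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			for _, v := range data[lo:hi] {
				partial[w] += v // each worker writes only its own slot
			}
		}(w, lo, hi)
	}
	wg.Wait()
	total := 0.0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	data := make([]float64, 1000000)
	for i := range data {
		data[i] = 1.0
	}
	fmt.Println(parallelSum(data)) // prints 1e+06
}
```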

By contrast, writing software that does effective divide-and-conquer for input with bad locality on a collection of general-purpose (Von Neumann architecture) computers is notoriously difficult.

We can sum this up with a heuristic: Your odds of being able to apply parallel-computing techniques to a problem are inversely proportional to the degree of irreducible semantic nonlocality in your input data.

Another limit on parallel computing is that some important algorithms can’t be parallelized at all – provably so. In the blog post where I first explored this territory I coined the term “SICK algorithm”, with the SICK expanded to “Serial, Intrinsically – Cope, Kiddo!” Important examples include but are not limited to: Dijkstra’s n-least-paths algorithm; cycle detection in directed graphs (with implications for 3-SAT solvers); depth-first search; computing the nth term in a cryptographic hash chain; network-flow optimization.

Bad locality in the input data is implicated here, too, especially in graph- and tree-structure contexts. Cryptographic hash chains can’t be parallelized because their entries have to be computed in strict time order – a strictness which is actually important for validating the chain against tampering.
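The hash-chain case is the cleanest illustration of a SICK algorithm. A minimal Go sketch: term n cannot be computed until term n-1 exists, so extra cores buy you nothing:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// nthChainTerm computes the nth term of a hash chain rooted at seed.
// Each iteration consumes the previous term, so the loop is irreducibly
// serial: no partitioning of the work is possible.
func nthChainTerm(seed []byte, n int) []byte {
	h := seed
	for i := 0; i < n; i++ {
		sum := sha256.Sum256(h)
		h = sum[:]
	}
	return h
}

func main() {
	fmt.Printf("%x\n", nthChainTerm([]byte("genesis"), 1000000))
}
```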

There’s a blocking rule here: You can’t parallelize if a SICK algorithm is in the way.

We’re not done. There are at least two other classes of blocker that you will frequently hit.

One is not having the right tools. Most languages don’t support anything but mutex-and-mailbox, which has the advantage that the primitives are easy to implement but the disadvantage that it induces horrible complexity explosions and is nigh-impossible to model accurately in your head at scales over about four interacting locks.
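As a toy illustration – not drawn from any real codebase – of why that complexity explosion happens, here are two goroutines taking the same pair of locks in opposite orders; the program runs fine most of the time and then, under enough load, deadlocks:

```go
package main

import "sync"

var a, b sync.Mutex

func worker1() {
	a.Lock() // takes a, then b
	defer a.Unlock()
	b.Lock()
	defer b.Unlock()
}

func worker2() {
	b.Lock() // takes b, then a – the opposite order
	defer b.Unlock()
	a.Lock()
	defer a.Unlock()
}

func main() {
	// Run both orderings concurrently, over and over. Sooner or later each
	// worker holds one lock while waiting forever for the other; the Go
	// runtime then reports that all goroutines are asleep.
	for {
		var wg sync.WaitGroup
		wg.Add(2)
		go func() { defer wg.Done(); worker1() }()
		go func() { defer wg.Done(); worker2() }()
		wg.Wait()
	}
}
```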

If you are lucky you may get some use out of a more tractable primitive set like Go channels (aka Communicating Sequential Processes) or the ownership/send/sync system in Rust. But the truth is, we don’t really know what the “right” language primitives are for parallelism on von-Neumann-architecture computers. And there may not even be one right set of primitives; there might be two, three, or more different sets of primitives appropriate for different problem domains but as incommensurable as one and the square root of two. At the present state of the art in 2018 nobody actually knows.

Last but not least, the limitations of human wetware. Even given a tractable algorithm, a data representation with good locality, and sharp tools, parallel programming seems to be just plain difficult for human beings even when the algorithm being applied is quite simple. Our brains are not all that good at modelling the simpler state spaces of purely serial programs, and much less so at parallel ones.

We know this because there is plenty of real-world evidence that debugging implementations of parallelizing code is worse than merely _difficult_ for humans. Race conditions, deadlocks, livelocks, and insidious data corruption due to subtly unsafe orders of operation plague all such attempts.

Having a grasp on these limits has, I think, been growing steadily more important since the collapse of Dennard scaling. Due to all of these bottlenecks in the supply of code that can use multiple cores effectively, some percentage of the multicore hardware out there must be running software that will never saturate its cores; or, to look at it from the other end, the hardware is overbuilt for its job load. How much money and effort are we wasting this way?

Processor vendors would love you to overestimate the functional gain from snazzy new silicon with ever larger multi-core counts; however else will they extract enough of your money to cover the eye-watering cost of their chip fabs and still make a profit? So there’s a lot of marketing push out there that aims to distract capacity planners from ever wondering when those gains are real.

And, to be fair, some places they are. The kinds of servers that live in rack mounts and handle hundreds of thousands of concurrent transactions per second probably have their core counts matched to their job loads fairly well. Smartphones or embedded systems, too – in both these extreme cases a lot of effort goes into minimizing build costs and power budgets, and that’s going to exert selective pressure against overprovisioning.

But for typical desktop and laptop users? I have dark suspicions. It’s hard to know, because we’ve been collecting real performance gains due to other technology changes like the shift from spinning-rust to solid-state mass storage. Gains like that are easy to mistake for an effect of more CPU throughput unless you’re profiling carefully.

But here’s the shape of my suspicion:

1. For most desktop/laptop users the only seriously parallel computing that ever takes place on their computers is in their graphics chips.

2. More than two processor cores is usually just wasteful hotrodding. Operating systems may be able to parcel out applications between them, but the general run of application software is unable to exploit parallelism and it is rare for most users to run enough different processor-hungry applications simultaneously to saturate their hardware that way.

3. Consequently, most of the processing units now deployed in 4-core-and-up machines are doing nothing most of the time but generating waste heat.

My regulars include a lot of people who are likely to be able to comment intelligently on this suspicion. It will be interesting to see what they have to say.

UPDATE: A commenter on G+ points out that one interesting use case for multicores is compiling code really quickly. Source for a language like C has good locality – it can be compiled in well-separated units (source files) into object files that are later joined by a linker.

Dec 16

The blues about the blues

Some kinds of music travel well – they propagate out of their native cultures very readily. American rock music and European classical music are obvious examples; they have huge followings and expert practitioners pretty much everywhere on earth that’s in contact with civilization.

Some…don’t travel well at all. Attempts to imitate them by people who aren’t native to their home culture seldom succeed – they fall afoul of subtleties that a home-country connoisseur can hear but not explain well, or at all. The attempts may be earnestly polished and well meant, but in some ineffable way they lack soul. American blues music and to a lesser but significant extent jazz are like this, which is all the more interesting because they’re close historical and genetic kin to rock.

Why am I thinking about this? Because one of the things that YouTube’s recommender algorithms make easy (and almost inevitable) is listening to strings of musical pieces that fit within what the algorithms recognize as a genre. I’ve noticed that the places where its genre recognition is most likely to break down are correlated with whether the genre travels well. So whatever I’m noticing about that distinction is not just difficult for humans but for machine learning as well, at least at current state of the art.

Most attempts at blues by non-Americans are laughable – unintentional parodies by people trying for the real thing. Not all; there was an older generation of British and Irish musicians who immersed in the form in the early Sixties and grokked it well enough to bring it back to the U.S., completely transforming American rock in the process. There are, for some reason, a small handful of decent blues players in Holland. But elsewhere, generative understanding of the heart of the blues is so rare that I was utterly gobsmacked when I found it in Greece.

I don’t know for sure, not being a home-country connoisseur, but I strongly suspect that Portuguese fado is like this. I have a pretty good ear and readily synchronize myself to different musical styles; I can even handle exotica like Indian microtones decently. But I wouldn’t go near fado; I sense a grave risk that if I tried, any actual Portuguese fado fan would be politely suppressing a head-shaking he-really-don’t-get-it reaction the same way I usually have to when I listen to Eurojazz.

And Eurojazz players avoid ludicrous failure more often than Euro blues players do! Why? I don’t know. I can only guess that the recognition features of “real” jazz are less subtle than for “real” blues, and imitators are thus less likely to slide into unintentional parody. But since I can’t enumerate those recognition features this remains a guess. I do know timing is part of it, and there are uses of silence that are important. Eurojazz tends to be too busy, too slick.

If it’s any consolation to my non-American readers, Americans don’t automatically get it either. My own beloved wife, despite being musically talented, doesn’t have the ear – blues doesn’t speak to her, and if she were unwise enough to try to imitate it she would doubtless fail badly.

One reason I’m posting this is that I hope my commenters might be able to identify other musical genres that travel very poorly – I want to look for patterns. Are there foreign genres that Americans try to imitate and don’t know they’re botching?

And now a different kind of blues about the blues…

There’s an unacknowledged and rather painful truth about the blues, which is that the primitive Delta versions blues fans are expected to revere are in many ways not as interesting as what came later, out of Chicago in particular. Monotonous, repetitive lyrics, primitive arrangements…but there’s a taboo against noticing this so strong that it took me over forty years to even notice it was there, and I might still not have if I hadn’t spent two days immersed in the rootsiest examples I could find on YouTube.

I found that roots blues is surrounded by a haze of retrospective glorification that (to my own shock!) it too often fails to deserve. And of course the obvious question is “Why?”. I think I’ve figured it out, and the answer is deeply sad.

It’s because, if you notice that later, more evolved and syncretized versions of the blues tend to be more interesting, and you say so, you risk making comparisons that will be interpreted as “white people do it better than its black originators”. And nobody wants that risk.

This came to me as I was listening to a collection of blues solos by Gary Moore, a now-deceased Irishman who played blues with both real heart and a pyrotechnic brilliance you won’t find in Robert Johnson or (one of my own roots favorites) John Lee Hooker. And found myself flinching from the comparison; took me an act of will to name those names just now, even after I’d been steeling myself to it.

Of course this is not a white > black thing; it’s an early vs. late thing. Recent blues players (more likely to be white) have the history of the genre itself to draw on. They have better instruments – Gary Moore’s playing wouldn’t be possible without Gary Moore’s instrument, you can get more tone colors and dynamic range out of a modern electric guitar than you could out of a wooden flattop with no pickups. Gary Moore grew up listening to a range of musical styles not accessible to an illiterate black sharecropper in 1930 and that enriched his playing.

But white blues players may be at an unfair disadvantage in the reputational sweepstakes forever, simply because nobody wants to take the blues away from black people. That would be a particularly cruel and wrong thing to do given that the blues originated as a black response to poverty and oppression largely (though not entirely) perpetrated by white people.

Yes, the blues belongs to all of us now – it’s become not just black roots music but American roots music; I’ve jammed onstage with black bluesmen and nobody thought that was odd. Still, the shadow of race distorts our perceptions of it, and perhaps always will.

Dec 04

The curious case of the missing accents

I have long been a fan of Mark Twain. One of the characteristics of his writing is the use of “eye dialect” – spellings and punctuation intended to phoneticize the speech of his characters. Many years ago I noticed a curious thing about Twain’s eye dialect – that is, he rendered few or no speech differences between Northern and Southern characters. His Northerners all sounded a bit Southern by modern standards, and his Southerners didn’t sound very Southern.

The most obvious possible reason for this could have been that Twain, born and raised in Missouri before the Civil War, projected his own border-state dialect on all his characters. Against this theory I could set the observation that Twain was otherwise a meticulously careful writer with an excellent ear for language, making that an unlikely sort of mistake for him. My verdict was: insufficient data. And I didn’t think the question would ever be resolvable, Twain having died when sound recording was in its infancy.

Then I stumbled over some fascinating recordings of Civil War veterans on YouTube. There’s Confederate “General” Julius Howell Recalls the 1860s from 1947. And 1928-1934: Recollections of the US Civil War. And here’s what jumped out at me…

Continue reading

Nov 27

SRC, four years later

Four years ago, I wrote an entire version-control system in a 14-hour burst of inspiration. It’s a small, lightweight tool designed for solo single-file projects that allows several histories to coexist in a single directory – good for /etc files, HOWTOs, or that script collection in your ~/bin directory.

I wasn’t certain, at the time, that the concept would prove out as a production tool for anyone but me. But it did. Here are some statistics: Over 4 years, 21 point releases, 644 commits, 11 committers. Six issues filed by five different users, 20 merge requests. I know of about half a dozen users who’ve raised their hands on IRC or in blog comments. Code has about quintupled in size from the first alpha release (0.1, 513 lines) to 2757 lines today.

That is the statistical profile of a modest success – in fact the developer roster is larger than I realized before I went back through the logs. The main thing looking at the history reveals is that there’s a user community out there that has been sending a steady trickle of minor bug reports and enhancement requests over the whole life of the project. This is a lot more encouraging than dead air would be.

Of course I don’t know how many total users SRC has. But we can base a guess on fanout patterns observed when other projects (usually much larger ones) have done polls to try to measure userbase size. A sound extrapolation would be somewhere between one and two orders of magnitude more than have made themselves visible – so, somewhere between about 200 and 2000.

(There seems to be something like an exponential scaling law at work here. For random open source project X old enough to have passed the sudden-infant-death filter, if there’s an identifiable core dev group in the single-digit range you can generally expect the casual contributors to be about 10x more and the userbase to be at least 100x more.)

SRC has held up pretty well as a design exercise, too. I’ve had complaints about minor bugs in the UI, but nobody bitching about the UI itself. Credit to the Subversion developers I swiped most of the UI design from; their data model may be obsolete, but nobody in VCS-land has done better at UI and I was at least smart enough not to try.

2.7KLOC is nicely compact for an entire version-control system supporting both RCS and SCCS back ends. I don’t expect it to get much larger; there are only two minor items left on the to-do list, neither of which should add significant lines of code.

Today I’m shipping 1.21. With gratitude to everyone that helped improve it.

Nov 22

Contemplating the cute brick

Some years ago I predicted that eventually the core of your desktop PC would morph into a physically tiny compute engine that would merge with your smartphone, talking through standard ports and cables to full-sized peripherals like a keyboard and a flatscreen too large to be portable.

More recently I examined the way that compute bricks – small-form-factor fanless PCs running low-power chips – have been encroaching on the territory of traditional tower PCs. Players in this space include Jetway, Logic Supply, Partaker, and Shuttle. Poke a search engine with “fanless PC” to get good hits.

I have a Jetway running production in my basement; it’s my Internet-facing mail- and web-server. There’s a second one I have set up with Devuan that I haven’t assigned a role to yet; I may use it as a backup host.

These compute bricks are a station on the way to my original prediction, because they get consumers used to thinking of their utility machines as small compute nodes attached to human-sized peripheral hardware that may have a longer lifetime than the compute node itself.

At the lowest end of the compute-brick class are little engines like the Raspberry Pi. And right above it is something slightly different – bricks with a fan, active cooling enabling them to run the same chips used in tower PCs.

Of course the first machine in this class was the Apple Mac Mini, but it dead-ended years ago for reasons that aren’t Apple’s fault. It was designed before SSDs were really a thing and has spinning-rust-centric design assumptions in its DNA; thus, it’s larger, noisier, and waaay more expensive than a Jetway-class brick. Apple must never have sold very many of them; we can tell this by the fact that the product went four years between refreshes.

On the other hand, a couple days ago I dropped in a replacement for my wife’s aging tower PC. It’s an Intel NUC, a brick-with-fan, but unlike the Mac Mini it seems to have been designed from the start around the assumption that its mass storage would be SSD. As such, it achieves what the Mac Mini didn’t quite; it opens a new front in the ephemeralization wars.

Continue reading

Nov 18

Stop whining and get the job done

I’ve been meaning to do something systematic about losing my excess weight for some time. Last Thursday I started the process by seeing an endocrinologist who specializes in weight management.

After some discussion, we developed a treatment plan that surprised me not at all. I’m having my TSH levels checked to see if the hypothyroidism I was diagnosed with about a year ago is undertreated. It is quite possible that increasing my levothyroxine dose will correct my basal metabolic rate to something closer to the burn-food-like-a-plasma-torch level it had when I was younger, and I’ll shed pounds that way.

The other part is going on a low-starch, high-protein calorie-reduction diet, aiming for intake of less than 1500 calories a day. Been doing that for nine days now. Have lost, according to my bathroom scale, about ten pounds.

I’d have done this sooner if I’d known it was so easy. And that’s what I’m here to blog about today.

Continue reading