Three easy pieces

I’m back from vacation – the World Boardgaming Championships, where this year I earned laurels in Ticket To Ride and Terra Mystica.

Catching up on some releases I needed to do:

* Open Adventure 1.3: Only minor bugfixes in this one; it’s pretty stable now. The test suite now achieves 100% coverage, an achievement I’ll probably write about in a future post.

* ascii 1.18: By popular demand, this can now generate a 4×16 as well as a 16×4 table. This is especially useful in conjunction with the new -b option to display binary code points; a rough sketch of such a layout appears after this list.

* Things Every Hacker Once Knew: With new sections on the slow birth of distributed development and the forgotten history of early bitmap displays.
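
To give a feel for the new layout, here’s a minimal Python sketch of a 4×16 table with binary code points. The layout, spacing, and choice of range (0x20–0x5F) are mine, not ascii(1)’s actual output format:

    # A 4x16 grid of the 64 code points 0x20-0x5F, each shown as a
    # 7-bit binary number next to its glyph. Illustrative only; the
    # real ascii(1) tool formats its tables differently.
    for row in range(4):
        cells = []
        for col in range(16):
            cp = 0x20 + row * 16 + col
            cells.append(f"{cp:07b} {chr(cp)}")
        print("  ".join(cells))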


Comments

    1. >EXPN “earned laurels”

      Means I was among the top finishers in the event. How many laurels are issued depends on the size of the tournament. I was 4th of 200-something in TTR, 6th of 62 in TM. Your laurels are recorded; they print a summary on your badge every year.

  1. This may just be me being an impudent young whippersnapper, but ascii seems a bit dated to me in a way that most command-line tools aren’t (even a lot of the ones that work much the same now as they did four decades ago). Most DEs these days have a graphical character map with full Unicode support, and character maps, IMHO, benefit from GUIzation more than, e.g., text editors. About all I can see that ascii has over the typical GUI character map is a larger library of alternate names for non-alphanumeric printable characters within ASCII, but that doesn’t seem a very compelling advantage to me.

    Obviously, if you’re adding features by popular demand, there is a non-trivial set of people who use it enough to file feature requests, but I’m not sure what the draw is, particularly given that it seems not to support Unicode.

    1. Jon Brase,

      It isn’t just you. “The terminal” is an outmoded concept and has been since the first bitmapped displays became available. While a character-cell display was tolerable for the Latin alphabet, at least for illiterates who could do without ligatures, em and en dashes, and the other ways our writing fails to conform to the ideal of fixed-width disconnected boxes, it completely falls down in the face of, e.g., Arabic or Devanagari, where ligatures form an essential part of the writing system. Terminals, real and simulated, have stuck around as long as they have only for hysterical raisins, and there is a movement in the OSS community to develop a new standard for TUIs that isn’t bound to VT100, and even to remove terminal support from the kernel and delegate it to a systemd component.

  2. @Jon Brase:

    Embedded (and thus efficiency, in computation, storage, and communication) is enjoying an IoT renaissance. Size, bandwidth, and power — all important in some domains. Urdu, not so much.

  3. What is the current availability of hardware and software to synchronize the clocks of widely separated computers to an accuracy small compared to the round-trip time – which is to say, the availability of hardware and software to put computers on GPS time to an accuracy of a few milliseconds?

    Can this be done on Windows?

    What is the behavior of Android clocks? Do Android phones have accurate GPS time available?

    1. >the availability of hardware and software to put computers on GPS time to an accuracy of a few milliseconds?

      A well-tuned NTPsec implementation can do that.

      1. It can?

        Surely this requires the assumption that the time for a packet to travel from Ann to Bob is the same as the time required for a packet to travel from Bob to Ann, which is not generally the case, and is unlikely to give you the same result as a packet that travels from Ann to Madelyn, then Bob.

        1. >Surely this requires the assumption that the time for a packet to travel from Ann to Bob is the same as the time required for a packet to travel from Bob to Ann

          It does. This assumption of delay symmetry is the principal weakness of the algorithm. The fact that it works as well as it does can be taken as a measure of the deviation from symmetry across all links.
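
          For concreteness, here is the standard on-wire calculation in a short Python sketch; the arithmetic is straight out of RFC 5905, while the toy timestamps are mine, chosen to show how a one-way asymmetry leaks directly into the offset estimate:

              # NTP on-wire calculation. t0 = client transmit, t1 = server
              # receive, t2 = server transmit, t3 = client receive.
              def ntp_offset_delay(t0, t1, t2, t3):
                  offset = ((t1 - t0) + (t2 - t3)) / 2  # assumes symmetric delay
                  delay = (t3 - t0) - (t2 - t1)         # round trip on the wire
                  return offset, delay

              # Toy case: the clocks actually agree, but the path is 40 ms out
              # and 10 ms back. The estimate is off by half the asymmetry:
              # (40 - 10) / 2 = 15 ms.
              t0 = 0.000
              t1 = t0 + 0.040   # client -> server
              t2 = t1 + 0.001   # server processing
              t3 = t2 + 0.010   # server -> client
              off, dly = ntp_offset_delay(t0, t1, t2, t3)
              print(f"offset = {off * 1000:.1f} ms, delay = {dly * 1000:.1f} ms")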

    2. >the availability of hardware and software to put computers on GPS time to an accuracy of a few milliseconds?

      A well-tuned NTP implementation getting time from a local Stratum 0 source such as a GPS can easily achieve that. Even the inexpensive USB GPS I designed myself supports accuracy on the close order of 1ms.

      >Can this be done on windows?

      Probably, with significant pain and hassle. I wouldn’t want to be the one to try it.

      >Do android phones have accurate GPS time available?

      Android clocks get time from the carrier network. Those normally use time from rubidium-crystal clocks with a jitter orders of magnitude below 1ms.
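
      If you want to sanity-check a box yourself, a one-shot probe is easy. Here’s a sketch using the third-party Python ntplib package (my choice, purely for illustration; any NTP query tool will do):

          # One-shot measurement of local clock offset against an NTP server.
          # Requires the third-party ntplib package (pip install ntplib);
          # the server name is just an example.
          import ntplib

          client = ntplib.NTPClient()
          resp = client.request("pool.ntp.org", version=3)
          print(f"offset from server: {resp.offset * 1000:+.2f} ms")
          print(f"round-trip delay:   {resp.delay * 1000:.2f} ms")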

  4. @esr
    “A well-tuned NTP implementation getting time from a local Stratum 0 source such as a GPS can easily achieve that. Even the inexpensive USB GPS I designed myself supports accuracy on the close order of 1ms.”

    Do you think this would be precise and robust enough to do network delay tomography using your home system (using SYN-ACK and RST packets), e.g., for discovering network topology or even geolocalization of hosts?

    1. >Do you think this would be precise and robust enough to do network delay tomography using your home system (using SYN-ACK and RST packets), e.g., for discovering network topology or even geolocalization of hosts?

      I designed the Macx-1 specifically as a sensor for network tomography of a slightly different kind – I wanted to be able to measure actual NTP skew from UTC at hundreds of locations so we’d know what the real-world error budget looks like. RFC5905 aims at “a few tens of milliseconds”; if that’s so, 1ms accuracy should be quite sufficient for error mapping.

      I don’t know about discovering network topology or geolocation. The latter seems implausible to me; variation in network latencies seems to happen mostly inside routers, so it would be tough to get a read on physical distance in the links. And if you want network topology, mining routing tables in the core seems like a much less effortful way to do it.

  5. @esr
    “The latter seems implausible to me; variation in network latencies seems to happen mostly inside routers, so it would be tough to get a read on physical distance in the links. ”

    That is obviously true, but a lot can be done to improve the estimates. It is just that I have always found this a fascinating technological trick:

    Leveraging Buffering Delay Estimation for Geolocation of Internet Hosts
    http://www.eecs.qmul.ac.uk/~steve/papers/geolocation.pdf
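
    For anyone curious about the baseline such schemes improve on: the zeroth-order trick is a pure speed-of-light bound. A minimal Python sketch, where the propagation constant and the example RTT are merely illustrative:

        # Zeroth-order delay-based geolocation: half the RTT times the
        # propagation speed in fiber (~2/3 c, about 200 km per ms) caps
        # the great-circle distance to the host. Schemes like the
        # buffering-delay paper above then try to subtract queueing
        # delay, which otherwise inflates the bound enormously.
        C_FIBER_KM_PER_MS = 200.0

        def max_distance_km(rtt_ms: float) -> float:
            return (rtt_ms / 2) * C_FIBER_KM_PER_MS

        # A 40 ms RTT only bounds the host within ~4000 km, which is why
        # raw delay gives continent-scale, not city-scale, answers.
        print(f"{max_distance_km(40.0):.0f} km")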

  6. @esr –

    A tiny typo in TEHOK: Section “The early, awful days of bitmapped displays”, 2nd paragraph, the sentence starting with “It was not generally that the Alto had ….”.

    I believe you want to put the word “known” between “generally” and “that”. Or something with a similar meaning.

    Edited to add – reading the rest of that paragraph reminded me that my first personal computer (i.e., one that I actually had good title to, and had paid for with my own $$$) was a “classic Mac” in 1988 – and yeah, the display was one of the things that made that otherwise near-toy worth owning.

  7. Referring to low-res displays from ‘Things Every Hacker Once Knew’… what about Japan-only microcomputers, like the NEC PC-88 or Sharp X68000? They had noticeably higher resolution than the microcomputers popular in the West, and Wikipedia claims this was driven by the much higher level of detail Far East scripts require to be legible. Sadly, Wikipedia gives no reference to a trustworthy source, and I’ve failed to find one in a short time. All these computers appeared noticeably before the 1990-or-1992 boundary you mention – and even before the Macintosh, as NEC is said to have been offering 640×480 monochrome back in 1981.
    I’d love to know why none of these are mentioned in your article. Too exotic for a 1980s Westerner? Too far from actual Hackerdom? Not common enough back then? Fitting for a monograph, but not for a historical overview? Some other reason?
    I’m far too young to remember any of the things mentioned, and I’m not a hacker myself – more a computer history geek feeding his curiosity.

    1. >I’d love to know why none of these are mentioned in your article.

      All of the reasons you cite, plus especially the difficulty of gathering retrospective evidence from Japan.

      I too have heard claims about hi-res appearing earlier in Japan, driven by the problem of kanji legibility. I find these legends plausible, but as you note, discovering enough about the what and the when is difficult.

      1. Still, for the sake of historical accuracy, you might call the Macintosh ‘first in the West’ or ‘first on the European and American market’, if it doesn’t blur the whole picture too much.
