Things Every Hacker Once Knew: 1.4

New version 1.4 at:

http://www.catb.org/esr/faqs/things-every-hacker-once-knew/

New content in this one is an expanded section about outboard modems, their descendants in today’s technology, and the curious survival of the Hayes AT command set.

I had actually received a couple of previous requests to add material on the Hayes AT convention, but rejected them on the grounds that it had no relevance to current tech. This turned out to be not quite true!

Once again I emphasize that this document was not written as a nostalgia trip, but rather to assist retrospective understanding by younger hackers so they can make sense of the fossils and survivals still embedded in current technology.

The response to this document has been remarkable. I’ve received a flood of feedback and gratitude in my mailbox, often from people much more sentimental about the old days than I am.

I invite everyone who values this content to contribute at my Patreon page; this is exactly the kind of thing I couldn’t do if I couldn’t pay my Internet bills or had to get a $DAYJOB, and I’m currently in my sixth month of operating without institutional funding. $5 or $10 a month from enough people could fix that.

Your dollars will also go to fixing critical infrastructure, so please give generously – the civilization you save could be your own.

63 comments

  1. “Anybody who worked with this stuff had to keep around a bunch of specialized hardware – gender changers, DB-25-to-DB-9 adapters (and the reverse), breakout boxes, null modems, and other stuff I won’t describe in detail because it’s left no traces in today’s tech.”

    Hmmm. And today’s job for me was to fix the pan-tilt-zoom (PTZ) camera control that had stopped working on our test rig. We got it working Wednesday in a couple of hours of fiddling, to prove that the shiny software-based video management system we are planning to buy would work with the 10-year-old cameras, but it has broken twice since then. Turns out the video encoder reverts back to RS-422 when powered off and on. We need two-wire RS-485, so control just dies. This wouldn’t necessarily happen with every device. The differences between RS-422 and RS-485 (both in 2-wire and 4-wire balanced comms variants) are ridiculously arcane and device dependent. RS-232 is easy in comparison.

    The aim here is to replace the CCTV head end for over 200 installed cameras spread over 40 kilometres of road, mostly 10 metres in the air. Replacing them all, or rewiring/reconfiguring is not an option. I’ll obviously be onto the supplier on Monday morning, not so much to say “what’s wrong with your product?” as to proudly announce “I found an edge case with this particular camera model and wiring! Do I get credit on the fix?”

    Believe me there is a lot of serial still out there. No traces in today’s tech? I wish.

    1. >Believe me there is a lot of serial still out there. No traces in today’s tech? I wish.

      It’s not today’s tech. It’s yesterday’s tech still in use. And did you, like, actually use a gender changer or anything like that?

  2. Brand new, out-of-the-box device. It is today’s tech. RS-485 is still in common use (as are various token-ring-based networks) because you can run it long distances on copper, way longer than Ethernet will give you without a local switch or media converter. Sometimes what is new tech for desktops or offices is not suitable for field applications for a long time. A device 500 metres away I can reach, with RS-485, over two twisted pairs of copper running straight to it. No local switch, which, if it were there, would need to be hardened to 65 °C, sit in a cabinet that doesn’t exist, and communicate over fibre that doesn’t exist. It’s still current to me. And yes, I have a null modem cable with me at all times. For switch configuration, amongst other things. Most switches have web/ssh turned off by default. Console is it.

    Fortunately, I guessed the correct wiring from long experience (no thanks to the manuals, which were hopeless, but I’m used to that) and didn’t have to break out the CRO to work out where the signals were. The shiny network device will need to be fixed though. It failed to save its own settings.

  3. Only recently there was a bug in the SMS protocol where someone could text-message AT commands to your cellphone and take it over that way. I don’t know if it was ever completely removed, but the entire baud/AT stack was still present in the cellphone firmware somewhere. Hate binary blobs. That stuff should have been stripped out. Time for an open-source phone firmware?

  4. Eric, it is nice that you are documenting the revisions. Is there somewhere I can get the different versions for the sake of running diff over them?

  5. Yes, a lot of scientific and industrial equipment of recent design still talks over RS-232, 422, and 485. Old-school serial is dead simple and reliable.

    Open source baseband firmware — yeah, right. Someone will write one — Fabrice Bellard probably already has — but it will be illegal to incorporate into consumer products. Aside from the FCC’s concerns about being able to program radios to exceed their approved frequency or wattage limits, the NSA and law enforcement are unwilling to give up their backdoors into your cellphone — especially given who’s in the White House now.

  6. I know of high-end electric meters that are still coming off the assembly line with RS-232 and RS-485 ports and 56k modems on board. It’s still very much a thing.

  7. The company we buy mass flow sensors from is just now introducing a new product with an Ethernet port…alongside a version that speaks RS-485. They both use the same base MODBUS transmission protocol, too.

  8. RS-422 and 485 measurement devices, and even current-loop ones, are still used a lot in industrial settings and see new deployments regularly.

    The reasons are that for low-speed data acquisition from temperature, pressure, flow-rate, etc. sensors they offer fast enough bandwidth; cabling is *extremely* cheap; the distances between units can be very long, and even longer if one uses repeaters and RF converters; and they offer simple, basic networking that allows you to easily add up to 256 data-acquisition devices, each one with up to 8 sensors, per one or two copper pairs. Then connect one or more of those serial data-acquisition networks to a small RS-232/422/485 server box with 16 or more serial ports on it, set that one on your network, write a simple application sending and reading easy-to-understand ASCII commands, and you’re ready to keep tabs on, or even control, an entire factory. And that’s not even considering PLCs.
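
    By way of illustration, a minimal Python sketch of that kind of polling application. The device-server address here is made up, and the “#01” command and reply shape are merely typical of ADAM/6B-style ASCII modules, so check the real manual:

        import socket

        # Hypothetical serial device server exposing an RS-485 segment
        # on a raw TCP port; module address 01 is polled with an
        # ADAM/6B-style ASCII command.
        HOST, PORT = "192.0.2.10", 4001

        with socket.create_connection((HOST, PORT), timeout=2) as s:
            s.sendall(b"#01\r")          # ask module 01 for its analog inputs
            reply = s.recv(256).decode("ascii").strip()
            print(reply)                 # e.g. ">+025.31+001.02..." per channel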

    Nothing newer comes close. :-)

    Hence I think adding a paragraph mentioning RS-422/485 would make sense. This is technology that isn’t going to go away for a long time yet.

    PS.: If you’d like to see what’s available out there in terms of industrial serial usage, good starting points are googling for “RS-485 I/O modules”, “6B modules” and “Serial Device Servers”.

  9. @Alexander Geig

    But, though it’s in specialized use still, I don’t think RS-422/485 was ever something “Every Hacker Once Knew”. It was something known only to people who did things in its specialty area, like current loop (often with HART), or IEEE-488 (GPIB, HP-IB). All of those still have their specialty uses, and all of them poked a little into the mainstream at some point (RS-422 ports on Macs, the current-loop option on original IBM PC serial cards, IEEE-488 ports on things like the Commodore PET), but they were never something that was general knowledge.

    RS-232, on the other hand, really was an “every hacker” item.

  10. I still interface with RS-232 devices on a regular basis. Specifically, scale indicators.

    A scale indicator sends a 12-volt excitation line to a load cell (or multiple load cells), and receives a millivolt-per-volt signal line back, which it converts, via its internal calibration values, to a weight.

    Nowadays, I just throw an RS-232 to Ethernet adapter on it, then have it transmit via UDP to whatever PCs need to receive the continuous weight output. If I need to send the scale indicator commands (like ‘zero the scale’ or ‘tare off the current weight’), the adapter has its own IP address and a specific UDP port it monitors. Anything that comes in on that port gets sent, via RS-232, to the scale indicator.
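
    For flavor, a minimal Python sketch of the PC side. The addresses, ports, and the “zero” command string are all made up; every adapter and indicator has its own conventions:

        import socket

        ADAPTER = ("192.0.2.20", 10001)  # hypothetical adapter command port

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", 10002))           # hypothetical port the adapter streams to

        sock.sendto(b"Z\r\n", ADAPTER)   # e.g. a "zero the scale" command

        while True:                      # continuous weight output
            datagram, _ = sock.recvfrom(256)
            print(datagram.decode("ascii", errors="replace").strip())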

    I still have my kit with my serial cable, null modem adapter, gender changers, DB25-to-DB9 adapters, 5-volt injector adapter, etc. I haven’t touched it, except when I’ve moved, in ~10 years.

    I have 2 development scale indicators sitting on my desk, one with a serial cable to my PC (I still look for a COMM header on motherboards when I buy one for myself), and both with an RS-232 to Ethernet adapter.

    I’m still forced to maintain my soldering-iron skills, so I can make a little cable to go from the header inside the scale indicator to the Ethernet adapter.

    I don’t think serial is dead, but it’s dead to the hobbyist/hacker for reasons outlined in the document.

  11. @ esr
    “And did you, like, actually use a gender changer or anything like that?”

    I carry a gender changer in my kit and use it once or twice a year.

    Serial modems are going away as backup in favor of cellular modems which connect to the device via Ethernet (rather than serial).

  12. @esr, RE: Patreon

    I noticed your post on Patreon regarding ski, and at first mistook it for a reimplementation of SkiFree, which I first encountered sometime before the age of 10 (they share a skiing theme and man-eating yeti, but on second look appear to be otherwise very much different). The guy who wrote that game originally wrote it in Fortran as a VT-100 game for VMS, then reimplemented it as a graphical game for Win16 (thus in C++). He apparently had been inspired by a game for the Atari 2600, which seems (from its Wikipedia page) to have lacked the Yeti. Do you know how/if your version (which I’ll call “Unix ski”) is related to VMS ski / SkiFree or the Atari game? Had the original implementer of Unix ski encountered either one? Do you know if there’s a common ancestor, dating between the Atari game and the Unix and VMS games, that the Yeti could be traced to? Given that I remember SkiFree from childhood, Unix ski unleashes a fair amount of nostalgia for me.

    I do notice an erratum in your changelog for ski:

    6.12: 2019-02-02

    1. >The guy who wrote that game originally wrote it in Fortran as a VT-100 game for VMS

      The VMS version in Fortran must be the one my game is derived from – I corresponded with the author of the C version I found and he mentioned having originally written it in Fortran on a PDP-10. Here’s what he told me:

      On 15 May 2006 Mark Stevans wrote:
      >Just wanted to thank you.  Very nice port, love the full-color
      >graphics.  Originally it was FORTRAN on DEC-10's, ported to C around
      >1981.  Your Python version should keep Ski! viable for another 10-20
      >years, but I might try to put up a Ruby version for the heck of it....
      

      My Yeti is just a renamed Abominable Snow Monster.

      My guess is that DEC-10 ski was a simplified port of SkiFree. I don’t know how it relates to the Atari 2600 game. I’ve written the author at his last known address to ask.

  13. rjr, I deal with scale indicators on a regular basis, as well. We just released our very first product that talks native Ethernet to a scale indicator about two months ago. We buy RS-232 scale heads by the dozen. Since we use our own embedded controller hardware, speaking RS-232 natively to it is dead easy.

    Some of the newest scale indicators speak Ethernet natively, but they’re quite a bit more expensive than those that don’t. I expect that price to come down. The one we use (Rice Lake 880) streams readings and accepts commands over a simple Telnet connection.
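
    Something like this Python sketch, to make the idea concrete; the address and the command syntax are placeholders, not the 880’s documented protocol:

        import socket

        # "Telnet" here is just a raw TCP stream of ASCII lines.
        with socket.create_connection(("192.0.2.30", 23), timeout=5) as s:
            s.sendall(b"ZERO\r\n")       # placeholder command
            buf = b""
            while b"\r\n" not in buf:    # read one streamed weight line
                buf += s.recv(128)
            print(buf.split(b"\r\n")[0].decode("ascii", errors="replace"))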

  14. > My guess is that DEC-10 ski was a simplified port of SkiFree.

    Well, it wouldn’t be a port of SkiFree, as that was the Windows reimplementation (from about 1991) of that author’s VMS ski (copyright 1985). The fact that VMS ski was written in Fortran suggests a possible relation to DEC-10 ski.

  15. I don’t buy that external modems sold better because of their blinkenlights. I and most people I know learned the hard way that internal modems in PCs were much less reliable due to the high level of RF noise that usually exists inside the PC case.

    1. >internal modems in PCs were much less reliable due to the high level of RF noise that usually exists inside the PC case.

      I’ll add that.

      Maybe you don’t think the blinkenlights were important, but the only time I owned one I missed them quite a lot.

  16. Hi eric. You write: “Also, most hackers learned to interpret (at least to some extent) modem song – the beeping and whooshing noises the outboards made while attempting to establish a connection. The happy song of a successful connect was identifiably different from various sad songs of synchronization failure.” I feel inspired to go slightly off topic here…

    There is a great deal of effort now going into “Data Visualization” using shape and color, or outright art, to communicate information. Business “dashboards” link to SQL and report sales per square foot or labor costs as a function of population density or whatever.

    I do NOT see much current effort into the audio presentation of data in the fashion you describe. This is particularly surprising to me, as part of the generation growing up with Dr McCoy on Star Trek waving a sensor over an injured patient (or a dead one, if wearing red), listening to the beep pattern, and announcing the interpretation/diagnosis. (If the beep was more or less monotone, “He’s dead, Jim!”)

    Do you have insight into why audible pattern communication of status is less common and inactive now than we might have guessed in 1970?

    1. >Do you have insight into why audible pattern communication of status is less common and inactive now than we might have guessed in 1970?

      I think it’s obvious. It’s because higher-bandwidth visual UIs are so much cheaper now, relatively speaking. This means that a lot of craft and resources have gone into learning how to write them that in an alternate universe might have gone into elaborating audio interfaces.

  17. @Pouncer – I think the fundamental problem with audio is that it’s one-dimensional and time-dependent, and therefore cannot be transcribed to print media. Learning to interpret it therefore requires dedicating time to listening to training data, rather than simply reading and looking at examples. Video (especially if the time dimension of video is being used to represent something other than time) has the same problem to a lesser extent, but video is three-dimensional and used for things that two-dimensional visualization cannot be used for.

  18. @esr:

    … the curious survival of the Hayes AT command set.

    Ahh, grasshopper, it is only curious to those who do not understand that the selection of the ASCII characters “A” and “T” was no accident.

    Your statement that “Also, most hackers learned to interpret (at least to some extent) modem song – the beeping and whooshing noises the outboards made while attempting to establish a connection.” hints at the nasty technical dance required to discover the capability of the modem at the far end without provoking it into hanging up if it wasn’t very capable.

    But with the rapid proliferation of ever-faster modems, and ever-faster connected computers, there was a burning need for some sort of technical dance on the DCE-DTE side as well as on the PSTN side. If your shiny new 56K modem scores a decent connection over the PSTN, that won’t help much if your communications are going at 56K over the phone line but you’re still restricted to 9600 on the RS-232 line.

    Of course, there were cheezy workarounds. Commands to change the baud rate were fine until one side or the other got reset or otherwise forgot the rate.

    But the “AT” prefix was an elegant fix to the problem of baud-rate negotiation. It let the DTE choose the baud-rate and unambiguously communicate that chosen rate to the DCE.

    When an “A” is received, the bit pattern allows the modem to determine the baud rate, and make a partial determination of the number of data bits (7 or 8) and parity (even, odd, or none). The “T” seals the deal, locking in the number of data bits and the parity.

    The bit pattern of the “A”, when transmitted asynchronously with a leading “0” start bit, contains the shortest low time that any character could have. In that respect, it is the same as any odd ASCII code. But it also simultaneously contains the longest low time that any odd 7-bit ASCII code could have. Due to the nature of asynchronous serial communication, where an idle line is in the high state, “A” is an ideal character to define the baud rate. The single-bit start bit means the modem cannot choose a baud rate that is too low; else the start bit is too narrow for the chosen rate. The 5 bits of low time during bits 1-5 of the 0x41 code mean that the modem cannot choose a baud rate that is too high; else five bits of low would be a framing error (would extend the low signalling past the time when a high stop bit would be required to occur at the faster baud rate).

    After bit 6 is transmitted high for the “A”, you have to look at the next 3 bits:

    789
    000 — Oops, framing error — no stop bit. Try again.
    001 — 8 bits even parity
    01x — 8 bits no parity, or 7 bits even parity.
    10x — 7 bits no parity, followed by a new start bit
    11x — 7 bits no parity, or 7 bits odd parity

    In the case of 01x and 11x, the x could be high for a subsequent idle line, or low for the start bit of the next character (e.g. if the “AT” were sent by machine rather than by a human). In the case of 10x, the 1 is a stop bit and the 0 is a new start bit and the x will be the LSB of the next character (which damn well better be zero if we’re expecting the character sequence “AT”).

    The “T” following the “A” is of odd parity, and removes the length/parity ambiguity for the 01x and 11x in the chart above, and also provides a great sanity check that the detected rate is correct, with at least two high times of a single bit width, two low times of a single bit width, and one low time of three bits width (including the start bit).
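
    To make the mechanism concrete, here is a minimal sketch of the baud-rate half of the trick (my own construction in Python, not Hayes’s actual logic), assuming the receiver can timestamp line transitions:

        def detect_baud(transitions):
            """transitions: timestamps (seconds) of successive line
            transitions, starting at the falling edge of the start bit.
            An async 'A' (0x41, LSB first) looks like: start low (1 bit),
            bit 0 high (1 bit), bits 1-5 low (5 bits), bit 6 high..."""
            start = transitions[1] - transitions[0]  # start bit: 1 bit time
            bit0 = transitions[2] - transitions[1]   # data bit 0: 1 bit time
            run = transitions[3] - transitions[2]    # bits 1-5: 5 bit times
            bit_time = start
            if not 0.8 < bit0 / bit_time < 1.2:      # ~20% jitter tolerance
                return None                          # shape doesn't match 'A'
            if not 4.0 < run / bit_time < 6.0:
                return None
            return round(1.0 / bit_time)             # 3.33 ms bits -> 300 baud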

    This is not merely a historical artifact — cellular modems are being deployed today that use this command set as originally intended — to let a serially connected DTE choose the baud rate, data bits, and parity it wishes to use in communicating with the modem.

    The fact that some of those cellular modems can also be controlled via SPI, I2C, or a parallel bus is not a sufficient justification to change this working command set.

    One other small note on your guide: it should perhaps state more clearly that RS-232 does not itself define or mandate the use of asynchronous start-stop serial communication.

    1. >Ahh, grasshopper, it is only curious to those who do not understand that the selection of the ASCII characters “A” and “T” was no accident.

      That was … fscking brilliant. And I had no idea. This was certainly not common knowledge back in the day. Where did you learn it?

      Gonna take some thought how to summarize it.

  19. Forgot to mention this:

    “AT” commands also work lower-case, with the parity detection ambiguity reversed…

  20. I’m a bit disappointed that none of the research I did on Lisp keyboards in the comments in the other thread made it in. The most relevant bits to stuff that’s actually in the FAQ: On the Stanford keyboard, bit 8 was “Control” and bit 9 was “Meta” (This is well-attested in the Jargon File entry for “Bucky Bits”), not “Super”, and neither had anything to do with the extra characters, which were generally assigned to low values replacing unused ASCII control characters. Whatever the provenance of the specific number 8000, it’s clear that what the Space Cadet keyboard could generate thousands of were not glyphs, nor were the modifier bits included in the (8-bit, regardless of in-memory word size) “fundamental character” set used in strings and files.

    I did finally find a possible basis for the “over 8000” number. The “Bucky Bits” jargon file entry describes separate bits for right control and meta. So that’s six modifiers, with a position-agnostic super and hyper. The Lisp Machine also did not support any modifiers (including meta) with lowercase letter codes – they were automatically transformed to uppercase (but control-shift and meta-shift were equivalent to hyper). So, with 159 characters (graphical and otherwise) in the lisp machine character set and presumably available from the keyboard, and six modifier bits that could be applied to all but 26 of them, the number of combinations is 8,538.
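
    For the record, the arithmetic checks out; a one-line Python verification:

        # 159 characters, 6 modifier bits, lowercase letters take no modifiers:
        assert (159 - 26) * 2**6 + 26 == 8538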

    > Because many later languages (Java, Python, etc) copied C’s low-level lexical rules for compatibility reasons // It might be worth mentioning that there’s at least *some* movement away from leading-zero octal, with Python 3 and Perl 6 forbidding it despite breaking backwards compatibility with earlier versions of those languages. “may never be entirely eradicated” overstates the matter – maybe in twenty or thirty years it will be obscure enough to be judged worth deprecating from C.
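
    A quick illustration of the break, runnable under Python 3:

        x = 0o644      # Python 3 octal: explicit 0o prefix required
        # x = 0644     # SyntaxError in Python 3; valid, and octal, in Python 2
        assert x == 420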

    > “Another significant problem was that an RS-232 device not actually sending data was undetectable without analog-level monitoring equipment.” // Is this true? While looking up something else for this I found some indication that the line was kept high for idle, and going low was associated with “break”.

    > Ctrl-] (GS) is rge exit character from telnet, but this also has nothing to do with its ASCII meaning. // Typo, “the”

    > DEL > Under Unix variants, sometimes a SIGQUIT interrupt character. // I’m absolutely certain this should be SIGINT. I didn’t argue it strenuously the first time I posted this because I assumed it was a mere typo.

    > The Control modifier on your keyboard basically clears the top three bits of whatever character you type… // It might be worthwhile to mention that the VT220, and some modern terminal emulators, generate all of the non-letter control characters with ctrl-2345678, despite this not matching any obvious bit pattern. I’ve occasionally been bitten by ctrl-4 SIGQUIT. This also doesn’t explain why DEL is by convention written as “^?”. I *think* I saw some piece of code somewhere implying this is because it uses some equivalent of “(x+0x40) & 0x7F” to get the character to be printed. It doesn’t seem to have been possible to actually generate it by typing ctrl-?, which would instead generate ^_ in accordance with the masking rule.
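
    A small Python sketch of that convention, assuming the “(x+0x40) & 0x7F” rule really is what’s in use:

        def caret(c):
            """Render control characters in conventional ^X notation."""
            n = ord(c)
            if n < 0x20 or n == 0x7F:
                return "^" + chr((n + 0x40) & 0x7F)
            return c

        assert caret("\x03") == "^C"   # ETX, the SIGINT character
        assert caret("\x1b") == "^["   # ESC
        assert caret("\x7f") == "^?"   # DEL: (0x7F + 0x40) & 0x7F == ord("?")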

    The bit about the shift key is still confusing. Maybe should be something like: “Very old keyboards used to do Shift just by toggling the 32 or 16 bit, depending on the key; this is why the relationship between small and capital letters in ASCII is so regular, and the relationship between numbers and symbols, and some pairs of symbols, is sort of regular if you squint at it. The ASR-33, which was an all-uppercase terminal, let you generate some punctuation characters it didn’t have keys for by shifting the 16 bit; thus, for example, Shift-K (0x4B) became a [ (0x5B).”
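
    The regularity is easy to spot-check in Python:

        assert ord("a") ^ 0x20 == ord("A")   # case lives in the 32 bit
        assert ord("K") ^ 0x10 == ord("[")   # ASR-33 Shift-K: toggle the 16 bit
        assert ord("1") ^ 0x10 == ord("!")   # the number row, "if you squint"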

    On “common knowledge at the time”, I think the problem with dismissing things as “fascinating but obscure trivia” means too much exclusion of the real facts behind things that were already forgotten or inaccurately known, and reduces its value as a historical document. There’s also the fact that the intersection between, say, the Lisp and Unix hacker spaces seems to have been tenuous enough to cause some things to have been lost in translation (like your own “astonishment” that their version of the control key set a bit rather than doing ASCII-style masking) – maybe there was never really anything every hacker once knew, just some things some hackers once knew, and other things other hackers once knew, all worth preserving. I think arbitrarily drawing lines around “provides too much historical context” runs the risk of the document being better described as “Things Eric Once Knew”.

    1. >I’m a bit disappointed that none of the research I did on Lisp keyboards in the comments in the other thread made it in.

      Your effort was noble, but a couple of issues defeated it.

      1. In a synoptic document with this topic, I can’t justify the space to describe 9-bit keyboards in detail. I’m not even sure I should have mentioned the Space Cadet – its claim to having been common knowledge at the time is questionable and my belief that it was may be an artifact of where I lived and who my friends were. It’s hard to know in view of the prominence the artifact later achieved through the Jargon File.

      2. A lot of what you discovered was about the Stanford keyboard, not the Space Cadet.

      3. Some of what you think you’ve figured out is plausible but not strongly confirmed in my mind. Some is contradicted by things I think I remember, one of which is that the set of available extension characters on the Space Cadet was quite a bit too large to fit in the meta-control range. Therefore I’m disinclined to add assertions about any of it.

      >there’s at least *some* movement away from leading-zero octal, with Python 3 and Perl 6 forbidding it

      I already mention Python 3; I’ll add Perl 6. Thanks.

      > I’m absolutely certain this should be SIGINT.

      Changed after some Googling.

      >It might be worthwhile to mention that the VT220, and some modern terminal emulators, generate all of the non-letter control characters with ctrl-2345678

      I think this fails the common-knowledge filter.

      > and the relationship between numbers and symbols, and some pairs of symbols, is sort of regular if you squint at it. The ASR-33, which was an all-uppercase terminal, let you generate some punctuation characters it didn’t have keys for by shifting the 16 bit; thus, for example, Shift-K (0x4B) became a [ (0x5B)

      That’s good, I’ll take it.

      >maybe there was never really anything every hacker once knew, just some things some hackers once knew, and other things other hackers once knew, all worth preserving. I think arbitrarily drawing lines around “provides too much historical context” runs the risk of the document being better described as “Things Eric Once Knew”.

      I think the answer to this is worth a blog post.

  21. “Is this true? While looking up something else for this I found some indication that the line was kept high for idle, and going low was associated with “break”.”

    The problem is that an RS-232 receiver with nothing connected floats high, thus signaling an idle line. BREAK is an unusual condition.

  22. RS-485 is pretty esoteric, but RS-422 (and the trouble of making it interoperate with RS-232, not to mention the AppleTalk physical layer, later called LocalTalk, which no one wants to remember) was well-known to Mac hackers.

    Also obscure: the legal reason typing +++ATH was obnoxious.

  23. That’d be because on a real Hayes modem, a delay before and after +++ was required for it to have an effect, but that was patented.

  24. The young whippersnapper will now ask a question:

    From what I’ve read, the delay required by Hayes modems was a full second. Why on earth would they use a delay that long? Now, granted that I’m used to gigahertz processors and megabit-per-second+ communication lines, but even with the baud rate of the original Hayes modem, a full second on either side seems quite inefficient: You could transmit 600 characters in the time that takes.

  25. Jeff Read on 2017-02-03 at 10:42:02 said:
    > the NSA and law enforcement are unwilling to give up their backdoors into your
    > cellphone — especially given who’s in the White House now.

    You really, honestly believe that *HILARY* would be less likely to spy on the American public[1] than Trump?

    Even after 8 years of Obama?

  26. I worked with a new design using RS-232 and RS-422 within the last couple of years. I don’t know the reason it persists, but my guess, in that particular market, was inertia – things interfacing with the new product use RS-232 or RS-422.

    Oddly, I also have worked with a bogus RS-232 mode that customers wanted: multi-drop RS-232, where the TTL-to-RS-232 voltage conversion chips have no spec for rise/fall times on enable/disable. Fun!

  27. Jon Brase: “but even with the baud rate of the original Hayes modem, a full second on either side seems quite inefficient: You could transmit 600 characters in the time that takes.”

    Uh, no. The original Hayes Smartmodem was a 300 baud modem, so in the time delay you needed, you could only send 60 characters (10 bits per character, two seconds’ worth of delay). But that’s not a particularly important measure, because you only needed the delay to put the modem into command mode, and at that point, you weren’t particularly interested in transferring data. The delay made it much more difficult to accidentally drop into command mode by sending the wrong stuff at the modem.

  28. >Learning to interpret it therefore requires dedicating time to listening to training data, rather than simply reading and looking at examples.

    Compare with the art and science of auscultation (listening to the patient’s innards) in the days before MRI or even X-ray. Learning to do this was a big part of medical education. There were specialty techniques such as thumping on one part of a patient’s body while listening at another and getting an idea of what was going on in between — much like the way that petroleum geologists use explosions and geophones today, except that instead of a supercomputer crunching the data, it was the mind of a skilled physician.

    Of course, doctors do still use stethoscopes today, but I gather that much of the fine diagnostic art has been lost.

  29. Just reading the improved version vs the original. It occurred to me that once upon a time there was Kermit and its various XMODEM/YMODEM/ZMODEM etc. friends.

    I have now forgotten absolutely everything to do with them, but at the time they were a way to transfer data from one computer to another regardless of the underlying physical layer for the transport protocol and, IIRC, regardless of the big/little-endianness of the end devices. I know back in the late 80s/early 90s I used them a lot, and there were a huge number of standard scripts that could be used with them.

    IIRC, while FTP was originally defined as part of the TCP/IP stack, it owed a good deal of its design, in particular the sliding-transmission-window bits, to Kermit and friends.

    Regarding external vs internal modems: one critical reason why people liked the external sort was that the internal sort tended to be winmodems, with the DSP function done in a driver blob that only ran under DOS/Windows. If you were trying to use any other OS than that, you couldn’t use the modem.

    There’s quite a lot of Ethernet header that doesn’t make a huge amount of sense unless you recall how it started, and – for that matter – how the IEEE 802 committee and subcommittees worked.

    Once upon a time you could (for example) end up doing stuff with Novell’s IPX packets and have to figure out if they were Ethernet, Ethernet II, SNAP or Novell Raw formats (or rarely 802.2 LLC format) and it was very important that every device on the network agreed on what they were using.

    A lot of things tended to be sized by whether they needed one or two Ethernet packets to transmit, and to be padded up to the minimum Ethernet payload size (46 data bytes min, 1500 bytes max).

  30. Also once upon a time there were DNS competitors too. In the UK academic world things were addressed backwards compared to DNS. So that, for example, the University of Cambridge’s mainframe was uk.ac.cam.phx and (I think) the famous coffee-pot cam was uk.ac.cam.cl.pot, but I may have got that slightly wrong.

    Also something that may be worth recalling because it still causes confusion: X windows reverses the usual idea of client and server, because the machine in front of the user runs the display server, providing a service to (possibly remote) client programs. In fact there’s probably a bunch of stuff we used to do semi-automatically WRT X windows that lies around in various Linux distros in /etc/X11, and which is now legacy stuff that no one uses anymore or cares about until it somehow breaks and you can’t get a graphical UI on your Linux distro.

  31. @esr

    > > Ahh, grasshopper, it is only curious ….

    > Gonna take some thought how to summarize it.

    Don’t – just transcribe this exchange (as lightly edited as you must) as “Appendix A” to the FAQ, or something like that….

  32. “Regarding external vs internal modems. One critical reason why people liked the external sort was that the internal sort tended to be winmodems with a DSP and blob that only ran under dos/windows. If you were trying to use any other OS than that you couldn’t use the modem.”

    This was a fairly late development, about 1996 or so, IIRC. I switched back and forth between internal and external modems, mainly whichever I could get the best deal on. (I got in on the Fidonet deal for the USRobotics 2400 baud modem when they first came out. $500! Smoking deal. That modem transported all of Usenet into Houston for a year or so, hooked up to an AT clone running Microport System V/AT, the 286 version.)

    1. >[Winmodems were] a fairly late development, about 1996 or so, IIRC.

      Maybe a year or two earlier, but yes. I remember that the first time I had to deal with these appalling turds was after my mother could have had home DSL installed but didn’t, so I was asked to try to put together a cheaper solution.

      I still think whoever invented the Winmodem should have been strung up by his thumbs and lashed with nettles. It wasn’t just the Windows lock-in that offended me; the performance of those things was terrible and they’d eat your CPU in operation. Cheap, stupid designs full of fail…

      Now, after it doesn’t really matter, you can get modems with a hardware DSP again. The way to find them is to search for “gamer” modems, because the MMORPG crowd figured out that software DSP was hurting their latency.

  33. @Jay Maynard:
    > Uh, no. The original Hayes Smartmodem was a 300 baud modem, so in the time delay you needed, you could only send 60 characters (10 bits per character, two seconds worth of delay).

    Gaak! I’m showing my youth. I’m too used to working with networking at a layer where the 8-bit byte is the smallest unit used.

  34. > That was … fscking brilliant.

    As they say, necessity is a mother.

    > And I had no idea. This was certainly not common knowledge back in the day.

    Sure you did. And it was. At least the fact that there was a pain point of modem dip switch configuration that magically disappeared one day. But that was a long time ago, so you may have forgotten.

    > Where did you learn it?

    tl;dr — I implemented Hayes-style autobaud in an embedded modem some time in the 2003-2004 timeframe.

    I first implemented automatic baud rate detection around 1984 or so. I wasn’t interested in anything except 8-N-1, so a simple carriage return sufficed. Also, at that time, the hardware I was working with had a USART (like a UART, but supports synchronous as well) and it was fairly easy to just set it up at a high baud rate, waiting for a simple transition for a “sync” character, and then to record data and then look in the data for transitions.

    Around 1996, I was working in the embedded group at AMD, and reimplemented it in the monitor for little 186 boards using a UART. That was a bit more painful, but not too bad. But that was still just a user pressing carriage return in order to get a prompt, not a machine sending a long command stream at a single go.

    In 2001, I started writing firmware for Silicon Labs, for their line of embedded modems. They didn’t have autobaud, which I found odd. But there were obvious technical challenges related to reliably detecting the “AT” and then seamlessly switching the UART to the correct frequency, and doing so without garbling a character on the transition from the high baud rate used for detection to the actual baud rate the data was coming in at.

    Since the modem’s powerful (heh! 40 MHz) DSP was essentially idle when not in a call, it occurred to me that I could use those wasted CPU cycles and simply run a software UART whenever the modem was in off-line (not in a call) command mode. Then the transition from detecting the “AT” to processing the rest of the command would be seamless, and the modem could simply stay in software UART mode until it was commanded to place a call — at which point the DTE should just shut up and wait for a while, so the modem could switch from the software UART to the hardware UART when no communication was taking place.

    All I wanted was a tiny bit of hardware support. The UART FIFO had 8 entries of 13 bits each — bits for the data, parity, and error and status codes. I suggested that we could add a counter, and have a mode where the UART counted up until either the counter would overflow 12 bits or until a line transition occurred, whichever came first, and then push the 12 bit count and the state of the line during the count into the FIFO. Then software could reassemble the time deltas into characters.

    I presented this after I’d been there awhile, during a time when we were doing an all-layer silicon change to fix a few hardware issues. (ROM changes are a single layer and not too painful; all layer changes mean that Pandora’s box has been cracked open, and only discipline keeps all manner of mischief from being added. Absolutely everything has to be retested, all the way down to zapping it with thousands of volts.)

    I was told “No, this is only for embedded products. Those guys know what speed they are going. It’s a useless feature.” I argued in vain that it was an exceedingly minor hardware change — essentially a 12 bit counter and some control logic, and that we didn’t have to actually implement autobaud detection at that point in time, but it would be nice to enable it. No dice.

    Fast-forward a year or so, and a large Japanese customer told them that they liked everything else about the chip, but autobaud was a must-have mandatory feature, and they were going to buy a gazillion, but were going to go with our competitor unless they had a chip that did what they wanted three months later.

    Chip schedules aren’t like software schedules, and 3 months to delivery was extremely aggressive. It gave me about two weeks to implement and test the autobaud feature, but at least it was a priority, so I got the hardware support I needed and otherwise got left the fuck alone.

    So naturally, I turned to Python. I coded up a testbench that would spit out data with directed and pseudo-random variations on starting time, baud rate, parity, bits/char, upper/lowercase, jitter, clock drift, etc., and wrote the initial algorithm in Python. Then I iterated, as I previously described here.
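
    Not the original testbench (that code is long gone), but a sketch of the kind of jittered waveform generator involved; something like it could feed a detector like the one sketched earlier in the thread:

        import random

        def a_pulses(baud, bits=8, parity=None, jitter=0.02):
            """Emit (level, seconds) pulses for an async 'A' with timing jitter."""
            bit = 1.0 / baud
            data = [(0x41 >> i) & 1 for i in range(bits)]    # LSB first
            frame = [0] + data                               # start bit + data
            if parity in ("even", "odd"):
                p = sum(data) & 1                            # even-parity bit
                frame.append(p if parity == "even" else p ^ 1)
            frame.append(1)                                  # stop bit
            pulses, level, width = [], 0, 0.0
            for b in frame:
                t = bit * random.uniform(1 - jitter, 1 + jitter)
                if b == level:
                    width += t
                else:
                    pulses.append((level, width))
                    level, width = b, t
            pulses.append((level, width))
            return pulses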

    It worked great and we sold a gazillion of them.

    1. >At least the fact that there was a pain point of modem dip switch configuration that magically disappeared one day. But that was a long time ago, so you may have forgotten.

      That may have been slightly before my time – which for this and most other purposes can be considered to have begun in the fall of 1976. This is one reason the document doesn’t try very hard to cover common knowledge before 1975; I wasn’t a direct witness then.

      My early months coincided with VDT time sharing on minis falling to a price level that was affordable by a department at an Ivy League university – I mean, I can actually remember noticing the DECwriter printing terminals being replaced by what I think were VT52s, or possibly VT55s. I think this put me a crucial year or two ahead of ubiquitous VDTs.

      Two years later I owned an ADM-3A – I had to set DIP switches on that, for sure. I went from a 300 bps acoustic coupler to an early Hayes. I do remember having to futz with DIP switches on modems, but that was a few years later when I was managing Telebit Trailblazers for a UUCP installation.

  35. I don’t think the AT command set was implemented until 1981.

    Of course, earlier than that, modem technology seemed to be moving at a glacial pace, so the baud rates weren’t changing very fast.

    1. >I don’t think the AT command set was implemented until 1981.

      That is correct – it’s one of the dates I pinned down while researching this. I must have gotten mine within a few months of first ship.

  36. >[WinModems were] Cheap, stupid designs full of fail…

    Yep. They did have one, and only one, nice feature, which never got the uptake I expected (and likely that the designers expected). Since the DSP stuff was being done by the main CPU rather than in custom hardware, it made it much easier to add extra whizzy features. For example, almost all of the WinModems had voice and fax capability (for digital answering machines, voice mail systems, and the like). This was typically implemented with special AT commands.

    I even went so far as to write some “voice BBS” software, the code for which is now lost in the sands of time. That idea took off with a resounding thud. I still don’t understand what happened there — it was the heyday of the voice chat lines (1-900 numbers mostly), and a bank of cheap WinModems would have made it possible to run such a thing out of your house.

    1. >I still don’t understand what happened there

      I think I do. Voice chat lines are real-time interactive, but voice BBSes (or any other voice store and forward) aren’t. The revealed preference of most people is to bail out of voice store and forward and even from real-time voice to text messaging – so the capability for voice BBS arrived just in time for it to get steamrolled by the Internet.

  37. > 2. A lot of what you discovered was about the Stanford keyboard, not the Space Cadet.

    Yes but the document seems to mention the Stanford keyboard, and I was pointing that out (control and meta as bits, no super) in contrast to its description of it (meta and super).

    And I did link to, and point out information from, a Lisp Machine manual.

    > Some is contradicted by things I think I remember, one of which is that the set of available extension characters on the Space Cadet was quite a bit too large to fit in the meta-control range. Therefore I’m disinclined to add assertions about any of it.

    I’m not quite sure what you mean by “meta-control range” here. The special characters I referred to being in octal 200-236 are non-printing control characters (null, tab, etc), not extended graphical characters. The Lisp Machine keyboard did not have the front/greek symbols, either on the keyboard or in the documentation. The original Space Cadet keyboard itself had, to all appearances, 203 graphical characters: Normal/Top/Front for 50 keys, an extra for two keys (ceiling vs floor brackets), 50 presumably available lowercase latin and uppercase greek letters, and space. This is 75 more than the exactly 128 in the Lisp Machine character set, which has 97 unused positions.

    1. >Yes but the document seems to mention the Stanford keyboard

      Huh? I did not intentionally mention it anywhere.

      >I’m not quite sure what you mean by “meta-control range” here.

      128-160. I thought that at one point you were claiming the printable extension characters all fit there.

  38. > Huh? I did not intentionally mention it anywhere.

    Not directly, but it best fits the description of “9-bit keyboard with two modifier keys to set bit 8 and 9”.

    > 128-160. I thought that at one point you were claiming the printable extension chracters all fit there.

    No (and part of my confusion at ‘meta-control’ is that I thought we’d established that their control is not our control). But in the Lisp Machine documentation I had linked, 0-31 were the only printable extension characters, which your objection likewise applies to. I can only speculate on how the other printable characters (present on the original Space Cadet keyboard but not the Lisp Machine) fit in, and my speculation was that the “over 8000” number did not include them. Since, I have looked some more for ITS, MacLisp, or LM-2 documentation, but I couldn’t find any indication anywhere of an actual character set with the full set of glyphs on the original Space Cadet Keyboard.

    And what I did find suggests that these other characters never really existed except on the keycap labels.

    https://trac.common-lisp.net/mit-cadr/browser/tags/system-46/lmio/kbd.lisp suggests that only the handful of greek letters that this character set defined were actually supported.

    https://github.com/ivoarch/fonts/blob/master/lispmfont/README Likewise says “for instance, we don’t know how greek “mu” ever looked like on these machines. Seems it never got implemented”

    1. >And what I did find suggests that these other characters never really existed except on the keycap labels.

      That listing does something else more fundamental. It shows that the keyboard shipped a 32-bit status word. Therefore it doesn’t belong in the discussion of 9-bit bytes at all. That’s what you should have pointed out, rather than getting lost in the details.

      I’ll remove that ‘graph.

  39. > I don’t buy that external modems sold better because of their blinkenlights. I and most people I know learned the hard way that internal modems in PCs were much less reliable due to the high level of RF noise that usually exists inside the PC case.

    Three reasons external modems sold better, in addition to your reliability argument:
    1) Infernal* modems tend to be “Winmodems”, which is to say dumb modems that require a driver specific to Windows to be able to function. If you wanted to use them with *nix, you tried to stay clear of them, as manufacturers generally didn’t provide drivers for anything else.
    2) External modems can easily have a full hardware reset performed simply by sliding the power switch off and back on, or pulling the AC adapter’s jack out and reinserting it if there’s no power switch. Infernal modems could only truly be reset by shutting down the computer (and powering off its power supply, or pulling the power plug if the PS had no physical switch, because the “power” button on the computer didn’t actually turn the whole computer off). As that would have inconvenienced an entire office of people using that server, it needed to be a last resort.
    I literally lost count of the number of times I was able to walk someone through the reset process in seconds, then…
    3) Don’t disparage the blinkenlights — After doing the HW reset, I’d ask my customer whether the “TR” (Terminal Ready) light was lit. If it wasn’t, I’d know there was no point in me trying to dial in, because either there was no getty on it or there was a communications problem preventing the getty from talking to it. At this point, I could then walk them through running the script previously uploaded to their system (either by me or one of my co-workers) that would get all existing programs on the tty killed, then (re-)enable the getty, or if the script weren’t present, walk them through many of the individual commands it would have done for them, and once that trusty TR lit up and stayed on solidly, I was usually good to go. (Doing a cold boot of the server to clear the serial port was under 1% of modem problems.) An infernal modem can’t give that kind of status out without some program talking to it to emulate the lights.

    ___
    *Infernal is not a misspelling of “internal”; it’s my opinion, born of hard experience, of the damned things.

  40. @Patrick Maupin
    > As they say, necessity is a mother.

    What you describe is a very cool feature of the AT bit sequence, but one your own employer didn’t think was worth taking advantage of. That suggests that you may have been incorrect in the initial statement:

    > the selection of the ASCII characters “A” and “T” was no accident.

    Everything I’ve ever seen on the subject indicates that AT was short for ATtention, and the only mentions of autobaud say that it was, in fact, a happy accident rather than a deliberate design decision.

  41. What about Interrupts/IRQs, serial addresses, ProComm/Telix for dialing in to BBSes (and..uh..other things). I remember setting up WWIV and screwing around with PCBoard back in the day..

    1. >What about Interrupts/IRQs, serial addresses, ProComm/Telix for dialing in to BBSes (and..uh..other things). I remember setting up WWIV and screwing around with PCBoard back in the day..

      Relevance filter. Are there any reasons other than sentimentality for a reminder of this stuff? How can knowing it help today’s young hackers?

  42. I’m a Computer Assisted Instruction (CAI) alumnus, and even we deaf kids could hear the ASR-33 Teletypes churning out the spelling and arithmetic drills via acoustic modems. Technically the KSR-33, since they lacked the paper-tape reader and punch. For decades I thought that ours was the pilot program, but in researching it for Wikipedia I found that it actually began in 1965. Curious to think that Ubuntu Linux can trace its lineage in some measure to those servo-mechanical precursors. Provenance matters.

    1. >1982 – ZX Spectrum and Commodore 64 launch year

      Relevance filter. What traces have these left that are interesting today?

      The low barrier for this is the trace the ASR-33 left, that Unix device names still have “tty” in them. If old tech hasn’t had even that much lingering impact, I leave it out. Nostalgia is not enough.

  43. “Relevance filter. What traces have these left that are interesting today?”

    Cheap computing for the masses. A lot of folks didn’t jump into getting their own computers until the price dropped below $100.

    1. >Cheap computing for the masses.

      Sorry, that’s just not good enough. When I do relevance filtering for this document, I’m not asking “what is generally informative about computing history”. I’m specifically trying to identify facts and dates that are helpful knowledge for younger hackers – things they might trip over, for example, when trying to understand why old code is written the way it is.

  44. I think old DOS “code pages” are worth mentioning. You still come across traces of them in old file formats and character-set conversions. Pre-Unicode stuff.

    The Japanese had their own data entry system, but I don’t remember much about it.

  45. I don’t know how common this knowledge is/was, but the ASCII field separator control character still survives as the default SUBSEP used by awk to simulate multidimensional arrays.
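
    For anyone who hasn’t seen it: awk’s SUBSEP defaults to “\034”, which is ASCII FS. A Python re-creation of the trick, for illustration:

        SUBSEP = "\x1c"   # ASCII 28, the FS control character
        arr = {}
        arr[SUBSEP.join(("3", "4"))] = "value"   # what awk's arr[3,4] really does
        assert arr["3" + SUBSEP + "4"] == "value"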
