The response to this piece has been remarkably broad and positive. I have to note, though, that I didn’t write it as a nostalgia trip – I don’t miss underpowered computers, primitive tools, and tiny low-resolution displays.
At least people did notice that it isn’t a you-kids-get-off-my-lawn grumble. I think it’s good for younger hackers to know these things, but it’s no fault of theirs that the technological context has changed so much that they don’t absolutely need to know them to get work done. In fact, it’s a sign of progress.
Yes, you’ll occasionally trip over old tech for which forgotten common knowledge is important – and RS-232, in particular, is still important in niche applications. But the real reason to remember these things is less tangible, and unfortunately difficult for many people to talk about without sliding into sentimentality.
In any kind of craft or profession, I think knowing the way things used to be done, and the issues those who came before you struggled with, is quite properly a source of pride and wisdom. It gives you a useful kind of perspective on today’s challenges.
The real reason I wrote this is to encourage that kind of perspective.
Updated version here. With: more about the persistence of octal, current-loop ASR-33s, 36-bit machines and their lingering influence, ASCII shift, a bit more about ASCII-1963, and some error corrections.
Speaking of minis reminds me of an important if not widely learned lesson, one by no means confined to computing but certainly present in it. My best friend watched with puzzlement as a business case was made for sharing an Alpha Micro with the office applications so that the lab could buy one.
All the users and all the applications would have been better off if the office manager had been given an Apple II and a small petty-cash budget.
My RS232 LED box showing red or green LEDs based on the connection.
UUCP – favorite of Telebit Trailblazer modems with trellis encoding that could do 9600 baud!
You never mentioned EBCDIC.
Or hollerith cards – who needs over 72 characters when you are doing Fortran?
I lost an old Anderson Jacobson Daisy Wheel printer when a storage unit was flooded. I had to swap a chip when one failed.
Winchester disks. S100 bus. 8 inch floppies, 5-1/4 flippies. Interleave (Steve Gibson’s spinrite!).
Dual cassettes and Basic with peek and poke (after hand assembling code).
>You never mentioned EBCDIC.
Gotta draw the line somewhere. It wasn’t common knowledge.
The main challenge of writing a document like this is deciding what to leave out.
s/36-bit word naturally divides into 6 3-bit fields/36-bit word naturally divides into 12 3-bit fields/
It’s worth noting that 3-wire RS-232 cables commonly wrapped the handshake lines around to fake them out, and this worked for just about every case where the handshake line wasn’t actually needed to throttle output or signal actual readiness.
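For readers who never wired one of these, here is roughly what a typical DB-25 “three wires plus loopback” null-modem arrangement looked like (one common recipe; there were endless variations):

  pin 2 (TxD)  crossed to  pin 3 (RxD) at the far end
  pin 3 (RxD)  crossed to  pin 2 (TxD) at the far end
  pin 7 (GND)  straight through
  locally at each connector: strap 4 (RTS) to 5 (CTS),
                             and 20 (DTR) to 6 (DSR) and 8 (DCD)

The local straps answer each side’s handshake questions from its own connector, which is the “fake them out” trick described above.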
(Expanding a bit on ESR’s response…)
> You never mentioned EBCDIC.
> Or hollerith cards – who needs over 72 characters when you are doing Fortran?
This stuff seems specific to IBM. Much of hackerdom (from what I can tell) lived in the DEC minicomputer world–and later, the MITS Altair/Apple II/Commodore microcomputer world. Neither of those really used EBCDIC or Hollerith punch cards.
> Winchester disks. S100 bus. 8 inch floppies, 5-1/4 flippies. Interleave (Steve Gibson’s spinrite!).
> Dual cassettes and Basic with peek and poke (after hand assembling code).
This is (almost) all microcomputer stuff. It doesn’t seem like this was universal knowledge among the minicomputer set.
It seems to me there are at least two, and possibly three, distinct sets of hackers. First, you have the DEC minicomputer hackers, running stuff like Unix and ITS and what have you. This is the set to which ESR and RMS once belonged, among others. Roughly contemporaneous with the DEC hackers, we have the hackers who used IBM’s stuff. Jay Maynard might know more about that. Then, in the mid-to-late 1970s, we started seeing microcomputer hackers, first on the MITS Altair, then on the Apple II and finally on Commodore’s stuff – guys like Steve Wozniak (who designed the Apple II) and Andy Hertzfeld (who cut his teeth on the Apple II and later wrote much of the early Macintosh Toolbox). Each of these groups had different sets of knowledge they would consider “common,” though there was probably quite a bit of overlap.
>It seems to me there are at least two, and possibly three, distinct sets of hackers
I think “three” is correct. In “Things Every Hacker Once Knew” I’m concentrating on the common knowledge of what was then the DEC minicomputer hackers. And the reason for that is very simple: that culture ended up winning. The artifacts and folklore of today’s computing reflect it far, far more than they reflect the other two.
Somebody could write a master’s thesis on the details, but in outline it would look like this: First, the mainframers ended up as a backwater because they never fully made the jump to interactive computing. Then, the already-established minicomputer culture could have been disrupted by the micro guys, but then a funny thing happened – the micro guys discovered that they needed the mini tradition.
Two indicators of this were the principal architect of VAX VMS getting hired to write Windows NT and Linus launching Linux, both in the early 1990s.
ENQ and NAK were used in FTS-0006 (YooHoo/2U2) and in the Xmodem protocol, doing exactly what their names say on the tin. See http://ftsc.org/docs/fts-0006.002
I wrote BBS software once.
Somewhere around here I have a 1990s-ish printout of the Jargon File. On fanfold, of course.
…and that TRS-80 Model 100 in the other room still works…
One thing that I lament at work is that we never document our failures. So when a newbie comes to work for us, he spends the next couple of years re-making all the mistakes that the rest of us made – simply because he doesn’t know.
It’s important to understand, to some extent, the old technologies and remember why we don’t use them anymore.
You’re repeating a refrain that I’ve heard in an Aeronautical Engineering context.
Back when one designed an aircraft with a slide rule (I’ve actually *seen* one of these, never used one) one had to retain a great deal more domain knowledge in the mind, and be more adept at “back of the envelope” calculations.
Having all that domain knowledge cooked into a programming library has at least the potential to delay learning, or hide aspects of good design.
tl;dr: Never underestimate the value of sweat equity.
So, did some old fogie show you a list like this when you first became a hacker? Do you still remember what it contained (or would have contained)?
>So, did some old fogie show you a list like this when you first became a hacker? Do you still remember what it contained (or would have contained)?
I was going to say “no”…but at the time the original Jargon File from ’72 had a not entirely dissimilar impact on me.
@esr “First, the mainframers ended up as a backwater because they never fully made the jump to interactive computing.”
In his book “Who Says Elephants Can’t Dance”, Lou Gerstner has a different explanation. He says that IBM was (my words) addicted to their huge mainframe profit margins. This had the effect of disempowering IBM customers’ Chief Information Officers, because the departmental minicomputers (and later personal microcomputers) undercut the costs of the centrally-operated mainframes to the extent that the IBM customer CIOs couldn’t compete inside their own companies!
In the early 1990s I was working in a DOE National Laboratory, supporting a collaboration with IBM to adapt clusters of RS/6000s to scientific calculations. I saw the Perl camel book on many desks in upstate New York IBM development labs, and so thought that I really ought to buy IBM stock, which at the time was bottoming out in the $40s.
Unfortunately I was newly married and didn’t have much available cash. That’s got to be the biggest financial regret of my life!
>In his book “Who Says Elephants Can’t Dance”, Lou Gerstner has a different explanation.
This seems to me less like a different explanation from mine and more like an explanation of my explanation.
You’re saying that IBM never fully made the jump to interactive computing because they perceived it to be opposed to their business interests. That is interesting.
@esr:
As to why mini culture won:
1) While micro culture was independent of mini culture, it seems to have had links to it: I’ve heard that CP/M borrowed elements from TOPS, and a lot of the computational heavy lifting at big micro software houses (or even micro startups) seems to have been done on minis; ISTR that MS wrote and debugged Altair Basic using rented time on a PDP-10 timeshare with an Altair emulator while the company was still a two-man team.
2) The Unix branch of the mini tradition was in the habit of writing software in HLLs, and of being a bit freer with their sources even before OSS per se was a thing. As decently powerful micros became affordable for individuals, mini guys started porting mini software to them, because they wanted Unix at home. I think Linus is more an example of this than of micro guys realizing they needed mini culture.
3) Mini culture built the Internet. When micros became networked, it was on the infrastructure and culture built by the mini people.
>As to why mini culture won:
I think all these observations of yours are true. Your 3 is pretty close to my original point.
For concrete examples of micro guys realizing they needed mini culture, I give you Steve Jobs naturalizing Unix at NeXT and Apple, and Bill Gates’s Microsoft hiring Dave Cutler to write WinNT.
@esr:
> This seems to me less like a different explanation from mine and more like an explanation of my explanation.
>You’re saying that IBM never fully made the jump to interactive computing because they perceived it to be opposed to their business interests. That is interesting.
Well, I think he’s talking more about the scale of IBM products than the sort of work they were doing.
What seems to me the obvious way to take advantage of Moore’s law in manufacturing computers, which no one seems to have actually done at the whole-system level (resulting in old companies getting outcompeted time and again by newer companies operating at a smaller form factor), is this: every two years, introduce a new model at your largest form factor, and every two years re-release each existing model at a smaller size and lower price point. About the closest thing anybody is actually doing to this is Intel’s tick-tock cycle.
Thanks for the nice history, Eric.
In my day job I’m a field engineer for a networking company and I still carry various kinds of serial cables, including a null-modem cable, with me at all times. Networking gear is still frequently hooked up to an ordinary 56K modem, with the output of the modem going into the router or switch’s console port via some kind of serial-to-RJ-45 arrangement, and some devices are smart enough to use a modem to “phone home” if they can’t hit their T-1 line (or whatever they’re connected to). My work laptop still has a standard serial port which I use frequently. It also might interest you to know that Juniper networking gear is built on top of BSD.
I’d really appreciate a diff with the previous version.
>I’d really appreciate a diff with the previous version.
There are notes on what has changed in the version history.
> Then, the already-established minicomputer culture could have been disrupted by the micro guys, but then a funny thing happened – the micro guys discovered that they needed the mini tradition.
The other factor is that when the Internet became widespread making distributed development possible, the micro hackers’ earlier tradition of ‘freeware’ software was basically superseded. Every Windows power user knew for a fact that a ported FOSS application would have far superior quality compared to the average “freeware” and “shareware” solution – this was already true in the late-90s and became only more so in the 2000s.
Early 68k-based micros (most notably the Amiga and Atari ST) are also worthy of note as a separate culture in the late-80s and early-90s. Those were really the first multimedia computers (to use a modern term), so their concerns were rather different from the mini hackers’ – their architecture based on the Motorola 68k CPU and integrated custom hardware initially made these systems far more effective at all sorts of ‘soft-realtime’ multimedia tasks compared to contemporary x86. Ultimately, they declined due to a combination of technical stagnation, and (notably) corporate mismanagement in the case of Amiga. (The contemporary Apple Macintosh was not nearly as open to ‘homebrew’ developments, nor especially notable as a multimedia machine.) Still, a small upgrade to the 68010 or 68020 would have enabled these micros to support a Unix-like OS even earlier than the Intel 386 did. Moreover, our hacker culture still has much to learn from these folks’ focus on highly-reliable performance in multimedia tasks – current Android phones and tablets (based on Linux) are still nigh-unusable for low-latency apps, compared to the Apple iPhone!
> and could generate over 8000 distinct 9-bit characters.
This statement is obviously absurd. To my understanding the “characters” generated by lisp machine keyboards were not bytes (though obviously they did not represent all possible combinations of an 18- or 36-bit word either) – something that’s still true in Emacs today, whose native “character” type is an at-least-28-bit integer with 22 bits for the (unicode etc) character value and six modifier bits.
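Concretely, a small illustrative sketch (in Python rather than Emacs Lisp; the helper name is mine, but the bit positions are the ones documented in the Emacs Lisp manual):

# Emacs's internal "character" is just an integer: up to 22 low bits of
# code point, with six modifier bits stacked above them.
ALT, SUPER, HYPER, SHIFT, CONTROL, META = (1 << b for b in range(22, 28))

def with_meta(codepoint: int) -> int:
    # Illustrative helper: the integer Emacs uses for a Meta-modified character.
    return codepoint | META

print(with_meta(ord("a")))   # 134217825, i.e. what ?\M-a evaluates to inside Emacs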
>This statement is obviously absurd. To my understanding the “characters” generated by lisp machine keyboards were not bytes
This was already corrected in the 1.3 draft. That will issue shortly.
DEL is described as “Under Unix variants, sometimes a SIGQUIT interrupt character.” – I think you mean SIGINT here.
On “dumb” vs “smart” terminals, this seems to have been a sliding scale historically. The ADM-3 was actually *marketed* as a dumb terminal – in contrast to LSI’s more expensive “smart” terminals. As I understand it, the distinction between “smart” and “dumb” in that era was either the ability to insert/delete lines/characters, or the ability to send data from the screen back to the host… someone who was actually alive back then might have more insight though.
On “Glass TTYs” – the DEC VT05 [1970] and Datapoint 3300 [1969] could both “home” the cursor (moving to the top left of the screen) and move it in four directions, which strictly speaking ought to be enough for a “truly 2-dimensional display”, though it’s not as nice as being able to move it to an arbitrary position with a fixed-size command.
>As I understand it, the distinction between “smart” and “dumb” in that era was either the ability to insert/delete lines/characters, or the ability to send data from the screen back to the host… someone who was actually alive back then might have more insight though.
I was alive back then. In fact I owned an ADM3A for a couple years in the late ’70s – used it with a modem for remote access to college computers from my off-campus apartment.
I don’t think the distinction was as systematic as you propose.
One of Wikipedia’s definitions matches my recollection, which is that “smart” terminals could do cursor addressing with one essentially-fixed-size control sequence. Technically, then, the ADM-3A was “smart”, though it was called “dumb” as a marketing ploy. While this might sound nuts, bear in mind that these were close to the peak years of the Volkswagen “soft sell”; I remember thinking that LSI’s marketing seemed to be positioning the 3A as the Volkswagen Bug of VDTs. Which wasn’t crazy; it was cheap (enough for me to own one) and the clamshell back made it look a little beetle-ish.
(In reality, by 1975-’76 you couldn’t sell an actual “dumb terminal” at all, at least not in the part of the tech world I inhabited.)
Another of Wikipedia’s definitions roughly matches your “ability to send data from the screen back to the host”. I never ran into that definition live, but this may be because all my peers’ terminal time was spent using full-duplex char-by-char interactive connections to minis; block-send terminals with local editing were only something I knew existed, not something I used.
“ability to insert/delete lines/characters” – never heard that as the distinction either, and am rather more sure I would have as this capability mattered considerably more for text editing at the low speeds we were using. (My 3A talked to a no-shit acoustic-coupler modem at a whole 300bps).
>which strictly speaking ought to be enough for a “truly 2-dimensional display
You might think so, but remember line speed. In 1969-1970 the available 110bps corresponded to a speed of roughly 11 characters a second. That meant that the statistically average cursor move using these primitives on an 80×24 screen, 65 chars, would take 6 seconds to execute! Thus, nobody ever tried doing incremental 2D display with these.
BTW, I still remember this much about this sort of thing mainly because I was the termcap/terminfo database maintainer for a while in the early Nineties. There is more I have forgotten.
UPDATE: Looking at this again I’m going to revise “by 1975-’76” to “by 1977-’78”. The stronger version may be true but that was a little too soon for me to be a direct witness. My first working experience with VDTs was in the fall of 1976 – though I had actually played with a primitive vector-graphics terminal hooked to a Univac mainframe in 1969.
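To spell out the line-speed arithmetic above (assuming the usual asynchronous framing of roughly 10 bits on the wire per character):

  110 bits/sec ÷ ~10 bits per character ≈ 11 characters/sec
  65 characters ÷ 11 characters/sec ≈ 6 seconds for an average cursor move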
@guest:
> Still, a small upgrade to the 68010 or 68020 would have enabled these micros to support a Unix-like OS even earlier than the Intel 386 did.
Well, there was A/UX.
guest,
The Tandy Model 16, a 68000-based machine, ran XENIX — well before the Intel 386 even existed. External MMUs were available even for the lowly 68k. Some architectures (I think the Lisa was one) ran two 68ks in tandem, the second serving as an MMU for the first.
Once microcomputer CPUs started resembling minicomputer CPUs architecturally (32-bit address space, MMU), it made sense to leverage the large mini knowledge base of “what worked and what didn’t” in order to speed time to market. This happened sooner than most people think: the Model 16 was available in 1982.
> Having all that domain knowledge cooked into a programming library has at least the
> potential to delay learning, or hide aspects of good design.
“The main benefit of JavaScript toolkits is they condense to fifty or so kilobytes what would otherwise take several hundred bytes to do.”
@esr “You’re saying that IBM never fully made the jump to interactive computing because they perceived it to be opposed to their business interests. That is interesting.”
Not at all. Evidence that tends to disconfirm your narrative includes the entire RS/6000 workstation product line, which was notable at the time for its impressive floating point performance compared to contemporary Sparc and Alpha machines. It ran AIX, a System V derivative extended with a fantastic disk virtualization layer that still hasn’t been duplicated anywhere else.
Also, the NSFNET backbone router nodes ran IBM hardware (RS/6000s again). It doesn’t get much more interactive than the core of the Internet!
Even the mainframes were highly interactive. The sorts of things we’re seeing now in terms of system image virtualization through Xen/KVM/VMware were not only widely available in mainframe land, but were the only way to get certain things done even at the individual user level. In a lot of ways, web forms are a reinvention of the old 3270 fill-in-fields-hit-submit interface.
No, the mainframe business cratered because they were artificially expensive and therefore couldn’t compete on a fiscal basis within commercial enterprises.
> That meant that the statistically average cursor move using these primitives on an 80×24 screen, 65 chars, would take 6 seconds to execute!
It’d be a poorly designed application that spends all its time randomly jumping to arbitrary locations on the screen, though.
Actually, you’ve prompted me to go back and check (originally I was looking for anything about tab delay requirements); reading its manual again, the VT05 did support direct cursor positioning (the code was ctrl-N, followed by 32-offset characters in the same way as the ADM-3a or VT52). So that makes the VT05 one of your “smart terminals”, pushing the year back to 1970. I was thrown off because I’ve seen the exact phrase “glass teletype” explicitly used in reference to the VT05 in multiple places.
>It’d be a poorly designed application that spends all its time randomly jumping to arbitrary locations on the screen, though.
Doesn’t take a high percentage of 6-second jumps to make an update unusably slow, either.
Trust me, nobody ever tried this. Well at least not under Unix; it would have left traces in termcap/terminfo.
>I was thrown off because I’ve seen the exact phrase “glass teletype” explicitly used in reference to the VT05 in multiple places.
Yeah, I can’t imagine any way for that to be right.
Minor date point – you have the date for the first mass-market 64-bit PCs as around 2007, but AMD’s Athlon64 was released in 2003. It was somewhat high-end at the time, but more “upper range gaming PC” than “expensive workstation”. I bought one somewhere around then, and I seem to remember it wasn’t especially expensive.
I did a rundown of lots of uses of ASCII control characters in this page of the File Formats Wiki:
http://fileformats.archiveteam.org/wiki/C0_controls
As I recall (but I might be wrong; it was over forty years ago), Kildall based CP/M on DEC’s RT-11 (or maybe RSX-11). This was where the “C:” drive notation came from. Now that I think about it, I believe that CP/M had a PIP command that was also lifted from DEC.
>I believe that CP/M had a pip command that was also lifted from DEC.
You’re quite right. I first used PIP under TOPS-10, and recognized it when I saw it under CP/M.
Note also that Apple was shipping 64 bit PPC970s as the G5, in 2003, same time as AMD.
Expensive, but definitely mass market, and “a PC” in at least the general sense; not sure if “Intel” is or is not implied in context.
While RS-232 was used quite a bit, it could only drive a device about 5-50 feet away. Some equipment could go farther, some only the minimum. It depended on the voltages available to the drivers. Most PCs used +/- 5 volts, but you could see +/- 15 volts on some equipment. IIRC, DEC PDP-11s had +/- 12 V, since the memory cards needed those voltages. While you could use only 3 wires for connecting a terminal via RS-232 to a PDP-11, many early modems required some of the other pins to be active. Modems provided by Bell companies were very particular.
If you needed to hook up terminals on the other side of a building, there was an alternative – the 20 mA current loop “standard”. Most DEC printers could use it. Many VDTs could use it. It used a large, flat Mate-n-Lock connector on DEC printers. Some VDTs used “spare” RS-232 pins or had a separate connector for it. You could drive a printer or a terminal several thousand feet away if needed. I had a customer site that was in two buildings, one across a street from the computer room. The customer got permission to run a pipe below the street and had a bunch of shielded twisted pair cables pulled for us to use.
I still have a small tool kit with some RS-232 and 20 mA current loop connectors, pins, pin extractors, crimpers, etc. I also had a set of cheat sheets for which pins needed to be connected for a bunch of different kinds of terminals, printers, and modems, but it’s long gone.
Regarding EBCDIC, “we” got bit by it 2 weeks ago.
Yeah, really.
We get a largish data dump once a month from some crusty old dinosaur somewhere that goes through an EBCDIC->ASCII gateway and gets dumped on a file system, then “our” code tries to bulk load it into a “modern” data warehouse.
Only the *test* data they gave us had \r\n as the line terminator. Production data had one of the odd characters from the first 8 ASCII codes (I don’t recall which one and don’t want to turn my work computer back on. I’ll check tomorrow if I remember).
So yeah, I get “have to draw the line somewhere”, and I really didn’t have to know *about* EBCDIC to fix it, but putting 1 and 1 and 1 together got me 4 in pretty quick order.
> Only the *test* data they gave us had \r\n as the line terminator.
I ran into that building a Ruby backend for a website. If you edited it, it would say \r or \n. Then on the second edit it would say \r\r and \n\n, etc. It took me almost a year to nail that bug completely. (I would have fixed it earlier, but it was a weekend project with lots of fits and starts.)
I’ve run into \r\n’s before. Eeeevil.
> You might think so, but remember line speed. In 1969-1970 the available 110bps corresponded to a speed of roughly 11 characters a second. That meant that the statistically average cursor move using these primitives on an 80×24 screen, 65 chars, would take 6 seconds to execute! Thus, nobody ever tried doing incremental 2D display with these. <
I remember full screen editing on my Wyse terminal being unusable at even 1200bps, *barely* tolerable at 2400bps and then Eureka – 9600bps to the rescue :-)
There is one aspect of ASCII history you didn’t mention, which was admittedly rather peripheral (a failed attempt by non-hackers to make hackers accommodate them), but no more so than 1963 ASCII. This is the ISO 646 variants.
The concept was to demote several ASCII punctuation characters (12, to be exact) to second-class citizens so that national symbols could borrow their code points, accommodating non-American users without breaking the 7-bit barrier. This then led to the demand that programming languages provide ways to avoid using these second-class symbols, and thus to the trigraph feature of C which hackers hate.
For example, there was a British variant almost identical to ASCII, except that the octothorpe “pound” symbol was replaced by the “pound sterling” currency symbol.
Of course, by declaring that the first-class citizen status of the backslash and others was non-negotiable, we were declaring that, in environments that couldn’t be eight-bit, accented letters, the German ß, any currency symbol besides the dollar, and others could have no “citizenship” at all….
There are some lines in the Jargon File’s ASCII section that only make sense in the context of that failed effort:
> The inability of ASCII text to correctly represent any of the world’s other major languages makes the designers’ choice of 7 bits look more and more like a serious misfeature as the use of international networks continues to increase (see software rot). Hardware and software from the U.S. still tends to embody the assumption that ASCII is the universal character set and that characters have 7 bits; this is a major irritant to people who want to use a character set suited to their own languages. Perversely, though, efforts to solve this problem by proliferating ‘national’ character sets produce an evolutionary pressure to use a smaller subset common to all those in use.
In actuality, the 7-bit nature of ASCII, combined with the fact that computer hardware was pretty much always 8-bit underneath, made things much simpler.
If ASCII’s place in history had been taken instead by something 8-bit like Windows 1252 or CP437, things would be messy because programming languages would depend on mathematical symbols ASCII had no room for, such as U+00F7 for division. There would be no large band of free codepoints for the alternate history’s analog of UTF-8, or even ISO 8859-N where N != 1.
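A quick Python illustration of that head-room point (nothing here is specific to any library; it is just the standard codec behavior):

# Pure ASCII is already valid UTF-8, byte for byte...
assert "plain ASCII".encode("utf-8") == b"plain ASCII"
# ...while anything beyond ASCII is encoded entirely with bytes >= 0x80,
# so it can never be mistaken for 7-bit text.
print("café".encode("utf-8"))   # b'caf\xc3\xa9'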
I remembered it backwards.
The data in DEV was bad:
head -1 .dat | od -c
0000000 (stuff deleted because privacy)
<…>
0001000 (stuff deleted because privacy)
0001020 0 3 5 0 2 I L 1 7 \r 032 \n
But in prod we got:
[uapp418p TMP]$ head -1 FILENAME_REDACTED | od -c
0000000 (privacy again)
<…>
0001020 0 6 6 2 1 \r \n
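For anyone who inherits a feed like this, a minimal cleanup sketch (Python; the function name and the policy of simply dropping the stray byte are my assumptions, not the poster’s actual fix):

import re

def normalize_terminators(raw: bytes) -> bytes:
    # Drop stray SUB bytes (octal 032 / 0x1A, as seen in the od dump above)
    # and unify CRLF, bare CR, and bare LF line endings into plain LF.
    raw = raw.replace(b"\x1a", b"")
    return re.sub(rb"\r\n|\r|\n", b"\n", raw)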
Baud barf isn’t necessarily random. If the configurations are closely related, you can recognize patterns in the transformation. Half/double speed being one of the more understandable ones. Being able to recognize common misconfigurations was a not uncommon skill.
The value of this document is great just for explaining “why does octal even exist” to people who would otherwise never have any idea or chance to gain one. I’d like to see the grand “All the history rendered trivial” version, but not so much as to stand under the hammer of maintaining it.
Obscure ASCII control codes live on today in some deep black magic financial applications like credit card processing; there’s even an ISO standard (or several) for them, which I’m grateful to have forgotten.
Just to say it: early PC serial ports were almost always +/- 12v powered (I never saw 5v), and the implementation of the control and handshake lines was often nearly random. Even boards with identical controller chips might or might not have traces carrying those control lines to the connector. The RI (Ring Indicator) line was lovely when you had it, but it was left unconnected often enough that most software just listened to the modem for a “ring” signal instead.
This probably does not fall under the “Things every hacker once knew”, but it is some old meta-knowledge:
GitLab.com melts down after wrong directory deleted, backups fail
https://www.theregister.co.uk/2017/02/01/gitlab_data_loss/
Gitlab had 5 backup methods deployed; they all failed. Maybe they forgot to test their backups.
One of the comments refers to The Tao of Backup (http://www.taobackup.com/testing.html)
https://forums.theregister.co.uk/forum/1/2017/02/01/gitlab_data_loss/#c_3091383
I was wondering whether this was from the same Master as has been covered in these pages?
>I was wondering whether this was from the same Master as has been covered in these pages?
No, but they probably studied at the same monastery.
Nice link, explaining why CR is ^M, and ESC is ^[
https://garbagecollected.org/2017/01/31/four-column-ascii/
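The punchline of that four-column layout, in executable form: the Ctrl key simply clears the two high bits of the 7-bit code, i.e. takes it modulo 32.

for key in "M[":
    ctrl_char = chr(ord(key) & 0x1F)        # mask down to the low five bits
    print(f"Ctrl-{key} -> {ctrl_char!r}")   # Ctrl-M -> '\r' (CR), Ctrl-[ -> '\x1b' (ESC)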
> Maybe they forgot to test their backups.
You can test a backup all you want and it won’t mean anything if you don’t test the restore.
@ESR: Thanks.
@Random832
“You can test a backup all you want and it won’t mean anything if you don’t test the restore.”
Actually, you should rebuild the running system from backup, periodically.
> You can test a backup all you want and it won’t mean anything if you don’t test the restore.
Every year, we have a Disaster Recovery exercise, the premise of which is that our HQ building (and therefore the Network Operations Center and all servers therein) have been destroyed by ${DISASTER}. We then connect via VPN to a site far from HQ, restore critical servers from backups, do what’s necessary to get them going again (frex: last year I had to make a hosts file entry because the host name for a DB one application needs to talk to wasn’t resolved correctly in our DR environment’s DNS) and our QA people confirm the applications perform normally.
How about mentioning the big-endian / little-endian distinction?
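For anyone who hasn’t run into it, the distinction in two lines of Python (the struct module lets you pick the byte order explicitly):

import struct
print(struct.pack(">I", 0x0A0B0C0D))   # big-endian:    b'\n\x0b\x0c\r'  (most significant byte first)
print(struct.pack("<I", 0x0A0B0C0D))   # little-endian: b'\r\x0c\x0b\n'  (least significant byte first)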
I wonder if early excitement about “virtual memory” deserves a mention. We used to have to know more about that than we do now. Understanding that the physical memory space did not need to align exactly with the addressable memory space was important.
Who recalls the way some PDP machines managed memory addressing such that the physical memory was *larger* than what the CPU could address, i.e. where virtual memory was smaller than physical memory?
What about the post-internet, pre-WWW contenders such as WAIS, Gopher, Archie, Veronica? There should at least be a mention of FTP and Telnet.