My title is, of course, a reference to the 1984 paper End-to-End Arguments in System Design by Saltzer, Reed, and Clark. They enunciated what has since become understood as perhaps the single most central and successful principle of the design of the Internet. If you have not read it, do chase the link; it well deserves its status as a classic.
The authors wrote mostly about the design of communications networks. But the title referred not to “network design” but to system design, and some of the early language in the paper hints that the authors thought they had discovered a design rule with implications beyond networking. I shall argue that indeed they had – there is a version of the end-to-end principle that applies productively and forcefully to the design of (non-networked) software. I shall develop that version, illustrating it with a case study from experience.
To apply the end-to-end principle to software design, we need a way to state it that is general enough to be lifted out of the specific context of network design. The network-design version is this: intelligence belongs at the network endpoints, not in the pipes. Trying to make the pipes “smart” duplicates functions like error correction and receipt acknowledgment that the endpoints are going to have to do anyway to ensure end-to-end integrity. Trying to make the pipes smart also introduces tricky failure modes when the smarts inevitably go wrong.
Now I’m going to tell a story about a software failure. As I write, I have spent most of the last two weeks spinning up a replacement for a widely-used social-computing service that irreparably crashed on us. The replacement involves a service daemon I named “irkerd”, which is expected to relay notification requests submitted as simple JSON objects on a listening socket to Internet Relay Chat channels.
To do this, irkerd (which is written in Python) relies on a Python IRC library designed to speak the client end of the message protocol defined by RFC2812. The library is pretty good; without it, the irkerd implementation would have taken much, much longer. In turn, irkerd has stressed the library in ways it hadn’t been stressed before. I’ve been contributing fixes and patches back to the library, and the maintainer has shipped a couple of point releases as a result.
Yesterday, just as the irkerd codebase was stabilizing after some early problems with thread safety, some of my test users on the irker chat channel began reporting a new fatal bug – a Unicode decoding error being thrown from deep inside the IRC library. Investigation and an email query to the maintainer revealed that this was the result of a recent design decision and a consequent change to the library internals in the previous day’s point release.
Previously, the IRC library had made no assumptions about the character encoding of the chat data it received from IRC servers. It simply passed those strings as uninterpreted payloads of events made visible by the library to the calling application (in this case irkerd). Because irkerd is a sending relay rather than an interactive client, it just threw that received chat data away. The only received traffic it cares about is the IRC server’s responses to login and message-transmission commands, which are plain ASCII.
In this point release, the maintainer changed the library to perform UTF-8 decoding early in the processing of the chat strings, so the event payloads would be guaranteed Unicode. Which was fine and dandy until a server shipped irkerd a chat line with a bad continuation byte. At that point the early UTF-8 decode threw an exception from deep inside the library, crashed one of irkerd’s main threads, and hung the daemon.
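To make the failure mode concrete, here is what an eager decode does when it meets a bad continuation byte (an illustration in bare Python, not the library’s actual code):

>>> b"error: \xc3(".decode("utf-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc3 in position 7: invalid continuation byte

When a library raises that exception deep inside its own event loop, it surfaces far from any code that knows what policy to apply to it.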
The library maintainer thought he’d be doing calling applications a favor by performing UTF-8 decoding so they don’t have to. What he did instead was introduce a new fatal failure mode on bad chat data – a particularly annoying one for irkerd, which only sees that data because the protocol requires it to, and really wants to just ignore the chat lines.
Assumptions about character encoding properly belong not in the IRC library but in the calling application. There are several reasons for this, but they all come down to two points: (1) only decoding exceptions raised locally can be handled locally, and (2) how to handle them is a policy decision that the application must make – so the library should not try to pre-empt it.
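As a sketch of what keeping the policy at the endpoint looks like (hypothetical code, not the actual irkerd source): the library hands over raw bytes, and the application applies whatever decoding policy suits its job.

def on_chat_line(raw_bytes):
    # irkerd's policy: it never displays chat, so an undecodable line is simply dropped.
    # An interactive client might instead decode with errors="replace" or fall back
    # to latin-1 -- the point is that the decision lives here, not in the library.
    try:
        return raw_bytes.decode("utf-8")
    except UnicodeDecodeError:
        return None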
The library maintainer violated a software version of the end-to-end principle. It reads like this: in any software data path, whether networked or not, interior components that can be indifferent to the nature of the data they are handling should remain indifferent. Every assumption introduced while handling data implies new exceptions and failure cases; to minimize your failures, minimize your assumptions.
I will finish by noting that I wrote this essay because web searches on “end-to-end arguments” and “end-to-end principle” suggested that the above software-generalized version of it may not have been written down before – all the references I could find were about network design. I believe that this is one of those folk theorems that every sufficiently experienced software system designer eventually learns without necessarily becoming conscious of it. I hope that by writing it down, so it can be learned more rapidly and explicitly, I will have helped improve the practice of software design.
Or, to put it another way, if you don’t have to interpret data, keep your fingers out of it.
s/Seltzer/Saltzer/
And thanks for the link to the paper. I was mistakenly under the impression that the end-to-end principle had come from another person.
In this case there seems to be a much simpler argument. RFC 2812 says: “No specific character set is specified. The protocol is based on [a series of octets]” (section 2.2). A library that forces a particular encoding is a partial implementation of the protocol, and will not be able to talk to other clients that legitimately don’t use UTF8.
In type-theoretic terms, you should use universal functions that abstract over types whenever you can.
“Assumptions about character encoding properly belong not in the IRC library but in the calling application. ”
Actually, they belong in the JSON decoder. What if you use an encoding that’s not 7-bit clean, i.e. where the byte 0x22 might mean something other than a closing quote in some context? (No, “escape it with a byte 0x5C” is not an acceptable answer. This would mean that the JSON _itself_ is no longer valid text in _any_ encoding.)
Sorry, I misread the issue, and thought it referred to client data rather than IRC data.
However, from the IRC library point of view, I don’t see how converting the data to a unicode string is fundamentally different from changing “:foo!bar@baz PRIVMSG #quux :blah blah blah” to some kind of event object. It all falls in the category of handling the protocol. The real problem is that it did it _wrong_ – the fact that message content may be any sequence of bytes (except null, CR, and LF) is a defined aspect of the IRC protocol, so any client application absolutely must do something sensible when receiving “invalid” data, whether that is to translate it to a fallback character, or (as most real IRC clients do) fall back on a different encoding such as latin-1 or windows-1252.
It occurs to me that some of these principles apply to a lot of self-healing systems such as ecological ones.
For that matter, the constitutional government framework in the states, at least as originally envisaged, was also based on pushing responsibility, decision making, aka “smarts” as close to the endpoints as possible, with the center/national level only handling things that could not be handled at a more local level.
IRC is clearly an application layer 7 construct. So how about just following the basic rules given the OSI model? Wow, do we now get to have an OSI vs TCP/IP debate in 2012?
Is this related to the principle that advises scientists not to round off quantities in the middle of the problem?
>Is this related to the principle that advises scientists not to round off quantities in the middle of the problem?
Yes, I think it is.
@patrioticduo Character encoding is actually a layer 6 concern in the OSI model. And while IRC is a typical “hybrid 6-7” protocol like many internet protocols, when you’ve got a separate protocol library component (i.e. the distinction between IRC library and calling application), that’s about as close to a layer 6-7 separation as it gets in the real world. If the calling application wants to make policy decisions about how to handle encoding, it should do so by telling the library what to do, rather than by receiving a raw byte array instead of a text string.
“…with a frequency of about one such interchange in every million bytes passed.”
Just read the paper, and wow, that’s a nasty bug.
I’d be careful about applying this principle too widely to system design though. At first, I thought it translated into mission-critical control systems applications quite well. We tend to report back what’s happening without any translation in PLC/SCADA (apart from consequential error suppression) because the most important element in any control system is the *operator*, who needs the best information you can give them – the classic end-to-end example; as the authors state, “their owners were forced to the ultimate end-to-end error check: manual comparison with and correction from old listings.”
But then I thought further. Control systems have hard-wired interlocks to prevent the (very expensive) hardware from doing things it shouldn’t. So bearing temp sensors, for instance, will be wired into the control circuit for motors, even though the logic controller is checking for that as well. Technically that’s a violation of the end-to-end principle: something in the pipeline is blocking requests that the end-user application thinks are valid. But you find me an engineer who will tell you that’s not a good idea, and I’ll show you a dickhead.
Of course, there are times you don’t care about that. So you put in an explicitly activated bypass that tells the hard-wired system “ignore those alarms”. This is routine in tunnel ventilation for instance, where you can set a “run to destruction” mode for critical infrastructure like fans, which says in effect “ignore overheating warnings, keep moving air till you quit, we don’t care about the cost”.
On the other hand, I wish the original developers of NFS had read this, that would have saved me a lot of heartache at college dealing with a filesystem that pretended to be local when it wasn’t…
So as a general principle, I guess I would have to say, it depends.
@ESR On further reflection, I’m actually very confused at how you can talk about separation of concerns on the one hand, and yet think that anything _but_ the part that deals directly with a network protocol should be dealing with raw byte streams rather than character strings. That the IRC library failed to do its job properly does not mean that converting the wire protocol to high-level structures wasn’t part of its job.
What should an IRC library do if it’s meant to be consumed by a language that _only_ has unicode strings? Should it send a byte array that no text processing can be done on? Maybe Python isn’t such a language, but they certainly exist.
>What should an IRC library do if it’s meant to be consumed by a language that _only_ has unicode strings?
Fail. RFC2812 tells us there are cases where no other result is possible.
Just for fun and a bit off-topic, the trend in the industry is to describe the interlock bypasses I described as “fire mode” for tunnels. I still prefer “run to destruction” because it stops people from being blase about turning it on willy-nilly. It’s also more accurate. I’d love to know what they call it in the nuclear power game (I think I can safely assume their coolant systems have similar protocols) if anyone knows.
@Random832 – Reading through RFC 1459, you can immediately see that the IRC protocol was written with little regard (if any) for the OSI model. Section 8.4 makes it all too obvious that the application layer is doing far more than it ever should have been. But that is par for the course with the great majority of older TCP/IP applications. The result is that people write “libraries” that support an “application” based upon an RFC that condemns them to future pitfalls of the type ESR has just run into. Over the years, I have always wanted to see the IETF draw a line in the sand for most if not all of these poorly conceived “messaging” applications, the goal being to force a total rewrite around new standards that properly respect the layered approach. Alas, we’re nowhere near it.
The primary problem is in ensuring that each layer provides any and all required signaling down and up. Unfortunately, the IETF has failed to establish RFCs that provide such signaling standards. The result is a panoply of applications, each with its own collection of RFCs, all defining and redefining the same signaling from layer 4/5 on up. I blame RFC 3439, which essentially discounted the idea of layering on the single straw-man argument that layers may not signal each other properly. But if the IETF had defined an interlayer signaling protocol, that argument would have been moot. The networking industry has developed on a large scale what should already have been done inside the host. It hasn’t happened, and ESR just felt the pain of it. The same thing is repeated countless times over the entire planet Earth.
You can have a lot of fun with gaming out these sorts of problems, at least it’s fun till you’re explaining your reasoning to the coroner. For instance, electric solenoid locks (the ones that go “bzzz”), what should they do when the power cuts out? For safety’s sake, they generally drop back to unlocked. You don’t want people locked inside a burning building. In prisons though, you might think differently… perhaps.
That one isn’t a violation of the end-to-end rule, the end application (the lock itself) is deciding what to do without assuming some other system will tell it.
@patrioticduo Internet protocols generally don’t care about the 6-7 separation – and rightly so: looking at the actual definition of the layers, the distinction is more or less between an application and a library, and a “layer 7 protocol” would be more properly referred to as an API than a protocol. So, really, internet protocols are a description of both the layer 6 protocol and the layer 7 data that goes on it (but not the layer 7 protocol itself)
Sounds to me that the real problem was that the library author changed the functionality of the library and somehow expected all his clients not to notice, or to be sufficiently robust to deal with it.
I am a fan of the brittle software principle, which is the opposite of the “promiscuous input, strict output” principle. I think functions should be super strict on their inputs, and break on invalid data, not somehow massage it into a form that is kind of sort of what is wanted. The sooner a failed assumption breaks your code, the better off you are.
For example, all that truthiness stuff in javascript is just a bug waiting to happen. Personally I write my code to deliberately break when it gets an unexpected input. I find bugs way sooner and way more easily that way.
A related thing is something I started doing recently. It used to be that I always initialized variables when I declared them, even if the initialization was arbitrary. I don’t do that anymore. I’d much rather the compiler croaked on the DFA discovering a path where my variable is used before being initialized than trick it with an arbitrary and wrong initialization.
Of course that assumes you declare your variables and give the compiler enough info for a good solid DFA, which, AFAIK, Python doesn’t. I am a fan of strict typing.
@Jessica Boxer – It’s not reasonable to do that with untrusted input, and you especially can’t do that if you “break” by crashing the application entirely when receiving a single bad message. Which is why, you know, HTTP has 400 and 500 responses instead of bringing down the server. And doing this over a stateful protocol is worse (though it is fortunate that IRC is at least defined so that message boundaries can be determined regardless of bad message content, since messages cannot contain a carriage return).
My complaints about firewalls, and the hard-to-diagnose issues they cause by violating end-to-end, are at this point a running joke among my co-workers… but still. “Hey, let’s implement layer 7 policy at layer 4 while pretending to be at layer 3! Just think how much trouble that will save!”
Sigh.
Filesystems have a similar problem. You can’t trivially copy files from one Windows filesystem to another because the source filesystem’s file name might not be valid in the destination filesystem (or might irreversibly change even if a valid alternative representation can be found). Unixish filesystems have no trouble with this because they define filenames as byte strings where only ‘/’ and NUL are meaningful, and stick to that definition despite decades of opposition from people who apparently enjoy working around storage layers with unpredictable implementation- and even locale-specific behavior.
This is a concept Unicode-everywhere fanboys have trouble with. “Filename” and “Unicode string” (or even “ASCII string”) are not interchangeable types. This can cause problems for languages, libraries, or programs that incorrectly assume their Unicode text string objects and methods are appropriate tools to use on filenames. That assumption is just as wrong as the assumption that doubles can be used to store amounts of a currency, phone numbers, or US zip codes.
If a language has only Unicode strings, a protocol library for that language should provide a byte-array-to-unicode-string conversion function that an application can use on the strings it cares about (and knows the encoding of and can handle errors for). If application code is never going to look at many of the strings, it’s all cost and risk with no benefits to the application if some library layer is trying to guess how to convert everything.
IRC is a terrible protocol to begin with, and even worse for robots. RFCs and OSI and application-specified policy implemented in layer 6 is nice, and all, but the problem is harder than a binary “convert everything or not” switch can solve. Sometimes the encoding of message text varies from channel to channel or even user to user on a server depending on the people chatting in the channels.
@Random832 – “It’s not reasonable to do that with untrusted input…”
There is a big difference between a polyamorous arrangement and guy who sleeps around.
Which is to say, accepting a wide range of inputs doesn’t make you a whore.
Let me ask you this: imagine early web browsers had croaked on badly formed HTML rather than doing a crappy job rendering it. Would we be better off or worse, given how easy it would have been for all those sloppy HTMLers to fix their code?
@Zygo:
> This is a concept Unicode-everywhere fanboys have trouble with. “Filename” and
> “Unicode string” (or even “ASCII string”) are not interchangeable types. This can cause
> problems for languages, libraries, or programs that incorrectly assume their Unicode
> text string objects and methods are appropriate tools to use on filenames.
A prime example of this assumption that affects an entire desktop environment:
http://bugs.kde.org/show_bug.cgi?id=165044
Basically, it is simply impossible to handle a file name that is not valid UTF-8, even though it may well be a valid Unix file name as a byte string. The developer decision is “we’re not going to fix this, because any filesystem that presents such names is broken”. Except that it’s not.
I haven’t looked at that library, but I have a hunch I know at least one part of the thinking that prompted the library change.
In Python 3.x, strings are unicode. End of story. Full stop. There are things called “bytes” but they are different and arguably not as functional as strings.
The schism between Python 2.x and 3.x is slowly healing as things are being added back into 3.x to make supporting both with a single codebase easier. The snowball effect is starting to happen in earnest. More stuff is available for python 3.x, so people want to make their own stuff work on python 3.x.
In the code I’ve had to write that conditions strings, e.g. the gettoks() function in this module, I made the following decisions:
– For Python 2.x, convert it to a string. If it’s unicode, convert by ENcoding TO UTF-8.
– For Python 3.x, convert it to a string. If it’s bytes, convert by DEcoding FROM UTF-8.
– In both cases, use the ‘replace’ option to the encoder/decoder, so that bad bytes simply translate badly instead of throwing an exception for something upstream.
Like it or not, we’re being dragged kicking and screaming into a Unicode world, but in a very uneven fashion. In general, one of the things I love about Python is the exception-on-any-weird-thing-you-didn’t-expect, BUT… given the current state of Unicode vs. ASCII things, when dealing with this sort of plumbing infrastructure, it’s arguable that 99% of the potential clients would much rather see somewhat recognizably garbled data than an exception, which is why I override the default “throw an exception if you can’t convert it” with “give a replacement as best you can, but just give me the string, dammit.”
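A minimal sketch of those decisions (the function name is mine, not the actual gettoks() code):

import sys

def to_native_string(s):
    # normalize to the native str type of the running interpreter; never raise on bad bytes
    if sys.version_info[0] >= 3:
        if isinstance(s, bytes):
            return s.decode("utf-8", "replace")   # Python 3: DEcode FROM UTF-8
        return s
    if isinstance(s, unicode):                    # 'unicode' only exists on Python 2,
        return s.encode("utf-8", "replace")       # where we ENcode TO UTF-8
    return s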
@ESR Fail. RFC2812 tells us there are cases where no other result is possible.
I have no idea what this response could possibly mean. No other result than what? What cases?
>I have no idea what this response could possibly mean. No other result than what? What cases?
No other result than failure. RFC2812 permits chat strings that aren’t valid UTF-8.
I have been following this principle for a long time not because I knew the principle but because I am a Lazy Programmer (TM). If the spec doesn’t require something to be done I don’t implement it because it *might* be useful. Another way of thinking about it is to avoid speculative programming. Less code is better – you don’t have to write it, debug it, document it or support it.
Heh. Didn’t see Jessica’s comment before my post. But yeah, looks like I’m not the only one who agrees with Postel’s law.
And, arguably, a comm library should be liberal in what it accepts from the channel, even if that means forcing the library client to be liberal in what it accepts from the library, and should be strict in what it sends to the channel, even if that means forcing the library client to be strict in what it sends to the library.
In other words, the library throwing an exception based on not liking the data it gets from the network is horribly bad, but the library throwing an exception because it doesn’t like what you’re asking it to send to the network is acceptable.
There is no such thing as a byte sequence that cannot be converted to a Unicode string – if it’s not valid UTF-8, interpret it as a different encoding, or replace the invalid bytes with question marks. Unless RFC2812 discusses bytes with values outside the range of 0 to 255, I don’t know how it could define a case where it is impossible to convert something to Unicode via the (common in actual IRC clients) mechanism of trying UTF-8, then trying Latin-1 or windows-1252 if UTF-8 fails.
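That fallback is only a few lines of Python (a sketch of the common client heuristic, not any particular library’s code):

def irc_bytes_to_text(raw):
    # try strict UTF-8 first, then fall back to a single-byte encoding
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("latin-1")   # latin-1 maps every byte value, so this never fails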
@Zygo:
“Filesystems have a similar problem. You can’t trivially copy files from one Windows filesystem to another because the source filesystem’s file name might not be valid in the destination filesystem (or might irreversibly change even if a valid alternative representation can be found). Unixish filesystems have no trouble with this because they define filenames as byte strings where only ‘/’ and NUL are meaningful, and stick to that definition despite decades of opposition from people who apparently enjoy working around storage layers with unpredictable implementation- and even locale-specific behavior.”
Oooh boy. This is going to be a fun one. First, if you take a filename to be just its on-disk representation, I will give you that you cannot safely copy the binary blob which is the filename from one filesystem to another. However, this is true even when looking at Unix systems.
Let’s look at NFS. NFS (say, v3) has the same property. Filenames are a binary blob plus a length. This means that if you create a filename using NFSv3, your system will always be able to find it using exactly the same binary blob. Assuming that your keyboard and display setup match, what you typed in for the create operation will be displayed correctly when you perform an ‘ls’ operation.
Now let’s add character sets into the mix. In your binary blob, if your first character is 0xF0, what does that mean?
For ASCII it’s invalid. For EBCDIC it is ‘0’. For the old DOS code page 437 it’s the ‘≡’ character. For ISO 8859-11 it’s ‘๐’ and for ISO 8859-13 it’s ‘š’, and for UTF-8 it means it’s the start of a 4-byte sequence (“Ancient Sym, CJK…”, according to Wikipedia).
When (not if) you have multiple systems which default to different filename encodings, it can be nearly impossible to actually *use* the filenames. If somebody from accounting wants to know where the parts list for a project is and you tell them a filename, there’s no reason to think that they’ll ever see anything close to that when they go to find it. If you want to find a file in /var/log associated with a particular application, the system needs to be able to convert between what you press on the keyboard and the binary blobs stored on disk in order to find the right file. Heaven help you if you want to find all filenames which contain the letter ‘q’ in either uppercase or lowercase on the filesystem.
Daniel mentioned RFC 2812 above, which appears to allow any arbitrary binary blob to be sent as a message. This is a terrible idea. Not that something wasn’t specified, but that there wasn’t a way to even negotiate a format. One day I want to log into IRC using an EBCDIC machine.
There are two ways of handling this reasonably well. The first is the way it was handled by CIFS/SMB (and about the only time you will ever hear me say something nice about this protocol). In early versions they specified UCS-2 (2-byte characters). Boom. Now you know what’s going across the wire, and what it means (for a certain value of meaning, I suppose).
The other approach is the one taken by HTML, where you basically start the document with a known character set, and early-on you can specify a directive which indicates what character set should really be used. This allows theoretically any character set to be used through negotiation. NFSv3 doesn’t handle this well at all.
Regarding storage, I’d point out that there’s no reason to assume that a filesystem in UNIX is providing you 1 byte per character. A compressed filesystem might decide to go to 6-bit characters as a form of space savings. Of course, the filesystem driver will have to do the expansion so the OS can do useful stuff.
Ultimately, character sets have to be specified somewhere. There are many cases where you can treat the binary blob as a binary blob. There are also many cases where you can’t. This *must* be specified as a part of the communications between two subsystems, either in-band or out-of-band, regardless of whether a network is involved. If you don’t … I get a headache.
@Random832:
Sure there is, if by “convert” you mean an easily reversible process that maps characters correctly without guessing (aka “heuristics.”)
Yeah, there’s the rub. What if the “invalid bytes” contained necessary information?
The point is that doing the right thing is easy. Until you interface with other systems in the real world. At which point you sometimes (unfortunately) have to delegate figuring out what the “right thing” is upwards to the library client. Which is its own can of worms.
@Garrett re EBCDIC… IRC doesn’t _really_ not specify an encoding. It just _under_specifies it. It’s filled with references to ASCII, and numerous explicit byte values. Oddly enough, it’s POSIX that’s written to be theoretically usable on an EBCDIC system.
@Patrick Maupin Sure there is, if by “convert” you mean an easily reversible process that maps characters correctly without guessing (aka “heuristics.”)
Once you rigorously define how the guessing process works, it’s not really guessing anymore. For this application there’s no obvious need for the process to be reversible. Now, the filename case _does_ need to be reversible, and I’ve toyed with the idea of a special unicode mapping for non-valid-UTF8 filenames before [for example, map any invalid bytes to a private use range, and escape any actual private use characters in that range with the PU character that 0x00 would be mapped to] I’ve heard Mac OS X uses URL-style encoding for invalid bytes, which could be made reversible simply by also applying it to the percent sign.
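For what it’s worth, here is a rough, entirely hypothetical sketch of that reversible mapping in Python (function names are invented), using the surrogateescape error handler to flag the invalid bytes before remapping them:

PUA = 0xE000
ESC = chr(PUA)   # byte 0x00 is always valid UTF-8, so the byte mapping never produces U+E000 by itself

def fname_to_text(raw):
    # surrogateescape turns each undecodable byte b into the lone surrogate U+DC00+b
    s = raw.decode("utf-8", "surrogateescape")
    out = []
    for ch in s:
        cp = ord(ch)
        if 0xDC80 <= cp <= 0xDCFF:           # an invalid byte from the original name
            out.append(chr(PUA + (cp - 0xDC00)))
        elif PUA <= cp <= PUA + 0xFF:        # a genuine private-use character: escape it
            out.append(ESC + ch)
        else:
            out.append(ch)
    return "".join(out)

def text_to_fname(text):
    out = []
    chars = iter(text)
    for ch in chars:
        cp = ord(ch)
        if ch == ESC:                        # escaped genuine private-use character
            out.append(next(chars).encode("utf-8"))
        elif PUA < cp <= PUA + 0xFF:         # a mapped invalid byte
            out.append(bytes([cp - PUA]))
        else:
            out.append(ch.encode("utf-8"))
    return b"".join(out)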
“At which point you sometimes (unfortunately) have to delegate figuring out what the “right thing” is upwards to the library client. Which is its own can of worms.”
Or you could _do_ it in the library, and let the client _tell_ the library what to do. Since IRC is one of a very few protocols that handle text but provide no character encoding information, an IRC library is a very logical place for a “convert to UTF-8 with transparent fallback to latin-1” routine to live.
As I mentioned above, it’s entirely possible to convert a sequence of bytes that isn’t valid UTF-8 to a unicode string.
At a minimum, such a library could simply abandon the idea of UTF-8 entirely, and interpret _all_ incoming IRC data as latin-1. Your conclusion that languages that use unicode for strings aren’t allowed to have IRC libraries was absurd. There are numerous options. Another one would be to have a special event just for not-valid-UTF-8 messages. Or you could silently (or noisily) drop incoming messages that aren’t valid UTF-8.
Or you could have a list of translation functions, and run each of them in turn until one of them succeeds – the default one is for UTF-8 with a fallback to either latin1 or question marks, you supply others in your library for a client to choose to replace it or add to the list, or the client could supply its own.
There are _lots and lots of options_ that don’t involve forcing the client to consume byte arrays it can’t do text manipulation on in the normal case. Just because one specific library chose the worst possible one doesn’t mean it’s not a valid function for an IRC library in theory.
@Ltw. The navy nuclear program has two things. One is “emergency run”, basically a button on the controller that says “I don’t care that you’re on fire, keep running.” The other is “battleshort” which defeats every automatic shutdown for the reactor, meaning the core will keep fissioning until either shutdown by an operator or it slags itself. It also has the side-effect of killing the CO’s career. I doubt any civilian plant has a battleshort switch, but there are so many ex-navy personnel in the civilian sector that I’m sure the phrase pops up.
The stupid way, by conflating “text string” with “bag of bytes”. That’s a serious logic bug in IRC. Encoding is an inherent property of textual data; if a program handles text strings without being aware of their encoding, that program is broken. If you make ANY assumptions about encoding that are not everywhere accepted, somebody somewhere is going to see mojibake. That should be drilled into programmers’ heads until they go back and change it, but legacy encoding-unaware programs will be here from now until Doomsday, forcing the rest of us to integrate well with them as your hand was forced on irker, causing headaches and mojibake barfage for everyone.
@Garrett:
No, let’s not, because it causes everything that happens beyond that point in your post. The fact that some widely deployed filesystems have already made that mistake does not make it suddenly stop being a mistake.
Sure, you can make some byte strings friendlier for humans to type and display. For example, you could assume it’s valid UTF-8 or keep just the ASCII characters and display the rest as ‘\NNN’ or just turn the filename into “0x” followed by two hex digits per byte. None of those things are the filename, they are a (possibly lossy) transformation of the filename, which must be transformed back before it can be used. If you have decent UI tools, or a portable encoding for filenames like that used for URLs or email header fields, you can solve the problem without breaking the filesystem. The filesystem is storage, not presentation, and when developers confuse the two design failure becomes inevitable.
UCS-2 is one way to solve the “store byte arrays in Unicode strings” problem, provided that your string libraries don’t parse the data and squeal “OMG endian hints!” and flip all the byte pairs around, or strip various non-printing hint characters because they’re not semantically significant in some UTF-8 string contexts. A lot of unhelpful bug-ridden cruft tends to intrude into your software when you start conflating text strings with variable-length binary identifiers.
@Random832:
I think we’re mostly in violent agreement on the right answer. Which in Python, as I pointed out earlier, will unfortunately vary according to whether it is 2.x or 3.x.
Agreed, and as I pointed out earlier, this is probably true for 99% of potential library clients, for which whatever you decide to do (besides throwing an exception) on incoming data will be fine.
Agreed.
Sure. I’ve done that plenty of times. Or another option, given that it’s in Python, is to try a default conversion (e.g. from UTF-8), and if that doesn’t work, do the fallback dance, but return a subclassed string that contains an attribute with some out-of-band type information, indicating that you did the fallback.
If you do this, the 99% of the apps that don’t care will be fine (because a subclassed string is still a string), and the 1% of the apps that care can just figure out if it’s one of those “special” strings by checking whether it’s an instance of the special string, and/or looking for the special out-of-band type information attribute.
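A hypothetical sketch of that idea (the class and attribute names are invented for illustration):

class FallbackStr(str):
    # a plain str in every respect, plus an out-of-band note about how it was decoded
    fallback_encoding = None

def decode_chat(raw):
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        s = FallbackStr(raw.decode("latin-1"))
        s.fallback_encoding = "latin-1"
        return s

line = decode_chat(b"ol\xe9")        # not valid UTF-8, so the fallback path is taken
if isinstance(line, FallbackStr):
    pass                             # the 1% of apps that care can inspect line.fallback_encoding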
I think the proper application here of the end-to-end argument is that since the IRC protocol doesn’t make any guarantees about character encoding, figuring it out is none of the IRC library’s business. The values it passes in and out should be raw byte arrays, and figuring out what they mean is always the caller’s responsibility. Even if a byte array can be interpreted as valid UTF-8, that doesn’t mean that it’s intended to be, and it’s the Wrong Thing for an IRC library to impose that assumption.
@Daniel Franke:
I agree completely, except…
Except that it’s damned difficult to follow that prescription in Python 3, especially in a library that’s trying to be compatible with both Python 2 and Python 3. You either make things bytes in Python 3 (with its own set of problems) versus strings in Python 2 (a difference which causes issues for clients trying to work with both), or you bite the bullet and figure out how to always deal with standard “strings” in the language.
This is one of the reasons Python 3 has been slow to take off, and has been one of the major bones of contention associated with the language.
Maybe in 10 years when nobody uses Python 2, it will be obvious that “bytes” are the right thing to use in Python 3 for an application like this, but in Python 2, the obvious thing was “strings.”
My current embedded project perfectly illustrates this. I need an API on a PC, talking to a box that has to decode it and talk to a device. The top layer (optional) logs the data so I see what the requests and responses are.
I did design documents, so I wrote a fairly comprehensive specification for an optional next layer – full tax-authority-grade detailed audit checking of every request before anything moves over the wire, to detect any hiccup. That isn’t there yet due to the nature of the project – some of the things we work with would break if we actually did this, and in actual practice the data sent does the right thing anyway, and the fields that would be checked would be redundant.
I have a trivial “over the wire” layer, since it is USB, which is reliable in the sense that I don’t have to bother with heartbeats, checksums, timeouts, or whatever. If it gets moved to Bluetooth that can be added SEPARATELY, and I have a similar side document. Or they can just use something like PPP. Or even ZModem. In any case the “interface” is the open/read/write/close model (connect/read/write/disconnect to be more specific).
The only concession to the link is I include the length in the marshaling. So I can read the length of the packet, then know I’ve got all the bytes (I write all at once but the driver can split).
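A minimal sketch of that framing, assuming a pyserial-style port object with blocking read() and write() (names are mine, not the project’s code):

import struct

def send_frame(port, payload):
    port.write(struct.pack(">I", len(payload)) + payload)   # 4-byte length, then the bytes

def read_exact(port, n):
    buf = b""
    while len(buf) < n:                  # the driver may hand the frame back in pieces
        chunk = port.read(n - len(buf))
        if not chunk:
            raise IOError("link closed mid-frame")
        buf += chunk
    return buf

def recv_frame(port):
    (length,) = struct.unpack(">I", read_exact(port, 4))
    return read_exact(port, length)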
At the bottom end, I reconstruct the data, but only at the lower levels do I bother with checking – either the data should be correct from the top layer, or an error is only something that can be detected easily at the bottom. The code is clean, simple, direct.
If bytes are to be displayed to humans, they need to have an encoding specified and need to be valid for that encoding.
In the beginning, everything was assumed to be seven bit ascii, and everything worked because it was seven bit ascii. Then we proceeded to handle more characters, with no consistent way of specifying the new encodings, and often no way of specifying the new encodings at all.
But this should only be a problem when the time comes to display the string to humans. Let the human handle the error:
“Oh damn, this filename is some japanese gibberish which my system cannot display, and my keyboard cannot type”.
Happens all the time. Joe six pack is used to it.
“But this should only be a problem when the time comes to display the string to humans. Let the human handle the error:
“Oh damn, this filename is some japanese gibberish which my system cannot display, and my keyboard cannot type”.
Happens all the time. Joe six pack is used to it.”
Only because Joe Six Pack has been searching for tentacle hentai.
> Only because Joe Six Pack has been searching for tentacle hentai.
Good thing for Joe Six Pack that google’s automatic spelling correction works so well, then.
So this principle, put quite succinctly, then states that the reason I can get the process ID of a daemon called ‘food’ when I do something like:
ps -ef | grep food | grep -v grep | awk '{ print $2 }'
Is that I can count on the shell itself, grep and awk to not mangle the text output by ps along each step of the pipe chain.
@Morgan Greywolf – and if you grep for “a.b”, how should it decide whether 0x97 0xC3 0xA1 0x98 matches? Text tools _not_ using a text encoding to work with text data simply isn’t a reasonable thing to expect.
@Morgan:
Of course the whole Unix thing breaks completely (not on every system, but on all the ones I use) when you want to try to do something like:
#! /usr/bin/env ./myscript myparams
apparently because neither the shell nor env claim responsibility for breaking apart ./myscript from myparams. This software analogue of “that’s more than my job’s worth” is a very useful starting point to avoid overengineering, but it only gets you so far. Eventually, _somebody’s_ got to step up to the plate and do the job.
In this regard, Unix has been out-Unix’d by…. Microsoft. Their Windows PowerShell provides a similarly powerful set of tools, all of which handle typed, structured data objects rather than raw brittle text.
Programmers need to start learning some humanities. HIGHER LEVELS OF ABSTRACTION. Information is not all just bags of bytes; it has structure, meaning, and context, which is just as important — if not more so — to preserve and manage as the sequence of bytes in the bag.
@Patrick Maupin the funny thing is – DEMOS, of all unixes, actually added new code to handle that – and even to handle “#!/my/interpreter $* stuff”, which does exactly what it sounds like it should – in the kernel. Logically, doing this _should_ be the kernel’s job, because that’s what does any interpretation at all on that line.
@Jeff Read
“Their Windows PowerShell provides a similarly powerful set of tools, all of which handle typed, structured data objects rather than raw brittle text.”
Yes, they handle XML objects. As XML objects are trees, handling them requires a Context Free Grammar. Writing rules in a CFG is “non-trivial”.
Actually, human beings are extremely bad at writing context free grammars. I have seen several projects using CFG rewriting rules to handle text conversions. They were all abandoned after the one person who understood them left.
On the other hand, “brittle linear text” streams are handled with Regular Languages, aka, Regular Expressions. RegEx are not easy, but mere humans can actually handle them.
This “More Powerful” MS feature reminds me of the Access Control Lists “more secure” security of MS. That was not the security success MS had dreamed of.
Quick sidetrip: if you work with XML and you do *not* know about XMLstarlet, you should go look at it.
“raw brittle text”
Text isn’t “brittle”. It’s the assumptions made by programs imposing HIGHER LEVELS OF ABSTRACTION that can be brittle.
Obviously I don’t want to have to reimplement utf-8 (say) from scratch all the time. But if a system or toolset doesn’t allow access in some form, at some level, to the underlying bag of bytes, I know which way I should bet on when the “brittleness” is going to come bite me.
>But if a system or toolset doesn’t allow access in some form, at some level, to the underlying bag of bytes, I know which way I should bet on when the “brittleness” is going to come bite me.
Yup. Experience has not been kind to previous attempts to replace the text stream. Architecture astronauts try this on every couple of years; it never ends well.
@Winter “Yes, they handle XML objects. As XML objects are trees, handling them requires a Context Free Grammar. Writing rules in a CFG is “non-trivial”.”
Good thing they don’t work on the XML itself, then, but on the deserialized object. Downside is you can’t write a truly _general_ utility without relying on the user to pass in the bit where it deals with the objects (its equivalent of “grep”, called “where” after the SQL keyword, requires the user to pass in a function that accepts whatever type of object and returns a boolean)
@darrin “But if a system or toolset doesn’t allow access in some form, at some level, to the underlying bag of bytes, I know which way I should bet on when the “brittleness” is going to come bite me.”
As opposed to getting bit when _someone_ got too comfortable with how they _assumed_ the text was structured, and therefore didn’t handle a process name with a space (or a tab, or a comma, or a newline) in it.
/home/eviluser/bin/blah\n1 food – oops. Guess there was a good reason pgrep is a C program that actually looks in /proc itself.
@Random832
What you “should” do is take the XML tree and compile it into a new tree. Just as is done with, say, C code to assembler.
As few people can write such a compiler on-the-fly, people indeed serialize the tree, run “tree augmented” regex on the stream, and then build a new tree.
Think of a C compiler built completely in regexes.
So, in the end the XML objects of MS have the same power as linear byte streams with regex’. Except that they have additional overhead.
@Winter – You’re missing the point: XML serialization is just a _mechanism_ for passing high-level objects around. I actually am pretty sure that in most cases it’s passing the objects around directly within a single process space, rather than as an XML-serialized string.
Powershell’s “select” and “where” functions are equally as powerful as any functional language’s “map” and “filter” functions. The novelty is building an interactive shell around them rather than a mere programming language. The pipe is just the symbol for function composition on functions that take a sequence as both input and output.
@ESR “Experience has not been kind to previous attempts to replace the text stream. ”
Except, it has been replaced. The “raw byte stream” you are proposing/demanding is precisely a [stealth] replacement for the text stream. You _cannot_ properly handle UTF-8, and therefore recognize “á” as a single character, without being willing to in some way reject bytes that are not valid UTF-8. And this was a problem for East Asian languages long before it ever became a problem for European languages (an even worse problem before, in fact – UTF-8 solved half of it, at least, with its disjoint set of lead and trail bytes and no escape sequence states)
@random832
I suspect that if it is powerful, it is complex; if it is simple, it is not powerful.
The idea of XML powershell is that you have more power than with regex’. I have serious doubts whether this power is available in practice.
Given a DNA sequence, I can use (a kind of) regex’ to convert it to proteins. That is a byte stream conversion. (The Blast algorithm is a HMM regular grammar)
The equivalent conversion on XML would be DOCX to ODF. I do not think powershell could do this even in principle. However, it is theoretically possible in a real CFG like lisp.
@Winter I honestly have no idea where you’re getting this “xml” thing from. It’s like thinking the map and filter functions in lisp or scheme or haskell operate on S-expressions.
PowerShell is utter garbage. This “HIGHER LEVEL OF ABSTRACTION” causes major breakage in successive OS versions. It suffers from the same problems as any object-oriented API — successive versions of the API return objects that look less and less like the previous versions.
> This “HIGHER LEVEL OF ABSTRACTION” causes major breakage in successive OS versions. It suffers from the same problems as any object-oriented API — successive versions of the API return objects that look less and less like the previous versions.
This, of course, is exactly the problem that COM and its successor .NET quite successfully solved, but the solution relies on compilation. The compiled code running on a newer API is guaranteed to receive the sort of objects that the compile-time API would have given it.
So if you write a scripting environment based on .NET, you lose the solution, and once again suffer the major suckage that COM and .NET were in large part designed to solve.
True in a world where everyone uses US-ASCII.
Not true in the real world in 2012 where variable names and config-file keys can be in Thai.
Text ALWAYS has an encoding.
ALWAYS.
Omitting it is like omitting units from your measurements.
So we are already at the point where a higher level of abstraction than “bag of bytes” MUST be used for text, because even the ability to examine a bag of bytes with the “Mark I eyeball” is not a given, unless you are willing to make sweeping anglocentric cultural assumptions.
Under certain circumstances (like debugging encoding errors), a bag-of-bytes view on the metadata-adorned text stream is justifiable and necessary, but that doesn’t justify shipping around unadorned bags of bytes and calling them text streams.
@JAD
And that is exactly what you need to convert tree objects (XML, Higher Abstraction Objects) into other tree objects: Full blown compilers. Compilers which themselves are inflexible and difficult to construct.
The fact that most admirers of PowerShell have no idea what the importance of XML is shows how little they understand where the power is draining away.
@Winter ONCE AGAIN, I have _no_ idea where you got this “Powershell is XML-based” stuff from. (including your very first statement “…they handle XML objects”, which at the time I took you at your word for what I thought you meant, but it now seems to be turning out that you meant something entirely different and completely absurd)
P.S. And an array/list/sequence of a struct or class, which is the kind of thing that Powershell pipelines actually work with in the cases Jeff Read was describing, is A) definitely a “higher abstraction object” and B) definitely NOT a tree [Bonus: C) is something that any reasonable serialization of can be described by a regular grammar]. I don’t, therefore, know what you think you mean by “tree objects (XML, Higher Abstraction Objects)”
Also, JAD wasn’t (unless I miss my mark) referring to compiler technology in the abstract sense of transforming trees to other trees when he said it “relies on compilation”, he was referring to the fact that COM uses something like symbol versioning, which is not available when you use its “look up by name on demand” facilities generally used in interpreted languages.
> he was referring to the fact that COM uses something like symbol versioning, which is not available when you use its “look up by name on demand” facilities generally used in interpreted languages.
Exactly so.
> Think of a C compiler built completely in regexes.
There was actually a project to make a simple Lisp compiler entirely as a series of text transformations. It’s startlingly simple; in two pages of fairly readable code, it parses the s-expressions into an AST, converts the AST to a series of virtual machine operations, and converts those into assembly code. Of course it couldn’t use regular expressions, since Lisp isn’t a regular language, but parsing expression grammars are almost as simple, and do the trick nicely:
http://www.vpri.org/pdf/tr2010003_PEG.pdf
(This would have been a great thing to publish on April Fools’ Day.)
@Random832:
I view it as more of an upgrade than a replacement. Like moving up from an 8-bit processor.
Yeah, but it’s always been the case for programs that “properly” handle ASCII that they had to reject bytes that weren’t valid ASCII for the context. This sort of rejection is nothing new. The only new thing is that a whole slew of incompatible official and de facto standards for the non-ASCII upper half of the byte range are (fairly rapidly, at this point) being replaced by UTF-8. In point of fact, UTF-8 is the new ASCII.
I think UTF-8 actually solved pretty much all of it (except for a few mopping up actions among the holdouts). Except for very specialized CPU-bound applications with gobs of connectivity bandwidth, there is no good reason to put 16 or 32 bit unicode on the wire. But if you _do_ decide to do that, yes, you will need to explain up front that’s what you’re doing. But the mechanism for that explanation is fairly well standardized now.
@Jeff Read:
Sure it is. esr didn’t say “ASCII”. He said “text stream.” There have, over the last few decades, been lots of incompatible text streams, and we all got along fairly well, shimming in translators where needed. UTF-8 is, in fact, a text stream, and is rapidly rendering all the others, except for its ASCII subset, obsolete.
Which is not news to anybody participating in the discussion. Nor is it news that, historically, the encoding information often got separated from the actual text. What appears to be news to you, though, is that even when this happens, text streams are less brittle than most alternatives.
Which works most of the time for most people in most places in the world. (Not necessarily so well on Mars, though.)
No, “bag of bytes” is still where it’s at. If anything, even more so now than 10 years ago, for the simple fact that the encoding of a “bag of bytes” is becoming more standardized by the minute. Unlike metric vs. Imperial, there are no serious holdouts in the UTF-8 vs. anything else battle. The people you might suspect would be holdouts are fine, because all their text was already in UTF-8, and they didn’t have to lift a finger.
>In point of fact, UTF-8 is the new ASCII.
Indeed. The fact that UTF-8 contains multibyte encodings is irrelevant to a lot of the classic text-stream tools for the simple reason that UTF-8 \n has the same implied line-delimiter semantics as ASCII \n and is the same code point.
>No, “bag of bytes” is still where it’s at. If anything, even more so now than 10 years ago, for the simple fact that the encoding of a “bag of bytes” is becoming more standardized by the minute.
Also true. Once again, the Unix tradition’s brute simplicity outlasts competing attempts at excessive cleverness.
While we are discussing character sets, I’d like to bring up one issue in your capacity as Jargon File maintainer — its ASCII entry. Near the end, it reads: “The inability of ASCII text to correctly represent any of the world’s other major languages makes the designers’ choice of 7 bits look more and more like a serious misfeature as the use of international networks continues to increase (see software rot).”
But actually, the fact that ASCII is 7-bit while all modern computers are 8-bit has in fact been a saving grace. If one 8-bit extension of ASCII had been the standard from the very start, things could have been more convenient in the short term for several popular human languages, but Unicode would break much more than it does today.
By the way, the Jargon File has character set issues of its own. The pages are written in ISO 8859-1, but bear meta tags claiming to be UTF-8. Result: lots of substitution characters on Firefox.
@Patrick Maupin “I think UTF-8 actually solved pretty much all of it (except for a few mopping up actions among the holdouts). ”
What I _meant_ was that UTF-8 works better for certain specific uses such as text searching _when done by code that doesn’t understand it_. You can use an encoding-ignorant fgrep for UTF-8 in a way that you can’t for any historical East Asian encoding. Such encodings could have one character match the second byte of one character and the first byte of the next, or in some cases by a pair of two ASCII characters, which no longer exists even for pure byte-string searching in UTF-8. However, certain characters will still be seen as two, three, or four ‘characters’ by something that doesn’t understand UTF-8.
@ESR “Indeed. The fact that UTF-8 contains multibyte encodings is irrelevant to a lot of the classic text-stream tools for the simple reason that UTF-8 \n has the same implied line-delimiter semantics as ASCII \n and is the same code point.”
And if you tell “cut” to give you the first ten characters of each line, what does that mean? In fact, in modern times, cut – along with wc – has separate options for bytes vs characters (along with another one to make it not print a partial character at the end of a field/line) for precisely this reason. If you specify the latter, it absolutely does have to understand what encoding the data is in.
Interestingly, cut uses -c (the historical option) to mean characters, and -b (a new option) to mean bytes, whereas wc uses -c to mean bytes and a new option -m to mean characters.
@Patrick Maupin: “The people you might suspect would be holdouts are fine, because all their text was already in UTF-8, and they didn’t have to lift a finger.”
How many bytes can be between “a” and “b” to match the regular expression “a.b”, or the filename glob pattern “a?b”? The answer for UTF-8 is “up to four”, and the answer for ISO-2022 is “up to eight”. The answer is only “one and only one” when you only speak English and don’t care about other languages, or when you are using an ISO-8859 encoding. Of course, GNU grep _isn’t_ a holdout, so this discussion is hypothetical. However, I believe GNU cut is one: it documents both options, but they both select bytes.
@Random832:
Yes, there are still _tools_ to be upgraded. But as far as the _people_ go, there are people who want upgraded tools, and people who won’t even notice once the tools are upgraded.
Because the right answer is “one character.” Which in the ASCII subset of UTF-8 is “one byte.”
The small (and shrinking) subset of people who regularly use “extended ASCII” (other than UTF-8) always had to deal with one system or another not understanding their encoding, so they’re mostly busy migrating to UTF-8 as fast as possible, for any system that interchanges data with the wider world.
“Because the right answer is “one character.” Which in the ASCII subset of UTF-8 is “one byte.””
That doesn’t matter at all. Does, or does not, the byte sequence 97 C3 A1 98 match the regex “a.b”? And since such a tool clearly does have to have translated from UTF-8 (or whatever encoding) to do its work, what should it do on encountering the byte sequence 97 E1 98? Your position is that printing “a?b” is universally wrong here – that it must not only accept but also _emit_ invalid UTF-8.
@Random832:
I don’t believe this is true. I think the default that the tool operates on (these days) should be UTF-8, but for historical reasons, users should be able to override the default.
Sure. If by that you mean “the exact opposite of my position, which I stated a couple of times”
– that it must not only accept but also _emit_ invalid UTF-8.
There are three possibilities when confronted with bad data: panic, silently replace it with whatever seems “reasonable”, or leave it intact. I believe that panicking and replacement should certainly both be user options, and that the decision to leave things intact can be made at a higher level.
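Python’s codec layer happens to expose all three as error handlers, which is roughly what I mean by making them options (a sketch, not a prescription):

    raw = b"caf\xe9"                                  # Latin-1 bytes arriving mislabeled as UTF-8
    # 1. panic:
    #    raw.decode("utf-8") raises UnicodeDecodeError
    # 2. silently replace:
    print(raw.decode("utf-8", errors="replace"))      # 'caf\ufffd'
    # 3. leave the data intact (round-trippable):
    s = raw.decode("utf-8", errors="surrogateescape")
    print(s.encode("utf-8", errors="surrogateescape") == raw)   # True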
I never, ever made the case that a lower-level tool should emit bad data on its own, and I did, in fact, make the opposite case, if you would bother to read what I wrote.
I don’t get this “a.b” grep pattern vs. 0x97 0xc3 0xa1 0x98 example.
The byte sequence isn’t valid UTF-8 — it contains a trailing fragment, one valid character (U+00E1, small a acute), and then another fragment. With the right preceding bytes, it could be two valid characters, but no prefix or suffix could make the last byte legal.
And unless “a.b” is something other than U+0061 U+002E U+0062, I don’t see how it could match the above in *either* an ISO 8859-bigot grep *or* a Unicode-bigot grep.
However, there are other byte sequences that do present real issues. Such as 0x61 0xF4 0x8F 0xBF 0xBD 0x62 (U+0061 U+10FFFD U+0062) – the stress test to see if you are dealing with a Unicode grep. (U+10FFFD is a private use character and the highest planned to ever appear in Unicode.)
And overlong forms (0xc0 0xa7 = kind of like U+0027) are really bad news, used by crackers to exploit inconsistent Unicode handling. Ideally, a Unicode grep would return whatever result “fails safe” in the surrounding security context – but that’s not possible to determine. Silently normalizing overlongs is about the worst you could do.
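To make the overlong danger concrete (a sketch; the payload is invented):

    # 0xC0 0xA7 would decode to U+0027 (') if overlongs were tolerated, sneaking a
    # quote past any byte-level filter that only looks for 0x27.
    evil = b"name=x\xc0\xa7 OR 1=1"
    print(b"'" in evil)                 # False: a byte-level sanitizer sees no quote
    try:
        evil.decode("utf-8")            # a strict decoder refuses the overlong form outright
    except UnicodeDecodeError as exc:
        print("rejected:", exc.reason)  # 'invalid start byte'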
Unfortunately, a shell script not written in a paranoid manner will see exit status 2 (error) as the same as exit status 1 (successful run with no match). And that may be just what a cracker passing overlongs wants.
The problem is sometimes we are _handed_ a file with such an omission, and need to deal with it somehow (or sit and curse the darkness, take your pick). The point is it’s not the _text_ that’s brittle (even given this omission) in such a case, it’s tools that make incorrect assumptions about what’s been omitted, and don’t allow users to recover from faulty assumptions.
If something is screwed up somewhere, then when it comes to determining whether the text is misencoded (or just in a different encoding than the one assumed/advertised), or whether it’s the text-handling tools that have invalid Higher Level Abstractions baked in, oftentimes the Mark I eyeball is all you have.
And you can jam your “anglocentric cultural assumptions” in sideways with nails sticking out. The only reason I know anything about this subject is that when I first started using Linux in the late nineties, I needed to get the text-handling tools I used (terminal, emacs, latex) to deal with Russian text (which at the time meant KOI-8R encoding). (At one point, more as an exercise than because of an actual need, I wrote a converter between said KOI-8R and the Cyrillic pages of UTF-8. It really brought home how hairy the encoding problem is in general and how brilliantly well-designed encodings like those two can help manage the hairiness.) I have gobs of files (not just .txt but .pl and .tex, with no real uniform standard for announcing the encoding on all of these) on my home box with smatterings of KOI-8R in them, and probably UTF-8 as well; unfortunately I can’t currently be arsed to figure out how to get emacs to display the KOI-8R files in actual Cyrillic as opposed to its _assumption_ of ISO-8859-1 (what I used to do in init.el isn’t doing it, and I have just been too busy with work to investigate). The _text_ encoded in those files is perfectly fine, the _data_ isn’t brittle; the brittleness is in the tools.
Michael Deutschmann said:
_Perfect_ example, thanks. Stipulating that the text is properly encoded in ISO-8859-1, there’s nothing “brittle” about that chunk of text. But what Higher Level Assumptions should the hypothetical perfectly virtuous tool be making here? Always assume the meta tags are correct? Always assume the “obvious” encoding of a given chunk of text (a la unix “file”) is correct? Always stop and ask the user? See, when I say the tools are brittle, I am not slagging on the tools or their authors; I admit this is a hard problem. But saying the brittleness is in the text itself (or the anglocentric redneck bag of bytes or whatever) seems even sillier.
Random832:
I can easily imagine situations where I would want it to match. I can at least as easily imagine situations in which I would want it not to match. (In KOI-8R that sequence is “a[Cyrillic Ts][vertical parallel bars]b”, for example.) I don’t mind a tool making a default assumption, but if the tool doesn’t allow correction of an assumption when that’s not the assumption I want to make, that tool is useless to me. YMMV of course, I’m speaking from my own experience here.
@Michael Deutschmann “The byte sequence isn’t valid UTF-8”
Oops. I mistakenly put the decimal [97] and put 0x before it. Should have been 0x61 [and 0x62 for the other]. But since in context I clearly meant a string that starts with “a” and ends with “b”, I wonder why you didn’t realize that. I meant 0x61 0xc3 0xa1 0x62.
@darrin http://www.gnu.org/software/emacs/manual/html_node/emacs/Specify-Coding.html
@Michael Deutschmann “Ideally, a Unicode grep would return whatever result “fails safe” in the surrounding security context – but that’s not possible to determine.”
Morgan Greywolf’s position (which I mistakenly conflated with Patrick Maupin’s when I characterized it as “tools should emit invalid UTF-8”) seems to be that grep should “not mangle” incoming data – that if the line contains a match, it _should print the original [invalid] byte sequence_.
Of course, really, your example brings us full circle: the security reasons for disallowing overlong sequences _only exist_ in a world where strings are checked by tools that aren’t UTF8-aware. The reason your overlong sequence for U+0027 is a security problem comes solely from the fact that it slipped past an earlier bag-of-bytes sanitizing phase that filtered out 0x27.
The ban on overlongs is not just about safe coexistence with multibyte-naive applications.
One way to build a high-performance UTF8 grep is to write it as a wrapper around a byte-oriented regex engine. This would permit memchr() to be used on an mmap()-ed window for maximum speed. If overlongs were permitted, then even a simple fgrep for a purely ASCII pattern would turn into a very complex underlying pattern. There is one way to encode “michael” in UTF-8 — there would be from 16384 to 2^21 ways using overlongs, depending on how far you extend UTF-8’s methods.
But such a grep would not notice attempts to pass overlongs, or in fact any invalid multibyte sequence. Security can still be maintained though, if all entities doing UTF8-UTF32 conversion strictly block overlongs.
This means the Python behavior that gave ESR grief isn’t that unreasonable — it is a fail safe. (The original illegal sequence probably wasn’t an overlong, but a filter that only panics on apparent overlongs would eventually run into the same problem.) However, it was the duty of the IRC library author to catch the exception and provide some sort of fallback behavior. Even if UTF-8 was specified in the IRC RFC, internet applications must be resilient to dirty blows. (But always, a denial of service due to dirty blows is better than a security breach.)
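Something as small as the following, at whatever point the library decodes incoming lines, would have been enough of a fallback (hypothetical function; I haven’t looked at how the library actually structures that code):

    def decode_line(raw: bytes) -> str:
        """Decode an incoming line, falling back rather than raising."""
        try:
            return raw.decode("utf-8")
        except UnicodeDecodeError:
            return raw.decode("latin-1")   # always succeeds; every byte maps to a code point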
Overlong and surrogate banning also decreases the false positive rate when analyzing a file to determine if it is UTF-8 rather than just random binary data.
I’ll end with one question: who is worse — Narrow-character bigots who never use wide characters and thus ignore invalid sequences while miscounting extended characters, or Unicode bigots who assume that wchar_t is always UTF-32 and char is always UTF-8?
@Michael Deutschmann:
You forgot the option of “Western language bigots” who assume most of the data going through their systems can, in fact, be coded in ASCII.
A byte-oriented engine is never going to perform as well as a word or dword engine, unless some large portion of the characters to be processed do, in fact, fit into bytes.
“Western language bigot” is an unimportant subset of “narrow character bigot”.
A 32-bit regex engine is only fastest if the data file to be grepped is already in UTF-32. Otherwise, the cost of decoding UTF-8 before the 32-bit regex engine can work on it penalizes this approach. It would likely be faster for small files, but usually we judge algorithms by their asymptotic speed.
Converting the input pattern into an 8-bit raw regex imposes a sizable one-time cost, but then the grep program can mmap in the data file and let the 8-bit regex engine loose upon it, with zero overhead for converting the file.
Not to mention being much harder to do than porting a byte engine to an appropriately sized character data type. Certain processors in the 1990’s made you pay a speed penalty for working with int16s instead of int8 or int32, but not as much of a penalty as dealing with crazy variable-length-string-encoding-aware search algorithms in any word size.
UTF8 was intended for tools that didn’t care what the encoding of a null-terminated byte blob was. UTF8 is for transporting text from place to place with a function like strcpy, as you would when transporting text from your filesystem to a network peer, or from the return value of one syscall to the parameter of another. If your use cases involve just assignment, concatenation, and I/O, UTF-8 is appropriate (but so are null-terminated byte blobs, and those are cheaper). For other use cases UTF-8 is usually the wrong data type.
If you want to process bytes as a sequence of characters, you have to convert them first. For all but a few character encodings, such conversions are lossy, irreversible, irreproducible, and can fail with application-specific consequences, but that is OK because that sort of problem has to be expected when turning arbitrary sequences of bytes into sequences of characters. It’s the same problem as turning arbitrary sequences of bytes into floating-point numbers or dates. It’s part of the definition of the task, and it’s why such conversions must occur only at the outer edges of software systems, closest to where policy and error recovery are implemented.
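In code terms, that means the decode, and the policy for what happens when it fails, live in one boundary function, and everything inboard of it handles only text. A rough sketch (the names are mine, not from any particular tool):

    import sys

    def lines_from_outside(stream=None, encoding="utf-8"):
        """Boundary layer: bytes in, text out, error policy decided here."""
        stream = stream or sys.stdin.buffer
        for raw in stream:
            try:
                yield raw.decode(encoding)
            except UnicodeDecodeError:
                # This is the application's policy decision, not a library's.
                yield raw.decode(encoding, errors="replace")

    def count_words(lines):
        """Inboard code: pure text, never sees bytes or encodings."""
        return sum(len(line.split()) for line in lines)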
I use ‘grep -l’ to search binary files for byte sequences almost as often as I use it to search text files for character sequences. Fortunately grep makes it easy to do one or the other explicitly (although during its development grep’s default behavior changed, which caused much inconvenience at the time). Less popular (and therefore less-well debugged) tools force you to mess with environment variables, or don’t work at all because they’re hardcoded to read only UTF-8 input and have broken error handling.
@Michael Deutschmann:
I believe this would depend on several variables, including the pattern to be grepped for.
Sure, but by how much?
I was thinking the opposite. (Cache effects aside) an engine to convert UTF-8 to 32 bit words will be more efficient if you give it lots of data to convert.
And perhaps a sizable on-going cost, depending on the actual pattern. In some cases, you will now have 4 decision points for the engine, where before you only had one.
While it’s accurate to say that using mmap gives you efficiency without adding complexity, it’s not necessarily accurate to say that you can’t achieve similar efficiency as mmap at the cost of added complexity. And then there’s always the possibility of the regex engine itself converting UTF-8 to 32 bits on demand for itself — after all, what code is in a better position to know when it needs data and when it is finished with particular data?
“Converting the input pattern into an 8-bit raw regex imposes a sizable one-time cost,”
“sizable”, indeed. Every single “.” [you could maybe get by with ignoring it for “.+” and “.*” if you don’t care about passing invalid data] would have to be translated to “(?:[\x00-\x7f]|[\xc0-\xdf][\x80-\xbf]|[\xe0-\xef][\x80-\xbf][\x80-\xbf]|[\xf0-\xf4][\x80-\xbf][\x80-\xbf][\x80-\xbf])”, and any character class [even in + and * contexts] that involves (sometimes very extensive and scattered, [[:alpha:]] for instance) ranges of non-ASCII characters is even hairier.
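For the record, that expansion does work if you hand it to a byte-oriented engine; here is a sketch using Python’s bytes regexes as the stand-in 8-bit engine (with {n} repetition for brevity):

    import re
    UTF8_DOT = rb"(?:[\x00-\x7f]|[\xc0-\xdf][\x80-\xbf]|[\xe0-\xef][\x80-\xbf]{2}|[\xf0-\xf4][\x80-\xbf]{3})"
    pat = re.compile(rb"a" + UTF8_DOT + rb"b")
    print(bool(pat.search("aáb".encode("utf-8"))))   # True: the two-byte á counts as one "."
    print(bool(pat.search("a😀b".encode("utf-8"))))   # True: the four-byte emoji counts as one "."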
Let’s not forget, though, that a UTF-32 regex engine has its own penalties. For one thing, you can’t reasonably represent character classes as bitmaps anymore.
You’d probably be better off implementing certain operations (like “.” and non-ASCII character classes) in a way that can directly read UTF-8 data than doing either your proposed solution _or_ a full-blown UTF-32 engine.
Converting a Unicode regex to an 8-bit regex expression for a human to read would indeed be hell, but I’m imagining the converter would be written together with the 8-bit regex implementation, and directly create a “compiled” 8-bit regex.
Also, rather than police correct encoding in the intermediate regex itself, it would be simpler to write a loose regex that can match invalid coding, and then filter out invalid candidate matches afterwards. Under that approach, “?” becomes just “[^\x80-\xbf][\x80-\xbf]*”. (Note that a perusal of Single Unix seems to indicate that there is no such construct as \x escapes within [] — but you know what I mean, and again, the converter will not be talking to the 8-bit engine via regcomp()).
(I hope this blog doesn’t mangle the above paragraph…)
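Prototyping the “match loosely, filter afterwards” split is equally easy, again with Python bytes regexes standing in for the 8-bit engine and a strict decode as the post-filter:

    import re
    LOOSE_ANY = rb"[^\x80-\xbf][\x80-\xbf]*"        # one lead byte plus its continuations
    pat = re.compile(rb"a" + LOOSE_ANY + rb"b")
    for data in (b"a\xc3\xa1b", b"a\xc0\xa7b"):     # valid two-byte á vs. an overlong form
        m = pat.search(data)
        if m:
            try:
                m.group(0).decode("utf-8")          # post-filter rejects invalid candidates
                print(data, "real match")
            except UnicodeDecodeError:
                print(data, "rejected by the post-filter")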
But this digresses from my original point, which is that this optimization is only possible because of the “security in spite of narrowbigots” ban on overlongs.
Empirically, the RE2 regular expression engine does consistently well on benchmarks, and takes the most obvious approach to dealing with UTF-8: turn those Unicode character classes into big, hairy NFAs, then just be really good at matching big, hairy NFAs efficiently. Lots of juicy technical details here:
http://swtch.com/~rsc/regexp/regexp3.html
From the linked page: “Character classes are represented not as a simple list of ranges or as a bitmap but as a balanced binary tree of ranges. This representation is more complex than a simple list but crucial for handling large Unicode character classes.”
This does not imply the thing you just claimed, not even a little bit.
Oops… that was talking about an intermediate form, and I realized my mistake once I read further down. Sorry for the confusion.
End-to-end as an ideal in network design has proven to be impractical in real usage. The best you can hope for is a situation where, to paraphrase Trotsky, the endpoints may not be interested in the middle, but the middle is interested in the endpoints.
Providing optimized service, providing degraded service (for example to collect rents to restore service to normal levels), and surveillance are all reasons for the network to take an active interest in what its endpoints are doing. Absent stringent privacy and service-neutrality regulations, this will continue.
> End-to-end as an ideal in network design has proven to be impractical in real usage. The best you can hope for is a situation where, to paraphrase Trotsky, the endpoints may not be interested in the middle, but the middle is interested in the endpoints.
The solution is end-to-end encryption at the packet level: reliable encrypted transport on top of unreliable encrypted transport on top of unreliable unencrypted transport, rather than reliable encrypted transport on top of reliable unencrypted transport on top of unreliable unencrypted transport (that is, rather than SSL on top of TCP). But, as experience has proven, end-to-end encryption does not work with unique true names, which is what DNS was designed to support. We need Zooko’s triangle in place of DNS, rather than on top of DNS, so that you find the network address of an entity from the hash of the rule that its public key satisfies.
In other words, for the endpoints to enforce the end-to-end ideal, a whole new infrastructure is needed: one built on top of UDP and in place of TCP, DNS, and the CAs.
James A. Donald,
A plausible future:
Federal wiretap regulations require network providers to give law enforcement the ability to intercept any communications. Federal copyright law makes you an accomplice in grand theft for manufacturing, using, or trafficking in any measure designed to circumvent RIAA/MPAA copyright monitoring infrastructure. Encrypted data is trivially detectable due to its high entropy. Anyone caught sending large amounts of random data that isn’t readily decryptable by the authorities is to be considered suspicious and flagged as a possible terrorist, pirate, or both. The ToS for your ISP contain fine-print clauses under which you risk forfeiting your right to net access if they sniff too many undecryptable encrypted packets from you.
The network is evolving in a direction of being interested in your communications, and I have no doubt it will punish you for trying to hide them. It is a creature of the military-industrial complex, after all.
> Encrypted data is trivially detectable due to its high entropy. Anyone caught sending large amounts of random data that isn’t readily decryptable by the authorities is to be considered suspicious and flagged as a possible terrorist, pirate, or both.
If prayers to Allah are permitted, I can convert high entropy data to plausible prayers to Allah, and back again.
> If prayers to Allah are permitted, I can convert high entropy data to plausible prayers to Allah, and back again
For higher bandwidth, I can steganograph encrypted packets into video camera noise.
If random-looking data is outlawed, the relevant algorithm is to construct a plausible model of non-random data, take a stream of genuine non-random data, adjust random data to fit the model using arithmetic compression in reverse (arithmetic decompression), and mingle the derandomized model data with the real non-random data.
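A toy version of that idea, skipping the arithmetic-coding machinery and just spending random bits to pick words from an invented list (the invertibility is the point):

    # Hypothetical wordlist; 8 entries so each word carries exactly 3 bits.
    WORDS = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]

    def derandomize(data: bytes) -> str:
        bits = "".join(f"{b:08b}" for b in data)
        return " ".join(WORDS[int(bits[i:i+3].ljust(3, "0"), 2)]
                        for i in range(0, len(bits), 3))

    def rerandomize(text: str) -> bytes:
        bits = "".join(f"{WORDS.index(w):03b}" for w in text.split())
        bits = bits[:len(bits) - len(bits) % 8]
        return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

    payload = b"\x8f\x42\xd9"             # stands in for ciphertext
    cover = derandomize(payload)          # innocuous-looking word salad
    assert rerandomize(cover) == payload  # and it round-trips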
> Encrypted data is trivially detectable due to its high entropy.
No, it isn’t, because compressed data also has high entropy.
@Jeff: Internet banking and internet shopping need encryption (though probably not a large amount of data).
@Jeff Read
“Federal wiretap regulations require network providers to give law enforcement the ability to intercept any communications.”
For all purposes, they already have.
@Jeff Read
“Encrypted data is trivially detectable due to its high entropy.”
Even “Little Brother” gives you all the parts you need to defeat that kind of nonsense. And that is just a novel for high school teenagers.
http://craphound.com/littlebrother/download/
Compressed data has high entropy, but decompressed data does not (if it did, why bother trying to compress it?). For that matter, even compressed data usually has some structure that can be quickly tested. A side-effect of anti-encryption legislation may be that new compression algorithms are regulated to the point of being effectively illegal.
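Both halves of that claim are easy to check (a rough sketch; the sample text is arbitrary):

    import math, os, zlib
    from collections import Counter

    def entropy_bits_per_byte(data: bytes) -> float:
        counts = Counter(data)
        return -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())

    text = b"the quick brown fox jumps over the lazy dog " * 200
    packed = zlib.compress(text)
    print(round(entropy_bits_per_byte(text), 2))              # low: plain text
    print(round(entropy_bits_per_byte(packed), 2))            # high, like encrypted data
    print(round(entropy_bits_per_byte(os.urandom(8192)), 2))  # high: random/encrypted
    print(packed[:2].hex())   # '789c': the zlib header structure a censor could test for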
Internet banking and shopping use SSL. SSL uses PKI with a few CAs signing certs. Legislation could require ephemeral key persistence and retrieval infrastructure as a condition of getting a cert, and any other use of encryption could be illegal.
A more likely side-effect of anti-encryption legislation is that legal digital communications will be priced well above their utility, so nobody will bother.
The other side of the “encrypt everything” coin is that your ISP and your own network gear can’t help you (e.g. by the latter repacketizing TCP to work around poor software choices at the end point, or by the former accepting bribes from you to prioritize your favorite protocols over protocols other customers might be using).
> Legislation could require ephemeral key persistence and retrieval infrastructure as a condition of getting a cert, and any other use of encryption could be illegal.
The more things they make illegal, the less effective the laws.
Congratulations, you’ve created a darknet — a slow, buggy, unreliable network that only terrorists and paedophiles will use. Anyone else will steer clear of it for fear of being suspected of terrorism or paedophilia, because suspicion = guilt.
> Congratulations, you’ve created a darknet — a slow, buggy, unreliable network that only terrorists and paedophiles will use. Anyone else will steer clear of it for fear of being suspected of terrorism or paedophilia, because suspicion = guilt.
The more broadly the state goes after copyright violators, tax evaders, the politically incorrect, etc., declaring them all paedophiles and terrorists, the more people have to use a darknet, for fear of being suspected of thinking bad thoughts and therefore being deemed a terrorist paedophile.
If the state forbids random looking data, it is going after just about everyone, whereupon just about everyone has a compelling reason to use a darknet.
HTTP 2.0 is a binary protocol.
The winning Linux startup system — systemd — comes with its own syslogd replacement which records all syslog events in a binary format that must be viewed and manipulated with special tools (included). It’s also not like traditional Unix in other key ways — adopting a “one daemon to rule them all” design philosophy, but the general consensus is that this is a superior approach.
Binary + special tools is winning over “plain text”.
Oh, and it’s alarming how quickly my “plausible future” became reality. The NSA can be assumed to be logging all internet packets, sharing them with law enforcement, and flagging encrypted data as “suspicious” and storing it for future decryption.
>Binary + special tools is winning over “plain text”.
Only in your fantasies. HTTP 2.0 is not even at final submission yet, and is likely to die the same death as HTTP-NG for the same reasons. And systemd has been rejected by the overwhelmingly most widely deployed family of Linux distributions, including Ubuntu and Mint.
HTTP 2.0 is a spit-shine on SPDY, which already has wide acceptance.
As for systemd, Ubuntu refused to adopt it only for NIH reasons on Canonical’s part. Debian hasn’t adopted it yet to maintain compatibility with its Hurd and kFreeBSD spins, but is under overwhelming community pressure to do so. Every other major Linux distribution has adopted or is considering systemd.
Outside Canonical, the Linux community prefers systemd to any alternative. I’d say that’s a good definition of winning.
So what you’re saying is that after 20 years of field experience, the problem of getting runtime dependencies up and running in the right order, and bringing them back down again in an orderly fashion, has finally been understood well enough for a solution to be partially implemented in (more) C code, ready for deployment in only-a-few more years of debugging. Well, I suppose that’s a victory of a kind. Well done!
I would say that I prefer SysVInit over systemd but that would be misleading. I usually neuter or outright delete 90% of the code of a typical SysVInit installation. For example, vendors checking boxes on feature lists seem to believe that “bringing ISC BIND to an orderly stop” belongs on the “reboot the system” code path, which is a patently stupid decision. BIND is nowhere near reliable enough to be trusted on a critical code path at the best of times–it doesn’t get better when things are going so badly elsewhere that a system-level reboot is indicated. The correct thing to do is to not interact with BIND at all (except for SIGKILL) on the shutdown path unless it has unsaved internal state worth preserving–and if there is such state, report it as a bug against named, and kill the process unconditionally anyway.
systemd’s development process is the blind leading the blind. Their solution to the bad-code problem is to add even more code to clean up after various kinds of failure, and an explosion of configuration options to deal with all the special cases the defaults handle disastrously badly.
SysVInit’s development process is no better. insserv combines the unspecified-boot-order-is-unpredictable-boot-order problems of systemd with the inflexibility of a compilation step and specialized tools. The fact that the output of the compilation step is a Make-style text file doesn’t save it from the stupid.
If anything, recent developments in both systemd and SysVInit are forcing me to move any code I care about to somewhere that is defended from interference from either init system. I no longer care who wins that absurd race to fail.
An iron law of software ecosystems: community support is a much bigger determiner of the overall quality of a piece of software than technical superiority or correctness. The BSDs are better written operating systems than Linux, but their communities are small and they fell behind in hardware support, so Linux continues to steamroll over them.
And as to the question of whether the arbitrary, self-serving decisions Canonical makes are indicative of the direction of the Linux community as a whole — No. No, they aren’t. Canonical does not engage with the community and contribute to upstream the way companies like Red Hat and Intel do, and is in the process of divorcing itself from the community and making Ubuntu an island.
A few months ago the Debian project decided to call a special committee to vote on a new init system for Debian. Though the opinions of the committee seem to indicate a tied vote (between systemd and upstart), it looks like the tie-breaking vote (cast by committee chairman Bdale Garbee) will be for systemd, making systemd the init system for Debian going forward.
So much for systemd being overwhelmingly rejected by the Debian family.
Well here’s a surprise. Ubuntu will switch to systemd too, as will Mint, presumably.
So the thing which you said was rejected by the largest family of Linux distributions is now universally accepted as the way forward by that same family.
Do you see what’s happening here? Lennart Poettering has gone full Feyerabend on the Unix Way, challenging its universality, and in the process, created a better system.
Unix qua design philosophy is dead. It’s pushing up the daisies, joined the choir invisible, etc.