It is not news to readers of this blog that I like to find common tactics and traps in programming that don’t have names and name them. I don’t only do this because it’s fun. When you have named a thing you give your brain permission to reason about it as a conceptual unit. Bad jargon obfuscates, map hiding territory; good jargon reveals, aiding reflection on and improvement of your practice.
In my last post I coined “shtoopid problem”. It went viral; every programmer has hit this, and it’s useful to have the term because you can attach to it recognition rules and tactics for escaping such traps. (And not only in programming; consider kafkatrapping).
Today’s invention is the term “rule-swarm attack”. It’s derived from the military term “swarm attack” and opposed to “deep reasoning”, “structural analysis” and “generative rules”. I’ll explain it and provide some case studies.
A rule-swarm attack is what you can sometimes do when you have some messy data-reduction or data-translation problem and deep reasoning can’t be applied effectively – either you don’t have a theory or the theory is too expensive to apply in the place you are working. So instead you look for patterns – cliches – in the data and apply a whole bunch of little, individually stupid rules that transform it towards what you want. You win when the result is observably good enough.
It’s curious that this strategy never had a general name before, because it’s actually pretty common. Peephole optimizers in compilers. Statistical language translation as Google does it. In AI they’re called “production systems” – widely used for tasks like automated medical diagnosis. It’s not principle-based – rule-swarms know nothing about meaning in any introspective sense, they’re just collections of if-this-then-do-thats applied recursively until you reach a state where no rules can fire.
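To make the shape of the thing concrete, here is a minimal sketch of the core mechanism in Python – a pile of individually dumb rewrite rules applied over and over until a whole pass changes nothing. The rules here are illustrative placeholders, not taken from any real tool:

    import re

    # Each rule is an (if-this, then-do-that) pair.  Individually stupid;
    # collectively, with enough of them, useful.  Placeholders only.
    RULES = [
        (re.compile(r"\bNULL\b"), "None"),
        (re.compile(r"->"), "."),
        (re.compile(r"(\w+)\+\+;"), r"\1 += 1"),
    ]

    def rule_swarm(text, rules=RULES, max_passes=100):
        """Keep applying every rule until no rule can fire any more."""
        for _ in range(max_passes):
            before = text
            for pattern, replacement in rules:
                text = pattern.sub(replacement, text)
            if text == before:          # fixpoint: no rule fired this pass
                return text
        return text                     # bail out; a looping rule set is a bug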
Yes, we are in the territory of Searle’s “Chinese Room” thought experiment. I’m just going to nod in the direction of the philosophical issues, because a dive into the meaning of meaning isn’t what I want to do in this post. Today I’m here to give practical engineering advice.
I’ve written before about my program doclifter, which lifts groff/troff markup to structural XML using a rule-swarm strategy. The thing to notice is that doclifter has to work that way because its inputs are only weakly structured tag soup. Deep-reasoning approaches cough and die on datasets like this; they can’t deal with irregularity gracefully enough to cope.
This is fundamentally the same reason natural-language translation by statistical coupling of text or speech utterances has beaten the living crap out of approaches that try to extract some kind of Chomskian deep structure from language A and then render it in language B as though you’re a compiler or transpiler back end. Natural language, it turns out, just doesn’t work that way – an insight which, alas, hasn’t yet exploded as many heads among theoretical linguists as it should have.
But: rule swarms can be a useful strategy even when your data is in some sense perfectly regular and well-formed. Transpiling computer languages is a good example. They’re not messy in the way natural languages are. The conventional, “principled” way to transpile is to analyze the code in the source language into an AST (abstract syntax tree) then generate code in the target language from the AST.
This is elegant in theory, and if it works at all you probably get a formally perfect look-ma-no-handwork translation. But in practice the deep-reasoning strategy has two serious problems:
1. Preserving comments is hard. This isn’t because the writers are lazy; good rules for where to attach comments to the right node in an AST are remarkably hard to formulate. Even when you pull that off, knowing where to interpolate them in the target-language output is more difficult still. You’d need some detailed theory of how each segment of the source AST corresponds to some unique segment of the target AST, and that’s really difficult when the language grammars are more than trivially different. So most code-walkers (the source-code analyzers at the front end of transpilers) don’t even try.
2. There is a C-to-Go source transpiler out there which shall remain nameless because, although it looks like it may do an excellent job in all other respects, it produces generated Go that is utterly horrible. Beyond obfuscated; unreadable, unmaintainable….
…and thus, unacceptable. No responsible infrastructure maintainer would or should tolerate this sort of thing. But it is, alas, a pretty common problem with the output of transpilers. Losing comments is not really acceptable either; often, especially in older code, they are repositories of hard-won knowledge that future maintainers will need. What, then, is one to do?
Language translation by rule-swarm attack, using purely textual transformations on the source file, can be an answer. It’s easy to preserve comments and program structure if you can pull this off at all. It only works if (a) the syntactic and semantic distance between source and target languages is relatively small, (b) you’re willing to hand-fix the places where textual rewriting rules can’t do a good enough job, and (c) you don’t care that theorists will scoff “that kludge can’t possibly work” at you.
Then again, sometimes there’s no competition for a rule-swarm attack to beat because everybody thinks principled AST-based translation would be too hard, rule-swarming would be too flaky, and nobody (except, er, me) actually tries either thing.
Case in point: Back in 2006 I wrote ctopy, a crude C-to-Python translator, because it seemed possible and nobody was offering me a better tool. It doesn’t parse C, it just bashes C code with regular-expression transformations until it gets to something that, as Douglas Adams might have put it, is “almost, but not completely unlike” idiomatic Python. As far as I’m aware there still isn’t a tool that does more complete translation. And while ctopy is in no way perfect, it is a good deal better than nothing.
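For flavor, a rule of roughly the kind ctopy applies might look like this – my reconstruction of the general approach, not ctopy’s actual code:

    import re

    # Not ctopy's actual rules, just the flavor of transformation such a
    # tool bashes the source with, one dumb pattern at a time.
    C_TO_PY = [
        # for (i = 0; i < n; i++) {   ->   for i in range(n):
        (re.compile(r"for\s*\(\s*(\w+)\s*=\s*0;\s*\1\s*<\s*(\w+);\s*\1\+\+\s*\)\s*\{"),
         r"for \1 in range(\2):"),
        # // line comments  ->  # line comments
        (re.compile(r"//(.*)$", re.M), r"#\1"),
        # drop trailing semicolons
        (re.compile(r";\s*$", re.M), ""),
    ]

    def roughly_pythonize(text):
        for pattern, replacement in C_TO_PY:
            text = pattern.sub(replacement, text)
        return text

    print(roughly_pythonize("for (i = 0; i < count; i++) {  // walk the list"))
    # for i in range(count):  # walk the list

Individually each of these is laughably shallow; the point is that a few hundred of them get you to something a human can finish by hand.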
Swarm-attack translators like doclifter and ctopy are best viewed as translator’s assistants; their role is to automate away the parts computers do well but humans do poorly, so humans can concentrate on the parts they do well and computers do poorly.
In Automatons, judgment amplifiers, and DSLs I made the case (about reposurgeon) that designing tools for judgment-amplifier workflow is sometimes a much better choice than trying for fully automatic conversion tools. Too often, when going for full automation, we sacrifice output quality. Like transpiling horrifyingly unreadable Go from clean C. Or getting crappy translations of repositories when reposurgeon assisting a human could have made good ones.
So, two days ago I shipped version 1.0 of pytogo, a crude Python-to-Go translator’s assistant. I wrote it because the NTPsec project has a plan to move its Python userspace tools to Go in order to get shut of some nasty issues in Python deployment (those of you who have been there will know what I mean when I say “library-directory hell”).
pytogo works way better than ctopy did; the semantic gap between Python and Go is much narrower than the gap between C and Python, because GC and maps as a first-class data type make that much difference. I’m field-qualifying it by using it to translate reposurgeon to Go, and there is already a report on the go-nuts list of someone other than me using it successfully.
You can read about the transformations it does here. More will be added in the future – in fact I notice that list is already slightly out of date and will fix it.
Besides preserving comments and structure, the big advantage of the rule-swarm approach pytogo uses is that you don’t have to have a global vision going in. You can discover targets of opportunity as you go. The corresponding disadvantage is that your discovery process can easily turn into a Zeno tarpit, spending ever-increasing effort on ever-decreasing returns.
Of course rule-swarm attacks can also fail by insufficiency. You might find out that the deep-structure fans are right about a particular problem domain, that rule swarms are bound to be either ineffective or excessively prone to unconstrainable false positives. It’s hard to predict this in advance; about all you can do is exploit the low cost of starting a rule-swarm experiment, notice when it’s failing, and stop.
Oh, and you will find that a lot of your effort goes into avoiding false matches. Having a regression-test suite, and running it frequently so you get fast notification when an ambitious new rule craps on your carpet, is really important. Start building it from day one, because you will come to regret it if you don’t.
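Concretely, the harness doesn’t need to be fancy. Something on the order of the following works, assuming a hypothetical layout of paired sample files – foo.in beside foo.expected, where the expected file holds output you froze the last time the rules produced something good:

    import glob
    import os

    def run_regressions(translate, testdir="tests"):
        """Run the rule swarm over every tests/*.in and diff against the
        frozen tests/*.expected; any mismatch means a new rule just
        misbehaved somewhere."""
        failures = []
        for infile in sorted(glob.glob(os.path.join(testdir, "*.in"))):
            expected_file = infile[:-len(".in")] + ".expected"
            with open(infile) as f, open(expected_file) as g:
                if translate(f.read()) != g.read():
                    failures.append(infile)
        return failures

    if __name__ == "__main__":
        from mytranslator import translate   # hypothetical: whatever swarm you're testing
        bad = run_regressions(translate)
        print("FAIL: " + " ".join(bad) if bad else "OK")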
And now Eric reaches for the sweeping theoretical summation… maybe? I think the real lesson here is about methodological prejudices. Western culture has a tendency one might call “a-priorism” that goes clear back to the ancient Greeks, an over-fondness for theory-generated methods as opposed to just plunging your head and hands into the data and seeing where that takes you. It’s a telling point that the phrase “reasoning from first principles” has a moralistic ring to it, vaguely tying starting from fixed premises to personal honor and integrity.
Because of this, the most difficult thing about rule-swarm attacks may be allowing oneself to notice that they’re effective. Natural-language translation was stunted by this for decades. Let’s learn how not to repeat that mistake, eh?
“Typo” in the article:
Brain faster than fingers spots:
>>>Bad jargon obfuscates. map hiding territory; good jargon reveals, aiding reflection on and and improvement of your practice.
>>>their role is to automate way the parts computers do well but humans do poorly so humans can concentrate on the parts they do well and humans do poorly.
“rule-swarm attack” – sounds much too much like a black-hat hacker’s latest nasty behavior. How about “rule-swarm solver” or “rule-swarm engine” (or something or other) instead?
Then again, isn’t what you’re talking about really just a ‘wicked’ problem solver? Or maybe a ‘partial wicked problem’ solver? https://en.wikipedia.org/wiki/Wicked_problem
Given that Google figured this out ten or so years ago, as part of their quest to collect and sell detailed intel on anyone with an Android phone or Chrome browser, I’d say “attack” is the right word.
As much as I love to hate on Google, I found no evidence from the inside that they are selling or purposefully exporting detailed data on their users. They seem to have fairly robust (if imperfect) defenses against that.
Rulebanging.
But this wouldn’t work if you were, for example going from a procedural language to a functional language, right?
> case (about reposurgeon) that designingn
> Western culture has a tendencu one might call “a-priorism”
Tpyos.
“Yes, we are in the territory of Searle’s “Chinese Room” thought experiment. I’m just going to nod in the direction of the philosophical issues, because a dive into the meaning of meaning isn’t what I want to do in this post. ”
Don’t be a tease :-P
>Don’t be a tease :-P
Oh come on, Trent. You can figure that one out. What is the algorithm the man in the room is applying but a rule swarm? It is exactly Searle’s premise that the man in the room has no deep-reasoning knowledge about Chinese.
But his “thought experiment” was actually flim-flam, and I remain astonished that anyone thought it was interesting for more than five minutes. Can you spot the fast one Searle pulled?
Daniel Dennett, in his book Thinking Tools, mentions that his philosophy students quickly stop being unduly impressed after he teaches them the workings of a simple register machine, Rodrego (An early draft of his Rodrego book chapter can be found here (PDF)). So maybe the reason you’re astonished is that you can’t un-learn what you know about computers?
> Can you spot the fast one Searle pulled?
Assuming that humans aren’t doing exactly the same thing, of course.
I would say it’s assuming that’s all there is to it. As the Churchlands pointed out long ago, the equivalent in our brain of the chinese room would only be one step (probably the base) in the process of meaning.
Depends. Searle’s argument was that “syntax is not sufficient for semantics”. Uninterpreted symbols carry syntactic, but no semantic information. Searle’s argument is that symbols are not an intrinsic feature of the physical world. It is observer-relative, thus computation or running algorithms are observer-relative and not intrinsic features of the physical world.
“We might discover in nature objects which had the same sort of shape as chairs and which could therefore be used as chairs; but we could not discover objects in nature which were functioning as chairs, except relative to some agents who regarded them or used them as chairs.”
“We could no doubt discover a pattern of events in my brain that was isomorphic to the implementation of the vi program on this computer. But to say that something is functioning as a computational process is to say something more than that a pattern of physical events is occurring. It requires the assignment of a computational interpretation by some agent.”
“For any program there is some sufficiently complex object such that there is some description of the object under which it is implementing the program. Thus for example the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements which is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar then if it is a big enough wall it is implementing any program, including any program implemented in the brain.”
The argument is plain simply that computational states are not discovered as an intrinsic feature of physics, they are assigned to the physics by agents.
My far simpler argument in the same general direction is that you cannot talk about computation without referring to features and bugs, because computation where any behavior and output is acceptable, isn’t computation in any of the useful senses of the word. And features and bugs imply intent.
So I suppose we can get away with calling the brain a computer if we refer to the quasi-intent generated by evolution. Another philosopher, Ruth Millikan helps in this: whenever things are selected for and copied, from genes to books to slang words or social institutions, the feature they are selected and copied for is their “proper function”. Not quite an intent, but still something like a function or telos. So we can talk about features and bugs in our brain from an evolutionary viewpoint. We still have the problem that whatever features evolution judged useful is not necessarily what we would – think sugar cravings.
In other words, Millikan would tell Searle computation might imply something like an intent or at least an inherent function and thus may not be an inherent part of the physical world in general, but it is part of the biological world, the living subset of the physical world, because in the biological world there is selection and copying.
> Thus for example the wall behind my back is right now implementing the Wordstar program
No, it isn’t, because the Wordstar program has input-output relations with the rest of the world that the wall behind your back doesn’t.
This is a key point that Searle always overlooks in his arguments. He is technically correct that syntax by itself is not sufficient for semantics. But that does not mean you just get to make up the semantics. You have to actually go look at what the semantics are, which does indeed involve looking at more than the syntax of the program itself.
I think Searle actually addressed that? That having the requisite causal structure which the wall does not have is a necessary but not sufficient condition, an agent still has to determine if the semantics are okay.
Rephrasing his arguments in my words: features and bugs matter, computation is that kind of thing that when it is sufficiently bad, it doesn’t really count as computation.
So again in my words: take a program whose job is to check the correctness of input, such as a spell checker. There is some Unix tradition that when the input is correct, it is okay for it to give no output, just exit silently, and talk only when something is bad, right? So as long as you feed only grammatically correct inputs into it, you don’t notice that it was replaced by a null program that does nothing. Or suppose the spell checker has some tolerance, 0.01% of words misspelled is okay, and under this limit it exits silently. And this limit could be arbitrarily high. 99.999999%… how do you determine it is still a spell checker and not a null program? This is a version of Kripke’s quaddition-function argument…
(At any rate I am not a hard-Searleist but a soft-Searleist; I think Searle is going a bit too far with talking about agents and intent. I prefer Millikan’s model: any mechanism of selection and copying evolves proper functions (selects for them), and that is sufficient for something like a telos.)
> I think Searle actually addressed that?
I don’t see how. Searle basically claims that “being a computer program” *means* “having a particular syntax”, and further believes that any object at all–the wall behind you, for example–can be interpreted as having any syntax you like, if you’re creative enough in your interpretation.
The first claim is obviously false: computer programs run on computers and have input-output relations with the outside world. Disembodied syntax does not. The syntax is part of the program, but it’s not all of the program. (Sometimes this error shows up as him saying something is a “digital computer” when it obviously doesn’t have the input-output relations that a digital computer does; what he really means is “I’ve assumed that I can construct a gerrymandered description of this thing under which it has a particular syntax”.)
The second claim might possibly be true (though I would sure like to see a lot more backing it up than I’ve ever seen Searle give), but it’s irrelevant to the discussion.
> So as long as you feed only grammatically correct inputs into it, you don’t notice that it was replaced by a null program that does nothing.
Yes, but as soon as you feed a known grammatically incorrect input into it, you’ll see that it’s not the same program. So there’s an obvious test for the correct semantics, which, at least for any reasonable person, also serves as an obvious test for whether the internals of the program are what you think they are.
But Searle refuses to accept such tests, at least not for things he cares about like consciousness. That’s the whole point of his Chinese Room argument: that he refuses to accept that an obvious test for whether someone understands Chinese (i.e., has the correct semantics for a Chinese understander) actually tests for the internals being what they should be for a Chinese understander.
Further thoughts. I know there is something annoying about saying “X is not an inherent property of the physical world but observer-relative”, because, of course, is the observer not a part of the physical world? What the heck is it made of?
So there is this frustrating problem that the concept of the soul as some sort of a ghost stubbornly refuses to die, and lives on under names like observer, agent, subject, mind experiencing qualia and whatever else. Could someone finally shoot it dead and bury it?
But that is hard. Hard because the world consists not only of matter but also of information, which is really a different kind of thing. Speaking “four”, pixels on the screen in the shape of “4”, “IV” written on paper, all carry the same meaning, information, yet there is nothing in common in their physical properties. The meaning cannot be derived from their physical properties. It is based on convention. Mutual agreement.
So I communicate this number to you by any of these means and if you understood a meaning, I basically caused a state change in your mind/brain via an action whose physical properties do not matter at all. Isn’t it weird as fsck? An action whose physical properties can be just about anything? We can just agree every time I shrug my left shoulder I mean “four” and it works. How is that not really strange?
So the “observer”, really, a meaning-understander, information-understander, maybe information-processor is something really unusual. I am pretty sure it is not a ghost, but don’t know what it is.
Computers do not really help solving this dilemma because they are made by the brain/mind and are basically tools of it. Brain-augmentation tools. The only information I know of that is not so is the DNA. And perhaps therein lies the solution. DNA is getting processed and not by an agent or observer. Is it being “understood”? Probably a pointless question. Does it have a “meaning”? Probably a pointless question.
So maybe there is a clue there. But I don’t know what it is. @ESR, want to take a shot?
>Could someone finally shoot it dead and bury it?
It doesn’t die because Descartes was right; substance dualism is in fact true, and provably so.
[citation needed]
Shocking, bald assertion is bald.
If you want me to explain, ask. Don’t passive-aggress. It makes you look like a close-minded activist.
I apologize.
I’d still like you to elaborate on that. I’m not aware of any proofs of the existence of the soul, and I honestly don’t see how one could even go about proving that.
Extremely civil response, kudos. (As if you should care about my kudos.)
The mind has the wrong properties to be a physical substance.
Physics is objective, both ontologically and epistemically. It does not go away if you stop believing in it; any observer may learn about it.
The mind is objective neither ontologically nor epistemically. Your thoughts are exactly those aspects of existence that go away if you stop believing in them. No observer except the self can see a thought; not only that, but the observer in question cannot be wrong about the thoughts, by sheer law of identity. Your thoughts are what you think they are.
As an example, if you think you see a blue cube, which is in fact an orange, you are wrong about the physical reality. You cannot be wrong about believing that the quale in question is blue and cubic. To suppose otherwise (say, a black octagon) is to suppose you’re thinking something other than what you’re thinking.
Thus Descartes was right that there exists an immutable epistemic base to the world. The lowest-level concrete properties of your thoughts are 100% reliable. (Though not because of some literal deity’s ongoing effort.) Second, Descartes was right that there exists a division between mind-stuff and body-stuff.
Descartes also proposed an organ that was made of both mind-stuff and body-stuff, which links the two. He supposed (based on nothing) that it was the pineal gland. Cartesian dualism is still naturalistic. There most likely exists a metaphorical pineal gland. E.g. Roger Penrose supposes it is in the microtubules. (Based on very little.)
There’s also a constellation of incidental hints that mind isn’t physics. If math can model a brain, then a brain is no more conscious than a ball rolling down a ramp. Both are simply mathematical functions of time, which would mean consciousness exists – we can’t be wrong about our thoughts – yet has no function.
Consciousness is very expensive in terms of neurons. This suggests it is highly adaptive, yet it cannot have any function in terms of physical computation.
It seems clear that p-zombies could exist. There exists no single action that a human can take that a robot cannot also take. Building a human robot should be merely a matter of getting the pattern of actions correct. This suggests again that consciousness should have no physical function, but is contrary to known experiment.
Consciousness is very good at comparison. The likeness or unlikeness of thoughts is trivial to see. Physical computers struggle mightily with comparison.
Except that there are no qualia (though there seem to be) and there are no p-zombies (or else we are all p-zombies).
How do you know that the human robot that “[gets] the pattern of actions correct” wouldn’t experience consciousness too? Maybe it’s just a natural side effect of a complex enough system of behaviors, rather than an expensive standalone feature that our body developed for a mysterious reason.
Circular reasoning. It depends on a special meaning of “meaning” in order to say that machines can’t have it. Whereas the strong AI position implicitly asserts that there isn’t anything special about “meaning”. It’s an attempt to define away the opposition’s argument.
I had a discussion about it with another slashdot commenter way back when, but I can’t find it now.
“The” fast one? I have to limit myself to only one error?
“A rule-swarm attack is what you can sometimes do when you have some messy data-reduction or data-translation problem and deep reasoning can’t be applied effectively – either you don’t have a theory or the theory is too expensive to apply in the place you are working. So instead you look for patterns – cliches – in the data and apply a whole bunch of little, individually stupid rules that transform it towards what you want. You win when the result is observably good enough.”
Venkatesh Rao calls this finding a “cheap trick” when exploring a VUCA (Volatile, Uncertain, Complex, Ambiguous) space. You find some little thing that works, and that gives you momentum to really figure out what you’re doing, and build a more sustainable model.
Related concept, not 100% the same thing. From his book on narrative rationality: https://www.amazon.com/Tempo-tactics-strategy-narrative-driven-decision-making/dp/0982703007
This one is also good for “how to navigate high-dimensional structures in a common sense way”: https://mailchi.mp/ribbonfarm/frankenstacks-and-rhizomes
This quote parallels esr’s sentiment:
“46/ You’ve heard of analysis paralysis, right? I have a similar concept I call aesthetic paralysis: the desire for elegance in behavior limiting your agency. Superficial beauty is expensive in a rhizome.”
And regarding this:
‘Western culture has a tendency one might call “a-priorism” that goes clear back to the ancient Greeks, an over-fondness for theory-generated methods as opposed to just plunging your head and hands into the data and seeing where that takes you.’
That would be Aristotle’s “theoria” – plunging your hands in would be “praxis” or “poiesis”, the other two modes of knowledge.
Two benefits I see to the abstract-tree approach that aren’t mentioned are what I’ll call confidence in correctness of output and modularity.
You come close to the first when you talk about eliminating false positives. But more generally, if you have a specification for your source and target languages (C99 etc.) and you know the mapping from a statement in code to its semantic meaning and vice versa, your code can explicitly identify which parts of the standard are supported and which are unsupported. As a programmer with some source I want to translate, it is easier for me to know a priori whether your tool will work for me if it advertises “memset() statements not yet implemented” than if it says “some issues with initialising arrays”. The former you know based on whether you wrote a rule implementing that part of the standard; the latter is an observation based on where your tests show failure of your regexes. It comes as no surprise that natural language and groff, your examples of languages where the abstract approach fails, lack formal specifications, and/or content written in them frequently violates the specification but the parser tolerates it anyway.
The second is that if you write regexes to translate between a specific source and a specific target language, your total set of rules represents neither language, but the transformation between them. Modern high-level userland tools (eg ffmpeg, ImageMagick) offer conversion capabilities among a multitude of formats. If you try to build such a tool on top of one-to-one transformations, the number of rulesets you have to maintain scales as N^2. On the other hand, if your tool works by reading a source program into an internal abstract representation and then generating a target program from that same representation, any additional language requires only an importer and exporter no matter how many other languages the tool speaks and thus the total number of rulesets scales with N.
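(A generic sketch of the hub-and-spoke shape being described – nothing from ffmpeg or ImageMagick, just the scaling argument in code: adding the (N+1)th format costs one importer and one exporter instead of N new pairwise rulesets.)

    # Shared internal representation (IR): each format registers an importer
    # into it and an exporter out of it.  Illustrative sketch only.
    importers = {}   # format name -> function(text) -> IR
    exporters = {}   # format name -> function(IR) -> text

    def convert(text, src, dst):
        ir = importers[src](text)
        return exporters[dst](ir)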
In principle you can make 1:1 transformations with a single natural output format, with no internal abstract, and have O(n) scaling using nothing but 1:1 tools between typical formats.
Obviously this is a terrible plan if you’re dealing with lossy transformations, like languages (natural or computer). But for images, you can easily find a commonly used lossless format to act as your intermediary, and save yourself the trouble of creating or understanding an abstract.
The problem with preserving comments when transliterating is not difficult to solve; just use combining parsers.
Or use generic meta-parser that can take EBNF description of language, and create a parser.
I think the problem with preserving comments is determining to which part of the AST they attach. Say you have a comment between two lines of code. Is it commenting the line before or the line after? A programmer reading the comment would know but no mechanical tool could.
In the vein of an assistant, it would easily be possible to attach a comment to two nodes in the AST with a note about the ambiguity.
The comment could easily apply to the space (state transition, implied concurrent event, etc) *between* the lines.
That said, comment-based markup languages generally have comment syntax that lets you say “this comment applies to the thing that {follows,precedes} it.” So blame the commenter.
I think Knuth had the right idea with tangle and weave: one language is embedded within another, and the human language is the more important. I did tangle/weave for x86 assembler out of pure self-preservation, and see comments as a form of signpost: “this syntactic construct implements this semantic”. K&R C and Go are good examples of that form of commentary.
I suggest that it is because “Oh, and you will find that a lot of your effort goes into avoiding false matches. Having a regression-test suite, and running it frequently so you get fast notification when an ambitious new rule craps on your carpet, is really important. Start building it from day one, because you will come to regret it if you don’t.”
that the word chosen for the “real lesson” is “prejudices.” Chosen for good cause, but chosen mistakenly.
” I think the real lesson here is about methodological prejudices. Western culture has a tendency one might call “a-priorism” that goes clear back to the ancient Greeks, an over-fondess for theory-generated methods as opposed to just plunging your head and hands into the data and seeing where that takes you.”
I suggest not prejudices but informed, experience-based conclusions about validating results lie behind the preference for what is here called “a-priorism.” Generally speaking the scientific method – which has been shown to work pretty well, so in that sense is validated – says start with theory, develop a hypothesis, test the hypothesis, and accept or reject based on the test; but start from theory. What is not falsifiable is not science, but if it predicts right, at least most of the time, it is still useful and works for many tasks. That a thread is often discarded does not invalidate predictive execution with good hardware.
See for example the current New York Times for a story on a disgraced scientific researcher: “Dr. Wansink’s lab was known for data dredging, or p-hacking, the process of running exhaustive analyses on data sets to tease out subtle signals that might otherwise be unremarkable. Critics say it is tantamount to casting a wide net and then creating a hypothesis to support whatever cherry-picked findings seem interesting — the opposite of the scientific method.”
The wide net mentioned might be analogized to a neural net. When results can be validated, as by prediction, the validated results do indeed justify the means. Few would think astrology scientifically sound – Mr. Heinlein’s remark: “find out how he feels about astrology”- but astrology is a valid predictor for early childhood success in preschool/kindergarten. The reason is of course that age at entry predicts success in kindergarten and here sun sign is another way to say age. Similarly “Snow’s endeavor to find the cause of the transmission of cholera caused him to unknowingly create a double-blind experiment.(Wikipedia) ”
Given that all time series are correlated, and given that we can now handle larger matrices than ever before (I vaguely remember a sign at the University of Chicago, back when we dropped off decks and picked up batch jobs, saying that if we catch you trying to invert a large matrix you will lose some privilege, because it doesn’t work and the results don’t mean anything – or words to that effect), these days a massive Markov-chain approach to a time series can very well tease out “subtle signals.” Better hardware has made massive swarm attacks possible, and so the time for massive swarm attacks has come. My point is that what gets teased out, big matrix by big matrix, can be justified by the fact that it works, but it does indeed require avoiding false matches.
This is so timely. I’ve spent the last two days dealing with a csv-collection to json config file transform that we are rule-swarming. I’m pointing my team to this post.
In your judgment, how much difference does it make that C to Python is a step up in the languages’ abstraction level whereas Python to Go is a (smaller) step down?
>In your judgment, how much difference does it make that C to Python is a step up in the languages’ abstraction level whereas Python to Go is a (smaller) step down?
I don’t think “abstraction level” is a useful way to talk about the differences, so that question seems ill-formed to me.
Let me re-phrase my question then. If you were to start writing a Python-to-C translator or a Go-to-Python translator, would you expect the project to be harder or easier than ctopy or pytogo, respectively? (Other things, like the quality of the translations, being equal.) What features of the languages would you expect to account for the asymmetries, if any?
Ahaaaa! I think you already answered my question in the pytogo man page. Reading through its list of supported transformations, I see that the translations are mostly very low-level, like replacing Python-“def” with Go-“func”, Python-“del” with Go-“delete”, and so forth.
Because each of these transformations is so close to a search-and-replace pass and so far from the ‘interesting’ grammatical transformations you would learn in a compiler class in college, they should be reversible quite easily. So the answer to my question is that translations in the opposite direction should be about equally hard because you use a rule-swarm. This is also why my earlier distinction between high-level and low-level languages was an unhelpful category.
I had failed to internalize just how low-level the transformations in your rule-swarm are, and just how little they need to know about any nontrivial grammatical information in the input and its language.
>I had failed to internalize just how low-level the transformations in your rule-swarm are, and just how little they need to know about any nontrivial grammatical information in the input and its language.
Startling, isn’t it?
It surprises even me, and I’ve done this sort of thing before.
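For the flavor of rule we’re talking about – my reconstruction of that level of transformation, not pytogo’s actual source – think of something like:

    import re

    # Reconstruction of the kind of near-textual rule under discussion;
    # not pytogo's actual code.
    PY_TO_GO = [
        (re.compile(r"^(\s*)def (\w+)\((.*)\):", re.M), r"\1func \2(\3) {"),
        (re.compile(r"^(\s*)del (\w+)\[(.+)\]", re.M),  r"\1delete(\2, \3)"),
        (re.compile(r"\bTrue\b"),  "true"),
        (re.compile(r"\bFalse\b"), "false"),
        (re.compile(r"\bNone\b"),  "nil"),
    ]

    def roughen_into_go(text):
        for pattern, replacement in PY_TO_GO:
            text = pattern.sub(replacement, text)
        return text        # the translator's assistant stops here; a human finishes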
I’ve made the argument a few times that Go is basically a next-gen scripting language, written by people who did not want to throw away 40-50x performance for convenience, so they got what they could without much penalty.
I get looked at weird. But after several years using it, I still see no reason to recant. Is it a bit harder to use Go than Python? Yes. Do I sometimes miss a Python construct? Yes. But as much as you’d hear people whine about Go’s error handling and lack of generics and static types, etc… in the end my line count is not that much greater than my line count in Python or Perl, it’s only slightly harder to write, and then once I do, it runs 20x faster and probably makes up for it entirely in easier maintenance and refactoring.
If it’s not a scripting language, which I can understand someone thinking, then it obliterates the entire reason I used to care about the distinction between “scripting” and “system” language by being Good Enough in both categories to not send me running for either extreme very often anymore. (Things that fit on one screen I’ll still script sometimes, but if it’s going to be more than a couple of screens, I use Go now.)
>If it’s not a scripting language, which I can understand someone thinking, then it obliterates the entire reason I used to care about the distinction between “scripting” and “system” language[s]
That’s right. It has automatic memory management.
You’re sort of right about Go being a scripting language, but what your remark really shows is that “scripting language” is an accidental proxy for “has automatic memory management” that can now be discarded.
Java has automatic memory management, yet no one ever called it a scripting language.
I don’t think the term has ever really been precise, but I believe it has primarily only applied to languages that, by and large, don’t have a compilation stage, but instead get interpreted as it runs. Java has one compilation stage, albeit to JVM bytecode, that normally gets interpreted or JIT compiled while the program runs (recent versions can now “Ahead of Time” compile bytecode to your native ISA and avoid JIT altogether).
>Java has automatic memory management, yet no one ever called it a scripting language.
You’re quite right. That’s why I used the term “accidental proxy”. The things that are called “scripting languages” are automatic-memory-management languages originally targeted at writing glue scripts under Linux.
The thing is, these escaped from that niche because AMM is too useful not to deploy for general-purpose programming. Java was purpose-written for that from the start, as was Go. What both languages illustrate is the historical contingency of the label “scripting language”.
I sense a social component as well: “scripting” seen as something easier, less serious, therefore kind of lower status than “real programming”.
I don’t really mind it much. The relative ease of “scripting” makes it possible to be written by people whose primary expertise is domain knowledge, hence cutting down the need for detailed specification. Like how 3D engines in videogames are developed by math-heavy “real programmers”, and then scripted by folks whose primary expertise is in how to make a game fun. It’s OK.
I keep recommending it to young people to major in something and minor in programming and then spend their career “scripting” to automate that something. Because if they don’t, someone else will and they may be automated out of a job then.
Does Bash do automatic memory management, or is it just that you’re spawning sub-shells that go away after doing their work?
These days I don’t really consider Python a “Scripting Language” because in my view a “scripting language” is used to glue other applications together and Python doesn’t do that as cheaply as it used to.
If I have to do something that is mostly gluing other stuff together I use Bash or Perl. Or both.
I still use Python, but these days it’s mostly doing stuff IN python, and using other binaries as little as possible.
>Does Bash do automatic memory management, or is it just that you’re spawning sub-shells that go away after doing their work?
It does. There is all kinds of malloc activity when you use shell variables.
>These days I don’t really consider Python a “Scripting Language” because in my view a “scripting language” is used to glue other applications together and Python doesn’t do that as cheaply as it used to.
It outgrew that niche. This is not exactly breaking news. :-)
To me, a definition of a “scripting language” is that it is interpreted, and can be used from an interactive shell as well as in script files. This allows the script to be worked out literally on a line-by-line basis. It’s one of the reasons why, when I work on Windows, I find the cmd.exe scripting language so frustrating. It has many constructs that work differently inside a batch file from how they work at a command prompt (such as for loop variables needing %% prefix instead of %).
It’s hard to imagine an interpreted language without AMM. And a compiled language with it would be less likely to work the same from an interactive shell as from a script stored in a file.
O/T: Can you change the WP template to not indent the right margin on replies?
Sidenote: p-hacking can be likened to shooting a large number of bullets at a range, then choosing the 5 with the tightest grouping as your accuracy… (not a very good analogy).
I did exactly what ESR is talking about doing data conversion from a very old and messy database into PostgreSQL. I had “proper” conversions for 98% of the data then pages of rules for the stuff that only a human could parse.
It seemed very crude at the time and part of me hoped no-one ever saw the code, but it was good enough to run in production for several years doing frequent conversions.
I do something similar at work with a couple tools – one task we do a lot of is formatting tables, and I made some Excel sheets to do a lot of the scut work for us. By the very nature of Excel, it’s a rule-swarm – pass after pass after pass with fairly simple tweaks (rename this, add up that total, delete those blank entries, etc.), which turns the ugly tables we start with into fairly clean end products. Few things in it are at all complex, it’s just repeated banging away at little rules to fix most of the problems(and flag a few hard-to-fix ones for human attention). And it works really well.
(Though it does occasionally produce some bizarre stories…)
I studied linguistics for several years and I would love to see a good natural language programmer debate a Chomskian linguist.
Pretty much everybody who has kids observes that kids learn to speak by statistical heuristics, making the kinds of mistakes that can be expected from statistical heuristics, like calling every mid-sized animal a dog or calling orange juice orange milk.
Daughter once referred to her shoes as “foot gloves” and to the soles of her feet as the “palms” of her feet.
Your daughter should learn German. Its word for “glove” is “Handschuh”, which is pronounced pretty much the same as English “hand shoe”, because that’s what it means.
And it makes more sense for “shoe” to be the primary noun and “glove” to be derived from it, because we wear shoes a lot more often than when we wear gloves/hand shoes.
I think I do have that methodological prejudice – I don’t like to work on stuff where the solution is likely to be inelegant as fsck. I would flat out refuse to translate any programming language to a target language that was not meant to be generated by code – so anything tree-like, such as LISP or XML, is OK. In a corporate environment, it is easier to call it not doable than to do an inelegant, imperfect job and have people chew you out for the results and have to tweak it forever. I think your case must be scratching your own itch: you really, really wanted to rewrite some code, it would be 200 hours by hand, and if an inelegant program can cut it down to 20, it’s worth it.
This isn’t really that Platonist legacy, this is more like managing expectations set by nontechnical people. It would predictably be a lot of headache and would not really make users happy, as they don’t really care about what language something is written in. But very often you just do something extremely simple like automatically sending an email notification when certain conditions are met and they are happier than kids in Disneyland because they used to spend too much time on watching those conditions – or kept forgetting to look them up.
One of the things I tend to include in a rule swarm is that, after applying all of your rules to an input item, if none of the rules matched, there should be an ELSE rule that flags/adds the item to a list of things requiring human intervention.
A translator probably ought to also convert the item into a comment in the output, showing what the original item was. In fact, any automated translator arguably should do that on every line, to show exactly where the translated code came from in the input language, allowing a human to review how faithful the translation was.
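A hedged sketch combining both suggestions – generic, not from any of the tools discussed above: an ELSE rule that collects items nothing matched, plus carrying the original along as a comment for human review.

    def process(items, rules, comment_prefix="# "):
        """Rule-swarm over input items.  Items no rule recognizes go on a
        list for human intervention (the ELSE rule); every output line
        carries the original as a comment so the translation can be audited."""
        out, needs_human = [], []
        for item in items:
            matched, result = False, item
            for pattern, replacement in rules:
                if pattern.search(result):
                    matched = True
                    result = pattern.sub(replacement, result)
            if not matched:
                needs_human.append(item)            # ELSE rule fired
            out.append(result + "  " + comment_prefix + "was: " + item)
        return out, needs_human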
That is a useful rule not only for rule-swarms but for any processing of human-generated input. I think one useful way to split the world of programming is between dealing with machine input, like gpsd does, which is supposed to be fairly regular, and human input, which can be wrong or unexpected in very innovative ways. Even if it was already processed by other software… when businesses build EDI, like sending each other electronic invoices, 1 out of 100 will not import. Babysitting is required. If the sending partner is cooperative, they will build more and more rules that will tell the user right at issuing the invoice that this or that data is required or should be different. So it can get down to 1 out of 5000. But human randomness is never really eliminable, and has to be fixed by other humans.
That’s a very good distinction, but I wouldn’t even say “machine” vs. “human” so much as “structured” vs. “free-form”. There are plenty of programs that output information without any thought of it being parsable by another program. We see this in the tendency of “art” people designing websites to look good to the human eye, using markup that focuses on looks rather than semantic tags that could be used by another machine.
To those of us who follow the Tao of Unix, generating output that can be easily parsed by another program is as automatic as breathing. But to most people, they want the output to look good to a Mark 1 Eyeball.
Could this rule be essentially a special case of Ken Thompson’s maxim: “When in doubt, use brute force”?
LOL – that’s my favorite comment so far.
Brute force works when you don’t have time to jack around.
Sounds like the software equivalent of Taleb’s green lumber fallacy: https://fs.blog/2016/11/green-lumber-fallacy/
And, your theoretical summation is eerily similar to Taleb’s philosophy and literature, you’d love his work.
>And, your theoretical summation is eerily similar to Taleb’s philosophy and literature, you’d love his work.
I’m familiar with it and do respect it a lot. Taleb knows my stuff, too – there’s a passage in Anti-Fragile about the Maya and the wheel that is nearly a quote from my road show.
I should clarify that I think I understood the a-priorism problem before Taleb wrote about it. Thank you, General Semantics.
I love Taleb’s “IYI” (Intellectual Yet Idiot), which seems to partake of the “Idiotarian” nature you describe in the Anti-Idiotarian Manifesto.
These capture well the Clever Kids who are good at manipulating words to create at least the illusion of intellectuality, but that skill is orthogonal to competence in other areas.
And he bangs the drum about “skin in the game”, one of the best ways to separate the pros from the poseurs.
On rule-swarm systems – a common pattern (subpattern?) for rule-swarm analysis I see and have recently used is the “freighter + tugboats” approach. For example: “PDFs can be lexed as a stream – except when they can’t.” Case in point are the “stream” + “endstream” tokens, each of which must start at the beginning of the line. (There are other cases, alas.) For an internal project, I created a BNF grammar for PDF using Marpa (my freighter), which had “tugboats” to work around the non-streaming lexical analysis portions, while the Marpa PDF lexer did most of the heavy lifting (making it practical to do as a background project). Works nicely for us (over 1000 PDFs analyzed so far).
On shtoopid problems – most of these for me are the “We’re going to need a bigger truth table” problems. Dusting off truth tables from my original electrical engineering background, I find that if I keep running into edge cases and just plain weirdnesses while coding my solution, my internal model’s truth table does not represent the whole problem. I don’t think I’ve had more than once or twice where I had 6 different variables that could affect the output, but 4-5 is pretty common for me on what I think you would class as “shtoopid” problems. (And a finite-state diagram helps to clarify large systems with many possible transitions.)
Sorry, didn’t read all the comments. There are metal toasters that can be used over a gas burner, or are campfire/camper friendly.
Spammers are pretty good at natural language processing these days.
That discussion was several months ago, no?
HTML Tidy is a rule-swarm converter from HTML to well-structured XML with HTML semantics. It’s full of rules, and one of its classic problems is that it loops on certain inputs, because some of those rules cooperate to keep transforming the document to longer and longer versions.
TagSoup (written by me) is completely classical, and is guaranteed never to loop on any input however ill-formed: it processes the source from beginning to end and applies a bunch of parser tables (at the tag level, not the character level) to do its work.