Structure Is Not Meaning

So, I announce ForgePlucker, and within a day I’ve got some guy from Y Combinator sneering at me for using regular expressions to parse HTML. Says it’s “crappy code”. The poor fool…he has fallen victim to a conceptual trap which I, fortunately, learned to avoid decades ago. I could spout a freshet of theory about it, but instead I’m just going to utter a maxim: Never confuse structure with meaning.

When you parse an XML or HTML document into a DOM, you’re not getting meaning. You’re only getting structure. The DOM structure. And by doing that, you add a kind of dependency to your code that it didn’t have before – you become vulnerable to changes in the document’s parse structure that don’t change the meaning and wouldn’t have bitten you if you had stuck to doing local pattern-matching.

That may be an acceptable or even necessary risk if the meaning you’re trying to extract is closely tied to the structure of the document. If what you’re looking at is pure high-church XML, that’s often the case. But HTML? It’s tag soup. The structure the DOM tries to capture may not be well-formed at all. Even if it is, some UI designer fiddling with presentation-level tags can easily mutate the structure in a way that points your beautiful theoretically-neat XPath queries off into la-la land.

Yes, sometimes lxml or BeautifulSoup is the right thing. I’ll probably use lxml when I get around to writing the parser for the XML state-dump format I’ve started to define. That will be appropriate, because the generators for that format can guarantee well-formedness and aren’t going to be changed casually by UI designers who innocently believe they’re doing no harm.

But that’s a rather different use case from HTML generated by someone else’s code. It still could be that DOM-based parsing is the smart thing, if the HTML is stable and the generator was designed by somebody fastidious. Best not to count on it, though. Here’s an example of why…

At one point in my code, I have to parse HTML that presents as a two-column table on a bug index page. First column is bug IDs, wrapped in anchor markup so they hotlink to bug detail pages. Second column is the bug summary. This table is embedded in huge amounts of gorp – page headers, page footers, a sidebar, various text annotations. All I care about is the bug IDs, though. I just want a list of them.

If I did what my unwise critic says is right, I’d take a parse tree of the entire page and then write a path query down to the table cell and the embedded link. Which would be cute, and I could do that, and I’m sure code like that got him an A in college – but it would leave me shit out of luck the second, say, anything in the page’s top-level structure changed.

So instead, I walk through the page looking for anything that looks like a hotlink wrapping literal text of the form #[0-9]*. Actually, that oversimplifies; what I really look for is a specific pattern of URL, in a hotlink that I know points into the site bugtracker.
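
In rough outline, the pattern-match looks something like this – a sketch only, not the actual ForgePlucker code, and the href shape shown here is a hypothetical stand-in for whatever link format the real tracker uses:

    import re

    # A hotlink into the bugtracker wrapping a numeric bug ID.  The URL
    # pattern below is hypothetical; each forge type gets its own regexp.
    BUG_LINK = re.compile(
        r'<a\s[^>]*href="[^"]*/bugs/\?func=detailitem&item_id=(\d+)"[^>]*>',
        re.IGNORECASE)

    def bug_ids(page_text):
        "Return the bug IDs hotlinked from a bug-index page, in page order."
        return [int(m.group(1)) for m in BUG_LINK.finditer(page_text)]

Note that nothing here cares where in the page the links occur; the match is purely local.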

Now, ask yourself: which is more likely to remain stable, the tag soup on the page or the URL structure of the forge site? Which is the more reliable cue to the data I actually want to extract?

Never confuse structure with meaning.

Sometimes, the brute-force hack-at-it-with-regexps approach really is best. It looks stupid, but it gets the job done.

CORRECTION: The snotty kid (or snotty-kid soundalike) isn’t from Y Combinator. That’s actually a relief – I know Paul Graham, I like Paul Graham, and the crew around him usually has more sense. Anyway, this post was supposed to be about the critic’s mistake, not the critic.

UPDATE: I’ve had more to say on this topic in The Pragmatics of Webscraping.

34 comments

  1. He’s not from Y Combinator, he’s from Hacker News, which makes a very big difference.

  2. > then write a path query down to the table cell and the embedded link.

    Straw man. You usually don’t want to do that: the path query, as you point out, doesn’t contain any semantic content. It’s a better idea to look for features of HTML that do tend to correspond to semantics: ID tags, or, if the page is simple enough, the name of the tag used.

    Still, regular expressions can do an admirable job of parsing simple things out of HTML: it’s merely a matter of scale.
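
    A sketch of that idea in Python with Beautiful Soup, assuming (hypothetically) that the bug table carries an id attribute named "bugtable" – no particular forge guarantees this:

        from bs4 import BeautifulSoup

        def bug_table(page_text):
            # Anchor on a semantic hook (an id attribute) rather than on the
            # page's full parse structure.  The id value is hypothetical.
            soup = BeautifulSoup(page_text, "html.parser")
            return soup.find(id="bugtable")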

  3. Are there developers who really need to be told that scraping public, styled websites is different from interacting with a controlled, documented web service? Really? Someone’s been reading too many semantic web white papers.

  4. very true. i once had to scrape an exchange’s page, and about four lines of perl sufficed to find the table, extract the stocks and numbers, and output a csv. god alone knows what it would have taken if i’d tried to do it via the DOM–i didn’t even try.

  5. >Someone’s been reading too many semantic web white papers.

    Heh. Yeah. That’s pretty much exactly what I thought when I read the guy’s comment.

  6. > I’d take a parse tree of the entire page and then write a path query down to the table cell and the embedded link.

    Try Beautiful Soup; you can tell it in a very pythonic way to find all the links inside the table cells that match your pattern. It’s designed to parse bad markup. And it’s just one python file.
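
    For instance, a minimal sketch of that approach (the href regexp is a placeholder for whatever the real tracker’s detail URLs look like):

        import re
        from bs4 import BeautifulSoup

        def bug_links(page_text):
            # Every link inside a table cell whose href matches a
            # bug-detail pattern; the pattern itself is hypothetical.
            soup = BeautifulSoup(page_text, "html.parser")
            detail = re.compile(r"item_id=\d+")
            return [a for td in soup.find_all("td")
                      for a in td.find_all("a", href=detail)]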

  7. I don’t think he was saying to use a full path query – that would almost always be a bad idea. He was saying that a path of the form “//a” will extract all a tags in the document, of the form “//td/a” will extract all a tags inside tds, etc. You give the path only as much structure as it needs to get to the nodes you’re interested in, and no more. This also makes the code robust against changes in things like attribute order, handles case-sensitivity issues for you, etc.

    1. >You give the path only as much structure as it needs to get to the nodes you’re interested in, and no more.

      That’s a better idea. But it won’t cope gracefully with some real cases. For example, the Gna! trackers display an attachment list that looks almighty like an HTML table. But it isn’t one – it’s a sort of mockery assembled from DIV and SPAN tags with CSS styling. I don’t think there’s any way to do a structured query of that mess that would remain stable under the semi-random tweaks pages tend to undergo when somebody decides they can be prettified with a few touches here and there.

      The fundamental tradeoff, which you can’t make go away, is this: regexps look at small pieces, structure-oriented queries at large ones. They’re both prone to failure as the page mutates, but they’re prone to failure in different ways. You pays your money and you takes your choice. Pretending that either method fits every situation is the only position that is certainly wrong.
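
    For reference, the minimal-XPath idea from comment 7, sketched with lxml; the "/bugs/" href fragment is a hypothetical stand-in for the real site’s URL scheme:

        from lxml import html

        def bug_ids(page_text):
            # "Only as much structure as it needs": any <a> inside any <td>,
            # filtered on an href fragment rather than on a full path from
            # the document root.
            tree = html.fromstring(page_text)
            links = tree.xpath('//td/a[contains(@href, "/bugs/")]')
            return [link.text_content().lstrip("#") for link in links]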

  8. the “snotty kid” seems to exchange letters with the Queen of England:
    http://www.jgc.org/blog/uploaded_images/buckingham-palace-722048.jpeg
    (and with the Prime Minister)
    :-)

    Back to the topic, his other comment has merit: you are doing a case-insensitive match two times, once with uppercase letters, once with lowercase letters. That certainly looks strange – back from the days when I myself was a snotty kid, I remember reading in The New Hacker’s Dictionary that this kind of coding is referred to as Voodoo Programming :-)

    1. >That certainly looks strange – back from the days when I myself was a snotty kid, I remember reading in The New Hacker’s Dictionary

      That may be a genuine bug, I’ll have to look at it. I’ve got higher things on my priority list, though.

  9. The easiest way to do this would be to use something similar to regular expressions, but better. In newLISP, you do xml-parse to create an s-expression tree, then ref-all will dig down into it and find all lists that match the list expression you give, such as ‘(a +), which would match (a (@ (href “http://esr.org”)) “ESR’s website”). Very powerful. Ref-all returns index references, which tie in with newLISP’s superior list indexing system and save cycles; the data itself is left alone instead of being consed into a new list.

    http://newlisper.blogspot.com/2006/10/lutzs-ninth.html
    http://www.newlisp.org/downloads/manual_frame.html

  10. Isn’t this perhaps an example of that “Cartesian” way of thinking I commented about in http://esr.ibiblio.org/?p=1350#comment-241723 ?

    Philosophically speaking, your regex-based approach is a kind of empirical-inductive thinking. The DOM-based approach might be similar to a “Cartesian” deductive kind of thinking, but I’m struggling to explain clearly and concisely why or how it is similar, so I’ll just chicken out by saying I simply suspect a similarity.

    Of course both methods have their uses in different circumstances; it’s not that the deductive method is inherently wrong – in some circumstances it’s very useful. My original point was that the deductive approach is just… too sexy, too tempting. We might too often be tempted to use it even when it’s a poor fit, because we WANT the world, or human nature, or in this case a website, to be built logically, and it’s just too hard to admit when that’s not the case. Thus it’s wise to try to somehow develop a resistance against the “Cartesian temptation”. By and large, this learned resistance = common sense, and the other way around too: most of common sense = this learned resistance.

  11. >The poor fool…he has fallen victim to a conceptual trap which I, fortunately, learned to avoid decades ago.

    Never go in against a Sicilian when death is on the line?

    1. >Never go in against a Sicilian when death is on the line?

      You know what makes that extra funny? I’m not Sicilian…but my swordmaster is, and guess what style he taught me?

  12. Well, that was fun, I learned something. I like the idea of coding for the reality of the cruft that is out there and avoiding the “structured” (not that many DBs are well structured) approach of SQL/XML/etc. to the underlying data.

    A fascinating discussion.

    thx

    t.

  13. I’m all with you on this, but there is more to it than what is described in the blog post. Structure can be used to convey meaning. This is what is done in classification systems and taxonomies. It requires an extraordinary amount of effort to keep the meaning built into the structure: you have to maintain your taxonomy and you have to be very careful about the classification you apply to the objects in the collection. This is only economical in very large and fairly static collections – libraries, archives and museums are the paramount examples. Making systems interoperate that have applied different structuring systems to similar materials is a very interesting exercise. It is actually the only area where I can envision that the ideas for the semantic web can successfully be applied.

    I recently read a report from the Royal Library in Sweden (where I worked some 10 years ago). They had actually made a live working system that applied the ideas of the semantic web to a collection of library catalogs of different origins. It is the first time ever that I have seen anyone succeed in this area. Everyone else has produced crap and then made an inflated report of how wonderful the research has been and how spending a huge pile more money would generate usable results.

    The reason it has worked in this case is that the quality of the structure in each catalog is very good and smart semantics can help map the various structures to each other. Trying to apply semantic web techniques to HTML soup will always fail, because you have garbage in, and you will get garbage out. HTML formatting contains very little real information, and computer systems still have a long way to go before they can deduce meaning from words in a context.

    (If you thought Dewey decimal was the only classification system libraries used, you are in for a shock. The Library of Congress has its own system (used by various university libraries throughout the world), the National Library of Medicine has yet another system. In Sweden all the public libraries use one system, which almost no university library uses, and in the university world there are some 10 different classification systems in use.)

    Transliteration is another fun subject. In Sweden the public libraries use one way to transliterate Cyrillic names. The university world has 2 different ways. None of them correspond to US transliterations. This is actually where the semantic web could be useful. Systems should be able to figure out that Piotr Ilyich Khaikovski is the same as Peter Tchaikovsky.

  14. >Systems should be able to figure out that Piotr Ilyich Khaikovski is the same as Peter Tchaikovsky.

    Should you ever get a job at Google I think you’ve just found a wonderful 20% time project. :)

  15. >The reason it has worked in this case is that the quality of the structure in each catalog is very good and smart semantics can help map the various structures to each other.

    This is pretty much the same reason I’m confident about being able to map dissimilar forge data models to each other.

  16. >…the generators for that format can guarantee well-formedness and aren’t going to be changed casually by UI designers who innocently believe they’re doing no harm.

    >But that’s a rather different use case from HTML generated by someone else’s code.

    I believe the late Jon Postel nailed this one in RFC 793:

    2.10. Robustness Principle

    TCP implementations will follow a general principle of robustness: be
    conservative in what you do, be liberal in what you accept from
    others.

    and I’m pretty sure you used very similar language in TAoUP.

    The ultimate goal is for the Forges to follow this principle themselves, and export to the standard formats you’re working toward. Meanwhile, if less-than-elegant glue is required to make it work, the key words are: make it work.

  17. I have a very uneasy relationship with the Robustness Principle. In too many places it has been taken as a license to do sloppy work, since the peer will deal with it no matter what it looks like. Mail User Agents and HTML are paramount examples. The longer the standards are in operation, the worse it gets. My company has a collection of about 3000 emails that break standards, all doing so in different ways. This is because nobody is being a prick and rejecting the faulty ones. In an established standard, it takes the market leader or a cabal of smaller players to rectify the situation. In a new standard you must require demonstration of compliance on a regular basis (and it must be possible to extend the reference implementation when dark corners are discovered). Only then can you build systems that are liberal in what they accept without going down the hole of having to deal with crap all of the time.

    1. >I have a very uneasy relationship with the Robustness Principle. In too many places it has been taken as a license to do sloppy work, since the peer will deal with it no matter what it looks like. Mail User Agents and HTML are paramount examples.

      I feel the same pain. But I am fairly sure *not* trying to observe it would make matters worse. Imagine an alternate universe where the Robustness Principle isn’t part of the lore; there would be sloppy work done in both universes, but only one would frequently exhibit the capacity to cope with the resulting mess.

  18. Well, I’ve certainly done plenty of Web scraping by using regexes myself and I agree with your approach, although I might actually use a SAX parser, which is nothing more than an event-driven regex search itself, just for the sake of convenience. You can have methods that launch on the start of a tag and then use some nested if…then…else statements (or a case statement if you’re using a language that supports them) that use regexes to see if the tag you’ve encountered has the data or attributes you want. When I look at the bug tracker list view in Savane, for example, some of the data, like the bug priority, is contained in class=”priora” (etc.) tags. This just screams for using a SAX parser, IMHO.

  19. A couple of minor additional points on SAX parsers: Unlike DOM, they don’t require you to read the whole document into memory (which creates HUGE data structures — think of all the nodes in a typical DOM tree, now imagine how many more a page with a bunch of tables on it has!), they’re lightweight, and, the most important point: xml.sax is included in the box with Python!
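
    A minimal sketch of the event-driven approach described in the two comments above, using Python’s stdlib html.parser (SAX-style callbacks, but more tolerant of tag soup than xml.sax). The class name “priora” is the Savane example mentioned above; everything else here is a hypothetical illustration, not ForgePlucker’s actual code:

        from html.parser import HTMLParser

        class PriorityScraper(HTMLParser):
            """Collect the class attribute of table cells whose class starts
            with "prior" (priora, priorb, ...), reacting to start tags as
            they stream past instead of building a DOM tree."""
            def __init__(self):
                super().__init__()
                self.hits = []

            def handle_starttag(self, tag, attrs):
                cls = dict(attrs).get("class") or ""
                if tag == "td" and cls.startswith("prior"):
                    self.hits.append(cls)

        scraper = PriorityScraper()
        scraper.feed("<td class='priora'>5 - Blocker</td>")   # toy input
        print(scraper.hits)                                   # ['priora']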

  20. Oof. I ran into exactly the same problem when writing a Python scraper for online shopping stores (amazon.com, buy.com etc).

    My theoretical head initially suggested an elegant DOM walk, but was quickly slapped back into reality by my hacker head when it realized what an ungodly pile of shit the HTML was. You chose very wisely!

  21. I know that this is waaay out of my league, but that guy is seriously brain damaged. Why is he complaining about code maintenance for software which is barely even in the prototype stage?

  22. FLMKane: on esr’s original project announcement blog posting, he states: “Notice that this is already a production-ready tool…”. I agree that the project is ‘young’, but the statement claims a certain level of readiness that is most definitely not “prototype”.

    See link for reference: http://esr.ibiblio.org/?p=1369

  23. >but the statement claims a certain level of readiness that is most definitely not “prototype”.

    Orthogonal categories. It’s an already-useful prototype that needs serious polishing.

  24. I was just trying to set the record straight regarding FLMKane’s comment. The term ‘prototype’ does not equate to what you’ve released in your project – that’s all my point was.
