Here’s an amplification of my previous post, Structure Is Not Meaning. It’s an excerpt from the ForgePlucker HOWTO on writing code to web-scrape project data out of forge systems.
Your handler class’s job is to extract project data. If you are lucky, your target forge already has an export feature that will dump everything to you in clean XML or JSON; in that case, you have a fairly trivial exercise using BeautifulStoneSoup or the Python-library JSON parser and can skip the rest of this section.
Usually, however, you’re going to need to extract the data from the same pages that humans use. This is a problem, because these pages are cluttered with all kinds of presentation-level markup, headers, footers, sidebars, and site-navigation gorp — any of which is highly likely to mutate any time the UI gets tweaked.
Here are the tactics we use to try to stay out of trouble:
1. When you don’t see what you expect, use the framework’s self.error() call to abort with a message. And put in lots of expect checks; it’s better for a handler to break loudly and soon than to return bad data. Fixing the handler to track a page mutation won’t usually be hard once you know you need to – and knowing you need to is why we have regression tests.
2. Use peephole analysis with regexps (as opposed to HTML parsing of the whole page) as much as possible. Every time you get away with matching on strictly local patterns, like special URLs, you avoid a dependency on larger areas of page structure which can mutate.
3. Throw away as many irrelevant parts of the page as you can before attempting either regexp matching or HTML parsing. (The most mutation-prone parts of pages are headers, footers, and sidebars; that’s where the decorative elements and navigation stuff tend to cluster.) If you can identify fixed end strings for headers or fixed start strings for footers, use those to trim (and error out if they’re not there); that way you’ll be safe even if the headers and footers mutate. This is what the narrow() method in the framework code is for. (Tactics 1-3 are illustrated in the sketch after this list.)
4. Rely on forms. You can assume you’ll be logged in with authentication and permissions to modify project data, which means the forge will display forms for editing things like issue data and project-member permissions. Use the forms structure, as it is much less likely to be casually mutated than the page decorations.
5. When you must parse HTML, BeautifulSoup is available to handler classes. Use it, rather than hand-rolling a parser, unless you have to cope with markup so badly malformed that even BeautifulSoup chokes on it.
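Here is a sketch of how tactics 1 through 3 combine in a handler method. The framework really does provide self.error() and a narrow() helper, but the signatures, trim-marker strings, and URL pattern below are invented for illustration; they are not ForgePlucker’s actual API.

    import re

    # Hypothetical trim markers; a real handler would use whatever fixed
    # strings bound the target forge's header and footer.
    HEADER_END   = '<!-- end of site header -->'
    FOOTER_START = '<!-- begin site footer -->'

    class ExampleHandler:
        def error(self, msg):
            # Stand-in for the framework's self.error(): break loudly and soon.
            raise RuntimeError(msg)

        def narrow(self, page):
            # Tactic 3: discard header and footer before matching, and
            # error out if the trim markers have mutated away.
            if HEADER_END not in page or FOOTER_START not in page:
                self.error("trim markers missing - did the layout change?")
            return page.split(HEADER_END, 1)[1].split(FOOTER_START, 1)[0]

        def issue_ids(self, page):
            body = self.narrow(page)
            # Tactic 2: peephole-match a strictly local pattern (a special
            # URL) instead of depending on the surrounding DOM structure.
            ids = re.findall(r'href="/tracker/issue\?id=(\d+)"', body)
            # Tactic 1: returning bad data is worse than aborting.
            if not ids:
                self.error("expected at least one issue link, found none")
            return ids

    sample = HEADER_END + '<a href="/tracker/issue?id=42">#42</a>' + FOOTER_START
    print(ExampleHandler().issue_ids(sample))   # ['42']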
Actual field experience shows that throwing out portions of a page that are highly susceptible to mutation is a valuable tactic. Also, think about where in the site a page lives. Entry pages and other highly visible ones tend to get tweaked the most often, so the tradeoffs push you towards peephole methods and not relying on DOM structure. Deeper in the site, especially on pages that are heavily tabular and mostly consist of one big form, relying on DOM structure is less risky.
Hideous idea. Talk about making busy-work for yourself.
ESR says: I’ve noticed that, over time, spite and jealousy tend to be self-punishing.
Use peephole analysis with regexps (as opposed to HTML parsing of the whole page) as much as possible. Every time you get away with matching on strictly local patterns, like special URLs, you avoid a dependency on larger areas of page structure which can mutate.
This looks like a false dichotomy. As pointed out in the previous discussion, XPath, or even DOM walking, can also use only local structure.
>This looks like a false dichotomy.
It isn’t. Path queries for things like “all table elements” are all very well, but generally you have to know the number of the one you actually want before you can dive deeper – and hey presto, you’re back to a dependency on the upper levels of the DOM structure again!
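Concretely, here is a sketch of the failure mode (the page snippet and the index are made up): selecting “the third table” bakes the upper page structure into your handler, while anchoring on a local marker does not.

    from bs4 import BeautifulSoup

    page = ('<table>nav</table><table>ads</table>'
            '<table id="issues"><tr><td>bug #1</td></tr></table>')
    soup = BeautifulSoup(page, 'html.parser')

    # Positional query: breaks the moment a sidebar adds another table.
    fragile = soup.find_all('table')[2]

    # Local anchor: survives layout churn around it.
    robust = soup.find('table', id='issues')

    print(fragile.td.get_text(), robust.td.get_text())   # bug #1 bug #1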
You can’t parse [X]HTML with regex. Because HTML can’t be parsed by regex. Regex is not a tool that can be used to correctly parse HTML. As I have answered in HTML-and-regex questions here so many times before, the use of regex will not allow you to consume HTML. Regular expressions are a tool that is insufficiently sophisticated to understand the constructs employed by HTML. HTML is not a regular language and hence cannot be parsed by regular expressions. Regex queries are not equipped to break down HTML into its meaningful parts. so many times but it is not getting to me. Even enhanced irregular regular expressions as used by Perl are not up to the task of parsing HTML. You will never make me crack. HTML is a language of sufficient complexity that it cannot be parsed by regular expressions. Even Jon Skeet cannot parse HTML using regular expressions. Every time you attempt to parse HTML with regular expressions, the unholy child weeps the blood of virgins, and Russian hackers pwn your webapp. Parsing HTML with regex summons tainted souls into the realm of the living. HTML and regex go together like love, marriage, and ritual infanticide. The <center> cannot hold it is too late. The force of regex and HTML together in the same conceptual space will destroy your mind like so much watery putty. If you parse HTML with regex you are giving in to Them and their blasphemous ways which doom us all to inhuman toil for the One whose Name cannot be expressed in the Basic Multilingual Plane, he comes. HTML-plus-regexp will liquify the nerves of the sentient whilst you observe, your psyche withering in the onslaught of horror. Regex-based HTML parsers are the cancer that is killing StackOverflow it is too late it is too late we cannot be saved the transgression of a child ensures regex will consume all living tissue (except for HTML which it cannot, as previously prophesied) dear lord help us how can anyone survive this scourge using regex to parse HTML has doomed humanity to an eternity of dread torture and security holes using regex as a tool to process HTML establishes a breach between this world and the dread realm of corrupt entities (like SGML entities, but more corrupt) a mere glimpse of the world of regex parsers for HTML will instantly transport a programmer’s consciousness into a world of ceaseless screaming, he comes, the pestilent slithy regex-infection will devour your HTML parser, application and existence for all time like Visual Basic only worse he comes he comes do not fight he comes, his unholy radiance destroying all enlightenment, HTML tags leaking from your eyes like liquid pain, the song of regular expression parsing will extinguish the voices of mortal man from the sphere I can see it can you see it it is beautiful the final snuffing of the lies of Man ALL IS LOST ALL IS LOST the pony he comes he comes he comes the ichor permeates all MY FACE MY FACE oh god no NO NOOOO NΘ stop the angles are not real ZALGO IS TONY THE PONY HE COMES
Have you tried using an XML parser instead?
(cite: http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454)
>Have you tried using an XML parser instead?
Dude! How did you get that smudged-and-blotted effect?
I don’t know if any forges are such offenders, but I’ve encountered a particular problem in scraping web sites in the past: some sites are so AJAX/JavaScript dependent that the only things rendered in static HTML are the root page structure and the headers and footers. All of the actual dynamic content is pushed onto JavaScript. Yeah, programmers that do this sort of thing ought to be drawn and quartered, and have their remains pissed on, but they do exist. I usually just give up on these types of tasks, because the thought of writing a lightweight user agent that can actually execute the JavaScript is more than I have time for.
It’s part of my design philosophy that pages which rely on JavaScript should degrade gracefully in its absence. Obviously, you’re not going to be able to duplicate some functionality that absolutely requires dynamic content (such as, say, the Google Docs word processor’s ability to show real-time updates to a document that three people are editing at the same time), but wherever it is possible, the damn thing should at least display its text content in a release of lynx from 1996.
Smudged-and-blotted on demand: http://www.textozor.com/ . I just saw that today in a reddit link. I’m not sure which is more amazing, that Unicode permits that, or that Linux seems to be rendering it more-or-less correctly, for some value of “c̦o̩r̯r̞e̫c̩t̯”.
Or, y’know, if you’re using NoScript. Unless I’m asking for some content which is, in some meaningful sense, dynamic, it’s ridiculous to have the page fail horribly (seriously? no CSS?) in the absence of JavaScript execution. I know, I know, it’s like complaining about AIMspeak, in that it’ll never change, but it is, indeed, ridiculous.
Jake: silly boy, pay attention. Eric isn’t recommending that you parse HTML with regexps. He’s recommending that you discern the output of certain printf()s using appropriate regexps. That *can* be done with a regexp.
What does google do when it crawls the web? Surely they hit 99+% malformed html.
Russell,
Perhaps you should stick to economics.
If you were familiar with automata theory and the Chomsky hierarchy you would know that regular expressions are equivalent to finite state machines. These finite state machines can recognize regular languages, and are good for (among other things), tokenizing a string.
(For any non-CS types: When you write a program, it’s the tokenizer in the compiler that does the simple task of figuring out what’s an integer and what’s a string and how to handle escape sequences and what the keywords are and where the braces are.)
Context-free grammars, which specify push-down automata that can recognize context-free languages (a superset of regular languages), are good for parsing a string. They figure out the structure of the tokens, to see what is nested where and how deep.
Basically, anything that can be determined just by looking at a very local area of the text can be handled by a regex, and anything involving arbitrary nesting requires a grammar.
As an example, standard regex cannot match anything of the form (a^n)(b^n) for arbitrary n ≥ 0.
Using < as a and > as b would give us the language that can be enumerated as {empty, <>, <<>>, <<<>>>, …}.
To do this, you require a context-free grammar. A simple grammar for the above problem would be:
s -> <s> | ε
where ε denotes the empty string.
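You can watch that boundary from Python directly. A sketch: the exact recognizer has to count, while a true regular expression can only fake it to whatever nesting depth you hard-code into the pattern.

    import re

    def a_n_b_n(s):
        # Exact recognizer for { <^n >^n : n >= 0 }: it has to count.
        n = len(s) // 2
        return s == '<' * n + '>' * n

    # A true regular expression is exact only up to a fixed depth;
    # this one handles depth <= 3 and is silently wrong past that.
    shallow = re.compile(r'^(?:<(?:<(?:<>)?>)?>)?$')

    for s in ('', '<>', '<<>>', '<<<>>>', '<<<<>>>>'):
        print(repr(s), a_n_b_n(s), bool(shallow.match(s)))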
That said, most real-world regex implementations contain extensions such as backreferences and recursive constructs. These extend the regular-expression language to recognise languages well beyond the regular ones; in Perl’s case (with embedded code blocks) arguably all the way to Turing completeness. Because of this, Perl’s regex implementation (and thus Python’s) cannot strictly be called regular expressions.
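Backreferences alone are enough to step outside the regular languages; a short Python sketch shows it. The language { a^n b a^n } is not regular, yet:

    import re

    # (a+)b\1 forces the trailing run of a's to be byte-for-byte the same
    # as the leading run, which no finite automaton can check.
    pat = re.compile(r'^(a+)b\1$')
    print(bool(pat.match('aaabaaa')))   # True
    print(bool(pat.match('aaabaa')))    # False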
Remember, just because you can, doesn’t mean you should.
Russell isn’t much use with economics, either.
ESR: This is your first warning. You are allowed to insult me, but I have a lower threshold for insults directed at others. Keep a civil tongue in your head or I will ban you; it’s not like you’re actually contributing anything except as an object of amusement.
esr’s website tends to eat left-angle chars, and anything that follows up to the next right-angle char, too.
the above is supposed to read: “Using (left angle) as a and (right angle) as b…”
and my snide little rant is missing a (leftangle)center(rightangle), too.
Barry, I concur.
> Smudged-and-blotted on demand: http://www.textozor.com/.
This concept originates from a thread on the forums of the website ‘Something Awful’ about a particularly bad webcomic called ‘Ctrl-Alt-Del’. Some of the comics were much funnier if you made them descend into Lovecraftian madness.
PS. The rest of the forum is mostly okay (if not very hacker-y), except for the extreme leftist cesspool that developed last election season. You could make a seven-part series of posts on its sheer inanity.
Yeah, programmers that do this sort of thing ought to be drawn and quartered, and have their remains pissed on, but they do exist.
Mightn’t they just not want their pages scraped? Some people are funny about it.
You know, I might have to move this blog from the politics category of my feedreader to the computing category. Nice to see ESR doing something with a compiler involved.
ESR says: Technically, no compiler is involved; I’m writing in Python.
Talk about pedantry! Perl’s regex implementation is, for all intents and purposes, the standard due to the ubiquity of libraries such as libpcre. Yes, ‘technically’ they’re not “regular expressions” in the strictest sense, but when most people talk about regular expressions these days, it is the Perl-compatible implementations built into Perl, Python, Ruby, etc., that they mean.
> Mightn’t they just not want their pages scraped? Some people are funny about it.
My comment stands. Why would they put something up on the web and get touchy about scraping?
Python’s implementation includes a compiler.
Hey, ESR — how about a post on Google’s new language — http://golang.org. It is by Ken Thompson & Rob Pike (and one other Google guy). Would love to see you take on it. Thanks.
>Hey, ESR — how about a post on Google’s new language
I’ve skimmed the site, but I’ve been too busty to dig in and give it a detailed evaluation yet. Sorry.
Python’s “compiler” isn’t really a compiler. It doesn’t compile source into native machine code, it translates Python source into Python bytecode (think “.py to .pyc”), which still needs the Python interpreter to execute it.
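Whatever one decides to call it, the translation step is easy to inspect from the standard library; the exact opcodes vary by Python version.

    import dis

    def add_one(x):
        return x + 1

    # Show the bytecode the compiler stage produced for this function:
    # something like LOAD_FAST x / LOAD_CONST 1 / BINARY_ADD / RETURN_VALUE.
    dis.dis(add_one)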
My comment stands. Why would they put something up on the web and get touchy about scraping?
Well, they might be serving ads alongside it and regard people taking the content and not looking at the ads as resource leechers. See Google’s TOS regarding screen scraping – that’s dynamic rather than static content, but I can imagine online newspapers feeling the same way.
> I’ve skimmed the site, but I’ve been too busty to dig in and give it a detailed evaluation yet. Sorry.
You don’t have to take your shirt off when jumping into Google’s pool. :)
ESR: Trust me, I wouldn’t float…
What? Do you even know what the term “compiler” means? Compilation to bytecode is compilation.
I don’t like supporting trolls, but there is some precedent for calling things that compile to bytecode a compiler.
Here, CPython is explicitly called a compiler. http://www.python.org/dev/peps/pep-0339/
Furthermore, javac (c for compiler) compiles to java bytecode for the JVM.
>I don’t like supporting trolls, but there is some precedence for calling things that compile to bytecode a compiler.
Of course there is. Compilers to *bytecode* — not something people normally intend when they say “using a compiler”, which without qualification implies a compiler to *machine language*. The troll knows this perfectly well, he was just being tendentious.
Who said “using a compiler”? Nobody, that’s who. Furthermore, compiler means compiler, not “compiler to machine code”, except for mediocre C programmers who think languages like Python are still interpreted from the AST. Perhaps you should admit your words were poorly chosen and move on.
>Who said “using a compiler”?
Sigh… Richard Gadsden: “You know, I might have to move this blog from the politics category of my feedreader to the computing category. Nice to see ESR doing something with a compiler involved.”
Troll, this is your second warning. I’ve been very patient, but the next time you engage in contentless flaming, I will ban you.
I’m very sorry that you don’t know what the word “compiler” means, Eric. By all means, ban me.
I do. However, classically, language implementations fall into roughly two categories: interpreted and compiled. As time has worn on, interpreters have evolved to do things differently and most language interpreters today are probably more accurately called “just-in-time compilers with virtual machines and big runtime libraries” than interpreters, strictly speaking. So while CPython is a “compiler” in a very narrow sense, what most people mean by “compiler” is something much closer to gcc or fpc or Visual C++’s cl.exe, etc., and not a bytecompiler like CPython. In practice virtually everyone, with the exception of trolls like you or people who are referring to CPython’s compiler component, refers to CPython as an interpreter.
And if you think you’re a better programmer than Eric, then why don’t you show us? I’m not saying I think ESR’s the world’s greatest coder (despite what his ego might tell him ;), but he’s definitely more than merely competent.
CPython contains a compiler, and a bytecode interpreter/VM. The component that translates into bytecode is a compiler. If the teeming masses of the Internets want to say that CPython does not ‘involve a compiler’ (the exact words used), then let them; they are wrong. Also, unlike the word ‘compiler’, the term JIT has a specific meaning involving native code compilation, and CPython’s setup does not qualify. I think it’s pretty clear that you have no experience with these areas.
I wasn’t referring to Eric in the sentence you’re thinking of; that’s your own point-of-view seeping through.
So let me get this straight. You’re seriously impugning the expertise of at least two people based on a minor semantic debate over the correct usage of the term “compiler”? ESR, I would call this ‘contentless flaming’.
>ESR, I would call this ‘contentless flaming’.
You’re right. I cut him more slack than I should have, and he’s just running down the tone of the blog. I’ll shitcan the troll’s posts until the spam filter starts doing it.
Talk about pedantry! Perl’s regex implementation is, for all intents and purposes, the standard due to the ubiquity of libraries such as libpcre.
A course on automata theory might change your mind.
Your ubiquity of implementation (*) claim fails to alter the standard, accepted definition.
The Perl 6 Rules were mostly added to allow Perl to be self-hosting.
(*) Henry Spencer is a friend of mine.
OT, but here’s a somewhat different take on “Cathedral and Bazaar” – http://www.advogato.org/article/1020.html ; I found it from a HN link.
> A course on automata theory might change your mind.
> Your ubiquity of implementation (*) claim fails to alter the standard, accepted definition.
> The Perl 6 Rules were mostly added to allow Perl to be self-hosting.
And in the context of this discussion this entire sub-thread seems purely irrelevant.
Eric isn’t talking about some research-paper-style general statement here. He’s specifically saying “this is what your basic tactics should be if you’re creating a module for forgeplucker”. That means we’re talking about rubber hitting the road, and specifically we’re talking about python regex hitting the random not-locally-controlled web server.
So if what a handler developer gets when they type “from re import compile” in an unmodified base install of Python is not pure regex but the “upgraded” PCRE-style version, it doesn’t really matter what pure regex can or can’t do.
And given the proposed counter solution is apparently an XML parser…
>And given the proposed counter solution is apparently an XML parser…
Well. An HTML parser, to be fair. And we’re using Beautiful Soup a lot, internally — after trimming the focus of view a lot with regexps.
> Well. An HTML parser, to be fair. And we’re using Beautiful Soup a lot, internally —
> after trimming the focus of view a lot with regexps.
Actually I was referring to Jake Fischer’s ‘Have you tried using an XML parser instead?’ but yeah, Beautiful Soup is very good at cleaning up the kinds of non-conforming tag soup I always expect to find on some random site.
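For instance, a tiny sketch (written against the modern bs4 package name; the import of that era was spelled ‘from BeautifulSoup import BeautifulSoup’):

    from bs4 import BeautifulSoup

    # Unclosed tags and no enclosing <table>, and it still builds a tree:
    soup = BeautifulSoup('<tr><td>one</td><td>two', 'html.parser')
    print([td.get_text() for td in soup.find_all('td')])   # ['one', 'two']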
this is funny, indeed.
excuse me for not reading it all, still let me ask a simple question:
writing so much code for a little task – how do you find time for sex and firearms ;-)
very kind regards
jorgen (who hasn’t actually contributed anything valuable to the python community and has no firearms;-)
Just in case you did not see this…
http://en.wikipedia.org/wiki/Comparison_of_open_source_software_hosting_facilities
Wikipedia fails to mention https://www.forge.mil/ since they do not allow the public Internet to scrape their host. You could ask for an account but I don’t know if the DoD is going to go for the “freedom of information” argument.
I don’t have anything useful to add either, other than to say that this comment thread brings back fond memories (that were sometimes very unhappy at the time) of the glory days of Usenet.
Now, if we could only have killfiles for blog comments…