[7 January 2013; typos in code patterns corrected 8 January and 21 June 2013]
Everyone who designs or builds systems to do interesting work occasionally needs to deal with input in some specialized notation or other. Nowadays a lot of specialized information is in XML, and in that case the need is to deal with vocabularies designed for the particular kind of specialized information involved. But sometimes specialized data comes in its own non-XML syntax — ISBNs, URIs, expressions in symbolic logic, chess-game notation, textual user interface languages, query languages, and so on. Even in the XML universe there are plenty of notations for structured information of various kinds that are not XML: DTDs, XPath expressions, CSS, XQuery.
In these cases, if you want to do anything useful that depends on understanding the structure of the data, you’re going to need to write a parser for its notation. For some simple cases like ISBNs and ISSNs, or URIs, you can get by with regular expressions. (At least, more or less: if you want to validate the check digit in an ISBN or ISSN, regular expressions are going to have a hard time getting the job done, though oddly enough a finite state automaton would have no particular trouble with it.) But many interesting notations are context-free languages, which means regular expressions don’t suffice to distinguish well-formed expressions from other strings of characters, or to identify the internal structure of expressions.
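To make the check-digit aside concrete: the ISBN-10 check is a weighted sum modulo 11, which is trivial arithmetic but beyond what an ordinary regular expression can verify. A minimal sketch in XQuery (the function name local:isbn10-ok is my own, and I assume a bare ten-character ISBN with hyphens already removed):

```xquery
(: Sketch: check-digit validation for a ten-character ISBN-10
   (hyphens removed). An ISBN-10 is valid when the sum of its
   digits, weighted by position 1..10, is divisible by 11; the
   character "X" in the last position stands for the value 10. :)
declare function local:isbn10-ok($isbn as xs:string) as xs:boolean {
  if (not(matches($isbn, '^[0-9]{9}[0-9X]$')))
  then false()
  else
    let $vals :=
      for $i in 1 to 10
      let $c := substring($isbn, $i, 1)
      return if ($c eq 'X') then 10 else xs:integer($c)
    return sum(for $i in 1 to 10 return $i * $vals[$i]) mod 11 eq 0
};

(: e.g. local:isbn10-ok("0306406152") returns true :)
```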
Now, if you’re writing in C and you run into this problem, you can easily use yacc and lex to generate a parser for your language (assuming it satisfies the requirements of yacc). If you’re writing in Java, there are several parser-generator tools to choose from. If you’re writing in a less widely used language, you may find a parser generator, or you may not.
It’s handy, in this situation, to be able to write your own parsers from scratch.
By far the simplest method for hand-written parsing is the one known as recursive descent. For each non-terminal symbol in the grammar, there is a routine whose job it is to read strings in the input which represent that non-terminal. The structure of the routine follows the structure of the grammar rules for that non-terminal in a simple way, which makes recursive-descent parsers feel close to the structure of the information and also to the structure of the parsing process. (The parser generated by yacc, on the other hand, remains a completely opaque black box to me, even after years of using it.)
In his book Compiler Construction (Harlow, England: Addison-Wesley, 1996, tr. from Grundlagen und Techniken des Compilerbaus [Bonn: Addison-Wesley, 1996]), the Swiss computer scientist Niklaus Wirth summarizes the rules for formulating a recursive-descent parser in a table occupying less than half a page. For each construct in the EBNF notation for grammars, Wirth shows a corresponding construct in an imperative programming language (Oberon), so before I show the table I should perhaps review the EBNF notation. In any formal notation for grammars, a grammar is made up of a sequence of grammar productions, and each production (in the case of context-free grammars) consists of a single non-terminal symbol on the left-hand side of the rule and an expression on the right-hand side which represents the possible realizations of the non-terminal. The right-hand-side expression is made up of non-terminal symbols and terminal symbols (e.g. quoted strings), arranged in sequences, separated as need be by the or-bar | (which marks a choice between alternatives), and grouped with parentheses, square brackets (which mark optional material), and braces (which mark material that can occur zero or more times).
Wirth’s EBNF for EBNF will serve to illustrate the syntax:
syntax = {production}.
production = identifier "=" expression ".".
expression = term {"|" term}.
term = factor {factor}.
factor = identifier | string | "(" expression ")" | "[" expression "]" | "{" expression "}".
identifier = letter {letter | digit}.
string = """ {character} """.
letter = "A" | … | "Z".
digit = "0" | … | "9".
(It may be worth noting that this formulation achieves its simplicity in part by hand-waving: it doesn’t say anything about whitespace separating identifiers, and the definition of string is not one a machine can be expected to read and understand. But Wirth isn’t writing this grammar for a machine, but for human readers.)
It’s easy to see that the routines in a recursive-descent parser for a grammar in this notation must deal with six constructs on the right-hand side of rules: strings, parenthesized expressions (three kinds), sequences of expressions, and choices between expressions. Wirth summarizes the necessary code in this table with the construct K on the left, and the program for it, Pr(K), on the right. In the code fragments, sym is a global variable representing the symbol most recently read from the input stream, and next is the routine responsible for reading the input stream and updating sym. The meta-expression first(K) denotes the set of symbols which can possibly occur as the first symbol of a string derived from construct K.
| K | Pr(K) |
|---|---|
| "x" | IF sym = "x" THEN next ELSE error END |
| (exp) | Pr(exp) |
| [exp] | IF sym IN first(exp) THEN Pr(exp) END |
| {exp} | WHILE sym IN first(exp) DO Pr(exp) END |
| fac_{0} fac_{1} … fac_{n} | Pr(fac_{0}); Pr(fac_{1}); … ; Pr(fac_{n}) |
| term_{0} \| term_{1} \| … \| term_{n} | CASE sym OF first(term_{0}): Pr(term_{0}) \| first(term_{1}): Pr(term_{1}) \| … \| first(term_{n}): Pr(term_{n}) END |
This is easy enough to express in any language that has global variables and assignment statements. But what do we do when we are writing an application in a functional language, like XQuery or XSLT, and need to parse sentences in some context-free language? No assignment statements, and all functions have the property that if you call them several times with the same argument you will always get the same results back. [Addendum, 8 January 2013: XQuery and XSLT users do in fact have access to useful parser generators: see the pointers to Gunther Rademacher’s REx and Dimitre Novatchev’s FXSL provided in the comments. The question does, however, still arise for those who want to write recursive-descent parsers along the lines sketched by Wirth, which is where this post is trying to go.]
I’ve wondered about this for some time (my notes show I was worrying about it a year ago today), and the other day a simple solution occurred to me: each of the functions in a recursive-descent parser depends on the state of the input, so in a functional language the state of the input has to be passed to the function as an argument. And each function changes the state of the input (by advancing the input pointer), which in a functional language we can represent by having each function return the new state of the input and the new current symbol as its result.
A new table, analogous to Wirth’s, but with XQuery code patterns on the right-hand side, looks like this. Here, the common assumption is that each function is passed a parameter named $E0 whose value is an env element with two children: sym contains the current symbol and input contains the remainder of the input (which for simplicity I’m going to assume is a string). If an error condition arises, an error element is added to the environment. The job of reading the next token is handled by the function next().
| K | Pr(K, $E0) |
|---|---|
| "x" | if ($E0/sym = "x") then next($E0) else <env> <error>expected "x" but did not find it</error> {$E0/*} </env> |
| (exp) | Pr(exp, $E0) |
| [exp] | if ($E0/sym = first(exp)) then Pr(exp, $E0) else $E0 |
| {exp} | This requires two translations. For each such repetition exp, we declare a function: declare function seq_exp($E0 as element(env)) as element(env) { if ($E0/sym = first(exp)) then let $E1 := Pr(exp, $E0) return seq_exp($E1) else $E0 }; Inline, we just call that function: seq_exp($E0) |
| fac_{0} fac_{1} … fac_{n} | let $E1 := Pr(fac_{0}, $E0), $E2 := Pr(fac_{1}, $E1), … , $E_{n+1} := Pr(fac_{n}, $E_{n}) return $E_{n+1} |
| term_{0} \| term_{1} \| … \| term_{n} | if ($E0/sym = first(term_{0})) then Pr(term_{0}, $E0) else if ($E0/sym = first(term_{1})) then Pr(term_{1}, $E0) … else if ($E0/sym = first(term_{n})) then Pr(term_{n}, $E0) else <env> <error>no alternative matched</error> {$E0/*} </env> |
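To see the patterns working together, here is a complete recognizer for the toy grammar expr = "x" | "(" expr {"," expr} ")". Everything in it — the grammar, the function names, and the one-character-per-token reading of next() — is my own illustrative assumption, not part of Wirth’s table:

```xquery
(: Tokenizer sketch: one character per symbol. :)
declare function local:next($E as element(env)) as element(env) {
  <env>
    <sym>{substring($E/input, 1, 1)}</sym>
    <input>{substring($E/input, 2)}</input>
  </env>
};

(: Pattern for a terminal "x": check and advance, or record an error. :)
declare function local:expect($E as element(env), $s as xs:string)
  as element(env) {
  if ($E/sym = $s) then local:next($E)
  else <env> <error>expected "{$s}"</error> {$E/*} </env>
};

(: expr = "x" | "(" expr {"," expr} ")" :)
declare function local:expr($E0 as element(env)) as element(env) {
  if ($E0/sym = "x") then local:next($E0)
  else if ($E0/sym = "(") then
    let $E1 := local:expr(local:next($E0))
    let $E2 := local:expr-tail($E1)
    return local:expect($E2, ")")
  else <env> <error>expected "x" or "("</error> {$E0/*} </env>
};

(: The repetition {"," expr} becomes a separately declared
   recursive function, per the {exp} row of the table. :)
declare function local:expr-tail($E0 as element(env)) as element(env) {
  if ($E0/sym = ",") then local:expr-tail(local:expr(local:next($E0)))
  else $E0
};

declare function local:parse($s as xs:string) as element(env) {
  local:expr(local:next(<env><sym/><input>{$s}</input></env>))
};
```

On the input "(x,x)", local:parse returns an env with empty input and no error child; on "(x,y)" it returns an env containing an error element. (One simplification: once an error element has been added, the later steps blunder on instead of stopping; real code would test for its presence.)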
Like Wirth’s, the code shown here produces a recognizer that doesn’t do anything with the input except read it and accept or reject it; like Wirth’s, it can easily be extended to do things with the input. (My first instinct, of course, is to return an XML representation of the input string’s abstract syntax tree, so I can process it further using other XML tools.)
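One way to produce such an XML representation — a sketch under my own conventions, for the simplified grammar expr = "x" | "(" expr ")" — is to have each parsing function return, alongside sym and input, a tree child holding the subtree just parsed; the caller extracts that subtree and wraps it:

```xquery
(: Tokenizer sketch: one character per symbol. :)
declare function local:next($E as element(env)) as element(env) {
  <env>
    <sym>{substring($E/input, 1, 1)}</sym>
    <input>{substring($E/input, 2)}</input>
  </env>
};

(: expr = "x" | "(" expr ")", now returning the subtree just
   parsed as a tree child of the resulting environment. :)
declare function local:expr($E0 as element(env)) as element(env) {
  if ($E0/sym = "x") then
    let $E1 := local:next($E0)
    return <env>{$E1/sym, $E1/input}<tree><x/></tree></env>
  else if ($E0/sym = "(") then
    let $E1 := local:expr(local:next($E0))
    return
      if ($E1/sym = ")") then
        let $E2 := local:next($E1)
        return <env>{$E2/sym, $E2/input}
                 <tree><paren>{$E1/tree/*}</paren></tree></env>
      else <env> <error>expected ")"</error> {$E1/*} </env>
  else <env> <error>expected "x" or "("</error> {$E0/*} </env>
};
```

Applied to the input ((x)), the expression local:expr(local:next(<env><sym/><input>((x))</input></env>)) yields an environment whose tree child is <tree><paren><paren><x/></paren></paren></tree>.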
Awesome article. Do you know about http://bottlecaps.de/rex/ from Gunther Rademacher? This tool seems to be very popular in the XML community.
Very nice article, Michael. I believe you know about the generic LR-1 parser implemented in FXSL? I blogged about this a few years ago. Have used it for creating JSON and XPath 2.0 parsers: http://fxsl.cvs.sourceforge.net/viewvc/fxsl/fxsl-xslt2/f/func-lrParse.xsl?revision=1.7&view=markup
@William, thank you for the pointer to REx — I have not used it before (though I see it is mentioned in some papers I believe I have read, so I ought to have known about it!). It looks very interesting — and it means writers of XQuery and XSLT do not need to write parsers by hand if they do not wish to do so. Just one question: is there any documentation for it anywhere?
@Dimitre, yes, I should have mentioned FXSL’s parser generator as a potential solution as well. Spending the time needed to get my head around FXSL in general, and func-lrParse in particular, so that I can use them actively, is a task which has been on my to-do list for a long time; I am embarrassed to admit that it’s still on the to-do list, and not on the done list. It’s now even higher than before in priority. (But here, too, I have to admit that it is made more daunting by the absence of readily visible documentation. Note that the PDFs of your 2003 and 2006 papers at Extreme Markup Languages have disappeared, and it took a little digging to find the HTML versions, which suffer from some missing stylesheets but are mostly legible.)
A remark Wirth makes in the preface of his book may explain why I am glad to have worked out the code patterns shown here, even though I could have gotten a parser for the languages I am interested in much faster by using either of these excellent pre-existing tools. “In einer akademischen Ausbildung ist es von zentraler Bedeutung, daß nicht nur Wissen, und beim Ingenieur Können, vermittelt wird, sondern auch Verständnis und Einsicht.” (In education it is essential that not only knowledge and, in the case of engineering education, know-how be transmitted, but also understanding and insight [my translation].) What Wirth says about academic training holds true also for my efforts to educate myself; the fewer black boxes I rely on, the more I learn.