Xerophily, a parser for XSD regular expressions

[30 December 2009]

A while back, in connection with the work of the W3C XML Schema working group, I wrote a parser, in Prolog, for the regular-expression language defined by the XSD spec. This has been on the Web for some time, at http://www.w3.org/XML/2008/03/xsdl-regex/.

Not long ago, though, I received an inquiry from Jan Wielemaker, the maintainer of SWI-Prolog (and ipso facto someone to whom I owe a great debt, along with every other user of that well maintained system), asking if it wouldn’t be possible to re-issue the code under the GNU Lesser General Public License (LGPL), so that he could use parts of it in some work he’s doing on support for the simple types of XSD. (Strictly speaking, he could do that under the W3C license, but that would mean he would have to manage not one license but two for the same library: too complicated.)

Fine with me, but since I wrote it while working at W3C, I don’t own the copyright. To make a long story short, after some cavilling and objections and assorted mutual misunderstandings, agreement was reached, and W3C gave their blessing.

The LGPL boilerplate assumes that the package being issued has a name, so I have given the XSD regex parser the name Xerophily.

[“Xerophily? What on earth is that?” asked Enrique. The word, I explained to him, denotes the property possessed by plants well adapted to growing in dry, especially hot and dry, conditions. “Well, that makes it apt for code produced in New Mexico, I guess,” he allowed. “And your code, like those plants, is a little scraggly and scrawny. But what name are you going to use for your next program?” “It’s also one of the few words I could find containing X (for XSD), R (for regular expression) and P (for parser, and for Prolog).” “Oh.”]

A word of caution: Xerophily was written to support some very dry work on the grammar of the regex language; it doesn’t really have a user interface; and it doesn’t actually perform regex matching against strings. It just parses the regex into an abstract syntax tree (AST). I do have code, or I think I did some years back, to do the matching, but the shape of the AST has changed since then. In other words, Jan Wielemaker may be the only person on earth who could have any plausible reason for wanting to use or read this code.

But for what it’s worth, as of a few minutes ago, it’s on the Web, under the LGPL, at the address http://www.w3.org/XML/2008/03/xsdl-regex/LGPL/. The W3C-licensed version remains available, at the address given above.

Philosophers make quick keyboardists

[17 December 2009]

A mnemonic for ACL2 induction

Lately I’ve been spending time working through a lot of exercises in ACL2. As a way of helping the user internalize the requirements for successful induction, several exercises ask for an explicit reformulation of a problem in terms of the ACL2 induction principle.

Don’t worry: I don’t want to try to explain the ACL2 induction principle here. It suffices for present purposes to observe that a fully explicit application of the ACL2 induction principle requires that you write down a number of things; you, dear reader, don’t need to understand what they are, only that they exist and need to be specified:

  • φ, the formula being proved
  • m, the measure to be used when computing the ‘size’ of a particular instance of the formula
  • qi (for 1 ≤ i ≤ k), the conditions which determine the different induction steps: one induction step for each qi
  • k, the number of induction steps (and thus of conditions)
  • σi,j (for 1 ≤ i ≤ k and 1 ≤ j ≤ hi), the substitutions applicable to condition qi; each condition qi may have up to hi hypotheses and corresponding substitutions
  • hi, the number of induction hypotheses for each induction step qi
  • the measure conjectures for the case: (a) that the measure given always produces an ordinal value, and (b) that the measure decreases on each recursive call (i.e. in each induction hypothesis)

After a while, my evil twin Enrique got tired of watching me flip back and forth between the statement of the problem I was trying to solve and the page that showed all the things that needed to be written down; he said “Haven’t you memorized that list yet?” “No,” I said. “It’s not that simple a list, is it?”

“Sure it is,” he said. “Just use a mnemonic to remember it. The full list, with all subscripts, is

φ, m, qi≤k, k, σi,j, hi, mc

“So just remember

Philosophers make quick keyboardists; strength in judgment helps improve mental capacity.

“Or if you can remember the subscripts by yourself, and just need (φ, m, q, σ):

Philosophy multiplies quizzical subtleties.

“Easy, see?”

Sometimes I think Enrique has too much time on his hands.

Automata and infinite strings

[15 December 2009]

[This is another one of those ignorance-is-bliss postings. If I had studied automata theory properly, this would (I guess) have been covered in class; that would have deprived me of the fun of thinking about it without knowing the right answer. If you did study automata theory, and you know how infinite strings are handled, and it irritates you to see someone waffling on and on about it instead of just doing some homework and reading the damn literature, you might want to stop reading soon.]

Some time ago, Michael Kay suggested that it was pointless for the XSD datatypes spec to specify that the lexical representations of integers, or strings, or various other simple types, were finite sequences of characters with certain properties. No implementation, he pointed out, can reliably distinguish finite from infinite strings, so it’s a non-testable assertion.

[“True if you’re using conventional I/O and conventional representations of strings, maybe,” said Enrique. “But if you represent the sequence of characters using a description, rather than an array of characters, it’s not clear that that’s true. Instead of the sequence "3.141592...", store an algorithm for calculating, on demand, the nth digit of the decimal expansion of π. Ditto for the square root of 2. And so on!” “You may be right,” I said. “But that wasn’t what I wanted to talk about, so be quiet.”]

The working group declined the proposal to drop the word “finite” on the grounds that if the strings in question are required to be finite, then we know that all the lexical representations of integers (for example) can in principle be recognized by a finite state automaton. Without the restriction to finite sequences, most of what people know about finite state automata isn’t guaranteed to apply.
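The finite-state claim is easy to make concrete. Here, as an illustration only (my sketch, not anything from the working group), is a minimal deterministic automaton for integer lexical forms of the shape -?[0-9]+, with the usual final-state acceptance for finite strings:

```python
# A minimal deterministic automaton for integer lexical forms: -?[0-9]+
# States: 'start' (nothing read), 'sign' (just a minus), 'digits' (accepting),
# and 'dead' (no continuation can succeed).
def accepts_integer(s):
    state = 'start'
    for ch in s:
        if state == 'start' and ch == '-':
            state = 'sign'
        elif state in ('start', 'sign', 'digits') and ch in '0123456789':
            state = 'digits'
        else:
            state = 'dead'
    # a finite string is accepted iff the final state is an accept state
    return state == 'digits'
```

Note that the acceptance test is asked only after the (finite) input is exhausted, which is exactly the point at issue below.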

I found myself wondering this morning about the possible application of automata to infinite and semi-infinite strings. I know that in principle automata theorists have historically not restricted their interest to finite automata; it seems plausible to assume they have also considered infinite strings. But I don’t know what they have said, without spending time looking it up; instead, I am going to enjoy myself for a little while seeing how much I can figure out for myself.

One obvious question to answer is: if you want to use an automaton to identify infinite sequences, how do you define acceptance of the sequence? For a finite sequence, you ask what state you’re in at the end of the sequence, and whether that state is an “accept state” or not. That won’t work for an infinite sequence: there is no final state.

Perhaps we can consider the sequence of states the automaton enters and define acceptance in terms of that sequence. Possible answers:

  1. Accept if (a) the automaton eventually ends up in a single state which it never again leaves, and (b) that state is an accept state.
  2. Accept if there is some point in the sequence of states such that every state following that point is an accept state.

These would work (in the sense of providing a yes/no answer).
Do these rules for acceptance of strings define sets of automata with different discriminating power?

It seems obvious that they do, but what exactly are the differences?
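One way to make the comparison concrete (my own sketch, not anything from the literature) is to restrict attention to inputs that are eventually periodic, given as a finite prefix plus a repeating cycle. The automaton's run on such an input is itself eventually periodic, so the set of states it visits "forever after" can be computed, and both rules decided:

```python
def eventual_states(delta, start, prefix, cycle):
    """States a DFA visits infinitely often on prefix + cycle repeated forever."""
    state = start
    for ch in prefix:
        state = delta[state, ch]
    # run whole cycles until the state at a cycle boundary repeats
    seen = {}            # boundary state -> index of the pass that began there
    passes = []          # one list of visited states per pass over the cycle
    while state not in seen:
        seen[state] = len(passes)
        this_pass = []
        for ch in cycle:
            state = delta[state, ch]
            this_pass.append(state)
        passes.append(this_pass)
    return {s for p in passes[seen[state]:] for s in p}

def accept_rule1(delta, start, accepting, prefix, cycle):
    # rule 1: the automaton eventually sits in one accept state forever
    ev = eventual_states(delta, start, prefix, cycle)
    return len(ev) == 1 and ev <= accepting

def accept_rule2(delta, start, accepting, prefix, cycle):
    # rule 2: from some point on, every state entered is an accept state
    return eventual_states(delta, start, prefix, cycle) <= accepting

# Example: a two-state machine that mirrors the last input symbol.
delta = {('A', 'a'): 'A', ('A', 'b'): 'B',
         ('B', 'a'): 'A', ('B', 'b'): 'B'}
# On the input abababab..., the machine visits {A, B} forever: with both
# states accepting, rule 2 accepts but rule 1 does not (there is no single
# final resting state), so the two rules really do differ.
```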

Consider, for example, automata for recognizing various classes of numbers written as an infinite sequence of decimal digits. Numbers seem to be on my mind, perhaps because of the tie-in to XSD datatypes.

For such infinite strings of digits (including a decimal point), integers have the property that every digit to the right of (i.e. following) the decimal point is a 0. If you build the obvious automaton, for an integer it will spend all its time in the zero-after-decimal-point state, and for a non-integer it will, eventually, end up caught in an error state.

[Enrique tells me I should pause to draw pictures of these automata, but I’m not going to take the time just yet. Apologies to those who find it hard to visualize what I’m talking about.]
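In lieu of a picture, here is one way the integer-recognizing automaton might be rendered in Python (a sketch under my own conventions: the infinite input is again given as a finite prefix plus a repeating cycle, and acceptance follows the first rule):

```python
def integer_delta(state, ch):
    # 'int'  : reading digits before the decimal point
    # 'frac0': after the point, nothing but zeroes seen (the accept state)
    # 'err'  : trap state: a stray character or a nonzero digit after the point
    if state == 'int':
        if ch in '0123456789':
            return 'int'
        if ch == '.':
            return 'frac0'
        return 'err'
    if state == 'frac0' and ch == '0':
        return 'frac0'
    return 'err'

def is_integer(prefix, cycle):
    """First acceptance rule: the run on prefix + cycle repeated forever
    must settle into the accept state 'frac0' and never leave it."""
    state = 'int'
    for ch in prefix:
        state = integer_delta(state, ch)
    # pump whole cycles until a cycle-boundary state repeats ...
    seen = set()
    while state not in seen:
        seen.add(state)
        for ch in cycle:
            state = integer_delta(state, ch)
    # ... then one more pass shows where the automaton lives forever
    final_pass = []
    for ch in cycle:
        state = integer_delta(state, ch)
        final_pass.append(state)
    return set(final_pass) == {'frac0'}
```

So "123." followed by zeroes forever is accepted, while "3.1" followed by zeroes lands in the error state and stays there.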

So the first acceptance rule suffices for recognizing integers. It may be relevant that the same automaton can be used to recognize finite strings as representing integers: any prefix of the infinite string representing an integer will also be accepted as representing an integer.

The first rule would also suffice to allow us to build a recognizer for certain fractions, e.g. 1/3: the infinite string denoting 1/3 ends up perpetually in the “we’ve just read a 3” state.

On the other hand, it doesn’t suffice for all rationals: in decimal notation, 1/7 has an infinitely repeating sequence of digits (142857, if you were wondering). To distinguish 1/7 in decimal notation we’d need a cycle of six states in the automaton.

All rational numbers have a decimal expansion that eventually settles into an infinite series of repeated strings of digits (if only an infinitely repeating sequence of zeroes). So if we adopt the second rule for defining acceptance of the string, we can say: for every rational number, there is a finite state automaton that recognizes that rational number. And irrationals, which have no repeating sequences, aren’t recognizable by an automaton with finitely many states. (An automaton with infinitely many states might be able to recognize the decimal expansion of a particular irrational number, say π, but it’s hard to know what to do with that information — maybe it’s a reason to say that languages recognizable with an infinite automaton are not necessarily regular.)
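The first half of that claim is easy to check by computation. Long division yields, for any fraction p/q, the non-repeating prefix and the repeating cycle of its fractional digits (the helper below is my own sketch); the length of the cycle is the length of the state cycle the recognizing automaton needs:

```python
def decimal_expansion(p, q):
    """Fractional digits of p/q (for 0 <= p < q), returned as the pair
    (non-repeating prefix, repeating cycle)."""
    digits, seen = [], {}
    r = p
    # long division: each remainder determines all later digits, so the
    # expansion repeats as soon as a remainder recurs
    while r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // q))
        r %= q
    start = seen[r]
    return ''.join(digits[:start]), ''.join(digits[start:])
```

For instance, decimal_expansion(1, 7) yields the six-digit cycle 142857, matching the six-state cycle mentioned above, and every rational terminates in some finite cycle (if only "0").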

That sounds like a nice pattern. It would be even nicer if we could devise an automaton to recognize the set of decimal expansions of rational numbers, but I suspect that’s not feasible, since the complement of that set is the irrationals, and being able to recognize the one set by regular means would entail being able to recognize the other, too.

Does it make sense to require that the automaton eventually end up spending all its time in accept states? (Or equivalently, that the sequence of states have a suffix in which every element in the suffix is an accept state.)

What if that is too restrictive a rule? What if we said instead

  3. Accept if at every point in the sequence of states there are infinitely many accept states among the states following that point.

That is, allow the string to put the automaton into a non-accepting state, as long as it’s only temporary, and it eventually gets back into an accepting state.

Consider an automaton which has two states, A and B. Every time a B is found in the input, we go to state B; for any other symbol we go to state A. B is an accept state.

If we adopt the second story about termination, a string ending in an unending series of Bs will be accepted and is thus recognizable by an automaton. A string with an infinite number of Bs, interspersed with other symbols, will not be accepted by this automaton (nor by any other, as far as I can tell).
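For eventually periodic inputs, the two-state automaton can be put to the test directly (again a sketch of my own). Under the third rule, "infinitely many accept states after every point" amounts to the eventual cycle of states containing at least one accept state; under the second rule the cycle may contain nothing else:

```python
def eventual_cycle_states(prefix, cycle):
    """States the two-state automaton visits forever on prefix + cycle repeated.
    (The prefix is irrelevant here: the next state depends only on the symbol
    just read, so the states seen forever are the images of the cycle's symbols.)"""
    step = lambda ch: 'B' if ch == 'B' else 'A'
    return {step(ch) for ch in cycle}

def accept_rule2(prefix, cycle):
    # rule 2: from some point on, every state is the accept state B
    return eventual_cycle_states(prefix, cycle) == {'B'}

def accept_rule3(prefix, cycle):
    # rule 3: the accept state B recurs infinitely often
    return 'B' in eventual_cycle_states(prefix, cycle)
```

A string ending in unbroken Bs satisfies both rules; one with Bs forever interspersed among As satisfies only the third.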

OK, that seems to establish (if we accept the conjecture about strings with infinitely many Bs) that the second and third rules define distinct sets of languages. I suppose that one chooses to use the second rule, or the third, or some other I haven’t thought of yet, in part based on whether it feels right to count as regular the languages one can recognize using that rule.

Hmm. OK, time to look at the bookshelves.

I’ve just checked and found that John E. Hopcroft and Jeffrey D. Ullman, in Introduction to automata theory, languages, and computation (Reading: Addison-Wesley, 1979), restrict their attention to finite strings.

Dick Grune and Ceriel J. H. Jacobs, Parsing techniques: a practical guide, second edition (New York: Springer, 2008), don’t explicitly impose this restriction, but a quick scan of their opening pages doesn’t show any explicit consideration of infinite sequences of symbols either. I’m guessing they do treat infinite input somewhere, if only because if you can contemplate van Wijngaarden grammars, which have infinite numbers of context-free rules (and remember, Grune didn’t just contemplate van Wijngaarden grammars, he wrote a parser generator for them), infinite strings are just not going to frighten you.

I suppose the idea of thinking seriously about infinitely long sentences in a language is one I first encountered in D. Terence Langendoen and Paul Postal, The vastness of natural languages (Oxford: Blackwell, 1984). To whom (for this, as for many other things) thanks!

I’m pretty sure that there was some treatment of infinite automata and/or infinite input strings in S. C. Kleene, “Representation of events in nerve nets and finite automata”, in Automata studies, ed. C. E. Shannon and J. McCarthy (Princeton: PUP, 1956), and V. M. Glushkov, “The abstract theory of automata”, Russian mathematical surveys: a translation of the survey articles and of selected biographical articles in Uspekhi matematicheskikh nauk 16 (1961). They are both kind of tough sledding, but I suppose I really ought to go back and read them carefully with an eye to this topic.

Grail for regular languages

[11 December 2009]

Every now and then — not constantly, but recurrently — I experience a strong need to have a running copy of Grail, a software package first written by Derick Wood and Darrell Raymond and described by its documentation as “a symbolic computation environment for finite-state machines, regular expressions, and other formal language theory objects.”

Among other things, Grail is handy for answering questions about the equivalence or non-equivalence of regular expressions, or about subset/superset relations holding between the languages recognized by them. A few years ago, for example, the W3C XML Schema Working Group found itself in possession of two different descriptions of the lexical space of the XSD duration type. The working group wished, not unreasonably, to check that the two really were equivalent.

The first description provided three regular expressions, and said the lexical space of duration included all the strings which matched all three expressions:

  • -?P([0-9]+Y)?([0-9]+M)?([0-9]+D)?(T([0-9]+H)?([0-9]+M)?([0-9]+(\.[0-9]+)?S)?)? (strings in which the fields of an ISO 8601 duration appear in the correct order, and in which each field appears only if it has at least one digit present)
  • .*[YMDHS].* (strings in which at least one field is present)
  • [^T]+(T[^HMS]+[HMS].*)? (if the character T appears, it must be followed by one of the time-related fields)

The second description translated the context-free grammar into regular-expression form (I’ve introduced white space for legibility):

-?P(([0-9]+Y)([0-9]+M)?([0-9]+D)?
    (T(([0-9]+H)([0-9]+M)?([0-9]+(\.[0-9]+)?S)?
      |([0-9]+M)([0-9]+(\.[0-9]+)?S)?
      |([0-9]+(\.[0-9]+)?S)))?
   |([0-9]+M)([0-9]+D)?
    (T(([0-9]+H)([0-9]+M)?([0-9]+(\.[0-9]+)?S)?
      |([0-9]+M)([0-9]+(\.[0-9]+)?S)?
      |([0-9]+(\.[0-9]+)?S)))?
   |([0-9]+D)
    (T(([0-9]+H)([0-9]+M)?([0-9]+(\.[0-9]+)?S)?
      |([0-9]+M)([0-9]+(\.[0-9]+)?S)?
      |([0-9]+(\.[0-9]+)?S)))?
   |T(([0-9]+H)([0-9]+M)?([0-9]+(\.[0-9]+)?S)?
      |([0-9]+M)([0-9]+(\.[0-9]+)?S)?
      |([0-9]+(\.[0-9]+)?S)))

Easy enough to eyeball, for some people, I guess, but the working group wanted a more reliable method.

After a few hours trying vainly to compile Grail for my Linux box, I found an RPM that worked for me, and in ten minutes or so I had used Grail to establish that the two descriptions are equivalent.
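Lacking a running Grail, one can at least do a poor man’s version of the check with an ordinary regex engine: compare the two descriptions on every string up to some modest length. The sketch below is mine, in Python; it transcribes the single big regex with the leading field of each top-level and time alternative required (so that bare “P” and bare “PT” are excluded, as the three-regex description demands), and the agreement it establishes covers only the strings tested, not the whole languages as Grail’s answer does:

```python
import re
from itertools import product

# Description 1: a string is a duration iff it matches all three regexes.
THREE = [re.compile(p) for p in (
    r'-?P([0-9]+Y)?([0-9]+M)?([0-9]+D)?'
    r'(T([0-9]+H)?([0-9]+M)?([0-9]+(\.[0-9]+)?S)?)?',
    r'.*[YMDHS].*',
    r'[^T]+(T[^HMS]+[HMS].*)?',
)]

# Description 2: the single regex, assembled from a shared time part, with the
# leading field of each alternative required.
TIME = (r'T(([0-9]+H)([0-9]+M)?([0-9]+(\.[0-9]+)?S)?'
        r'|([0-9]+M)([0-9]+(\.[0-9]+)?S)?'
        r'|([0-9]+(\.[0-9]+)?S))')
SINGLE = re.compile(r'-?P(([0-9]+Y)([0-9]+M)?([0-9]+D)?(' + TIME + r')?'
                    r'|([0-9]+M)([0-9]+D)?(' + TIME + r')?'
                    r'|([0-9]+D)(' + TIME + r')?'
                    r'|' + TIME + r')')

def by_three(s):
    return all(p.fullmatch(s) for p in THREE)

def by_single(s):
    return SINGLE.fullmatch(s) is not None

# compare the two descriptions on every string up to length 4 over an
# alphabet chosen to exercise every part of the grammar
for n in range(5):
    for chars in product('-P1YMDTHS.', repeat=n):
        s = ''.join(chars)
        assert by_three(s) == by_single(s), s
```

Bounded exhaustive testing of this kind is no substitute for the symbolic equivalence check, but it catches most transcription slips in seconds.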

Today I realized that another problem I face could best be solved by using Grail, but I no longer have a Linux box (and have not, in any case, found that old RPM). Grail 2.5 is dated March 1996, and the C++ in which it is written does not seem to please GCC 4.0.1. Grail+ 3.0, a successor project in other hands, may have been touched as recently as 2002 or 2004, but most of the dates appear to be in summer or fall 1998. GCC doesn’t like it, either.

So I have thus far been unable to recompile this very helpful tool.

If anyone out there knows of anyone who has either massaged the source of Grail into a form more like what modern C++ compilers will compile, or found out what combination of compile-time flags will persuade GCC to put itself in a more forgiving frame of mind and compile the thing, please get in touch. (And no, -Wno-deprecated does not suffice to do the trick.)

And any C++ proficients looking for interesting and useful projects to undertake could do a lot worse for themselves and for the world than to bring Grail into the twenty-first century.

Cross training

[11 December 2009]

One of my most pleasant memories is the discovery, some years ago, that while it is often rather boring to copy a program out of a book or off the Web and modify it to suit the needs at hand, it’s a lot more interesting if at the same time you can translate it from one programming language into another.

I discovered this when I needed a sort routine in Spitbol once; one of the Spitbol implementations we were using had a built-in sort, but the other didn’t. I copied a Shell sort, or possibly a Quicksort, from a Pascal or C textbook, and found that the task of translating the algorithm from one language into a different language with rather different control and data structures was much more rewarding and interesting than it had ever been to copy a Pascal or C program from a book into a file on my disk, compile it, and ‘play’ with it. It was non-trivial enough to give me a small feeling of accomplishment, and easy enough (the algorithm was right in front of me, after all, in executable form) not to cause serious troubles.

I suppose that most textbooks have no choice: they have to show you programs in the language they are teaching, and they can’t really assume the reader knows some other language. (When I started to learn Middle High German, the professor asked the participants in the class who had learned Greek in school, so he could know whether citing Greek parallels would be helpful. He remarked with a sigh that it was years since it had made any sense to ask who in the class had Hebrew.) But I learned a lot more about both Spitbol and the sorting algorithm in question when I did that translation than I ever had before.

This topic came to mind this morning because in my continuing work with Alloy and ACL2 I have been trying to rewrite a simple ACL2 example (a function, a few examples, and a couple simple theorems to prove that the function meets its requirements) into a roughly corresponding Alloy model. I think the exercise illuminates both Alloy and ACL2 (more on that later, after I’ve gotten this example to work and maybe done a few more, in both directions).

Similarly, I have learned a lot about Steve Pepper’s Italian Opera topic map by thinking about what a translation into other forms (SQL, Prolog, …) would look like; I expect to learn more about the topic map and about the technologies involved, when I push those translations further.

It’s funny, though: I am not sure I’ve ever seen anyone say explicitly that translating short programs from one language to another can be an interesting and rewarding experience for those learning the languages involved. Does no one else find it as helpful as I do? (Am I insufficiently lazy as a programmer?) Or is it too obvious to merit mention? Or is there just not a good single term for the practice?

Makes me wonder whether the double-translation method (by which Roger Ascham taught Greek to Elizabeth I) could be applied to programming and markup languages.