Tell me, Captain Yossarian, how many elements do you see?

[22 January 2010]

In an earlier post, I asked how many element nodes are present in the following XML document’s representation in the XPath 1.0 data model.

<a><b/><b/><b/></a>

I think the spec clearly expects the answer “four” (a parent and three children). More than that, I think the spec reflects the belief of its authors and editors that that answer follows necessarily from the properties of the data model as defined in section 5 of the spec.

But I don’t think “four” is the only answer consistent with the data model as defined.

In particular, consider the answer “two”: one ‘a’ element node and one ‘b’ element node (which for brevity I’ll just call A and B from here on out; note that A and B are element nodes, whose generic identifiers happen to be ‘a’ and ‘b’). As far as I can tell, the abstract object just defined obeys all the rules which define the XPath 1.0 model; a small sketch of the construction, in code, appears after the list below. The rules which seem to apply to this document are these:

  1. “The tree contains nodes.”

    Check.

  2. “Some types of nodes … have an expanded-name …”

    Check: here the names are “a” and “b”.

  3. “There is an ordering, document order, defined on all the nodes in the document …”

    Check: in the model instance I have in mind the nodes are ordered. (In fact they have a total order, which is more than the spec explicitly requires here.) The root node is first, A second, and B third.

  4. “… corresponding to the order in which the first character of the XML representation of each node occurs in the XML representation of the document after expansion of general entities.”

    Check.

  5. “Thus, the root node will be the first node.”

    Check.

  6. “Element nodes occur before their children.”

    Check: The element node A occurs before its child B in the ordering.

  7. “Thus, document order orders element nodes in order of the occurrence of their start-tag in the XML (after expansion of entities).”

    Check: the start-tags for B begin at positions 4, 8, and 12 of the document’s serial form (counting from 1), and the start-tag for A begins at position 1. So the order of the start-tags in the XML matches the order of the nodes in the model.

    If we had several elements with multiple occurrences and thus multiple start-tags, and the positions of the start-tags were intermingled (as in <a> <b/> <c/> <b/> <c/> <b/> </a>), then it would appear that we had only a partial order on them. If the spec specified that document order was a total ordering over all nodes, we might have a problem. But it doesn’t actually say that; it just speaks of an “ordering”, and it would seem strange to argue that a partial ordering is not an “ordering”.

  8. “Root nodes and element nodes have an ordered list of child nodes.”

    Check: the root node’s list of children is {1 → A}, A has the list {1 → B, 2 → B, 3 → B}, and B has the empty list {}.

  9. “Nodes never share children: if one node is not the same node as another node, then none of the children of the one node will be the same node as any of the children of another node.”

    Check: the sets {A}, {B}, and {} (representing the children, respectively, of the root node, A, and B) are pairwise disjoint.

  10. “Every node other than the root node has exactly one parent, which is either an element node or the root node.”

    Check. A has the root node as its parent, B has A as its parent.

  11. “A root node or an element node is the parent of each of its child nodes.”

    Check: the root’s only child is A, and A’s parent is the root. A’s only child is B, and B’s parent is A.

  12. “The descendants of a node are the children of the node and the descendants of the children of the node.”

    Check. The descendants of the root node are {A, B}, those of A are {B}, those of B are {}.
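
To make the construction concrete, here is a minimal sketch in Python (the representation and the names are mine, purely illustrative, and not anything the spec defines): a single B node which occurs three times in A’s ordered list of children.

# A sketch of the two-node reading of <a><b/><b/><b/></a>.
# The representation is illustrative, not part of the XPath 1.0 spec.

class Node:
    def __init__(self, kind, name=None):
        self.kind = kind        # 'root' or 'element'
        self.name = name        # the expanded-name, where applicable
        self.children = []      # ordered list of child nodes
        self.parent = None

root = Node('root')
A = Node('element', 'a')
B = Node('element', 'b')

root.children = [A]             # {1 -> A}
A.children = [B, B, B]          # {1 -> B, 2 -> B, 3 -> B}: one node, three positions
A.parent, B.parent = root, A    # every non-root node has exactly one parent

# Document order: root first, then A, then B.
document_order = [root, A, B]

# "Nodes never share children": read as a constraint on the sets of
# children, the sets {A}, {B}, {} are pairwise disjoint, because the
# test compares node identity, not positions in the child lists.
child_sets = [set(n.children) for n in (root, A, B)]
for i, s in enumerate(child_sets):
    for t in child_sets[i + 1:]:
        assert s.isdisjoint(t)

print(sum(1 for n in document_order if n.kind == 'element'))   # 2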

That’s it for the general rules; I think it’s clear that the construction we are describing satisfies them. The subsections of section 5 have some more specific rules, including one that is relevant here.

  1. “There is an element node for every element in the document.” (Sec. 5.2.)

    This rule was cited by John Cowan in his answer to the earlier post; it seems to me it can be taken in either of two ways.

    First, we can take it (as John did, and as I did the first time through this analysis) as saying that for each element node in an instance of the data model, there is an element in the corresponding serial-form XML document, and conversely (I read it as claiming a one-to-one correspondence) for every element in the serial-form document, there is an element node in the data model instance.

    In this case, the rule seems to me to have two problems. The first problem is that the rule assumes a mapping between XML serial-form documents and data model instances and further assumes (if we take the word “the” and the use of the singular seriously — we are, after all, dealing with a formal specification written by a group of gifted spec-writers, and edited by two of the best in the business) that the mapping from data model instance to serial-form document is a function. But how can it be a function, given that the data model does not model insignificant white space? There are infinitely many serial-form XML documents equivalent to any given data model instance. Serialization will not be a function unless additional rules are specified. And in any case, when we set out to define a formal data model as is done in the XPath 1.0 spec, I think the usual idea is that we should define the data model in such a way as to make it possible to prove that every data-model instance corresponds to a class of XML documents treated as equivalent for XPath purposes, and that every XML document corresponds to a data model instance. If the rule really does appeal to the number of elements in the serial-form XML document, then it’s assuming, rather than establishing, the correspondence. It’s hard to believe that either Steve DeRose or James Clark could make that mistake.

    The second problem, on this reading of the rule, is that it’s hard to say whether a given data model instance obeys the rule, because it’s not clear that XML gives a determinate answer for the question.

    Some argue that XML documents are, by definition, strings that match the relevant production of the XML spec (on this see my post of 5 March 2008); by the same logic we can infer that an element is a string matching the element production.

    [Note: For what it’s worth I don’t think the XML spec explicitly identifies either documents or elements with strings; the argument that XML documents and elements are strings rests on the claim that they can’t be anything else, or that making them anything else would make the spec inconsistent. As I noted in my blog post of 2008, there is at least one passage which seems to assume that documents are strings (it speaks of a document matching the document production), but I believe that passage is just a case of bad drafting.]

    If for discussion’s sake we accept this argument, then it seems we must ask ourselves: is the string consisting of the four characters U+003C, U+0062, U+002F, U+003E, in order, one string or three strings?

    The answer, as students of philosophy will have been shouting out at home for some moments now, is “yes”. If by character you mean ‘character type’, then one string (or string type). If on the other hand you mean ‘character token’, then for the document shown above, I think pretty clearly three strings (string tokens). (A short illustration of the distinction follows at the end of this list.)

    So, on this first reading of the rule, check. Two distinct elements in the XML (counting string types), two in the data model instance. (To show that this rule excludes the model instance we’re discussing, it would be necessary to show that the serialized XML document has four elements, and that counting only two elements is inconsistent with the XML spec. Given how coy the XML spec is on the nature of XML documents, I don’t believe such a showing possible.)

    The second reading of the rule is that “document” does not mean, in this sentence, something different from the data model instance, but is just a way of referring to the entirety of the data model instance itself. A quick glance at the usage of the word “document” in the XPath 1.0 spec suggests that that is in fact its most common usage. In recent years, influenced perhaps by the work on the XPath 2.0 data model, with formalists of the caliber of Mary Fernández and Phil Wadler, many people have begun to think it natural to define an abstract model independently of the XML spec, and then (as I suggested above) establish in separate steps that there is a correspondence between the set of all XML documents viewed as character sequences and the set of all instances of the data model.

    The XPath 1.0 spec seems to take a slightly different tack, rhetorically. The definition of the data model begins

    XPath operates on an XML document as a tree. This section describes how XPath models an XML document as a tree.

    I take this as a suggestion that the data model instance operated on by XPath 1.0 can be thought of not as a thing separate from the XML document (whatever that is) but as a particular way of looking at and thinking about the XML document. I think it’s true that there was (historically speaking) no consensus among the XML community at that time that the term XML document referred to a string, as opposed to a tree. I think the idea would have met fierce resistance.

    On this reading, the rule quoted above is either a vacuous statement, or a statement about usage, establishing the correspondence (or equivalence) between the terms element and element node.

    So, on this second reading, check. Two elements, two element nodes. Same thing, really, element node and element.
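
To come back for a moment to the type/token question: the distinction is easy to operationalize. A couple of lines of Python (mine, purely for illustration) count the elements both ways:

import re

doc = "<a><b/><b/><b/></a>"

tokens = re.findall(r"<b/>", doc)   # occurrences of the string: tokens
types = set(tokens)                 # distinct string values: types

print(len(tokens))                  # 3 string tokens
print(len(types))                   # 1 string type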

As I say, I think it’s quite clear which answer the XPath 1.0 spec intends the question to have: plenty of places in the spec clearly rely on element nodes never having themselves as siblings, just as plenty of places rely on element nodes never having more than one parent. Both properties reflect a common-sense interpretation of the element structure of XML. I believe the point of defining the data model explicitly is to eliminate, as far as possible, the need to appeal to common sense and “what everyone knows”, to get the required postulates down on paper so that any implementation which respects those postulates and obeys the constraints will conform and inter-operate. For the parent relation, the definition of the model makes the common-sense interpretation of XML explicit. But not (as far as I can see) for the sibling relation.

Perhaps the creators of the XPath 1.0 spec felt that no explicit statement about no elements being their own siblings was necessary, because it followed from other properties of the model as specified. If so, I think either I must have missed something, or (less likely, but still possible) they did. If the property is to hold for all instances of the model, and if it does not follow from the current definition of the model, then perhaps it needs to be stated explicitly as part of the definition of the model.

[When he reached the end of this post, my evil twin Enrique turned to me and asked “Who’s Yossarian? Was he a member of the XSL Working Group?” “No, he was a character in Joseph Heller’s novel Catch 22. The title of the post is a reference to an elaborate bit in chapter 18 of the novel.” “And by ‘elaborate,’” mused Enrique, “you mean —” “Exactly: that it’s too long to quote here and still claim fair use. Besides, this isn’t a commentary on Catch 22. Just search the Web (or the book) for the phrase ‘I see everything twice.’”]

An XPath 1.0 puzzle

[20 January 2010]

Consider the XML document shown below, and in particular consider its representation in the XPath 1.0 data model.

<a><b/><b/><b/></a>

How many element nodes are there in this document, regarded as an instance of the XPath 1.0 data model? I think it’s clear that, for purposes of XPath 1.0, the expected answer is four: one of type ‘a’ and three of type ‘b’, the latter three all children of the ‘a’ element.

I am finding it unexpectedly difficult to prove that conclusion formally on the basis of the definition of the data model given in the spec. I wonder if anyone else will have better luck.

Binary adder in ACL2 and Alloy

[11 January 2010]

Not too long ago, I wrote about cross training in programming languages, and the pleasure of rewriting things from one language into another.

Recently I had another occasion to think about that. I’ve been doing some work lately with two systems which I think of as similar, though they have different goals and emphases. ACL2 (‘a computational logic for applicative Common Lisp’) is an industrial-strength theorem prover, with a history of use for proofs of correctness for chip designs, programming languages, and mathematical propositions like the fundamental theorem of calculus. It’s a lot like a high-performance car, both in its power and in the fact that it spends a lot of time in the shop. Er, uh, I mean, it requires close, intelligent attention from the user. And its learning curve looks a lot like the approach to Mesa Verde. Alloy, by contrast, is an instance of what its designer calls “light-weight formal methods”; the language is kept simple for analysability’s sake, and no theorem prover is provided: instead, exhaustive searches through all the models of a system up to a given size are made, to find counter-examples to an assertion. It’s got a learning curve, too, but in my experience it’s pretty manageable.

I’m interested in both ACL2 and Alloy because both seem to me to offer help in making designs and specifications correct, and keeping them that way when changes are made. It would be nice to have a better sense of when one tool is more appropriate to a problem, and when the other. One conjecture I’ve entertained: Alloy for initial playing around with a design, ACL2 for fuller proofs of correctness (when real proofs are desired) after the design has settled down a bit. Solving the same problem using both tools seems like an obvious way to test that conjecture and get a sense of how the two compare.

So in a spirit of exploration, I recently took an example from the ACL2 textbook* and experimented with making an Alloy model of the same thing, which I finished the other day and have now published on the Alloy community site. The example covers a binary adder constructed from simple sub-units which add single-bit numbers. (I found it enlightening to read the Wikipedia article on adders; if you’re not an electrical engineer, you may, too.)

* Matt Kaufmann, Panagiotis Manolios, and J Strother Moore, Computer-Aided Reasoning: An Approach ([Austin]: [n.p.], 2002), section 10.5.1.

The experience poses a bit of a challenge to the conjecture just mentioned: although ACL2 does require some help proving that the relevant function terminates and that it’s correct, in some ways the ACL2 description seems simpler than the Alloy description. Certainly it’s more compact. To ensure that models are analysable, Alloy forbids recursive functions; finding an Alloy equivalent to the recursive ACL2 specification of the adder took me some head-scratching, and the result uses a lot of machinery that feels kind of ad hoc to me. But in a way, the fact that a non-recursive formulation can be found also feels like a demonstration of Alloy’s expressive power. (Short version: recursion can be changed to iteration, right? So think about using sequences.)
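
For concreteness, here is the kind of object being modeled, sketched in Python rather than in ACL2 or Alloy (the function names are mine, and this is an illustration, not the book’s definition): a ripple-carry adder built from single-bit full adders, with the recursion of the ACL2 version turned into iteration over the sequence of bit positions.

def full_adder(a, b, c):
    # Add bits a and b with carry-in c; return (sum bit, carry out).
    return a ^ b ^ c, (a & b) | (a & c) | (b & c)

def ripple_adder(xs, ys, carry=0):
    # Add two equal-length little-endian bit sequences.
    out = []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]            # the final carry is the high-order bit

assert ripple_adder([0, 1, 1], [1, 1, 0]) == [1, 0, 0, 1]   # 6 + 3 = 9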

A really good comparative sense of the two systems will require a lot more than one exercise of this kind; ideally, I’d like a whole series of ACL2-to-Alloy and Alloy-to-ACL2 translations. Currently, my Alloy seems to be up to translating at least some ACL2 exercises into Alloy; I am not certain my ACL2 skills are up to finding ACL2 equivalents for standard Alloy exercises. If you are interested in formal methods (and who else will still be reading at this point?), all I can say is: stay tuned. And if you do any cross-translation exercises of this kind, think about sharing them!

Philosophers make quick keyboardists

[17 December 2009]

A mnemonic for ACL2 induction

Lately I’ve been spending time working through a lot of exercises in ACL2. As a way of helping the user internalize the requirements for successful induction, several exercises ask for an explicit reformulation of a problem in terms of the ACL2 induction principle.

Don’t worry: I don’t want to try to explain the ACL2 induction principle here. It suffices for present purposes to observe that a fully explicit application of the ACL2 induction principle requires that you write down a number of things; you, dear reader, don’t need to understand what they are, only that they exist and need to be specified:

  • φ, the formula being proved
  • m, the measure to be used when computing the ‘size’ of a particular instance of the formula
  • qᵢ (for 1 ≤ i ≤ k), the conditions which determine the different induction steps: one induction step for each qᵢ
  • k, the number of induction steps (and thus of conditions)
  • σᵢ,ⱼ (for 1 ≤ i ≤ k and 1 ≤ j ≤ hᵢ), the substitutions applicable to condition qᵢ; each condition qᵢ may have up to hᵢ hypotheses and corresponding substitutions
  • hᵢ, the number of induction hypotheses for each induction step qᵢ
  • the measure conjectures: (a) that the measure given always produces an ordinal value, and (b) that the measure decreases on each recursive call (i.e. in each induction hypothesis)

After a while, my evil twin Enrique got tired of watching me flip back and forth between the statement of the problem I was trying to solve and the page that showed all the things that needed to be written down; he said “Haven’t you memorized that list yet?” “No,” I said. “It’s not that simple a list, is it?”

“Sure it is,” he said. “Just use a mnemonic to remember it. The full list, with all subscripts, is

φ, m, qᵢ (i ≤ k), k, σᵢ,ⱼ, hᵢ, mc

“So just remember

Philosophers make quick keyboardists; strength in judgment helps improve mental capacity.

“Or if you can remember the subscripts by yourself, and just need (φ, m, q, σ):

Philosophy multiplies quizzical subtleties.

“Easy, see?”

Sometimes I think Enrique has too much time on his hands.

Automata and infinite strings

[15 December 2009]

[This is another one of those ignorance-is-bliss postings. If I had studied automata theory properly, this would (I guess) have been covered in class; that would have deprived me of the fun of thinking about it without knowing the right answer. If you did study automata theory, and you know how infinite strings are handled, and it irritates you to see someone waffling on and on about it instead of just doing some homework and reading the damn literature, you might want to stop reading soon.]

Some time ago, Michael Kay suggested that it was pointless for the XSD datatypes spec to specify that the lexical representations of integers, or strings, or various other simple types, were finite sequences of characters with certain properties. No implementation, he pointed out, can reliably distinguish finite from infinite strings, so it’s a non-testable assertion.

[“True if you’re using conventional I/O and conventional representations of strings, maybe,” said Enrique. “But if you represent the sequence of characters using a description, rather than an array of characters, it’s not clear that that’s true. Instead of the sequence "3.141592...", store an algorithm for calculating, on demand, the nth digit of the decimal expansion of π. Ditto for the square root of 2. And so on!” “You may be right,” I said. “But that wasn’t what I wanted to talk about, so be quiet.”]
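
(Enrique’s idea is easy enough to make concrete; the sketch below, in Python, is mine, not his. For a rational number the digit-producing procedure is just long division; π or the square root of 2 would need something cleverer, such as a spigot algorithm.)

from itertools import islice

def decimal_digits(num, den):
    # Yield, on demand and forever, the digits of num/den (with
    # 0 <= num < den) after the decimal point, by long division.
    r = num % den
    while True:
        r *= 10
        yield r // den
        r %= den

print(list(islice(decimal_digits(1, 7), 12)))   # [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]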

The working group declined the proposal to drop the word “finite” on the grounds that if the strings in question are required to be finite, then we know that all the lexical representations of integers (for example) can in principle be recognized by a finite state automaton. Without the restriction to finite sequences, most of what people know about finite state automata isn’t guaranteed to apply.

I found myself wondering this morning about the possible application of automata to infinite and semi-infinite strings. I know that in principle automata theorists have historically not restricted their interest to finite automata; it seems plausible to assume they have also considered infinite strings. But I don’t know what they have said, without spending time looking it up; instead, I am going to enjoy myself for a little while seeing how much I can figure out for myself.

One obvious question to answer is: if you want to use an automaton to identify infinite sequences, how do you define acceptance of the sequence? For a finite sequence, you ask what state you’re in at the end of the sequence, and whether that state is an “accept state” or not. That won’t work for an infinite sequence: there is no final state.
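
For contrast, the finite case is easy to write down; here is a sketch in Python (the representation of the automaton as a transition function is my choice):

def accepts_finite(delta, start, accept_states, s):
    # Standard DFA acceptance of a finite string: run the automaton
    # over the whole input and let the final state decide.
    state = start
    for c in s:
        state = delta(state, c)
    return state in accept_states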

Perhaps we can consider the sequence of states the automaton enters and define acceptance in terms of that sequence. Possible answers:

  1. Accept if (a) the automaton eventually ends up in a single state which it never again leaves, and (b) that state is an accept state.
  2. Accept if there is some point in the sequence of states such that every state following that point is an accept state.

These would work (in the sense of providing a yes/no answer). Do these rules for acceptance of strings define sets of automata with different discriminating power?

It seems obvious that they do, but what exactly are the differences?

Consider, for example, automata for recognizing various classes of numbers written as an infinite sequence of decimal digits. Numbers seem to be on my mind, perhaps because of the tie-in to XSD datatypes.

For such infinite strings of digits (including a decimal point), integers have the property that every digit to the right of (i.e. following) the decimal point is a 0. If you build the obvious automaton, for an integer it will spend all its time in the zero-after-decimal-point state, and for a non-integer it will, eventually, end up caught in an error state.

[Enrique tells me I should pause to draw pictures of these automata, but I’m not going to take the time just yet. Apologies to those who find it hard to visualize what I’m talking about.]

So the first acceptance rule suffices for recognizing integers. It may be relevant that the same automaton can be used to recognize finite strings as representing integers: any prefix of the infinite string representing an integer will also be accepted as representing an integer.

The first rule would also suffice to allow us to build a recognizer for certain fractions, e.g. 1/3: the infinite string denoting 1/3 ends up perpetually in the “we’ve just read a 3” state.

On the other hand, it doesn’t suffice for all rationals: in decimal notation, 1/7 has an infinitely repeating sequence of digits (142857, if you were wondering). To distinguish 1/7 in decimal notation we’d need a cycle of six states in the automaton.

All rational numbers have a decimal expansion that eventually settles into an infinite series of repeated strings of digits (if only an infinitely repeating sequence of zeroes). So if we adopt the second rule for defining acceptance of the string, we can say: for every rational number, there is a finite state automaton that recognizes that rational number. And irrationals, which have no repeating sequences, aren’t recognizable by an automaton with finite states. (An automaton with infinitely many states might be able to recognize the decimal expansion of a particular irrational number, say π, but it’s hard to know what to do with that information — maybe it’s a reason to say that languages recognizable with an infinite automaton are not necessarily regular.)

That sounds like a nice pattern. It would be even nicer if we could devise an automaton to recognize the set of decimal expansions of rational numbers, but I suspect that’s not feasible, since the complement of that set is the irrationals, and being able to recognize the one set by regular means would entail being able to recognize the other, too.
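
The claims about these automata can at least be checked mechanically for eventually-periodic inputs. Here is a sketch (mine, in Python): write the infinite string as a finite prefix plus a cycle repeated forever; on such an input the run of a deterministic automaton is itself eventually periodic, so the set of states visited infinitely often can be computed exactly, and each acceptance rule becomes a predicate on that set.

def inf_states(delta, start, prefix, cycle):
    # States visited infinitely often on the input prefix + cycle
    # repeated forever.  Track the state at the start of each pass
    # through the cycle until one repeats; from there on, the same
    # passes recur forever.
    s = start
    for c in prefix:
        s = delta(s, c)
    seen, passes = {}, []
    while s not in seen:
        seen[s] = len(passes)
        visited = []
        for c in cycle:
            s = delta(s, c)
            visited.append(s)
        passes.append(visited)
    return {state for p in passes[seen[s]:] for state in p}

# First automaton: integers.  After the decimal point, nothing but zeros.
def d_int(state, c):
    if state == 'before':
        return 'after' if c == '.' else ('before' if c.isdigit() else 'dead')
    return 'after' if state == 'after' and c == '0' else 'dead'

print(inf_states(d_int, 'before', '42.', '0'))    # {'after'}: rule 1 accepts
print(inf_states(d_int, 'before', '42.', '10'))   # {'dead'}: rejected

# Second automaton: 1/7 = 0.142857142857...; a cycle of six states,
# all of them accept states.
def d_7(state, c):
    if state == 'start':
        return 'point' if c == '0' else 'dead'
    if state == 'point':
        return 0 if c == '.' else 'dead'
    if state == 'dead':
        return 'dead'
    return (state + 1) % 6 if c == '142857'[state] else 'dead'

print(inf_states(d_7, 'start', '0.', '142857'))   # {0, 1, 2, 3, 4, 5}: rule 2
                                                  # accepts, rule 1 does not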

Does it make sense to require that the automaton eventually end up spending all its time in accept states? (Or equivalently, that the sequence of states have a suffix in which every element in the suffix is an accept state.)

What if that is too restrictive a rule? What if we said instead

  3. Accept if at every point in the sequence of states there are an infinite number of accept states among the states following that point.

That is, allow the string to put the automaton into a non-accepting state, as long as it’s only temporary, and it eventually gets back into an accepting state.

Consider an automaton which has two states, A and B. Every time a B is found in the input, we go to state B; for any other symbol we go to state A. B is an accept state.

If we adopt the second story about termination, a string ending in an unending series of Bs will be accepted and is thus recognizable by an automaton. A string with an infinite number of Bs, interspersed with other symbols, will not be accepted by this automaton (nor by any other, as far as I can tell).
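
Using the same prefix-plus-cycle trick as in the earlier sketch (repeated here so that the fragment stands alone), the difference between the second rule and this new third rule can be exhibited directly:

def inf_states(delta, start, prefix, cycle):
    # As before: states visited infinitely often on the eventually-
    # periodic input prefix + cycle repeated forever.
    s = start
    for c in prefix:
        s = delta(s, c)
    seen, passes = {}, []
    while s not in seen:
        seen[s] = len(passes)
        visited = []
        for c in cycle:
            s = delta(s, c)
            visited.append(s)
        passes.append(visited)
    return {state for p in passes[seen[s]:] for state in p}

delta = lambda state, c: 'B' if c == 'B' else 'A'
accepting = {'B'}

rule2 = lambda inf: inf <= accepting       # eventually always in accept states
rule3 = lambda inf: bool(inf & accepting)  # in accept states infinitely often

tail = inf_states(delta, 'A', 'xy', 'B')   # xyBBBB...: ends in unbroken Bs
mixed = inf_states(delta, 'A', '', 'xB')   # xBxBxB...: infinitely many Bs,
                                           # interspersed with other symbols

print(rule2(tail), rule3(tail))            # True True
print(rule2(mixed), rule3(mixed))          # False True: the rules differ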

OK, that seems to establish (if we accept the conjecture about strings with infinitely many Bs) that the second and third rules define distinct sets of languages. I suppose that one chooses to use the second rule, or the third, or some other I haven’t thought of yet, in part based on whether it feels right to count as regular the languages one can recognize using that rule.

Hmm. OK, time to look at the bookshelves.

I’ve just checked and found that John E. Hopcroft and Jeffrey D. Ullman, in Introduction to automata theory, languages, and computation (Reading: Addison-Wesley, 1979), restrict their attention to finite strings.

Dick Grune and Ceriel J. H. Jacobs, Parsing techniques: a practical guide, second edition (New York: Springer, 2008), don’t explicitly impose this restriction. But a quick scan of their opening pages doesn’t show any explicit consideration of infinite sequences of symbols, either. I’m guessing they do treat infinite input somewhere, if only because anyone who can contemplate van Wijngaarden grammars, which have infinite numbers of context-free rules (and remember, Grune didn’t just contemplate van Wijngaarden grammars, he wrote a parser generator for them), is not going to be frightened by infinite strings.

I suppose the idea of thinking seriously about infinitely long sentences in a language is one I first encountered in D. Terence Langendoen and Paul Postal, The vastness of natural languages (Oxford: Blackwell, 1984). To whom (for this, as for many other things) thanks!

I’m pretty sure that there was some treatment of infinite automata and/or infinite input strings in S. C. Kleene, “Representation of events in nerve nets and finite automata”, in Automata studies, ed. C. E. Shannon and J. McCarthy (Princeton: PUP, 1956), and V. M. Glushkov, “The abstract theory of automata”, Russian mathematical surveys: a translation of the survey articles and of selected biographical articles in Uspekhi matematicheskikh nauk 16 (1961). They are both kind of tough sledding, but I suppose I really ought to go back and read them carefully with an eye to this topic.