Tell me, Captain Yossarian, how many elements do you see?

[22 January 2010]

In an earlier post, I asked how many element nodes are present in the following XML document’s representation in the XPath 1.0 data model.

<a><b/><b/><b/></a>
I think the spec clearly expects the answer “four” (a parent and three children). More than that, I think the spec reflects the belief of its authors and editors that that answer follows necessarily from the properties of the data model as defined in section 5 of the spec.

But I don’t think “four” is the only answer consistent with the data model as defined.

In particular, consider the answer “two”: one ‘a’ element node, and one ‘b’ element node (which for brevity I’ll just call A and B from here on out; note that A and B are element nodes whose generic identifiers happen to be ‘a’ and ‘b’). As far as I can tell, the abstract object just defined obeys all the rules which define the XPath 1.0 model. The rules which seem to apply to this document are these:

  1. “The tree contains nodes.”

    Check: it does; here it contains three nodes: the root node, A, and B.

  2. “Some types of nodes … have an expanded-name …”

    Check: here the names are “a” and “b”.

  3. “There is an ordering, document order, defined on all the nodes in the document …”

    Check: in the model instance I have in mind the nodes are ordered. (In fact they have a total order, which is more than the spec explicitly requires here.) The root node is first, A second, and B third.

  4. “… corresponding to the order in which the first character of the XML representation of each node occurs in the XML representation of the document after expansion of general entities.”

    Check: see the discussion of start-tag positions under item 7 below.

  5. “Thus, the root node will be the first node.”

    Check: the root node is first in the ordering described under item 3.

  6. “Element nodes occur before their children.”

    Check: The element node A occurs before its child B in the ordering.

  7. “Thus, document order orders element nodes in order of the occurrence of their start-tag in the XML (after expansion of entities).”

    Check: the start-tags for B begin at positions 4, 8, and 12 of the document’s serial form (counting from 1), and the start-tag for A begins at position 1. So the order of the start-tags in the XML matches the order of the nodes in the model.

    If we had several elements with multiple occurrences and thus multiple start-tags, and the positions of the start-tags were intermingled (as in <a> <b/> <c/> <b/> <c/> <b/> </a>), then it would appear that we had only a partial order on them. If the spec specified that document order was a total ordering over all nodes, we might have a problem. But it doesn’t actually say that; it just speaks of an “ordering”; it would seem strange to argue that a partial ordering is not an “ordering”.

  8. “Root nodes and element nodes have an ordered list of child nodes.”

    Check: the root node’s list of children is {1 → A}, A has the list {1 → B, 2 → B, 3 → B}, and B has the empty list {}.

  9. “Nodes never share children: if one node is not the same node as another node, then none of the children of the one node will be the same node as any of the children of another node.”

    Check: the sets {A}, {B}, and {} (representing the children, respectively, of the root node, A, and B) are pairwise disjoint.

  10. “Every node other than the root node has exactly one parent, which is either an element node or the root node.”

    Check. A has the root node as its parent, B has A as its parent.

  11. “A root node or an element node is the parent of each of its child nodes.”

    Check: the root’s only child is A, and A’s parent is the root. A’s only child is B, and B’s parent is A.

  12. “The descendants of a node are the children of the node and the descendants of the children of the node.”

    Check. The descendants of the root node are {A, B}, those of A are {B}, those of B are {}.
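The construction, and the start-tag positions cited under item 7, can be sketched in a few lines of Python. (The class and variable names are mine, and the serial form of the document is assumed from the positions cited above; nothing here comes from the spec itself.)

```python
import re

# Serial form assumed from the start-tag positions cited under item 7.
doc = "<a><b/><b/><b/></a>"

# Item 7: positions of the start-tags, counting characters from 1.
positions = [m.start() + 1 for m in re.finditer(r"<[ab]", doc)]
print(positions)  # [1, 4, 8, 12]

# The "two element node" construction: one node object for 'a', one for
# 'b', with the single 'b' node listed three times as a child of 'a'.
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []  # ordered list of children; repetition allowed

root, a, b = Node(None), Node("a"), Node("b")
root.children = [a]
a.children = [b, b, b]  # the same node, three times

# Rule 9 (nodes never share children): the child sets are pairwise disjoint.
child_sets = [set(map(id, n.children)) for n in (root, a, b)]
for i, s in enumerate(child_sets):
    for j, t in enumerate(child_sets):
        if i != j:
            assert s.isdisjoint(t)

# A's child list has length three, yet contains only one distinct node.
print(len(a.children), len(set(map(id, a.children))))  # 3 1
```

The last line is the crux: the ordered list of children has length three, but contains only one distinct node, and no rule quoted above forbids that.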

That’s it for the general rules; I think it’s clear that the construction we are describing satisfies them. The subsections of section 5 have some more specific rules, including one that is relevant here.

  1. “There is an element node for every element in the document.” (Sec. 5.2.)

    This rule was cited by John Cowan in his answer to the earlier post; it seems to me it can be taken in either of two ways.

    First, we can take it (as John did, and as I did the first time through this analysis) as saying that for each element node in an instance of the data model, there is an element in the corresponding serial-form XML document, and conversely (I read it as claiming a one-to-one correspondence) for every element in the serial-form document, there is an element node in the data model instance.

    In this case, the rule seems to me to have two problems. The first problem is that the rule assumes a mapping between XML serial-form documents and data model instances and further assumes (if we take the word “the” and the use of the singular seriously — we are, after all, dealing with a formal specification written by a group of gifted spec-writers, and edited by two of the best in the business) that the mapping from data model instance to serial-form document is a function. But how can it be a function, given that the data model does not model insignificant white space? There are infinitely many serial-form XML documents equivalent to any given data model instance. Serialization will not be a function unless additional rules are specified. And in any case, when we set out to define a formal data model as is done in the XPath 1.0 spec, I think the usual idea is that we should define the data model in such a way as to make it possible to prove that every data-model instance corresponds to a class of XML documents treated as equivalent for XPath purposes, and that every XML document corresponds to a data model instance. If the rule really does appeal to the number of elements in the serial-form XML document, then it’s assuming, rather than establishing, the correspondence. It’s hard to believe that either Steve DeRose or James Clark could make that mistake.

    The second problem, on this reading of the rule, is that it’s hard to say whether a given data model instance obeys the rule, because it’s not clear that XML gives a determinate answer for the question.

    Some argue that XML documents are, by definition, strings that match the relevant production of the XML spec (on this see my post of 5 March 2008); by the same logic we can infer that an element is a string matching the element production.

    [Note: For what it’s worth I don’t think the XML spec explicitly identifies either documents or elements with strings; the argument that XML documents and elements are strings rests on the claim that they can’t be anything else, or that making them anything else would make the spec inconsistent. As I noted in my blog post of 2008, there is at least one passage which seems to assume that documents are strings (it speaks of a document matching the document production), but I believe that passage is just a case of bad drafting.]

    If for discussion’s sake we accept this argument, then it seems we must ask ourselves: is the string consisting of the four characters U+003C, U+0062, U+002F, U+003E, in order, one string or three strings?

    The answer, as students of philosophy will have been shouting out at home for some moments now, is “yes”. If by character you mean ‘character type’, then one string (or string type). If on the other hand you mean ‘character token’, then for the document shown above, I think pretty clearly three strings (string tokens).

    So, on this first reading of the rule, check. Two distinct elements in the XML (counting string types), two in the data model instance. (To show that this rule excludes the model instance we’re discussing, it would be necessary to show that the serialized XML document has four elements, and that counting only two elements is inconsistent with the XML spec. Given how coy the XML spec is on the nature of XML documents, I don’t believe such a showing possible.)

    The second reading of the rule is that “document” does not mean, in this sentence, something different from the data model instance, but is just a way of referring to the entirety of the data model instance itself. A quick glance at the usage of the word “document” in the XPath 1.0 spec suggests that that is in fact its most common usage. In recent years, influenced perhaps by the work on the XPath 2.0 data model, with formalists of the caliber of Mary Fernández and Phil Wadler, many people have begun to think it natural to define an abstract model independently of the XML spec, and then (as I suggested above) establish in separate steps that there is a correspondence between the set of all XML documents viewed as character sequences and the set of all instances of the data model.

    The XPath 1.0 spec seems to take a slightly different tack, rhetorically. The definition of the data model begins

    XPath operates on an XML document as a tree. This section describes how XPath models an XML document as a tree.

    I take this as a suggestion that the data model instance operated on by XPath 1.0 can be thought of not as a thing separate from the XML document (whatever that is) but as a particular way of looking at and thinking about the XML document. I think it’s true that there was (historically speaking) no consensus among the XML community at that time that the term XML document referred to a string, as opposed to a tree. I think the idea would have met fierce resistance.

    On this reading, the rule quoted above is either a vacuous statement, or a statement about usage, establishing the correspondence (or equivalence) between the terms element and element node.

    So, on this second reading, check. Two elements, two element nodes. Same thing, really, element node and element.
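For what it’s worth, the token/type counting can be done mechanically; a minimal sketch in Python, assuming the serial form under discussion:

```python
import re

# Serial form assumed, as in the discussion above.
doc = "<a><b/><b/><b/></a>"

# Occurrences (tokens) of the four-character string U+003C U+0062
# U+002F U+003E, versus distinct string values (types).
starts = [m.start() for m in re.finditer(re.escape("<b/>"), doc)]
values = {doc[i:i + 4] for i in starts}

print(len(starts))  # 3 string tokens
print(len(values))  # 1 string type
```

Both counts are right; which one the XML spec means by “element” is exactly the question it declines to settle.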

As I say, I think it’s quite clear which answer the XPath 1.0 spec intends the question to have: plenty of places in the spec clearly rely on element nodes never having themselves as siblings, just as plenty of places rely on element nodes never having more than one parent. Both properties reflect a common-sense interpretation of the element structure of XML. I believe the point of defining the data model explicitly is to eliminate, as far as possible, the need to appeal to common sense and “what everyone knows”, to get the required postulates down on paper so that any implementation which respects those postulates and obeys the constraints will conform and inter-operate. For the parent relation, the definition of the model makes the common-sense interpretation of XML explicit. But not (as far as I can see) for the sibling relation.

Perhaps the creators of the XPath 1.0 spec felt that no explicit statement about no elements being their own siblings was necessary, because it followed from other properties of the model as specified. If so, I think either I must have missed something, or (less likely, but still possible) they did. If the property is to hold for all instances of the model, and if it does not follow from the current definition of the model, then perhaps it needs to be stated explicitly as part of the definition of the model.
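Such a postulate might read: no node appears more than once in any ordered list of children. A toy formulation (my wording and my representation, not the spec’s):

```python
# Hypothetical extra postulate: a node appears at most once in any
# ordered list of children.  Toy node representation, invented here.
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def distinct_children(node):
    # True iff no node occurs twice in this node's child list.
    return len(node.children) == len(set(map(id, node.children)))

b = Node("b")
two_node = Node("a", [b, b, b])  # the construction discussed above
four_node = Node("a", [Node("b"), Node("b"), Node("b")])  # intended model

print(distinct_children(two_node))   # False: excluded by the postulate
print(distinct_children(four_node))  # True: the intended model passes
```

One such line in section 5 would, as far as I can see, close the gap.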

[When he reached the end of this post, my evil twin Enrique turned to me and asked “Who’s Yossarian? Was he a member of the XSL Working Group?” “No, he was a character in Joseph Heller’s novel Catch 22. The title of the post is a reference to an elaborate bit in chapter 18 of the novel.” “And by ‘elaborate,’” mused Enrique, “you mean —” “Exactly: that it’s too long to quote here and still claim fair use. Besides, this isn’t a commentary on Catch 22. Just search the Web (or the book) for the phrase ‘I see everything twice.’”]

An XPath 1.0 puzzle

[20 January 2010]

Consider the XML document shown below, and in particular consider its representation in the XPath 1.0 data model.

<a><b/><b/><b/></a>
How many element nodes are there in this document, regarded as an instance of the XPath 1.0 data model? I think it’s clear that, for purposes of XPath 1.0, the expected answer is four: one of type ‘a’ and three of type ‘b’, the latter three all children of the ‘a’ element.

I am finding it unexpectedly difficult to prove that conclusion formally on the basis of the definition of the data model given in the spec. I wonder if anyone else will have better luck.

XML as a sort-of open format

[3 December 2009]

I just encountered the following statements in technical documentation for a family of products which I’ll leave nameless.

This document does not describe the complete XML schema for either [Application 1] or [Application 2]. The complete XML schema for both applications is not available and will not be made public.

Perhaps there can be good reasons for such a situation. Perhaps the developers really don’t know how to use any existing schema language to describe the set of documents they actually accept; perhaps only a Turing machine can actually identify the set of documents accepted, and the developers were unwilling to work with a simpler set whose membership could be more cheaply decided. (Well, wait, those may be reasons, but they don’t actually qualify as “good”.)

I wonder whether this is an insidious attempt to make it look as though the products have an open format (See? It’s XML! How much more open can you get?) while ensuring that the commercial products in question remain the only arbiters of acceptable documents? Or whether the programmers in question were just too lazy to specify a clean vocabulary and ensure that their software handles all documents which meet some standard of validity that does not require Turing completeness?

Having a partially defined XML format is, at least for me, still a great deal more convenient than having the format be binary and completely undocumented. But it certainly seems to fall a long distance short of what XML could make possible.

Changing stylesheets in midstream

[19 October 2009]

My evil twin Enrique came by the other day in a great state of excitement. There’s been a bit of a kerfuffle in some W3C working groups lately, he told me. As some readers will know, the W3C recently unveiled a new design for their web site. (Many people seem to want to call this a site redesign, but as far as I know most of the site was originally developed by individuals and working groups working autonomously, and outside the front page, the Tech Reports page, and the other pages maintained by the Communications Team, the site never had a consistent design to begin with. Surely it’s only a redesign if there was a design there in the first place?)

Almost all the comments on the new design appear to be positive — at least, they were until some spec editors and working group chairs noticed that the site redesign had included reformatted versions of their working groups’ current Recommendations, which the working groups had not looked at before and which proved, when examined, to be sub-optimal in some ways.

“Sub-optimal is putting it mildly,” laughed Enrique. “Some of the specs looked like night soil on toast. And some of the editors were fit to be tied.” Enough pain was expressed over the new look of the old specs, apparently, that after a couple of days the standard URLs for existing Recommendations were all reset, and no longer point to the reformatted versions. (The reformatted versions are still around — no one at W3C ever deletes anything, it’s a point of some pride — though you have to know what URIs to point to.)

One of the most visible problems is that in some specs, extra space was appearing before and after large numbers of hyperlinked special terms. “You know what it was?” chortled Enrique. “Some bright young thing at some bright young design agency seems to have thought a 20px padding would be a good idea for the CODE element. Do these people not know any HTML? Here, look at the stylesheet!” He pulled out a hand-held and showed me a rule from one of the new stylesheets (reformatted here for legibility):

h1, h2, h3, h4, h5, h6, ul, ol, dl,
p, pre, blockquote, code {
padding: 20px 20px 0 20px;
}

He was cackling with malice now. “The stylesheet author seems to have thought that code was not for inline material but for indented blocks. Where do they get these people? And giving measurements in pixels is so dead-tree-oriented!”

“Now, now,” I said. “I’m sure you were a bright young thing once yourself.”

“Not me,” he returned brightly. “I was fifty-two the day I was born, and I’ve always been dumb as a post.”

“Two, actually. Odd, though,” I said. “When I retrieve the reformatted versions of the XML and XSLT 2.0 specs, I don’t see extra white space around code elements.” I retrieved the stylesheet with the bogus padding values for code; the rule now read

h1, h2, h3, h4, h5, h6, ul, ol, dl,
p, pre, blockquote {
padding: 20px 20px 0 20px;
}

“Those bastards!” Enrique cried. “You mean they’ve fixed it? I was going to charge them big bucks to tell them what was wrong!” And he stomped off again in spluttering disappointment. I haven’t seen him since, but I’m not worried; he’ll get over it.

[The new W3C site is the result of a long design history, and really does appear to be an improvement, for the most part. It makes it much easier than the old site to find your way around (or so I believe — I knew the old site structure well enough that the new one just confuses me; I assume that will pass). The new look intended for W3C technical reports (i.e. Recommendations, Notes, Working Drafts, etc.) can be inspected on the beta site’s Tech Reports page, or the beta site’s version of the new Standards page. I haven’t yet decided whether I think the new tech report styling is an improvement or not, and if it is, whether it’s enough of an improvement to justify the disruption of restyling the entire body of existing Recommendations. I’ll be interested in readers’ reactions.

One thing is unsurprising: if you launch a new stylesheet on technical material whose authors and editors pride themselves on precision, you would do well not to make it public until they have confirmed that it is OK. And it would be smart, before you let them see it at all, and certainly before you make it public, to make sure the new stylesheet doesn’t introduce highly visible problems like 20 pixels of extra white space around every code element.

Live and learn.]

Looking for open source XML software?

[18 August 2009]

Here’s a concrete example of the difference between the metadata-aware search we would like to have, and the metadata-oblivious full-text search we mostly have today, encountered the other day at the Balisage 2009 conference in Montréal.

Try to find a video of the song “I don’t want to go to Toronto”, by a group called Radio Free Vestibule.

When I search for “I don’t want to go to Toronto”, I get, in first place, a song called “I don’t want to go”, performed live in Toronto. When I put quotation marks around the title, it tells me nothing matches and shows me a video of Elvis Costello singing “I don’t want to go to Chelsea”.

It’s always good to have concrete examples, and I always like real ones better than made-up examples. (Real examples do often have a disconcerting habit of bringing in one complication after another and involving more than one problem, which is why good ones are so hard to find. But I don’t see many extraneous complications in this one.)
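The difference is easy to make concrete in code. A toy sketch (the records and field names are invented for illustration): full-text search asks whether every query word occurs anywhere in a record, while metadata-aware search matches the query against the title field itself.

```python
# Toy catalogue; records and field names invented for illustration.
videos = [
    {"title": "I don't want to go", "venue": "Toronto"},
    {"title": "I don't want to go to Chelsea", "artist": "Elvis Costello"},
    {"title": "I don't want to go to Toronto", "artist": "Radio Free Vestibule"},
]

query = "I don't want to go to Toronto"

def fulltext(rec):
    # Metadata-oblivious: every query word, found anywhere in the record.
    text = " ".join(rec.values()).lower()
    return all(word in text for word in query.lower().split())

def by_title(rec):
    # Metadata-aware: the query matched against the title field alone.
    return rec["title"].lower() == query.lower()

print([r["title"] for r in videos if fulltext(r)])  # also matches the live-in-Toronto song
print([r["title"] for r in videos if by_title(r)])  # only the intended title
```

The full-text query matches the wrong song first for the same reason the real search engine did: “go” and “Toronto” both occur in the record, just not in the title.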
[25 August 2009]

Data persistence is a crapshoot. Load the dice.

-Dorothea Salo, Equipment and data curation, 7 August 2009 (on preferring widely supported open formats to niche formats and closed formats).

[30 September 2009]

Last week I participated in the XML Summer School organized by Eleven Informatics at St. Edmund Hall in Oxford. I hope the participants enjoyed it as much as the speakers did. The weather certainly cooperated, although it felt more autumnal than summery by the end of the week.

One of my responsibilities during the week was to give a survey of open-source software for XML applications; this turns out to be harder than it might look because there are so many, with such varying degrees of polish, reliability, and completeness. There are several lists of XML software, and open-source software, and open-source XML software (general, or in some specific categories) on the Web, but many of them appear not to have been maintained or updated in several years. (Honorable exceptions include the lists maintained by Ron Bourret on databases and XML, Lars Marius Garshol on XML tools and Topic-Map tools, and Tony Graham on XSLT testing tools.) So the lists I made, arbitrary and capricious though some aspects of them are, may be helpful.

Eventually I plan to turn the information gathered into a more convenient form, and set up some infrastructure to make it easier to maintain, but in the meantime the slides I prepared for the session may be helpful; they provide a coarsely categorized and tersely annotated list of some open-source XML software that readers of this klog may find interesting.
