Alloy as logical desk calculator

[26 March 2010]

Long ago I used a wonderful file-oriented database system called Watfile, which was designed as a sort of desk-calculator for data. It was designed for personal use, not industrial-strength data management, and its designers successfully resisted the temptation to add more features and more power at the cost of a more complex user interface. Watfile was to a full enterprise-class database as a desk calculator of the 1960s was to … oh, perhaps to Fortran. For suitable problems, the ease of setup far outweighed any considerations of power or completeness.

The experience of using Watfile for data manipulation tasks established in my mind the class of ‘desk-calculator-like’ packages for various kinds of problem.

Today I experimented with Alloy as a sort of logical desk calculator, and I’m happy to report that it passed the test with flying colors.

For reasons I won’t go into here, I’ve wondered a bit recently what it might look like to apply the technique of distinctive-feature analysis (originally developed for phonological descriptions of the sound systems of languages) to writing systems. When I sat down a few months ago with pencil and paper to see if I could devise a smallish set of typographic features which could (say) distinguish the twenty-six letters of the alphabet as I was taught it in first grade, I rapidly found that working solely with pencil and paper made me impatient: it was too tedious to look at the set of features already identified and see which letters could not yet be distinguished (because they had the same value for all the features in question).

When I came back to this problem this afternoon, I thought for a few minutes about what questions I’d like to be able to ask the machine. Given a specified set of graphemes (as a first exercise, I chose the lower-case alphabet) and a specified set of binary features (does the letter have an ascender? a descender? a vertical stroke? a full or partial circle or bowl? Is the stroke to the left of the bowl? …), with information about which graphemes have each feature, I want to be able to ask, first, whether a particular set of features suffices to distinguish each individual grapheme, or whether there are two or more graphemes which have the same value for all features in the set. And of course, if there are such sets of indistinguishable graphemes, what are they?

It occurred to me to solve the problem in Prolog; it would take just a relatively simple set of Prolog predicates to do what I wanted. But as I was preparing to launch X Windows, so that I could launch Prolog, I realized that I already had the Alloy Analyzer running. And so I wrote the predicates I wanted in Alloy instead of Prolog, to see whether it would work.

The upshot is: yes, it worked, and it was probably a bit easier to do than it would have been in Prolog. When I was thinking about how to set up the problem in Prolog, I found myself wondering about the best data representation to choose, and so on, almost as much as about the structure of the problem. I won’t say that Alloy posed no analogous problems — I did have to think for a moment or two about the best way to describe graphemes and distinctive features. But the high level of abstraction afforded by Alloy made the decision feel less binding, and made me feel a bit more comfortable experimenting. (It sounds strange to put it this way: after all, Prolog’s high level of abstraction is one of its great design strengths. But Prolog is also designed to be an efficient and effective programming language, which means that some details are exposed which have only procedural significance, and sometimes you find yourself thinking about them, even in situations where questions of execution efficiency don’t arise.)

In very short order, I found it possible to define a suitably abstract representation of graphemes and features, specify some useful functions and predicates for asking the questions described above, and specify a small set of features (ten) which have a certain degree of typographic plausibility and which suffice to distinguish the graphemes in question. (Ten binary features for twenty-six graphemes may seem high, given that the theoretical minimum is only five, and that ten bits suffice to distinguish a thousand objects, not just twenty-six. But writing, like natural language, has some redundancy. Feature sets used to analyse natural language sound systems are also often very inefficient.) The visualization tools did not prove very helpful, but the Evaluator feature of the Alloy Analyzer was a great help.
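To give a flavor of what I mean (this is only a sketch: the real definitions are on the web site mentioned below, and the predicate, letter, and feature names here are invented for this post), the model can run roughly like this:

sig Feature {}
sig Grapheme { features : set Feature }  // the features each grapheme possesses

// g1 and g2 agree on every feature in fs
pred same [fs : set Feature, g1, g2 : Grapheme] {
  g1.features & fs = g2.features & fs
}

// fs suffices to tell any two distinct graphemes apart
pred distinguishes [fs : set Feature] {
  all disj g1, g2 : Grapheme | not same[fs, g1, g2]
}

// the graphemes not yet distinguished from g by fs
fun confusable [fs : set Feature, g : Grapheme] : set Grapheme {
  { g2 : Grapheme - g | same[fs, g, g2] }
}

// A toy instance: with only these three features, b/d and p/q collapse.
one sig ascender, descender, bowl extends Feature {}
one sig b, d, p, q extends Grapheme {}
fact toy {
  Feature = ascender + descender + bowl
  Grapheme = b + d + p + q
  b.features = ascender + bowl
  d.features = ascender + bowl
  p.features = descender + bowl
  q.features = descender + bowl
}
run { distinguishes[Feature] } for 4  // no instance: these features do not suffice
run {} for 4                          // in the Evaluator, confusable[Feature, b] = d

The first run finds no instance, which is the Analyzer’s way of saying that three features are not enough; over an instance of the second run, the Evaluator answers queries like confusable[Feature, b] directly.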

If I pursue this work any further, I probably will put it into Prolog, where the interactive interface for expression evaluation is perhaps a bit more convenient than in Alloy. But it’s nice to know that Alloy can be used for this kind of problem, too.

Interested readers can find both the generic definitions and the specific graphemes and features for lower-case Latin letters (as used in Anglophone countries) on the Black Mesa Technologies web site.

The axes of XPath

[25 March 2010; error noticed by Dimitre Novatchev fixed 29 March 2010]

Steve DeRose and I have been discussing the XPath [1.0] data model recently (among other things), and in the course of the discussion an interesting observation has emerged.

It’s obvious that some of the axes in XPath expressions are inverses of each other, and also that some are transitive closures of others (or, going the other way, that some are transitive reductions of others). What surprised me a little was that (if for the moment you leave out of account the self and the XYZ-or-self axes, the attribute axis, the namespace axis, and also the preceding and following axes) all of the XPath axes fit naturally into a pattern that can be represented by three squares. (Will table markup work here? I wonder.) The first square represents the up/down axes:

Up/down
  parent    ancestor
  child     descendant

The next square covers sibling relations. Unlike parent and child, which are just shorthand for single steps along the up or down axis, XPath provides no syntactic sugar for preceding-sibling::*[1] and following-sibling::*[1], so I’ve invented the names “nextsib” and “prevsib” (marked with a star here to signal that they are invented):

Sideways
  *prevsib    preceding-sibling
  *nextsib    following-sibling

The third square describes overall document order; again, I’ve invented names for the single-step relations [note that the names used here for the transitive relations are given by XPath 2.0; XPath 1.0 doesn’t provide notation for them]:

Overall document order
  *prevnode    >>
  *nextnode    <<

[In the first version of this post, the right-hand columns were labeled preceding and following, but Dimitre Novatchev reminded me gently that these axes do not in fact correspond to document order: preceding excludes ancestors and following excludes descendants. That’s a plausible exclusion, since no one in their right mind would say that chapter one of Moby Dick precedes the first paragraph of Moby Dick. Contains, yes; precedes, no. In fact, I remember getting into an argument with Phil Wadler about this, early on in the days of the XML Query working group, not realizing (a) that the document ordering he was describing was actually prescribed by XPath 1.0, nor (b) that saying that ancestors precede their descendants in document order didn’t mean that the ancestors would have to be present on the preceding axis. Thank you, Dimitre! And sorry, Phil!]

In each table row, the relation on the right is the positive transitive closure of the one on the left, and the one on the left is the transitive reduction of the one on the right.

In each table column, the relations in the top and bottom rows are inverses of each other.

The tables make it easy to see that it suffices to take a single pair of relations on nodes as primitive (e.g. child [or better first-child] and nextsib, or parent and prevsib); everything else in the tree can be defined in terms of the two primitive relations. (It’s not completely clear to me yet whether any two relations will do as long as they are from different tables, or not. Taking nextnode and parent seems to work, as does the pair nextnode and child, but nextnode and first-child seem to pose problems — why can child be replaced by first-child in some situations but not others? Hmm.)
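To make that concrete (again only a sketch, with invented relation names; this is not the formalization Steve and I are working on), here is one way to build the whole tree from the two primitives and derive the other relations by closure and converse:

sig Node {
  firstchild : lone Node,  // leftmost child, if any (*first-child)
  nextsib    : lone Node   // next sibling to the right, if any (*nextsib)
}
one sig Root extends Node {}

fact tree {
  no ((firstchild + nextsib).Root)    // nothing points at the root
  no Root.nextsib                     // and the root has no siblings
  no (firstchild & nextsib)           // an edge is one or the other, not both
  Node = Root.*(firstchild + nextsib) // every node is reachable from the root
  all n : Node - Root | one ((firstchild + nextsib).n)  // every other node has exactly one incoming edge
}

// the up/down square
fun child      : Node -> Node { firstchild.*nextsib }
fun parent     : Node -> Node { ~child }
fun descendant : Node -> Node { ^child }
fun ancestor   : Node -> Node { ~descendant }

// the sideways square
fun follsib : Node -> Node { ^nextsib }  // following-sibling
fun precsib : Node -> Node { ~follsib }  // preceding-sibling

// the document-order square: XPath's following axis is
// ancestor-or-self / following-sibling / descendant-or-self,
// and everything after a node in document order is its
// descendants plus its following axis
fun following : Node -> Node { (*ancestor).follsib.(*descendant) }
fun preceding : Node -> Node { ~following }
fun docAfter  : Node -> Node { descendant + following }  // n.docAfter = all m with n << m
fun docBefore : Node -> Node { ~docAfter }

// document order should be a strict total order on the nodes
check { all disj m, n : Node | m in n.docAfter or n in m.docAfter } for 6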

There seem to be implications for the formalization of the data model (which is how we got here in the first place), but maybe also for teaching new users how to think about or learn XPath.

A small gotcha in Alloy’s closure operator

[24 March 2010]

Consider the following Alloy model:

sig Node {}
one sig gl { r, s : Node -> Node }{ s = *r }
run {}

It has no instances, and the Alloy Analyzer comments laconically “Predicate may be inconsistent.” But the similar model below does have instances. Why?

sig Node {}
one sig gl { r, s : univ -> univ }{ s = *r }
run {}

I ran into this phenomenon today, when I was trying to do some work relating to the definition of document order in the XPath data model. It’s convenient to have both a relation like the successor relation of arithmetic on the natural numbers, which gives you the next item in document order, and a relation like the less-than-or-equals relation of arithmetic, which is the reflexive transitive closure of the successor relation. And if you want to have both, you will want to specify, as is done above, that the one is the reflexive transitive closure of the other. And when you do, it’s a bit alarming to be told (and mostly very quickly) that nothing in your model has any instances at all.

It took me a couple hours to reduce the problem to the terms shown above (there were several complications which looked more likely to be the cause of the trouble, and which took time to eliminate), and then a little more time to realize that declaring the relations in question as Node -> Node or as univ -> univ made a difference.

I invite the interested reader, if a user of Alloy, to consider the two models and explain why one has instances and the other doesn’t. No peeking at what follows until you have a theory.

Found it? If you found it without spending a couple hours on it, my hat’s off to you.

The * operator produces the reflexive transitive closure of a binary relation r by essentially taking the union of the positive transitive closure of r and the identity relation iden. That is, for all relations r, *r is defined as ^r + iden.

The problem is that iden covers everything in the model, including the gl (globals) object and the automatically included integers. And the upshot is that s cannot satisfy the constraint s = *r while also satisfying its declaration (Node -> Node).
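One way to get the model I intended while keeping the Node -> Node declaration (just one possible repair, sketched in the style of the models above) is to restrict the closure explicitly to Node, so that the reflexive part stays inside the declared type; declaring the fields over univ, as in the second model, is the other obvious way out.

sig Node {}
one sig gl { r, s : Node -> Node }{ s = *r & (Node -> Node) }
run {}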

In his book Software Abstractions, Daniel Jackson remarks on the oddity of * including ‘irrelevant’ tuples in the relation, but (as he points out) it almost never matters, because in many contexts the irrelevant tuples are dropped out during evaluation of the containing expression. The consequence is that it’s possible to work with Alloy (as I have obviously been doing) with a mental model in which * is somehow smart enough to add only those tuples which are ‘relevant’ to the relation whose closure is being taken. That mental model proves to be an over-complexification.
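A small illustration of that point (my example, not Jackson’s): the stray identity tuples disappear as soon as the closure is joined with something of the right type.

sig Node { r : set Node }
// For a node n, n.*r = n + n.^r, which lies wholly within Node: the tuples
// iden contributes for atoms outside Node (the built-in integers, for
// instance) are dropped by the join, so this predicate behaves exactly as
// the 'smart' mental model predicts.
pred reachable [n, m : Node] { m in n.*r }
run reachable for 3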

One reason I like Alloy a lot is that it allows the user to operate at a fairly high level of abstraction, if you can just find the right level. Working in Alloy presents fewer of the small issues of syntax and data typing and so on that can bedevil attempts to explore problems even in high-level programming languages, and so you mostly get to your results a lot faster. But I guess no formal system is ever completely free of cases where the user stares blankly at the screen for some indefinite period of time, trying to figure out why the system is producing such a counter-intuitive and obviously wrong result.

In the words of the old MVS error message: Probable user error. Solution: correct and re-submit.

Day of Digital Humanities, 18 March 2010

[17 March 2010]

Tomorrow I’ll be participating in a mass experiment with self-consciousness, the 2010 edition of the Day of Digital Humanities. The organizers have persuaded close to 150 people who self-identify with the description “digital humanist” (list) to blog, during the course of 18 March, about what it is they actually spend their time doing. “The goal of the project” (say the organizers) “is to create a web site that weaves together the journals of the participants into a picture that answers the question, ‘Just what do computing humanists really do?’”

For the day, I’ll be using the special Day of Digital Humanities blog set up for me by the organizers; the blogs of all participants are aggregated on the project site; there is also an RSS feed.

XML Prague is over

[14 March 2010]

XML Prague took place yesterday and today, and I’m still coming down from the adrenaline high. Lots of good talks, in a single-track conference; I had the task of trying to knit them all together in the closing session.

I’ll do a fuller trip report later, if working on my taxes doesn’t prevent it. But Norm Walsh has already asked that I post at least the last bit of my remarks.

[Readers who did not attend the conference or follow the streaming video need to know that in the opening sessions of the two conference days, Tony Graham and Sharon Adler had each referred at critical moments to the book of Genesis. One of Sharon’s slides bore the title “In the beginning was SGML”, and Tony began his talk on Saturday with an extended reference to the book of Genesis: “In the beginning was the page. And the page was without form and void, …” Sharon wondered aloud whether people involved with descriptive markup all have God complexes; she may have something there. The text below also has some other references to things said at the conference, but I think for now I’ll spare readers the detailed annotation necessary to explain them all. Apologies in advance to any readers made uncomfortable by parodies of scripture.]

At the end of my talk, I described wandering through the streets of Prague, trying to find my way from the Strahov Monastery, where the conference dinner was held, back to my hotel, when suddenly I had a vision — one might almost say, a revelation.

And I saw in the right hand of him that sat on the throne an XML document canonicalized and serialized with EXI, containing seven pi-trees encrypted with seven public-key encryption key pairs.

And I saw a strong angel proclaiming with a loud voice, Who is worthy to parse the XML document, and to decrypt the encryption thereof?

And no one in heaven, nor in earth, neither under the earth, was able to parse the XML document, neither by buffering it in memory nor by processing it in streaming mode.

And I wept much, because no man was found worthy to parse and to process the XML document, neither to exploit its vocabulary-specific semantics nor to perform vocabulary-independent pretty-printing thereon.

And one of the elders saith unto me, Weep not: behold, the standards-compliant XML application, with support for C14n and EXI, for XML encryption and schema validation, and for interoperable stylesheet technologies like XSLT 17.2 and XSL FO 42.0, hath prevailed to open the XML document, and to decrypt the seven encryptions thereof, and to display it coherently, yea even for those who abhor the sight of angle brackets and prefer beautiful well-formatted text with tasteful images.

And every creature which is in heaven, and on the earth, and under the earth, and such as are in the sea, and all that are in them, heard I saying, Blessing, and honour, and glory, and power, be unto those that preserve data, and provide access to information, if not for ever and ever, then at least for the foreseeable future.

And the presentations, and the coffee breaks, were the second day.

And the nine conference organizers said, Amen. And the representatives of the two gold sponsors, and the four silver sponsors, and the four bronze sponsors, and the five media partners, and the three sister events, stood up and said Amen. And the one hundred and forty-four conference participants clapped their hands and thanked the conference organizers for a great conference focusing on information that shall outlive the applications which create and process it.