Gravity never sleeps (notations that use eval)

At the opening panel of XML 2007, Doug Crockford waxed eloquent on the weak security foundations of the Web (and managed, in a truly mystifying rhetorical move, to blame XML for them; apparently if XML had not been developed, people’s attention would not have been distracted from what he regards as core HTML development topics).

So during the discussion period I asked him “If you are concerned about security (and right you are to be so), then what on earth can have possessed you to promote a notation like JSON, which is most conveniently parsed using a call to eval? It’s very easy, very concise, and the moment you’ve done it there’s a huge Javascript injection hole in your application.”

Digression: before I go any further I should point out that JSON is easy to parse, and people have indeed provided parsers for it, so that the developer doesn’t have to use eval. And last April, Doug Crockford argued in a piece on JSON and browser security that JSON is security neutral (as near as I can make out, because the security problem is in the code that calls eval when it shouldn’t, so it’s really not JSON’s fault, JSON is just an innocent bystander). So there is no necessary relation between JSON and eval and code injection attacks.

And yet.

Those of sufficient age will well remember the GML systems shipped by IBM (DCF GML) and the University of Waterloo, and lots of people are still using LaTeX (well, some anyway, lots of them in computer science departments). These systems still exist, surely, on some machines, but I will describe them, as I think of them, in the past tense; apologies to those for whom they are still living systems. LaTeX and GML both supported descriptive markup; both provided extensible vocabularies for document structure that you could use to make reusable documents. And both were built on top of a lower-level formatting system, so in both systems it was possible, whenever it turned out to seem necessary, to drop down into the lower-level system (TeX in the case of LaTeX, Script in the case of GML).

Now, in both systems dropping down into the lower level notation was considered a little doubtful, a slightly bad practice that was tolerated because it was often so useful. It was better to avoid it if you could. And if you were disciplined, you could write a LaTeX or GML document without ever lapsing into the lower-level procedural notation. But the quality of your results depended very directly on the level of self-discipline you were able to maintain.

The end result turned out, in both cases, to be: almost no GML or LaTeX documents of any size are actually pure descriptive markup. At least, not the ones I have seen; and I have seen a few. Almost all documents end up a mixture of high- and low-level markup that cannot be processed in a purely declarative way. Why? Because there was no short-term penalty for violating the declarativity of the markup, and there was often a short-term gain that, at the crucial moment, masked the long-term cost. In this respect, JSON seems to be re-inventing the flaws of notations first developed a few decades ago.

To keep systems clean, the notation itself needs to drive the right behavior.

JSON makes very good use of Javascript’s literal object notation. But it’s a consequence of this fact that a JSON message can conveniently be processed by reading it into a variable and then running eval on the variable. (This is where we came in.) The moment you do this, of course, you expose your code to a Javascript injection attack.
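To make the temptation concrete, here is a minimal sketch (the hostile payload is invented for illustration, and the safe variant assumes a JSON.parse as supplied by Crockford’s json2.js, or built into later engines):

    // The convenient way: JSON text is legal Javascript, so eval
    // will happily "parse" it.  (The outer parentheses make the
    // braces read as an object literal, not a block.)
    var text = '{"vessel": "bottle", "messages": 3}';
    var data = eval('(' + text + ')');   // works; that is the problem

    // If the text comes from an attacker, eval runs whatever it says:
    var evil = '(alert("injected"), {"vessel": "bottle"})';
    var data2 = eval('(' + evil + ')');  // executes alert() en route

    // The disciplined way: a real parser accepts data, and only data.
    var safe = JSON.parse(text);
    // JSON.parse(evil) throws a SyntaxError instead of running code.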

To say “You don’t have to use eval — JSON has a very simple syntax and you can parse it yourself, or use an off-the-shelf parser, and in so doing protect yourself against the security issue,” seems to ignore an important fact about notations: they make some things easier and (necessarily) some things harder. They don’t force you to do things the easy way; they don’t prevent you from doing them the hard way. They don’t have to. The gentle pressure of the notation can be enough. It’s like gravity: it never lets up.

If the notation makes a dangerous or dirty practice easy, then the systems built with it will be spotlessly clean only if the users have the self-discipline to keep them clean. For most of us, that means: not very clean.

OK, end of digression.

When I asked my question, Doug Crockford answered reasonably enough that browsers already execute whatever Javascript they find in HTML pages they load. So JSON doesn’t make things any worse than they already were. (I’m not sure I can put into words just why I’m not finding much comfort in that observation.)

But there is a slight snag: Javascript isn’t used only in the browser. Since the conference, my colleague Thomas Roessler has written a number of blog entries outlining security problems in widgets that use Javascript; the most recent is his lightning talk at the 24th Chaos Communication Congress.

Be careful about slopes, slippery or otherwise. Gravity never sleeps.

Does XML have a future on the Web?

Earlier this month, the opening session of the XML 2007 conference was devoted to a panel session on the topic “Does XML have a future on the Web?” Doug Crockford (of JSON fame) and Michael Day (of YesLogic) and I talked for a bit, and then the audience pitched in.

(By the way — surely there is something wrong when the top search result for “XML 2007 Boston” is the page on the 2006 conference site that mentions the plans for 2007, instead of a page from the 2007 conference site. Maybe people are beginning to take the winter XML conference for granted, and not linking to it anymore?)

Michael Day began by pointing out that in the earliest plans, the topic for the session had included the word “still”, which had made him wonder: “Did XML ever have a future on the Web?” He rather thought not: XML, he said, was yet another technology originally intended for the browser that ended up on the server instead. No one serves XML on the Web, he said, and when they try to serve something as simple as XHTML, it’s not well-formed. (This, of course, is a simplistic caricature of his remarks, which were a lot more nuanced and thoughtful. But while he was speaking I was trying to remember what I was supposed to say; these are the bits of his opening that penetrated my skull and stuck.)

Doug Crockford surprised me a bit; from what I had read about JSON and his sometimes fraught relations with XML, I had expected him to answer “No” to the question in the session title. But he began by saying quite firmly that yes, he thought XML had a very long future on the Web. He paused while we chewed on that a moment, before putting in the knife. We know this, he said, because once any technology is deployed, it can take forever to get rid of it again. (You can still buy Cobol compilers, he pointed out.) If I understood him correctly, his view is that XML (or XHTML, or the two together with all their associated technologies) has been a huge distraction for the Web community, and nothing to speak of has been done on HTML or critical Web technologies for several years as a result. We need, he thought, to rebuild the Web from its foundations to improve reliability and security.

I regret now that I did not interrupt at this moment to point out that XHTML and XForms are precisely an effort (all in all, a pretty good one) to improve the foundations of the Web, but I wasn’t quick enough to think of that then. (I also didn’t think to say that being compared to Grace Murray Hopper, however indirectly and with whatever intention, is surely one of the highest compliments anyone has ever paid me. Thank you, Doug!) And besides, it’s bad form to interrupt other panelists, especially when it’s your turn to speak next.

Since I have cut so short what Michael Day and Doug Crockford said, I ought in perfect fairness to truncate my own remarks just as savagely, so the reader can evaluate what we said on some sort of equal footing. But this is my blog, so to heck with that.

Revised slightly for clarity, my notes for the panel read something like the following (I have added some material in italics, either to reflect extempore additions during the session or to reflect later pentimenti). I’d like to have given some account of the ensuing discussion, as well, but this post is already a bit long; perhaps in a different post.

I agree with Doug Crockford in answering “Yes” to the question, but we have different reasons. I don’t think XML has a future just because we can’t manage to get rid of it; I think it ought to have a future, because it has some properties that are hard to find elsewhere.

1 What do we mean by “the Web”?

A lot depends on what we mean by “the Web”. If we mean Web 2.0 Ajax applications, we may get one answer. If we mean the universe of data publicly accessible through HTTP, the answer might be different. But neither of these, in reality, is “the Web”.

If there is a single central idea of the Web, it’s that of a single connected information space that contains all the information we might want to link to — that means, in practice, all the information we care about (or might come to care about in future): not just publicly available resources, but also resources behind my enterprise firewall, or on my personal hard disk. If there is a single technical idea at the center of the Web, it’s not HTTP (important though it is) but the idea of the Uniform Resource Identifier, a single identifier space with distributed responsibility and authority, in which anyone can name things they care about, and use their own names or names provided by others, without fear of name collisions.

Looked at in this way, “the Web” becomes a rough synonym for ‘data we care about’, or ‘the data we process, store, or manage using information technology’. And the question “Does XML have a future on the Web?” becomes another way of asking “Does XML have a future?”

Not all parts of the Web resemble each other closely. In some neighborhoods, rapid development is central, and fashion rules all things. In others, there are large enterprises for whom fashion moves more slowly, if at all. Data quality, fault tolerance, fault detection, reliability, and permanence are crucial in a lot of enterprises.

The Web is for everyone. So a data format for the Web has to have good support for internationalization and accessibility.

Any data format for “the Web” must satisfy a lot of demands beyond loading configuration data or objects in a client-side Javascript program. As Murata Makoto has often said, one reason to be interested in XML is that it offers us the possibility of managing in a single notation data that for a long time we held separately, in databases and in documents, managed by separate tool sets. General-purpose tools are sometimes cumbersome for particular specialized forms of data, but the provision of a common model and notation is a huge win; before I decide to use another specialized notation, I want to think hard about the costs of yet another notation.

I think XML has a future on the Web because it is the only format around that can plausibly be used for such a broad range of different kinds of data.

2 Loose coupling, tight coupling

One of the important technical properties of the Web is that it encourages a relatively loose coupling between parts of the larger system. Because the server and the client communicate through a relatively narrow channel, and because the HTTP server is stateless, client and server can develop independently of each other.

In a typical configuration there are lots of layers, so there are lots of points of flexibility, lots of places where we can intervene to process requests or data in a different way. By and large, the abstractions are not very leaky, so we can change things at one layer without disturbing (very much) things in the adjoining layers.

In information systems, as in physical systems [or so I think — but I am not a mechanical engineer], loose couplings incur a certain efficiency cost, and systems with tighter couplings are often more efficient. But loose coupling turns out to be extremely useful for allowing diverse communities to satisfy diverse needs on the Web. It turns out to be extremely useful in allowing the interchange of information between unlike devices: if the Web had tighter coupling, it would be impossible to provide Web access to new kinds of devices. And, of course, loose coupling turns out to be a good way of allowing a system to evolve and grow.

One of the secrets of loose coupling is not to expose more information than necessary between the two partners in an information exchange.

And in this context, some of the notations sometimes offered as alternatives to XML (at least in some contexts) — or for that matter, as uses of XML — have always made me nervous. We’re building a distributed system; we want to exchange information between client and server, while limiting their mutual dependencies, so that we can refactor either side whenever we need to. And you want me to expose my object structures?! Are you out of your mind? In polite company there is such a thing as too much information. And exhibiting my object structures for the world to see is definitely a case of too much information. I don’t want to see yours, and I don’t want you to see mine. Sorry. Let’s stick to the business at hand, and leave my implementation details out of it.

So, second, I think XML has a future on the Web because (for reasons I think are social as much as technical) the discipline of developing XML vocabularies has a pretty good track record as a way of defining interfaces with loose coupling and controlled exposure of information.

3 Publication without lossy down-translation

There were something like two hundred people actively involved in the original design of XML, and among us I wouldn’t be surprised to learn that we had a few hundred, or a few thousand, different goals for XML.

One goal I had, among those many, was to be able to write documents and technical papers and essays in a descriptive vocabulary I found comfortable, and to publish them on the Web without requiring a lossy down-translation into HTML. I made an interesting discovery a while ago, about that goal: we succeeded.

XML documents can now be read, and styled using XSLT, by the large majority of major browsers (IE, Mozilla and friends, Opera, Safari). It’s been months since I had to generate an HTML form of a paper I had written, in order to put it on the Web.
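For the curious, the whole trick is a single processing instruction at the top of the document; the file names here are invented for illustration (the pseudo-type text/xsl, rather than an official media type, is what the browsers actually look for):

    <?xml version="1.0" encoding="utf-8"?>
    <?xml-stylesheet type="text/xsl" href="essay.xsl"?>
    <essay>
      <title>Does XML have a future on the Web?</title>
      <!-- ... the rest, in whatever descriptive vocabulary suits ... -->
    </essay>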

I know XML has a future on the Web because XML makes it easier for publishers to publish rich information and for readers to get richer information. No one who cares about rich information will ever be willing to go back. XML will go away only after you rip it out of my cold, dead hands.

[After the session, Norm Walsh remarked “and once they’re done with your cold dead hands, they’ll also have to pry it out of mine!”]


One reason to think that XML has found broad uptake is the sheer variety of people complaining about XML and the contradictory nature of the problems they see and would like to fix. For some, XML is too complicated and they seek something simpler; for others, XML is too simple, and they want something that supports more complex structures than trees. Some would like less draconian error handling; others would like more restrictive schema languages.

Any language that can accumulate so many different enemies, with such widely different complaints, must be doing something right. Long life to descriptive markup! Long life to XML!

What’s a klog?

Why does the subtitle of this blog say “MSM’s klog” — don’t you mean “blog”?

Well, maybe.

But when I was thinking about this material, I thought of it mostly as a series of meditations on issues that arise in my work, with only the occasional piece unrelated to W3C or my working groups. So my own notes for Messages in a bottle call it a work log, or worklog, or (yes) klog.

Hence the term in the subtitle. If it comes to seem too precious or weird, I suppose I can always change it. (Cool URIs don’t change, it’s true, but maybe subtitles don’t have to be cool.)

Why “Messages in a bottle”?

People on the W3C staff have been talking for some time about blogs and how they can improve communication within a group. The discussions we had as a Team in Montréal (in November 2006) primed me to think about blogging as something it might be interesting to do. So did Jonathan Robie’s telling me that Jon Udell had urged him to start a blog, and Jonathan’s urging me to do so. (When was that? A long time ago, XML 2005 maybe.)

But the immediate impetus was finding some long-neglected pages on the W3C site (it really doesn’t matter which they were, or who wrote them) in which a Team member set down, years ago, some musings on a topic it turns out we both (and, as far as I can tell, virtually no others in the Team) are interested in.

It felt like finding a message in a bottle.

Putting such musings in a blog won’t make them easier or harder to find, of course. But somehow the pages in question — and the pages I felt like writing in response — seem to fit more neatly into the genre of the journal, or the ongoing work log, the lab notebook, than into any other.

So I’m going to start a six-month experiment in keeping a work log. Think of it, dear reader, as my lab notebook. (I was going to do it starting a year ago, but, well, I didn’t. So I’m going to start now.)

My original plan was to make it accessible only to the W3C Team, so that I could talk about things that probably shouldn’t be discussed in public or in member space. Norm Walsh has blown a hole in that idea by pointing to this log [Hi, Norm!]. So public it is. (Ideally, I’d have a blog in which each item could be marked with an ACL, like resources in W3C date space: Team-only, Member-only, World-readable. Maybe later.)

Next year about June, if I remember, I will evaluate the experiment and decide whether it’s been useful for me or not.

An odd fact about fifth powers

[27 December 2007; crucial typo corrected, graphics added 29 December]

Over Christmas I got a copy of Øystein Ore’s book Number theory and its history (New York: McGraw-Hill, 1948; reprint New York: Dover, 1988).
In section 2-3, exercise 5 reads

5. Prove that in the decadic number system [i.e. using decimal numerals] the fifth power of any number has the same last digit as the number itself.

This is straightforward enough, if you’ve just read the preceding section. (My exposition isn’t as good as Ore’s, so I won’t try to explain it.)
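For readers without the book to hand, though, the bare skeleton of the argument can be set down in a few lines (the compression is mine; Ore’s exposition is better):

    % Two numbers end in the same decimal digit iff 10 divides their difference.
    \[ n^5 - n \;=\; n\,(n-1)\,(n+1)\,(n^2+1) \]
    % n(n-1) is a product of consecutive integers, so 2 divides n^5 - n;
    % by Fermat's little theorem, n^5 \equiv n \pmod{5};
    % since 2 and 5 are coprime, 10 divides n^5 - n.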

But now consider a related problem.

For numbers n whose decimal representation ends in a particular digit d, we can make a little directed graph that makes it easy to calculate the final digit of any number multiplied by n. Each graph will have ten nodes, labeled 0 through 9, and there will be an arc from node i to node j if and only if multiplying n by a number ending in i produces a number ending in j.

For the digit 3, for example, we’ll have the arcs 0 → 0, 1 → 3, 2 → 6, 3 → 9, 4 → 2, 5 → 5, 6 → 8, 7 → 1, 8 → 4, and 9 → 7. Like this:

[Figure: exponentiation graph for the digit 3]

And for the digit 5, there will be arcs from each odd-numbered state to state 5, and from each even-numbered state to state 0. And so on.
Like this:

[Figure: exponentiation graph for the digit 5]

There ought to be a name for these graphs, but I don’t know of one. I’ll call them multiplication graphs.
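For concreteness, here is one way to generate a multiplication graph mechanically (a Javascript sketch; the function name is my own invention):

    // Arcs of the multiplication graph for a final digit d:
    // there is an arc i -> j exactly when j = (d * i) mod 10.
    function multiplicationGraph(d) {
      var arcs = [];
      for (var i = 0; i <= 9; i++) {
        arcs.push([i, (d * i) % 10]);
      }
      return arcs;
    }
    // multiplicationGraph(3) yields 0 -> 0, 1 -> 3, 2 -> 6, 3 -> 9, and so on.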

Now, consider what problem 5 means in terms of these multiplication graphs. For the given number, we pick the appropriate graph. If we raise the number to the power 0, the result is a number whose decimal representation ends in ‘1’. (Because it’s always the number 1.) Call the state labeled ‘1’ the start state. A path of length 1 will always take us to the state named for the digit whose multiplication properties the graph represents: in graph 3, a path of length 1 takes us to state 3, in graph 7, a path of length 1 takes us to state 7. Call this state the ‘characteristic state’.

If we raise the number to the power n, the result will have a decimal representation ending in the digit we reach by following a path through the graph, beginning at the start state and having exactly n steps. Final digit of the square? Follow a path two steps long. Final digit of the fifth power? Follow a path five steps long.
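In code, following the path is a small loop (again a sketch, with invented names):

    // Final digit of a number ending in d, raised to the power k:
    // start at the start state 1 and take k arcs.
    function lastDigitOfPower(d, k) {
      var state = 1;                // any number to the power 0 ends in 1
      for (var step = 0; step < k; step++) {
        state = (d * state) % 10;   // follow one arc
      }
      return state;
    }
    // lastDigitOfPower(3, 5) yields 3, as exercise 5 predicts.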

So: in terms of multiplication graphs, exercise 5 amounts to proving that in each graph, a path of length 5 takes you to the same state as a path of length 1. Or, equivalently, that there is a cycle of length 4, or 2, or 1, beginning at the characteristic state of the graph.

It’s clear that there must be cycles in the graphs, and that the cycles can’t have length greater than 10, since there are only ten states. It’s clear, after a moment’s thought, that any cycle must either hit only even numbers or only odd numbers, and so any cycle must have length less than or equal to 5.

In fact, however, the cycles are all of length 1 (in graphs 0, 1, 5, and 6), 2 (graphs 4 and 9), or 4 (graphs 2, 3, 7, and 8).
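Anyone who mistrusts that enumeration can check it by walking each graph from its characteristic state (a sketch, as before):

    // Length of the cycle reachable from the characteristic state d.
    function cycleLength(d) {
      var seen = {}, state = d, steps = 0;
      while (!(state in seen)) {
        seen[state] = steps;
        state = (d * state) % 10;
        steps++;
      }
      return steps - seen[state];
    }
    // For d = 0 through 9 this gives 1, 1, 4, 4, 2, 1, 1, 4, 4, 2.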

Why are there no cycles of length 5?