Getting it in writing
[Closing remarks at Extreme Markup Languages 2005]
5 August 2005
C. M. Sperberg-McQueen
Transcribed from tape by Kate Hamilton
Thanks to Kate Hamilton of Mulberry Technologies for
transcribing this from tape. She and I have taken the opportunity to
repair a few sentences and improve a few transitions, but mostly
the text below is what was said rather than what one might wish
I had said. I've left Kate's indication of audience reaction
and added some footnotes where I thought people
might be interested in additional information.
There is a strain in our culture which distrusts writing things down.
There is another that says writing things down is essential.
“Getting it in writing” is a pejorative, sometimes jocular, way of
expressing distrust. Surely the only person who wants to get an
agreement in writing is one who distrusts the other; much better to
say of someone that “Their word is their bond.”
On the other hand, people who want too much freedom of action make
us nervous unless we trust them implicitly. Unless we trust them
implicitly, we don't know exactly what they're going to do.
Those of you who, like me, grew up listening to Charlie Mingus
records may remember from the liner notes on Mingus Ah Um
about a jazz workshop where Mingus brings in a composition where he's
left a few measures open for improvisation. The musicians say,
“You're getting lazy, man! Write it down, write it out!”[1]
That's the tension that I see.
I have been thinking lately that we as a community may not be
hitting exactly the right balance with regard to that tension.
There have been tensions over the years, sometimes minor,
sometimes more major, in the W3C as in other standards-development
organizations, in part because the W3C started with the goal of
having a really light-weight process: there was a widespread
perception that written procedures slowed things down. Well, that's
true. They do.
There was a desire to get some written rules that working groups
could trust and work by; and there was a certain resistance by
some of the W3C staff, including Tim Berners-Lee, which made some
people suspicious; it made me suspicious, before I started working at
the W3C and got to know Tim Berners-Lee better.
Part of the issue was transatlantic. I have to remind myself even
now that Tim grew up in a country governed by an unwritten
constitution, and they're proud of the fact that it is not written
down. It gives you flexibility when the chips are down, in ways that
the American constitution doesn't always. (Unless, of course, you ...
No, let's not go into Supreme Court issues!)
More generally, I think we've all had ample opportunities to
observe a general distrust on the part of many geeks of written
procedures, parliamentary procedures.
In part this reflects an endemic distrust of democracy on the
part of technically-oriented people. It's hard not to mistrust
majority rule if you grow up going through school being one of the
brightest in your class and knowing that if two of you say the answer
to this math problem is x, and 27 other people in the room say the
answer is y, the chances are that you're right, not them. It's hard
to learn to trust majority rule after enough experiences of that.
But I think mistrust of democracy is only part of it.
Part of it is mistrust of writing things down, of losing the
flexibility that you get when things are not written down.
This goes way back in our culture. The subtitle of my talk is from
2 Cor. 3:6 - “For the letter killeth but the spirit giveth life.” That
has been interpreted in a lot of different ways. The spin I'm going
to put on it is that if you follow the strict letter of the law,
you're not necessarily following the spirit of the law. The spirit,
in this Pauline view, is more important. Some of the time I think
that's right; some of the time, I'm not so sure.
The distrust of writing goes back farther than Paul; Paul got it from Greek
philosophy; Greek philosophy got it from Socrates and Plato. There is
a well-known passage in Plato's Phaedrus, which some of you will have been
expecting me to quote; I hate to disappoint anybody, so I'll quote it.
The god Theuth (or Thoth) has invented writing and he takes it to the king
Thamus. Theuth says that writing will make people wiser, and improve their
memories. Thamus replies:[2
O most ingenious Theuth, the ... inventor
of an art is not always the best judge of the utility ... of his
own inventions .... And in this instance, you ... have been led to attribute
to them a quality which they cannot have; for this discovery of yours
will create forgetfulness in the learners' souls,
because they will no longer use their memories; they will trust to
the external written characters and not remember of themselves.
The specific which you have discovered is an aid not to memory,
but to reminiscence, and you give
your disciples not truth, but only the semblance of truth;
they will be hearers of many things and will have learned nothing;
they will appear to be omniscient and will generally know nothing;
they will be tiresome company,
having the show of wisdom without the reality.
There are a couple of interesting things to note here. It seems to
me, reading this out, that Thoth and Thamus seem to be arguing at
least as much about what counts as memory as they are about writing
and whether it will be a good thing or not.
Of course they are both right. Thoth is right because no matter
what Thamus says, human societies that possess writing have, on the
whole, much better historical knowledge than societies without
writing, and much broader scientific and technical knowledge. On the
other hand, Thamus is right, because few of us as individuals have
memories that can compete with a Greek rhapsode or a Nigerian griot
or a Serbian singer of tales. The individual memory is weaker; the
system is stronger.
They sound to me a little like John Searle and Alan Turing arguing
about whether a computer that can speak Chinese can legitimately be
described as thinking — completely ignoring the revolutionary
implications of the premise
that we have a mechanical system that
can speak Chinese in a way indistinguishable from a human being! Who
cares whether it's “thinking” or not?![3]
Thamus seems to me to be succumbing to what I think of as the John
Searle fallacy. Because the human is the most important part of the system
being described, or at least the part that most nearly resembles
Thamus himself, Thamus identifies the capacities of the system with the
capacities of the human component in exactly the same way that, in
the discussions of the so-called Chinese Room, John Searle appears to be
systematically unable to tell the difference between a computer
system and a CPU.[4]
Thamus, on the other hand, is absolutely right in identifying a
fallacy to which Thoth succumbs, a fallacy which some of us might not
have expected to appear until much more recently, with the advent of data
processing. Thoth said writing will make people wise, and Thamus said
No, it will only make them appear well-informed: not the same thing.
I find that interesting because, for a long time, in teaching
introductions to SGML and XML, I made the point that we want to build
systems that are more useful and more intelligent, but until
artificial intelligence bears fruit, if it ever does, we actually
don't know how to make systems that are intelligent in any useful
sense of the word. On the other hand, with descriptive markup, it's
much easier to make systems that are well-informed. In computer
systems as well as in human beings it's easy to confuse the two, and
to use well-informedness to give the impression of being intelligent.
There's another way to think about writing things down that may be
worth spending a moment to consider.
In preparing for this conference I started reading the book that
Ann Wrightson has been recommending for several years: Keith Devlin's
Logic and Information. Then I digressed to read another book,
about information theory.[5] Standard information
theory as defined by
Shannon shows us one way — surely not the only way, Devlin demonstrates
that it's not the only way — to think about information
flow: how to calculate the capacity of a channel, how to calculate
the information content in an information source, how to do this
quantitatively.
I've been troubled by my readings in information theory because it
seems clear that what descriptive markup does, what imposing the XML grammar on a
datastream does, is reduce the entropy, which means it reduces the
information content. How can any of us who care about information be
happy about reducing the information content of the message?
I've had to work very hard to remind myself that information, in
Shannon's theory, is effectively synonymous with entropy or
disorganization; uncertainty on the part of the recipient, choice on
the part of the sender. Reducing information content sounds alarming;
reducing uncertainty sounds a lot better.
If the well-formedness rules of XML and the validation rules of
our document grammar accurately describe the messages that will flow
across that channel, then it's not a question of reducing the
information content but a matter of more accurately estimating the
information content. If there are true regularities in the
information source that a document grammar allows us to capture, then
we have a better understanding of the information source, which is
good both from an engineering point of view and from a philosophical
point of view. If the sender doesn't really have a particular choice,
then it's better if the recipient knows that. The better we
understand the nature of the signal, the better we will be able to
distinguish information from noise at the receiving end.
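The point can be made concrete with Shannon's formula for the entropy of a discrete source, H = -sum(p · log2(p)). A minimal sketch in Python, with made-up message probabilities: a grammar that accurately rules out forms the source never emits lowers the computed uncertainty without discarding anything the source actually says.

```python
from math import log2

def entropy(dist):
    """Shannon entropy H = -sum(p * log2(p)) of a discrete
    distribution, in bits per message."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# A recipient who knows nothing must allow for any of 8 message
# forms, all equally likely:
unconstrained = {form: 1 / 8 for form in range(8)}

# A document grammar tells the recipient that only 2 of those
# forms actually occur, and with what probabilities:
constrained = {"a": 0.75, "b": 0.25}

print(entropy(unconstrained))  # 3.0 bits
print(entropy(constrained))    # about 0.81 bits
```

The drop from 3.0 bits to about 0.81 is not lost information but a better estimate of the recipient's real uncertainty, which is the sense in which validation "reduces entropy" here.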
There's a danger, if we prescribe too narrowly, or if we optimize
the channel prematurely, saying, We'll provide easy ways to express
all of the likely things, and these other things are so improbable
that they are negligible. This is not unknown in the history of
information theory. If we do that we have ruled out the possibility
of certain surprises, and we run the risk of mistaking an unlikely
message — a message that carries a lot of information precisely
because it is unexpected — for noise, and discarding it, impeding
our own ability to learn. That's another area where entropy could
stand being reduced.
So: Writing things down about the nature of the information that
flows is a good thing because all additional knowledge is good, and
because it helps us engineer better.
But there are some things which we are reticent about writing down.
Jon Bosak said the other day that there is a good reason that
standard rules and procedures say a deliberative body should write
down what it decides to do, and not why it decided to do it.[7]
One reason, of course, is that it's hard enough to decide what
wording is going to go in the spec; if we also have to agree on
why it should go in the spec, we will never finish.
Not everyone knows that when the Civil Rights Act of 1964 was
passed by the U.S. Congress, the first draft forbade discrimination
on the basis of race, color, or creed, and an amendment was offered
adding sex to the list of protected categories. Many of us today,
looking back, think of that as a good thing. But the amendment was
not offered in that spirit. It was offered by southern congressmen who wished
to reduce the pending bill to absurdity and ensure that it would fail.
Should we reject their votes from the majority that passed the
bill we approve of simply because they voted for the amendment for what
we believe to be the wrong reason? Should we reject the votes of
those who voted in favor of the amended bill in spite of their
misgivings about the inclusion in the amendment of sex, even though
we think their misgivings were wrong? On the contrary. The first rule
of any deliberative body, the most important skill of a Supreme Court
Justice, as I think Justice William Brennan said,
is to know how to count to five.[8]
There are some things you really don't want to write down, because
it's too much work and it's not worth it. Even if we do write down the rationale,
there are limits — as Patrick Durusau was reminding me this morning
after breakfast — there are very, very strict limits to our ability
to constrain the interpretation of those who read the specs we write.
We can try. But there are no guarantees. The ingenuity of readers
knows no bounds. [gentle laughter]
There are some other things that we don't want to write down
because they seem redundant. When you write rules for how
working groups will conduct their meetings, no one ever
suggests that we should make it a rule that the meetings should take
place on Earth. We know they will. It would only raise questions in
people's minds [great laughter] if we said so.
And there are some things I think have not gotten written down for other reasons.
The SGML spec doesn't say anything normative about descriptive
markup. It does have a description of descriptive markup and what
some people call the SGML methodology, but it's in a non-normative
appendix. The XML spec certainly doesn't tell you anything about how
to design your vocabulary. Why?
One reason is [heavy sigh] even if the SGML and XML specs
prescribed the correct design philosophy for markup vocabularies —
let's assume we could all achieve enough hubris to believe that we
could define it; most members of the working groups knew enough that
they didn't want to try to nail it down for all time — even if they
had tried, can you imagine achieving a normative definition of the
difference between declarative semantics and imperative semantics? I
can't. I pride myself on my spec draftsmanship, but that's not a
definition I would want to make; it's not something that would turn
into what the QA people would consider a testable assertion.
So nothing is said in the SGML and XML specs about that
distinction. Why doesn't it trouble us?
It doesn't trouble us in part because many of us are convinced
that it doesn't matter whether we say it or not: it's
like the law of gravity. Yes, it's true that it's better to have
declarative semantics for your vocabulary than imperative semantics,
but we don't need to tell you that because nature will teach you
that. Nature will punish imperative semantics in ways that it doesn't
punish declarative semantics.
At least, that is the way I've always thought about it. But one
of the things that started me wondering whether we've hit the right
balance here was a talk I heard earlier this year from someone I like
a great deal and who is very smart, but whose name I'm not going to
mention because I'm about to slag him mercilessly; a talk in which
this very smart guy talked with great pride about his role in the
development of the XInclude spec.
Now, the XLink spec and the XInclude spec actually both talk about
the same thing: I've got a resource here, and I've got other
resources, and logically those other resources form part of a virtual
resource that consists of the conjunction of this top-level resource
and a bunch of others — the way images form part of an HTML
document, or the way included external entities form part of an SGML document.
The problem, my colleague said, is that the XLink semantics are really
hard to understand. Nobody can figure out what you're supposed to
do. Which is one of the reasons you don't see a lot of
conforming implementations of XLink: implementors look at it and say:
“What am I supposed to do with this?”
In this, at least, I agree with him: implementors have not figured
out what to do in an XLink implementation.
The speaker was very happy, by contrast, with XInclude,
which has a very clear semantics, he said — an
XInclude processor sees such-and-such an element, and it does
such-and-such a thing — it has, that is, an imperative semantics.
In the analysis of the difference, my colleague was quite wrong.
There is no difference in clarity.
The XLink semantics are absolutely, perfectly clear: there is no
possibility of misunderstanding. It's not that XLink is less clear
than XInclude. The difference is that XLink carefully maintains a
purely declarative semantics, and XInclude allows itself an imperative semantics.
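The contrast shows up in the markup itself. A minimal sketch (the namespaces and attribute names are those the XInclude and XLink specs define; the file name chapter2.xml is hypothetical):

```xml
<!-- XInclude: imperative semantics. A processor that sees this
     element replaces it with the content of chapter2.xml. -->
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude"
            href="chapter2.xml"/>

<!-- XLink: declarative semantics. This element asserts that the
     target forms an embedded part of the containing resource;
     it does not tell any processor what to do about that. -->
<part xmlns:xlink="http://www.w3.org/1999/xlink"
      xlink:type="simple"
      xlink:href="chapter2.xml"
      xlink:show="embed"
      xlink:actuate="onLoad"/>
```

Both describe the same virtual resource; only the first prescribes a processor's behavior.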
At that point, I began to think, “Maybe the SGML old-timers
among us didn't talk
quite enough about the SGML methodology.” Because, when he
said the XInclude spec was clearer, the earth didn't swallow him up;
lightning didn't strike him. I was kind of disappointed. [laughter]
If, as is easily observable in many other places in data
processing, many people find declarative semantics a little harder to
grasp than imperative semantics, and XML becomes ubiquitous, then XML
will be exposed to a lot of people who find imperative semantics
easier to understand than declarative semantics, and if we are not
careful, those of us who got into this business because of the
advantages of declarative semantics will find that the barbarians
have helped us achieve ubiquity of what we thought was going
to be descriptive markup but what turns out to be only angle
brackets; and that the infrastructure is built around systems that
can't handle declarative semantics, that will only handle imperative
semantics. And then where will we be? (Well, we'll be in a minority
the way we've always been; but still.)
I think in some way XML is all about getting things in writing in
this way; understanding the data, knowing the regularities, knowing
where it varies, and knowing when the variation is significant
(information) and when it is not (noise). It's about our
understanding, for a particular information source, the difference
between noise and information.
Economically this makes sense: if you understand the regularities in
the data and if you can enforce any restriction on the kinds of
messages that can be sent, in particular the kinds of erroneous
messages that can be sent, then you reduce the number of cases that
programmers have to prepare for. You make it feasible for the amount
of defensive programming that programmers are prepared to do, to actually
meet the need. You increase the effectiveness of their defensive programming.
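A minimal sketch in Python of what that buys the programmer (the price-list message shape is hypothetical): code running behind a validator can assume the guaranteed shape, while code reading an unconstrained channel must re-check, at every use, everything a grammar would have guaranteed once.

```python
def total_validated(prices):
    # Behind a validating parser: every item is guaranteed to be
    # an (amount, currency) pair with a numeric amount.
    return sum(amount for amount, currency in prices)

def total_defensive(prices):
    # On an unconstrained channel: defend against every malformed
    # case a grammar would have excluded.
    total = 0.0
    for item in prices:
        if not isinstance(item, tuple) or len(item) != 2:
            continue  # wrong shape
        amount, currency = item
        if not isinstance(amount, (int, float)):
            continue  # non-numeric amount
        total += amount
    return total
```

The two functions compute the same thing on clean input; the difference is how many cases the second must prepare for.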
But I think the reason that a lot of us are so devoted to XML at a
visceral, emotional level stems not just from its economic benefits
but from the invitation it offers us to try to put down in writing
what we perceive to be essential about the information we exchange and
care about. By putting it in writing we reduce it from an infinitely
subtle intangible to something still perhaps immensely subtle but
infinitely more tractable. It's similar, in a way, to the transition
between analog signals and digital signals, which Keith Devlin
identifies with the act of cognition. Cognition is understanding an
analog signal — an image, a situation — in a way clear enough to be
able to put names to what counts for you, in your situation.
By putting it in writing, we externalize it, and we make it more
malleable, we make it easier to share, easier for people we don't
talk to, to read and understand. Edsger Dijkstra spent a lot of
time wondering whether computer science was a discipline and thinking
about the difference between scientific disciplines and
guild-controlled crafts. The single best test he found was this: Are
the results in this field published? Are they written down? Can
people read them, or must you be initiated; must they be transmitted
by oral tradition?[9]
By externalizing the knowledge, we make it easier
for ourselves or for others to transform it; to transform it, for
example, from a representation of what I
think is essential to a representation of what you
think is essential. We establish pathways, in that
way, between different ways of thinking about things.
I suspect that future philosophers and historians of ideas may be
alarmed to discover that, in order to trace the differences in
relations between different ways of thinking about things in the 21st
century, they are going to have to read and understand the
ontological significance of XSLT stylesheets and XQuery.
(But I notice that Allen Renear has already positioned himself to
be in the forefront of this new school of philosophy!)
When we manage to get the key things down in writing without
over-restricting things, without over-emphasizing the orderliness that
we perceive, without filtering out signal unintentionally; when we
get that down in writing, we are trying to make it possible for our
systems of digital representation to call things by their true names.
The correct identification of what we see is one of the first
stages of scientific knowledge. Linguists, trying to justify an
interest in linguistic theory, frequently dismiss it as ‘botanizing’,
but knowing what it is you're seeing is really a crucial first step.
Some of us care about markup for that reason.
Calling things by their true names is also, as my poetry teacher
in college taught me long ago, one of the ancient secrets of magic.[10]
Some of us care about it for that reason.
As Arthur C. Clarke taught us, that in turn means that calling
things by their true names is one of the ancient secrets of creating advanced
technology. [laughter and applause]
Some of us care for that reason.
And my poetry teacher taught us that, whatever we may think of
magic or advanced technology, it is one of the secrets of poetry.
And whatever King Thamus may think, calling things by their
true names will always be one step towards greater wisdom.
[1] Charlie Mingus, Mingus Ah Um,
reissued several times on LP and CD.
The liner notes (available on the Web) show my
memory was slightly off in some details:
Because of the success of this workshop, a Composer's Workshop was
formed, in collaboration with Bill Coss of Metronome, that included
Teddy Charles, John LaPorta and Teo Macero (who, as an A&R man for
Columbia, arranged the date for this album). Mingus believes now that
it got too far away from jazz — spontaneity — since almost
all of the music was written. He remembers one rehearsal at which
Teddy had left several bars open for blowing and everyone jumped on
him with ‘Man, are you lazy? Write it out!’
Note that both sides of the tension are well represented here,
the desire to have it written out, and the desire to leave
some space open for the inspiration of the moment.
[2] I quote from The Works of Plato,
tr. Benjamin Jowett,
selected and edited by Irwin Edman
(New York: Modern Library, 1928). In the undated paperback
reprint I am consulting, the passage is on page 323.
[3] Alan M. Turing, “Computing machinery and intelligence,”
Mind 59.236 (1950): 433-460;
John Searle, “Minds, brains, and programs,”
Behavioral and Brain Sciences
3 (1980): 417-457.
These and other episodes in the sometimes amusing and sometimes
alarming history of discussion of the Turing test are reprinted
with commentary in The Turing test: Verbal behavior as the
hallmark of intelligence
, ed. Stuart M. Shieber (Cambridge, Mass.:
MIT Press, 2004).
[4] In fairness to Searle, I should point out that his paper includes
a section designed to refute precisely this accusation. It is
true that in my view he wholly fails to understand the nature
of the objection or provide a persuasive response. But he does try.
[5] Keith Devlin, Logic and Information
(Cambridge: Cambridge UP, 1991). The other book I have been
reading is John R. Pierce, An introduction to information
theory: Symbols, signals and noise
, 2d rev. ed. (New York:
Dover, 1980), which provides a good readable introduction to
Shannon's information theory.
[7] Jon Bosak, “The Universal Business Language (UBL),”
late-breaking news talk delivered at Extreme Markup Languages 2005.
[8] The maxim is attributed to
Brennan by Anthony Lewis, “Privilege & the press,”
New York Review of Books
52.12 (14 July 2005).
[9] His concern with this question is so pervasive it's hard to choose
just one place to point to, but my copy of his essays falls open by
chance at an essay which makes this point quite explicitly:
Edsger W. Dijkstra, “Craftsman or Scientist?” (EWD480)
in Selected writings on computing: A personal perspective
(New York: Springer, 1982), pp. 104-109. The essay was originally given
as a Luncheon Speech at ACM Pacific 1975, in San Francisco.
[10] Belle Randall's books of poetry include
(Seattle: Wood Works Press, 2003),
Drop Dead Beautiful
(Seattle: Wood Works Press, 1998),
The Orpheus sedan
(Port Townsend, Wash.: Copper Canyon Press, 1980), but
the book I know best is an earlier collection,
101 different ways of playing solitaire, and other poems
(Pittsburgh: University of Pittsburgh Press, 1973).
So many stories involve the idea that knowledge of a thing's
true name conveys power, often magical, that it seems dangerous to single
any out. Sometimes the power is mystical, as in Kabbalistic tradition.
Sometimes it is wholly mundane, as in the science fiction writer
Vernor Vinge's short story “True Names” (first published in
1981, reprinted in Vinge's
True names ... and other dangers,
New York: Baen, 1987). After the talk, Simon St. Laurent observed that in
some tellings, knowledge of the true name can have effects some of us might
find deplorable (e.g. Arthur C. Clarke, “The nine billion names of
God”, first published in 1953 and reprinted several times,
most prominently perhaps in The nine billion
names of God: the best short stories of Arthur C. Clarke,
[New York: Harcourt, Brace & World, 1967]). Simon concluded with the
plea “Save the universe; don't call things by their true names.”
In the 1973 republication of his
Profiles of the future
(New York: Harper & Row, 1973)
Clarke added to the essay “Hazards of prophecy:
The failure of imagination” (originally published in 1962)
the observation that
“Any sufficiently advanced technology
is indistinguishable from magic,”
giving it the name ‘Clarke's Third Law’.
My joke relies only on the fact that indistinguishability is
a symmetric relation.