[20 April 2008]
One often hears, in information technology and in spec development, the injunction to eat one’s own dog food, meaning to use, oneself, the technologies one is developing. By confronting the technology as a user, the designer will become aware sooner of flaws in the implementation, gaps in the functionality, or other usability problems. I hate the metaphor, because I don’t think of the technology I work on as dog food, but it’s good advice.
I’ve been thinking lately that I should be making a lot more use, in my work, of the technologies I’m responsible for. There is a radical difference between the attitude of someone who has designed a system, or engaged with it as an object of study, but never much used it, and someone who has actually used the system to do things they wanted to do, treating the system as a means not an end in itself.
I once sat in working group meetings listening to brilliant computer scientists telling us how the working group should approach various problems, and being struck by the fact that if they showed five XML examples of two to ten lines each, at least two of them would be ill-formed. They had a pretty good theoretical grasp of XML, although they didn’t grok its essence. But their mistakes showed clearly that they had never spent as much as thirty minutes writing XML and running it through a parser. It was no wonder that their view of markup was so different from the view of those of us who worked with markup during most of the hours of our working lives. To them, XML was an object of study. To those of us who actively used it, it was no less an object of study, but it was also a means to achieve other ends. I mentioned to one of the computer scientists that the well-formedness errors in his examples made it hard for me to take his proposals seriously, and to his great credit he did address the problem. He never showed an example in XML notation again; instead he wrote everything down in Haskell. (And since he used Haskell more or less constantly, he didn’t make dumb syntax errors in it.)
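The check in question is trivial to automate, which is part of what made the errors so telling. A sketch of the sort of thirty-minute exercise described above, using Python's standard-library parser (the example strings are mine, invented for illustration, not drawn from the meetings in question):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if xml_text parses as well-formed XML."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

# A well-formed example: every start-tag has a matching end-tag.
good = "<order><item qty='2'>widget</item></order>"

# An ill-formed example: XML names are case-sensitive, so
# <item> ... </Item> is a mismatched tag pair.
bad = "<order><item qty='2'>widget</Item></order>"

print(is_well_formed(good))  # True
print(is_well_formed(bad))   # False
```

Anyone who runs their examples through even a minimal check like this stops making that class of mistake almost immediately; the people who never do, don't.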
In practice, I think the ‘use your own tools’ principle means I should spend some time trying to upgrade my personal tool chain to make use of technologies like XProc and SML. (There are some other technologies I should make more use of, too, but I’m not prepared to identify them in public.)
At the same time, I’m also acutely conscious of the difference between experimental systems and production systems. Experimental systems need to be able to change rapidly and radically as we learn. Production systems need to be stable and reliable, and cannot usually change more quickly than the slowest user who relies on them. The maintainers of a production system seldom have the ability to force their users to upgrade or make other changes. (And if they succeed in acquiring it, they will be regarded by their users as tyrannical oppressors to be deceived and evaded whenever possible.)
Specs in development sometimes need to be experimental systems.
Some people will say no: never standardize anything that requires experimentation; only standardize what is already existing practice. The example that sticks in my head from long-ago reading on the theory of standardization is machine-tool tolerances. If no one sells or buys tools with a given tolerance, it's because either there's no market for them (so no need to standardize) or no understanding of how to achieve that tolerance (so a standard would be pointless and might get crucial bits wrong out of ignorance). Standardize the tolerances people are actually using, and you'll produce a standard that is useful and based on good knowledge of the domain. This principle may well work for machine tools; I am not a mechanical engineer. But if you wait until a given piece of information technology is already common practice, then by and large you will be looking at a marketplace in the form of a monopoly with one player. If you're looking for a non-proprietary standard to provide a level playing field for competing implementations, you're too late.
In information technology, a standard that goes out in front of existing practice appears to be the only kind that actually defines a non-proprietary technology. Ideally, you don't want to be too far out in front of existing practice, but you can't really be behind it, either.
If you’re out in front of well established practice, the spec needs in some sense to be experimental. If the responsible working group finds a new and better way to say or do something, after issuing their second public draft but before the spec is finished, they need to be free to adopt it in the third working draft.
If I rebuild the tool chain for the XSD 1.1 spec to use XProc, for example, that would be interesting, the re-engineering would probably give us a cleaner tool chain, and it might provide useful feedback for the XProc working group. But when the XProc working group changes its mind about something, and the implementation I’m using changes with it, then my tool chain breaks, and not necessarily at a time when it’s convenient to spend time rebuilding it. (Long ago, my brother was trying to persuade me I should be interested in personal computers, which I regarded as toys compared to the mainframe I did my work on. Nothing he said made any dent until he said “Having a personal computer means you get to upgrade your software when you want to, not when the computer center finds it convenient.” That sold me; we bought a personal computer as soon after that as we could afford one.)
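To make the trade-off concrete: a rebuilt tool chain might begin from a minimal XProc pipeline along these lines. This is a sketch only; the stylesheet name is hypothetical, and both the namespace and the step vocabulary come from drafts that the working group is still free to revise.

```xml
<p:pipeline xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
  <!-- Run the spec source through an XSLT step to produce HTML.
       The stylesheet named here is invented for illustration. -->
  <p:xslt>
    <p:input port="stylesheet">
      <p:document href="xsd-spec-to-html.xsl"/>
    </p:input>
  </p:xslt>
</p:pipeline>
```

Every element of this sketch is exactly the kind of thing that can change between working drafts, and when it does, a pipeline like this one has to change with it, whether or not the spec it builds is anywhere near a deadline.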
Is there a way to manage the tension between a desire to use the tools one is building and the need for the tool chains one uses in production work to be stable?
I don’t know; but I hope, in the coming months, to find out.