Perhaps the biggest pitfall in digitising documents, for historical conservation or profit, is having too shortsighted an outlook, says David Seaman, director of the Electronic Text Centre at the University of Virginia.
A local parallel to the Virginia organisation, the NZ Electronic Text Centre, has been set up at Victoria University in Wellington, to generate “a searchable online archive of texts and images drawn from New Zealand archives and institutions”.
There is too often an assumption that a document is a “monolithic” entity, which will, over the long term, serve only the specific purposes intended for it at the outset, says Seaman, who attended last week’s Digital Forum.
“My stock in trade is XML-based text material”, he says. Information in this form is “malleable”, able to be combined and delivered in new forms. Sections might, for example, be extracted from a number of books and pieced together to make a new electronic publication. This could be delivered in the future by methods that go beyond today’s “traditional” web delivery, to e-books, mobile devices, even — a revolutionary proposal — hard copy.
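The recombination Seaman describes can be sketched in a few lines. The following Python fragment is purely illustrative — the element names (`book`, `chapter`), the sample texts, and the `extract_chapters` helper are all hypothetical, not the Virginia centre’s actual markup or tooling — but it shows the general idea: because the texts are structured XML rather than fixed pages, sections can be pulled from several sources and reassembled into a new electronic publication.

```python
# Illustrative sketch: recombining sections of XML-encoded texts.
# All element names and sample content here are hypothetical.
import xml.etree.ElementTree as ET

# Two book excerpts, marked up with <chapter> elements.
book_a = """<book title="Book A">
  <chapter n="1">Whaling in the Sounds.</chapter>
  <chapter n="2">The flax trade.</chapter>
</book>"""

book_b = """<book title="Book B">
  <chapter n="1">Early surveying.</chapter>
</book>"""

def extract_chapters(xml_text, wanted):
    """Pull the requested chapter numbers out of one XML source."""
    root = ET.fromstring(xml_text)
    return [ch for ch in root.iter("chapter") if ch.get("n") in wanted]

# Assemble a new "book" from pieces of both sources.
anthology = ET.Element("book", title="Anthology")
for chapter in extract_chapters(book_a, {"2"}) + extract_chapters(book_b, {"1"}):
    anthology.append(chapter)

print(ET.tostring(anthology, encoding="unicode"))
```

Because the result is itself XML, the same assembled document could then be rendered for the web, converted for an e-book or mobile device, or typeset for print on demand, which is the flexibility Seaman is arguing for.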
“I’m talking about print on demand”, he says. “To have, in your hand, when you want it, a copy of a book that may be ‘out of print’ [in the conventional catalogues]”. You could have “your book” assembled from pieces of various works, he says, a book that no conventional publisher would think of putting on the market, “a book that no-one else would want.”
Documents should be digitised in as flexible a form as possible, to keep them open to such future, perhaps unenvisaged, uses, Seaman says.
The copyright considerations attached to such treatment are not nearly as complex as is generally imagined, he says. “If the work is still in copyright, you just have to pay the publisher for a usage right and that’s not usually a problem.”