D-Lib Magazine
May 1997

ISSN 1082-9873

From the Editor

Design by Experience

Until about 1940, American manufacturers of large electrical equipment embraced the strategy of design by experience. General Electric or Westinghouse engineers visited the site, studied the problem, and came up with a solution that was based partly on general principles and partly customized to the local utility or generating plant. The resulting machine would then be installed, observed, and refined in light of this experience, and the engineering expertise entered the vocabulary or tool set of the company. Faced with a similar requirement, the vendor now had a reliable product to sell. And on the consumer side, the early adopters, rather like beta sites, had access to new technology, support when they needed it, and the advantage of a learning curve. In retrospect, historian Richard Hirsh concludes, what seemed like incremental changes amounted to major advances.

Most of us can probably recognize in this story a parallel to the standards-setting processes of the Internet. But perhaps less obvious are the implications for design issues that arise from the social side of digital libraries -- what are frequently understood as user studies or, alternately, user interface design.

The idea of close observation of users goes back at least to the "thinking-aloud" protocols recorded by Allen Newell and Herbert Simon at Carnegie Mellon University in their pioneering studies of human problem-solving. Usability studies are an integral component of engineering, and certainly, there is a substantial literature in the library science community based on observation, interviews, focus groups, surveys, and analysis of user logs in a variety of methodological combinations. Indeed, several of these have appeared in this magazine (see, for example, Van House et al.; Payette and Rieger; and the September 1996 issue). And as Ian Witten and his colleagues illustrate in their story on the New Zealand Digital Library Melody Index (MELDEX) system, this approach can be used effectively in contexts in which the issues are clearly and narrowly focused.

But many of these studies rest on small samples, few variables, and a reliance on dichotomous measures, which collectively enable the application of statistical techniques but exact a price: How far and under what conditions can the results be generalized? Moreover, the various experiments are not strictly comparable, further inhibiting extrapolation. Finally, what happens to the content of the evidence when questions are reduced to ones that can be answered yes or no?

There is an alternative that complements existing methodologies but requires two difficult things: setting aside the rigor of statistical analysis in favor of description and anecdote, and patience. For several years, IBM has worked with a handful of college and university libraries where new products are deployed in highly structured settings. One college, for example, has a "full scale digital library" operating in its reserve room, enabling observation not only of the technology but also of the dynamics among faculty, students, and library staff. Representatives of the partnering institutions convene regularly in a seminar to discuss their experiences with representatives from IBM's various labs, where pre-competitive research is in progress.

The commercial advantages of these arrangements are obvious. But there is also significant value in structured, long-term observation, which can be captured, discussed, and explored, and then integrated into future research. It is a form of design by experience, albeit one embodied in intuition, description, and natural language, and it is quickly recognizable to generations of librarians and teachers who have accumulated wisdom through practice and introduced countless small changes in successive lesson plans and reference interviews.

I remember clearly the arguments in the social sciences over the introduction of computing and quantification in the 1970s, when the codebook became the bible and the multiple regression equation the holy grail. Many important advances resulted, bringing clarity to some muddy issues, and interesting work continues in the application of advanced mathematics to modeling social science phenomena at, for example, Stanford University and the Santa Fe Institute. But in the 1970s, we also saw these techniques applied inappropriately and misleadingly. The war finally ended with the realization that quantification was sometimes a good thing and sometimes not; the value of the research resided in the questions it asked and the integrity of the findings, not in the methods by which those findings were obtained.

J. C. R. Licklider, one of the pioneers of the notion of digital libraries, believed that one goal of computing was to free the human intellect to do what it does best: to imagine, describe, and intuit. We might do well in digital libraries research to recall these two moments in history. To avoid repeating the social science wars of the 1970s while benefiting from the engineering experience of the 1920s and 1930s, it's okay to set the software to one side and trust what we see.

Amy Friedlander
Editor

Copyright ©1997 Corporation for National Research Initiatives


hdl:cnri.dlib/may97-editorial