Approaching D-Lib Metrics
1. Scope of Problem:
As several have argued, our group faces an enormous challenge in dealing with
a wide-open opportunity. Thus, in these early days of DL work, when
definitions, prototypes, and commercial systems are just being developed,
we hope to help develop methodologies, and to launch a series of scientific
studies based on those methodologies, that will advance the state of the
art.
2. Broad Approach:
If we take the union of the definitions advanced for "digital library," we
find that we are confronting some of the most complex and advanced
information systems yet proposed. This suggests the following conclusion:
"D-Lib Metrics WG's broad agenda should be to encourage all studies that
help the DL field, providing guidance in any aspect of a digital library.
Such studies may deal with individual aspects of a digital library (like
searching, or the interface) and draw on methodologies developed in
particular research communities (e.g., IR, HCI) and reported in existing
conferences (e.g., TREC, CHI). However, studies targeted at helping digital
libraries are encouraged to consider at least two such specialty areas,
whenever possible, to support the integration that is at the heart of the
DL field."
3. New Problems:
At the core of the D-Lib Metrics WG's interests should be issues that have
emerged with DL systems and not before. Some of these have been
mentioned in other position statements: interoperability, scalability, and
heterogeneity. Others are concepts that arose in other fields but take on
new dimensions in the DL area, such as quality, timeliness, usability,
integration, reliability, relevance, and specificity.
As we look at properties and metrics, capabilities and tasks, services and
scenarios, situations and environments, benchmarks and hypotheses, failure
modes and comparisons, we must revisit "metrics" in our title. What can we
measure? How accurately? With what generality (statistical, inferential)?
Since the field is wide open, let us try the most obvious studies first:
* Compare existing digital libraries used in real-life contexts, and apply
a mix of metrics drawn from areas like IR, HCI, Hypertext, distributed
processing, etc., with the aim of better serving users.
* Use performance measurement approaches, including modeling and
simulation, to analyze internal and inter-system communication (especially
through published protocols) of distributed digital libraries, with the aim
of improving operations.
* Think deeply about, and work toward consensus on, understanding the
essence of digital libraries, so the most important metrics and studies can
be proposed. These may extend the current set of Challenge Problems and
Scenarios. One starting point is the architectural work at CNRI. Work on
documents, digital objects, collections, repositories, terms and
conditions, shopping models, and federated systems is also clearly relevant.
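To make the first bullet concrete, the sketch below combines one classic IR measure (precision/recall) with one classic HCI measure (mean task completion time) for a single hypothetical DL search task. All document IDs and timings are invented for illustration; a real study would draw them from relevance judgments and user logs.

```python
# Sketch: mixing an IR metric with an HCI metric for one DL task.
# All data below are invented for illustration.

def precision_recall(retrieved, relevant):
    """Classic IR measures over sets of document IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

def mean_task_time(seconds):
    """Simple HCI usability measure: average time to finish a task."""
    return sum(seconds) / len(seconds)

retrieved = ["d1", "d3", "d4", "d7"]   # what the DL returned
relevant = ["d1", "d2", "d3"]          # documents judged relevant
p, r = precision_recall(retrieved, relevant)
times = [42.0, 55.5, 38.2]             # per-user task times in seconds
print(f"precision={p:.2f} recall={r:.2f} mean_time={mean_task_time(times):.1f}s")
```

Reporting the two kinds of measures side by side is one small step toward the cross-specialty integration argued for above.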
4. Related Work:
In an advanced graduate class on digital libraries
(http://ei.cs.vt.edu/~cs6604) in Computer Science at Virginia Tech, Fall
1997, a number of students worked on projects relevant to our WG
activities. One group developed a visual simulation of NCSTRL, to allow
testing of the effects of varying topologies, number of backup servers,
number of users, amount of network traffic, server speed, and other
parameters (like timeout interval).
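A toy Monte Carlo version of such a simulation appears below, covering just two of the parameters mentioned (number of backup servers and timeout interval). The exponential latency model and all parameter values are assumptions made for illustration, not properties of the actual NCSTRL simulation.

```python
# Minimal Monte Carlo sketch: queries hit a primary server and fall
# back to backup servers on timeout. Latency model and parameter
# values are invented for illustration.
import random

def simulate(n_queries, n_backups, timeout, mean_latency, seed=0):
    rng = random.Random(seed)
    completed, total_time = 0, 0.0
    for _ in range(n_queries):
        elapsed = 0.0
        # try the primary, then each backup in turn
        for _ in range(1 + n_backups):
            latency = rng.expovariate(1.0 / mean_latency)
            if latency <= timeout:
                elapsed += latency
                completed += 1
                break
            elapsed += timeout  # waited out the full timeout, then retried
        else:
            continue  # every server timed out; query fails
        total_time += elapsed
    return completed / n_queries, total_time / max(completed, 1)

for backups in (0, 1, 2):
    ok, avg = simulate(10000, backups, timeout=2.0, mean_latency=1.0)
    print(f"backups={backups} success={ok:.3f} mean_latency={avg:.2f}s")
```

Even this crude model shows the kind of trade-off such simulations expose: adding backups raises the completion rate but lengthens the mean latency of queries that needed a retry.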
Another group did a classic HCI study comparing 4 digital libraries. They
devised 4 comparable tasks for each system and had 48 users work through
all 4 tasks on all 4 systems (counterbalancing order). We are still mining
the results on timing, failures, pre- and post-surveys, synchronized
videotape records of both the user and the screen, and critical incident
comments from the users.
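The order balancing in that study can be sketched with a simple Latin square, which guarantees every system appears in every position equally often across the 48 users. The system names below are placeholders, and a cyclic square like this balances position but not carryover effects, so it is only an approximation of a full counterbalanced design.

```python
# Sketch: a 4x4 cyclic Latin square assigns each of 48 users one of
# four system orders, so every system appears in every position
# exactly 12 times. System names are invented placeholders.
from collections import Counter

systems = ["DL-A", "DL-B", "DL-C", "DL-D"]

def latin_square_orders(items):
    n = len(items)
    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]

orders = latin_square_orders(systems)
assignments = [orders[user % len(orders)] for user in range(48)]

# verify the balance: each system occupies each position 12 times
for pos in range(4):
    print(pos, sorted(Counter(a[pos] for a in assignments).items()))
```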
A third effort focused on developing a theoretical framework, based on
sets, to describe DLs.
In addition to these efforts, a number of the students are working with the
W3C effort for HTTP-NG to understand through log analysis how the WWW
operates, and to consider user behavior and proxy/server performance.
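A minimal version of such log analysis is sketched below, assuming the Common Log Format that most web servers of this era emit; the sample lines, hostnames, and paths are invented.

```python
# Toy sketch of web-server log analysis: count requests per client
# (a proxy shows up as one heavy client) and tally error responses.
# Assumes Common Log Format; sample lines are invented.
import re
from collections import Counter

LOG_RE = re.compile(
    r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)'
)

sample = [
    'proxy1.example.edu - - [10/Oct/1997:13:55:36 -0500] "GET /index.html HTTP/1.0" 200 2326',
    'host2.example.edu - - [10/Oct/1997:13:55:41 -0500] "GET /papers/dl.ps HTTP/1.0" 404 170',
    'proxy1.example.edu - - [10/Oct/1997:13:56:02 -0500] "GET /index.html HTTP/1.0" 304 0',
]

by_client = Counter()
errors = 0
for line in sample:
    m = LOG_RE.match(line)
    if not m:
        continue  # skip malformed lines
    client, _ts, _method, _url, status, _size = m.groups()
    by_client[client] += 1
    if status.startswith(("4", "5")):
        errors += 1

print(by_client.most_common(1), "errors:", errors)
```

Scaled up to real server logs, the same counting reveals proxy traffic concentration and failure rates, two of the behaviors the HTTP-NG work examines.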