
D-Lib Magazine
December 2006

Volume 12 Number 12

ISSN 1082-9873

Using the Audit Checklist for the Certification of a Trusted Digital Repository as a Framework for Evaluating Repository Software Applications

A Progress Report

 

Joanne Kaczmarek
Library, University of Illinois at Urbana-Champaign
<jkaczmar@uiuc.edu>

Patricia Hswe
Graduate School of Library and Information Science
University of Illinois at Urbana-Champaign
<phswe@uiuc.edu>

Janet Eke
Graduate School of Library and Information Science
University of Illinois at Urbana-Champaign
<jeke@uiuc.edu>

Thomas G. Habing
Library, University of Illinois at Urbana-Champaign
<thabing@uiuc.edu>


Abstract

Digital library initiatives have encouraged the development and implementation of repository software applications such as DSpace, EPrints, and Greenstone. These applications are commonly deployed within the context of institutional or digital repositories. As the boundaries of, and landscapes around, institutional or digital repositories become more clearly defined, there is a growing need for useful methods of evaluating repository software applications and the role they play in the broader context of repository services. With respect to digital preservation specifically, the 2005 RLG/NARA Audit Checklist for the Certification of a Trusted Digital Repository, Draft for Public Comment (Audit Checklist) is currently under consideration as a means of determining an institution's ability to serve as a Trusted Digital Repository. The NDIIPP-sponsored ECHO DEPository project proposes an evaluation framework for repository software applications based on the Audit Checklist in conjunction with a common software evaluation scoring methodology. This paper provides an overview of our work to date in this area.

A. Background

A.1. Existing Software Evaluation Methodologies

In the last five years there have been a number of evaluations of repository software applications. These evaluations vary in approach. Some provide general discussions of the kinds of questions to explore prior to implementing a particular product, while others offer more specific guidance on a methodology for evaluation. Within the general area of software evaluation it is not uncommon to use checklists and scoring methodologies [1]. These approaches have primarily been applied when considering characteristics of specific functionality and underlying architectures. A consideration that is particularly emphasized is whether the chosen system will facilitate data transfer "relatively easily in order to take advantage of future, unforeseen developments in computer software and technology" [2], essentially a concern for tool interoperability and extensibility. There is certainly precedent for librarians and archivists taking a checklist approach to software decisions [3]. Many of the approaches recommended years ago are still applicable, such as determining the goals of a software implementation in order to arrive at a set of products to evaluate, or using an outline of system specifications that is continually refined during the selection process [4].

Not surprisingly, the outcomes of recent repository evaluations for digital library services commonly assert that the conditions the software should satisfy must be discussed among all stakeholders before a decision is made about deploying any particular solution. Librarians and archivists must also consider who the users of the selected tools will be and look to the experiences of other institutions for guidance and comparison. For example, was the repository software application adopted early by "pedigreed" institutions [5]? An affirmative answer points to a community of implementers whose use of the software serves not only as a knowledge resource but also as an application model. The documentation of these evaluations indicates there will be trade-offs in any decision about which software application to install [6].

One recent evaluation implicitly proposes the construction of a matrix for making decisions about repository tools: "To develop a rubric for decision-making about which system should be implemented, the team will have to determine how critical each particular point of functionality is and if that point is absolutely required. Additionally, they will have to rank all of the points in relationship to each other to determine what truly is most important" [7]. Recent projects or groups that have produced repository evaluation reports incorporating some form of matrix, such as a checklist, include the Open Access Repositories in New Zealand (OARINZ) Project [8]; the Open Society Institute's Guide to Institutional Repository Software v. 3.0 [9]; the Management of Images in a Distributed Environment with Shared Services (MIDESS) Project [10]; and a group based at Nanyang Technological University in Singapore [11]. The reports of these projects all pivot on essential evaluation criteria. They also show that the evaluation of a tool will typically be context-specific: what an institution or organization wants from the application, and how the application will be evaluated, depend largely on what kinds of materials will be ingested for access and preservation and, thus, on how heavily each criterion is weighted on a checklist.

A.2. The ECHO DEPository Repository Evaluation Effort

The ECHO DEPository (Exploring Collaborations to Harness Objects in a Digital Environment for Preservation) is a 3-year Library of Congress National Digital Information Infrastructure and Preservation Program (NDIIPP) [12] project at the University of Illinois at Urbana-Champaign [13]. The project is undertaken in partnership with the Online Computer Library Center (OCLC) and a consortium of content provider partners. One component of the project is the evaluation of various open source repository software applications such as DSpace, EPrints, Fedora, and Greenstone.

Unlike the assessment performed by the Open Society Institute, the ECHO DEPository evaluation focuses not on specific technical components of the repository software applications but rather on how these applications might support the activities of an institution or organization interested in digital preservation and in providing services associated with a trustworthy digital repository. Our goal is to help librarians and archivists sift through the multitude of options presented as potential solutions for managing digital resources, with long-term preservation as the priority. This focus led us to use the Audit Checklist as the starting point for our evaluation framework. By looking at repository software applications and services through the lens of the Audit Checklist, we can begin a comprehensive consideration of the multiple factors facing an institution as it embarks upon a commitment to become a trustworthy digital repository.

B. Progress to Date on Repository Evaluation

B.1. The Need to Define Terms and Variables

B.1.1. Digital Archivist, Custodian, Curator, or Steward

Our work to date indicates that before constructing a viable solution to meet the needs of archivists and custodians seeking to develop or support digital repositories, basic questions must be answered and basic variables must be identified and defined. Arguably, one variable is the role of the archivist or librarian, whose responsibilities are shifting with the changing nature and use of information resources. The typical library or archives collection today is a hybrid of print and digital materials: archival records in electronic format, electronic journals, and other digitized materials are proliferating and affecting the production and dissemination of scholarship. These developments reflect a continuing shift in the nature of libraries and archives and in the roles they play in the management of information resources. They also reflect a coalescence of roles. The traditional role of the archivist, who assures the authenticity of resources through chain of custody and retains pertinent contextual information, may now converge with the role traditionally assumed by the librarian, who provides broad and reliable access to the information resources needed by library patrons. Are librarians and archivists the custodians, curators, or stewards of digital resources and collections? How do they work together or independently to provide the best possible oversight of, and access to, hybrid collections of materials? We believe communities will be best served if information professionals from varied backgrounds work together in the long-term management of a prescribed set of digital resources.

B.1.2. Repositories versus Repository Software Applications

Another important variable we take into account is the term "repository" itself. As the authors of one published evaluation assert, the lack of a "universally accepted definition for a digital library" means that a common methodology for choosing digital library software has yet to appear [14]. In the electronic or digital environment, the implementation of "institutional repositories" is still in its infancy [15], and the question of what an institutional repository is, or does, can generate a range of responses. For instance, these implementations have at their core software applications that are often called "repositories." Yet, the traditional notion of a repository is one that conjures up images of well known institutions such as the British Library, the National Archives of the Netherlands, the Library of Congress or the Getty Museum. This notion of a repository implies a larger organizational commitment to the trustworthy oversight of valued and valuable resources that goes far beyond specific software implementations and underlying hardware and software architectures.

Thus, when working with digital resources, the way in which one answers the question "What is a repository?" can produce varying ideas of expected outcomes or services. For this reason, in our evaluation process we have spent much time articulating for ourselves the places of overlap and distinction between these two generalized notions of a repository. As a clarification aid, we have coined the terms "Big R" and "little r" and applied them in project meetings to distinguish between a repository at the institutional level ("Big R") and an actual repository software application ("little r"). We have also discerned a middle-range notion of a repository. This notion is typically represented by services offered through organizations such as OCLC or the California Digital Library, aimed at supporting the infrastructure needs of institutions that must manage digital assets but do not have the resources to manage the technical infrastructure directly themselves.

B.2. Developing the ECHO DEPository Evaluation Framework

B.2.1 Adapting the Audit Checklist

The ECHO DEPository Evaluation Framework has been developed from the Audit Checklist sponsored by RLG and NARA. As articulated by the Digital Preservation Management workshops conducted by Cornell University Library [16], a successful digital preservation management program has elements that fit into three broad areas: technology, resources, and management. Although none of these areas can stand alone, the Audit Checklist can be seen primarily as a tool for reviewing the management component of an institution's ability to support digital preservation as a trusted digital repository.

The original Audit Checklist encompasses current thinking on the provision of repository services and a direction for moving digital preservation forward. The document stems from recommendations in the original 2002 paper, Trusted Digital Repositories: Attributes and Responsibilities [17], and is informed by extensive work across the digital library community. The implicit assumption of the Audit Checklist is that there can be no real progress toward reliable access to information resources unless there is some set of metrics by which to measure an institution's trustworthiness in offering support for digital resources. The stated goal of the RLG/NARA Digital Repository Certification Task Force is "to develop criteria to identify digital repositories capable of reliably storing, migrating, and providing access to digital collections" [18].

The original Audit Checklist comprises four sections:

  • Organization;
  • Repository Functions, Processes, and Procedures;
  • Designated Community and the Use of Information;
  • Technologies and Technical Infrastructure.

Each section delineates what a Trustworthy Digital Repository would do, or how it would behave, with respect to that section's category. The document provides substantial narrative to help define the meaning of each checklist item.

The ECHO DEPository project's use of the Audit Checklist was undertaken with a primary interest in evaluating repository software applications as components within a larger organizational commitment toward trustworthy digital preservation.

We first reviewed the Audit Checklist in its entirety to determine which checklist items might be specifically relevant for repository software applications. In other words, for each checklist item we asked, "How might a repository software application support an institution to meet this criterion?" Audit Checklist items that did not seem applicable to repository software applications were marked as out-of-scope, and items that were applicable were annotated to indicate how we expected to apply them. During this review we also considered terminology issues. The Audit Checklist attempts to frame concepts around the language of the OAIS Reference Model [19]. Where we thought the document language was inconsistent, we modified the language to be more conformant with that of the OAIS Reference Model. (For example, "objects" becomes "Digital Objects" and "metadata" becomes "Representation Information.")

Original Audit Checklist item B5.6:

B5.6. Repository enables the dissemination of authentic copies of the original or objects traceable to originals

Annotated Audit Checklist item B5.6:

B5.6. Repository enables the dissemination of authentic copies of the original or objects traceable to originals

  • Can the disseminated packages be easily validated against their
    checksums by the person making the request? How secure are the
    checksums? Does the requester have access to the checksums?
  • Does the repository software support provenance metadata? Do the
    dissemination packages include this information? What is included in the
    provenance?
  • Does the repository support versioning? What happens to old versions of
    Reference Information or Digital Objects when they are replaced by new
    versions or modified in some way?

Figure 1: Example of one of the Audit Checklist items (B5.6) in its original form and then annotated for use by this project
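
To illustrate the first question in Figure 1 (whether a requester can easily validate a disseminated package against its checksums), the following minimal sketch, written in Python, compares locally computed digests with the fixity values recorded alongside the package. The manifest.json file name and its structure are hypothetical assumptions for the sake of the example; actual dissemination packages and fixity metadata will differ from one repository software application to another.

import hashlib
import json
from pathlib import Path

def verify_package(package_dir):
    """Compare each file in a disseminated package against the checksum
    recorded in a hypothetical manifest.json of the form
    {"files": [{"path": "data/report.pdf", "md5": "..."}, ...]}."""
    manifest = json.loads(Path(package_dir, "manifest.json").read_text())
    all_valid = True
    for entry in manifest["files"]:
        digest = hashlib.md5()
        with open(Path(package_dir, entry["path"]), "rb") as f:
            # Read in chunks so large objects do not have to fit in memory.
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        if digest.hexdigest() != entry["md5"]:
            print("Checksum mismatch:", entry["path"])
            all_valid = False
    return all_valid

if __name__ == "__main__":
    print("Package valid:", verify_package("dissemination_package"))

A requester can run such a check only if the repository both exposes the checksums and protects them from tampering, which is why the annotation also asks how secure the checksums are and whether the requester has access to them.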

This annotated Audit Checklist was then used by project team members responsible for managing installations of repository software applications or working with the repository service products. Team members independently provided comments through a loose narrative of first impressions, guided by the annotated Audit Checklist. Comments included experiences about installing and/or setting up the applications and depositing the test bed materials into the applications or with the service providers. Team members were also asked to consider how the annotated Audit Checklist might be further annotated and refined.

The results of this exercise revealed variations in the level of detail provided by the team members. We scheduled a series of meetings to review variations in how the checklist items had been interpreted and to come to a consensus on those interpretations. During these ongoing meetings, we have discussed the distinctions between repository software applications, repository services, and institutions wanting to be seen as trustworthy (trusted) repositories. An early outcome of our discussions has been to draw out the nuances and sensibilities inherent in the language of the original Audit Checklist and to align its items more closely with our goal of evaluating repository software applications in the context of a trusted digital repository. This led us to create a separate checklist that excludes all items from sections A and C, which address organizational and community-specific concerns. We call this list our "annotated Audit Checklist for little-r" (annotated Audit Checklist). We believe that using this checklist to review repository software applications and services may help librarians and archivists understand how the applications do, and do not, support an organizational commitment toward becoming a trusted digital repository.

B.2.2. Using a Scoring Methodology

As described above, the main focus of our work is the development and application of the annotated Audit Checklist as a framework for evaluating repository software applications. However, we recognized that it could be helpful for institutions engaged in selecting a repository software application to have an additional instrument for applying the annotated Audit Checklist evaluation framework within their specific organizational context. We decided to investigate applying an existing scoring methodology used in software evaluations. Though we later found limitations to this approach (see section B.3.2, "Scoring Methodology," below), the exercise helped us better understand and articulate the technical requirements or levels of expertise needed for repository software applications to more fully support the annotated Audit Checklist items.

In this exercise, we patterned our instrument (a spreadsheet containing all pertinent annotated Audit Checklist items) on one used by the Center for Data Insight at Northern Arizona University [20]. The methodology proposes that items in an evaluation framework be weighted by assigning a percentage to each item, totaling 100% per section. As we have applied this approach, the percentage weights are intended to reflect the importance of each annotated Audit Checklist item according to the needs and intended uses of the software in the local implementation. Once weights are assigned to each item on the annotated Audit Checklist, a single software application is selected as the benchmark against which all others will be compared. The benchmark application automatically receives a median score (in our case a '3' on a scale of 1 to 5) on every item, regardless of how well it may or may not perform on that item. The other software applications are then rated on each item relative to the benchmark: much worse (1), worse (2), the same as (3), better (4), or much better (5). The weighted scores across all criteria are then totaled to give an overall score for each repository software application. The criteria may also be grouped to give subtotals for different categories of criteria, with each category potentially carrying its own weight.
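
A minimal sketch of this decision-matrix arithmetic appears below, written in Python. The checklist item identifiers, weights, candidate application names, and ratings are hypothetical placeholders; in practice an institution would supply its own weights and assign the ratings after hands-on comparison with the benchmark application.

# Hypothetical weights for a few annotated Audit Checklist items (sum to 1.0).
weights = {"B5.6": 0.40, "B5.7": 0.35, "B5.8": 0.25}

# The benchmark application receives the median score of 3 on every item;
# the other applications are rated 1 (much worse) to 5 (much better) against it.
ratings = {
    "Benchmark App": {item: 3 for item in weights},
    "Candidate A": {"B5.6": 4, "B5.7": 2, "B5.8": 3},
    "Candidate B": {"B5.6": 3, "B5.7": 5, "B5.8": 2},
}

def weighted_total(app):
    """Total of weight x rating across all checklist items for one application."""
    return sum(weights[item] * ratings[app][item] for item in weights)

for app in ratings:
    print(app, round(weighted_total(app), 2))

Because the weights encode local priorities, the same ratings can yield different rankings at different institutions, which is one reason we treat the scoring instrument as context-specific rather than as a general comparison of applications.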

Because this common decision-matrix scoring methodology for evaluating software is only relevant within the context of a specific institutional setting (for example, the weightings assigned to different items will vary depending on the goals and needs of a specific institution), this scoring instrument is not being used to provide quantitative results as part of our general evaluation. Rather, we are examining it to see whether such a scoring methodology may be coupled with the Audit Checklist to be used within a specific organizational context as an additional tool for choosing repository software.

B.3. Current Findings

B.3.1. The Annotated Audit Checklist as a Repository Software Application Evaluation Framework

Within our project, we have found the process of adapting the Audit Checklist as a framework for our repository software application evaluation to be a useful learning experience in itself. Situating our evaluation within the original Audit Checklist provides a framework for discussing repository software applications without losing sight of the larger "Big R" organizational context. As we use it to document our repository installation and experimentation experiences, we find the annotated Audit Checklist provides a good framework for looking at repository software applications within the context of digital preservation. However, information about other aspects of software not directly related to preservation (e.g., ease of installation, ease of maintenance, programming language used) does not fit well into this framework, and we are considering other ways to report this type of information. Importantly, we have also found that the process of annotating the original Audit Checklist provides a forum for project team members to begin discussions that have opened up opportunities to explore our individual assumptions about various checklist items and our interpretations of terminology. Through these discussions we have continued to refine the annotated Audit Checklist and to establish potential directions for taking our evaluation activities further.

Within the digital library and archives community there have been recent discussions about the merits and limitations of establishing an international standard certification process for Trusted Digital Repositories. Regardless of whether such a process is formally established, the Audit Checklist may be used as a self-assessment tool by institutions intending to provide trustworthy management of digital assets. In practice, decisions are often made early on without the benefit of a more deliberate process such as the one suggested by our evaluation approach. For this reason we believe there may be value in using the Audit Checklist items without scoring any particular software application, but still 'weighting' the items to reflect the priorities of a specific institution at a particular point in the development of its repository services. Because an institution will likely prioritize different components of trustworthiness, as expressed by items in the Audit Checklist, at different stages of development, a shift in priorities could be reflected by 're-weighting' the items accordingly. In this way, the original Audit Checklist can be applied as a self-assessment tool throughout the lifecycle of an institution on its way to becoming a Trusted Digital Repository. The exercise of applying weights to the Audit Checklist items even once would give decision makers a better means of taking stock of their institutional readiness (their strengths and weaknesses) to meet reasonable expectations of being a Trusted Digital Repository.
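
The sketch below, in Python, illustrates this kind of weighted self-assessment under assumed inputs: the checklist item identifiers, the two sets of weights representing priorities at different stages, and the self-ratings are all hypothetical, and no software applications are being scored.

def readiness(weights, self_ratings):
    """Weighted self-assessment: weights encode current institutional priorities,
    self_ratings (1-5) how well the institution currently meets each item."""
    total_weight = sum(weights.values())
    return sum(weights[item] * self_ratings[item] for item in weights) / total_weight

# Hypothetical early-stage priorities emphasize organizational commitment...
year_one = {"A3.1": 0.5, "B2.1": 0.3, "D1.1": 0.2}
# ...while a later re-weighting shifts priority toward technical infrastructure.
year_three = {"A3.1": 0.2, "B2.1": 0.3, "D1.1": 0.5}

self_ratings = {"A3.1": 4, "B2.1": 2, "D1.1": 3}

print("Year one readiness:", round(readiness(year_one, self_ratings), 2))
print("Year three readiness:", round(readiness(year_three, self_ratings), 2))

Re-weighting the same self-ratings changes the overall picture, which is the point of the exercise: the weights make explicit which aspects of trustworthiness the institution currently considers most important.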

B.3.2. Scoring Methodology

We tested the scoring instrument by applying it in an example scenario based on an academic library engaged in implementing an institutional repository. This exercise illuminated some of the limitations of using a scoring methodology in conjunction with the annotated Audit Checklist, but it also suggested other potential uses we had not considered. (Again, such a scoring instrument is intended to be used in conjunction with the annotated Audit Checklist within a specific institution to assist those responsible for making decisions about supporting digital repository services. It does not provide a general, context-independent quantitative comparison of repository software applications.)

First, it became clear that our refinements to the Audit Checklist were not detailed enough to make the application of a scoring methodology particularly useful for comparing repository software applications to one another with respect to their ability to support a Trusted Digital Repository. To use the scoring methodology effectively would require a detailed list of the technical specifications necessary to support each checklist item. This could be achieved only through a methodical process that brings software developers together to propose and vet such specifications, perhaps through a series of forums and meetings. Once such specifications were identified, groups of developers familiar with each repository software application would need to assess objectively how well each application meets the recommended requirements of the technical specifications.

C. Conclusion

Our goal in this ongoing evaluation process is to suggest a means of supporting the decision-making of practicing archivists and librarians interested in establishing some degree of trustworthy digital repository services for their community of users. We propose to do this in two central ways. The first is by illuminating several of the issues one must consider that are often implicit rather than explicit, such as what a "repository" is. The second is by demonstrating a possible use of the Audit Checklist as a framework for guiding repository software application selection decisions. We began this process by annotating the Audit Checklist and enlisting our team members to gauge their software installation experiences against it. Currently we are concluding a series of meetings to reach a consensus on the interpretation of checklist items. Using a test example scenario, we also experimented with applying an existing scoring instrument to the annotated Audit Checklist. This exercise clarified the need for a more meticulous refinement of our annotated Audit Checklist, one that should be undertaken with the developers of the common repository software applications. Our experience thus far suggests that applying 'weights' to the Audit Checklist items, specifically according to an institution's own needs and priorities, may also provide a framework for guiding an iterative self-assessment of an institution's repository services. Beyond this, as more institutions explore the possibility of providing trustworthy digital repository services, the evaluation of repository software applications will increasingly require a more extensive, community-based expression of the technical functional specifications needed to support the requirements of Trusted Digital Repositories. With an ever increasing array of potential software tools, services, and infrastructure configurations, the time is ripe for an evaluative approach to repository software that considers the array of items found in the Audit Checklist.

Acknowledgments

Funding for our participation in the NDIIPP partnership is generously provided by the Library of Congress. The following past and present team members have also contributed to various components of the repository evaluation at various points in time: Matt Cordial, Justin Davis, Robert Manaster, Karen Medina, Kyle Rimkus, Yuping Tseng, Richard Urban, Jingjin Yu and Wei Yu.

Notes and References

1. For standard works on software evaluation and the incorporation of checklists, see T. Gilb, S. Finzi & D. Graham (1993). Software inspection. Wokingham, England; Reading, Mass.: Addison-Wesley; and C. P. Hollocker (1990). Software Reviews and Audits Handbook. New York: John Wiley & Sons.

2. J. E. Rowley (1990). Guidelines on the evaluation and selection of library software packages. ASLIB Proceedings, 42(9), pp. 225-235.

3. Society of Archivists. Information Technology Group (1993). Criteria for software evaluation, a checklist for archivists: A paper produced by a working group of the society of archivists information technology group committee. London: The Society.

4. Society of Archivists

5. E. Dill & K. L. Palmer (2005). What's the big IDeA? Considerations for implementing an institutional repository. Library Hi Tech News, 22(6), pp. 11-14.

6. H. F. Cervone (2006). Some considerations when selecting digital library software. OCLC Systems and Services, 22(2), pp. 107-110.

7. Cervone, p. 109.

8. OARINZ Project, <http://www.oarinz.ac.nz/index.php>; repository evaluation available at <http://eduforge.org/docman/view.php/131/1062/Repository%20Evaluation%20Document%20.pdf>.

9. Open Society Institute's Guide to Institutional Repository Software v. 3.0 <http://www.soros.org/openaccess//software/>.

10. MIDESS Project, <http://www.leeds.ac.uk/library/midess/>; repository evaluation available at <http://www.leeds.ac.uk/library/midess/MIDESS%20workpackage%202%20-%20Functional%20and%20Technical%20Requirements%20Specification.pdf>.

11. D. H.-L. Goh, A. Chua, D.A. Khoo, E. B.-H. Khoo, E. B-T. Mak & M. W-M. Ng (2006). A checklist for evaluating open source digital library software. Online Information Review, 30(4), 360-379.

12. Library of Congress National Digital Information Infrastructure and Preservation Program, <http://digitalpreservation.gov>.

13. ECHO DEPository Project, <http://www.ndiipp.uiuc.edu>.

14. Goh et al., p. 365.

15. C. Bailey, K. Coombs, J. Emery, A. Mitchell, S. Simons & R. Wright (2006). Executive summary. SPEC Kit 292. Institutional Repositories. Available at <http://www.arl.org/spec/SPEC292web.pdf>.

16. Cornell Digital Preservation Management Workshop. Available at <http://library.cornell.edu/iris/dpworkshop>.

17. Trusted Digital Repositories: Attributes and Responsibilities (2002). Available at <http://www.rlg.org/en/pdfs/repositories.pdf>.

18. RLG/NARA Audit Checklist for Certifying a Trusted Digital Repository. Available at <http://www.rlg.org/en/pdfs/rlgnara-repositorieschecklist.pdf>.

19. OAIS Reference Model. Available at <http://public.ccsds.org/publications/archive/650x0b1.pdf>.

20. K. Collier, B. Carey, D. Sautter & C. Marjaniemi (1999). A methodology for evaluating and selecting data mining software. In Proceedings of the 32nd Annual Hawaii International Conference on System Sciences.

Copyright © 2006 Joanne Kaczmarek, Patricia Hswe, Janet Eke, and Thomas G. Habing

doi:10.1045/december2006-kaczmarek