
D-Lib Magazine
September/October 2009

Volume 15 Number 9/10

ISSN 1082-9873

Establishing Trust in a Chain of Preservation

The TRAC Checklist Applied to a Data Staging Repository (DataStaR)

 

Gail Steinhart and Dianne Dietrich
Cornell University
<{gss1, dd388}@cornell.edu>

Ann Green
Yale University
<ann.green@yale.edu>


Abstract

DataStaR is a data staging repository in development at Cornell University. A data staging repository offers unique opportunities to recruit data into domain and institutional data repositories, but as a transitory curation environment, it demands careful consideration of the role of such a repository in the full life cycle of research data. We describe our experience applying the Trustworthy Repositories Audit & Certification Criteria and Checklist as a framework for specifying system, policy, and documentation requirements to ensure that DataStaR is an effective partner in the entire chain of preservation activities.

Introduction and background

DataStaR (http://datastar.mannlib.cornell.edu/), a data staging repository for digital research data, was conceived as a platform and set of related services to support data sharing among colleagues during the research process and the transmission of completed data sets to domain-specific and institutional repositories. In development at Cornell University's Albert R. Mann Library, with support from the National Science Foundation, DataStaR provides a managed environment where researchers may share data with selected colleagues, create preliminary metadata, produce detailed metadata according to the standards they use most, publish data sets to external repositories, and obtain assistance from library professionals at any point in the process. By supporting data sharing as research progresses, and by providing tools and assistance for publishing data, the data staging repository approach offers new opportunities for partnerships between locally based service providers (academic libraries) and external data centers to recruit and support data owners in publishing data that might otherwise be lost or discarded once research is completed. The DataStaR project grew out of earlier work with Cornell researchers in language acquisition [McCue et al. 2007] and with ecologists working in the upper Susquehanna River basin [Steinhart and Lowe 2007]. That work explored the possibilities for collaboration between a research group and the library to address digital data curation issues as early as possible in the life cycle of digital information, an approach that has received considerable attention recently [Brandt 2007, Martinez-Uribe 2007, Treloar et al. 2007, Steinhart 2007]. This approach also offers important possibilities for new partnerships between local institutional repositories and domain-based data centers or archives [Green and Gutmann 2007, Bishop 2007, McNeill 2007, Rice 2007].

Mann Library maintains two well-established digital data initiatives: the Cornell University Geospatial Information Repository (CUGIR, http://cugir.mannlib.cornell.edu/), and the USDA Economics, Statistics and Marketing Information System (USDA-ESMIS, http://usda.mannlib.cornell.edu/), as well as numerous other digital collections. With the start-up of a new project, we had the opportunity to consider best practices for repository design and operation in the planning phase. As we set about the process of designing the repository, we recognized we had an opportunity and an obligation to consider the role of DataStaR in the full life cycle of digital data, including preservation. We decided to investigate and incorporate best practices related to digital preservation to the fullest extent possible even though DataStaR is not intended to serve as a long-term preservation repository. This strategy may seem counterintuitive: why would repository planners want to apply a digital preservation framework to a repository that does not perform digital preservation actions? There are good reasons for taking this approach. First, in our own experience, policies and best practices for repositories seem to be best developed in the digital preservation community, and digital preservation frameworks have much to offer that bears on responsible management of repositories, regardless of a repository's stated preservation commitment. Allinson [2006], for example, in reviewing the utility of the Open Archival Information System (OAIS) reference model as a guidance document for repositories, asserts that "if repository developers and administrators are guided by a reference model, they are more likely to consider the right issues."

Digital preservation frameworks also emphasize the importance of establishing trust and describe how repositories can demonstrate trustworthiness with certain kinds of evidence. This means articulating the functions a repository intends to fulfill or execute, and then demonstrating that the repository has the capability to fulfill those functions. This article describes our experience attempting to do just that, by applying one available digital preservation framework, the "Trustworthy repositories audit & certification (TRAC): Criteria and checklist" [RLG-NARA Digital Repository Certification Task Force 2007] (hereafter the TRAC checklist, or simply TRAC), in the design stage of a staging repository for digital research data.

Digital preservation frameworks and approaches

There are a number of models and frameworks for defining or assessing the attributes of systems and organizations engaged in digital preservation. The Open Archival Information System (OAIS) reference model is one highly visible effort in this area, and provides a high-level model of the attributes of archives operating to ensure the longevity of digital information on behalf of a designated community [Consultative Committee for Space Data Systems 2002] (see note 1). Subsequent efforts, including the TRAC checklist, aimed to define the types of evidence required to demonstrate capabilities in this area. The TRAC checklist has its origins in the RLG-OCLC report "Trusted Digital Repositories: Attributes and Responsibilities" [RLG-OCLC Working Group on Digital Archive Attributes 2002] (hereafter the "TDR report"). The report defined a trusted digital repository and its necessary attributes and responsibilities, concluding with a recommendation to "develop a framework and process to support the certification of digital repositories." The discussion of options for certifying trusted repositories drew inspiration from the Archival Workshop on Ingest, Identification, and Certification, which drafted standards for digital repositories and organized them in checklist form. The TDR report stated that it would use "both the checklist concept and the certifiable elements envisioned at the workshop" to "provide a base for a certification framework." From this recommendation, in 2003, the newly established Digital Repository Certification Task Force worked to produce a draft audit checklist, which, after a period of public comment, was finalized in 2007 [RLG-NARA Digital Repository Certification Task Force 2007].

Additional frameworks and tools exist. In 2007, representatives from the Digital Curation Centre (UK), DigitalPreservationEurope, NESTOR (Germany), and the Center for Research Libraries (CRL) met to establish a set of ten very high-level requirements for digital archives, intended to guide the multiple efforts to certify digital repositories [Center for Research Libraries n.d. b]. Of the additional assessment tools listed by CRL [Center for Research Libraries n.d. c], we also considered making use of DRAMBORA, the Digital Repository Audit Method Based On Risk Assessment [Digital Curation Centre and DigitalPreservationEurope 2008], in the planning stages of DataStaR. DRAMBORA, a self-audit tool, takes a very systematic approach to the assessment of risks to digital content held by a repository, while the TRAC checklist aims more broadly to establish the criteria needed to demonstrate trustworthiness. Responsible and diligent identification and management of risk is certainly a part of trustworthiness, but we felt the broader scope of the TRAC checklist made it more suitable for guiding the design of DataStaR.

Approaches to applying OAIS, the TDR report, and the TRAC checklist

While OAIS, the TDR report, and the TRAC checklist share common intellectual underpinnings, their respective approaches range from a fairly high-level and non-prescriptive model (OAIS) to a more prescriptive checklist (TRAC). Here we review selected examples of applications of all three frameworks and consider some of them in relation to our own experience with the TRAC checklist.

Vardigan and Whiteman [2007] aligned existing archival procedures at the Inter-university Consortium for Political and Social Research (ICPSR) with the OAIS reference model, with an eye towards translating the very general OAIS principles to a social science data archiving context, and identifying areas for improvement. At least one area of improvement, the need for a public preservation policy, has since seen significant development [McGovern 2007], and has influenced some of our own policy development with respect to DataStaR.

Knight and Hedges [2007] offer a particularly interesting interpretation of OAIS in their consideration of its implications for distributed repository services. Their case, and the examples they consider, involve institutional partnerships with a shared commitment to providing a specified level of service, where the services are delivered more efficiently in cooperation or are beyond the capabilities of any single partnering institution. Their analysis raises issues of technological interoperability, workflow management, and the managerial aspects of cooperative arrangements. While the DataStaR project does not develop formal agreements with destination repositories, it does face issues of technological interoperability and workflow, and it must evaluate as fully as possible the preservation capabilities of the repositories to which DataStaR users publish.

Use of the TDR report was the subject of a survey-based study on institutional repository planning and development [Hank et al. 2007]. Survey respondents identified numerous resources used in repository planning. While the TDR report was one of the three most widely cited planning tools and was considered a "valuable resource...providing a solid framework" for repository planning and administration, relatively few of the study participants reported actually making use of it at the planning stage. The authors note that the release of the TDR report's successor, the TRAC checklist, might result in greater use for repository planning. They also note some of the difficulties in aligning TDR report criteria with evidence of meeting those criteria, a problem at least partly remedied by the development of the TRAC checklist, although Ross and McHugh [2006] observe that there is still significant leeway in interpreting TRAC and in determining what constitutes adequate evidence.

HathiTrust [HathiTrust n.d.] has stated its commitment to complying with established digital preservation standards, and CRL will audit HathiTrust later in 2009 using TRAC as the primary assessment tool [Center for Research Libraries n.d. a]. HathiTrust has placed several documents on its website outlining compliance with multiple digital preservation frameworks, including a draft response to TRAC. In the same vein, MIT's PLEDGE project compiled a comprehensive list of policies needed to comply with the TRAC checklist [Wolfe 2007]. The PLEDGE project has also worked to translate policies into machine-actionable rules that enforce repository policies, a significant development that could greatly aid repositories in meeting TRAC checklist requirements [Smith and Moore 2007].

The TRAC checklist has been used as a rubric for self-assessment in other circumstances where certification was not necessarily a stated goal. Ambacher [2006] points out both the lack of external pressure (from funders and data providers) for government archives to undergo certification, and the many issues unique to government archives that the TRAC checklist does not directly or completely address. He argues that the checklist still has value as part of a continuing cycle of self-evaluation, while cautioning that "self-assessments often failed to critically evaluate the repository's degree of success or the factors required for success" [Ambacher 2008]. Schmidt [2009] describes a second example: the application of the TRAC checklist (and two other digital preservation frameworks) to an internal assessment of the H-Net email archive, a humanities and social sciences network of 185 moderated email lists that together constitute a record of scholarly communication. As a result of the audit, H-Net staff and stakeholders crafted plans for improvements to various components of the system, including backup procedures and policies, storage, authentication of messages and metadata, and strategies for handling proprietary file attachments. In both of these cases, the "success" of self-evaluation using the TRAC checklist depended on whether the organization used the results to address shortcomings and make improvements, a principle that guided our investigation of how the TRAC checklist should influence the design of DataStaR.

An interesting application of the TRAC checklist in a context for which it was not originally intended is the adaptation of the draft audit checklist to evaluate repository software applications, described by Kaczmarek et al. [2006]. Instead of using the TRAC checklist to assess the attributes of the repository itself, the authors reframed selected TRAC criteria to evaluate how repository software applications support the work of providing trustworthy preservation of digital objects. Not surprisingly, the authors note that some aspects of repository evaluation that don't bear directly on preservation aren't addressed by the checklist (ease of installation and use, for example).

Finally, in cases where meeting a specific set of requirements is less important than implementing widely accepted best practices more generally, efforts such as Beagrie et al.'s [2008] report, aimed at helping universities in the UK develop comprehensive digital preservation policies, are particularly valuable. The study provides a model for preservation policies that is modular and easily adaptable to a range of institutional contexts. Drawing from the literature, including the digital preservation frameworks discussed here, and from existing digital preservation policies, the report touches on the organizational, procedural, and technical issues central to a digital preservation system. It aggregates common themes across multiple preservation frameworks, while providing enough variation in guidance statements for organizations to build policy statements specific to their needs.

Application of the TRAC checklist to DataStaR

Our first task was to define and understand the conceptual space DataStaR might occupy in the context of digital preservation. The OAIS reference model offers a high-level and abstract view of digital preservation infrastructure, so we attempted to clarify where DataStaR operates in relation to an OAIS system (Figure 1).

Figure 1A. An overview of DataStaR and its operations: researchers may upload data, create metadata, and share with selected colleagues. They may also publish data sets to selected external repositories.


Figure 1B. DataStaR with selected OAIS components identified. DataStaR accepts "pre-" submission information packages (SIPs) from researchers, manages content within DataStaR as archival information packages (AIPs), and distributes dissemination information packages (DIPs) to users. Publication of data to an external repository results in the transmission of a SIP from DataStaR to that repository, which, if it is an OAIS-compliant repository, manages content accordingly.


While DataStaR itself is not intended to be a fully compliant OAIS system, and in fact may operate in what might be designated as a "pre-ingest" space, the terminology and concepts are useful for understanding DataStaR's activities and how they might relate to digital preservation operations and goals.
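
To make the mapping in Figure 1B concrete, the sketch below (in Python, purely illustrative and not part of the DataStaR codebase; all class and function names are our own assumptions) models the package flow described above: a researcher's deposit is treated as a "pre-SIP", managed internally as an AIP, disseminated to selected colleagues as DIPs, and repackaged as a SIP when published to an external repository.

    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class InformationPackage:
        """Generic OAIS-style package: content files plus descriptive metadata."""
        data_files: List[str]
        metadata: Dict[str, str]


    @dataclass
    class ArchivalInformationPackage(InformationPackage):
        """How the staging repository manages content internally (the AIP)."""
        provenance: List[str] = field(default_factory=list)


    def ingest(pre_sip: InformationPackage) -> ArchivalInformationPackage:
        """Accept a researcher's 'pre-SIP' and manage it as an AIP."""
        return ArchivalInformationPackage(
            data_files=pre_sip.data_files,
            metadata=pre_sip.metadata,
            provenance=["ingested into staging repository"],
        )


    def disseminate(aip: ArchivalInformationPackage) -> InformationPackage:
        """Produce a DIP for colleagues the researcher has chosen to share with."""
        return InformationPackage(aip.data_files, aip.metadata)


    def publish(aip: ArchivalInformationPackage) -> InformationPackage:
        """Repackage the AIP as a SIP for submission to an external repository."""
        aip.provenance.append("published to external repository")
        return InformationPackage(aip.data_files, aip.metadata)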

Once we understood the OAIS model and how OAIS concepts could be applied to DataStaR, we undertook a thorough review of the TRAC checklist requirements. We identified those criteria we considered to be within the scope of the DataStaR project, those that were completely out of scope, and those that could be deferred for consideration at the end of the initial grant-funded project (i.e., the point at which DataStaR might be incorporated into core library operations). This may sound straightforward enough, but the process required considerable effort on our part as we worked to understand each criterion, consider its applicability to DataStaR, and determine what types of evidence could be compiled.

As noted earlier, we reviewed documents and policies from other institutions. While we examined several sets of documents pertaining to establishing the trustworthiness of particular digital repositories, we found two examples particularly useful in developing our own approach: ICPSR's Digital Preservation Policy Framework [McGovern 2007], and MIT's PLEDGE project policy list [Wolfe 2007]. ICPSR's document, while consciously reflecting the organization of the original TDR report attributes, is readable and easily understood by non-experts. The policy documents for the PLEDGE project revealed a very thorough and detailed approach to aligning policies and policy statements with specific TRAC criteria.

Our aim, after engaging in a decomposition of TRAC that was nearly as detailed as the PLEDGE project's, was to develop documents that more closely resembled those of ICPSR: readable statements of policy and descriptions of operations that enable a potential user to understand the extent and limits of DataStaR's management and preservation commitments. We settled on three main vehicles for demonstrating or describing policies and activities that support selected TRAC requirements: a depositor agreement, a repository policies document, and system documentation. Throughout, we attempted to note which documents or specific statements are meant to address a specific requirement. Because we adopted TRAC as a planning tool and repository development is still in progress, some documents (or parts of them) serve as a checklist of system requirements and will mature as development progresses (this is especially true of system documentation), while others are more fully developed and ready for implementation (the depositor agreement).

Development of the depositor agreement was influenced primarily by the license agreement used for Cornell's institutional repository, eCommons@Cornell, the primary purpose of which is to ensure that Cornell University Library has sufficient non-exclusive rights to distribute and preserve content, and that the submission does not violate copyright law. Additional statements were adapted from the data management policy used by the Cornell University Geospatial Information Repository [CUGIR work group 2005]. We also reviewed the Data-PASS Data Deposit Agreement [Data-PASS 2006] because it explicitly addresses the issue of re-dissemination of deposited material. We attempted to keep the agreement to one page or less, referencing more detailed policies in the separate repository policies document, but including in the depositor agreement a statement that the depositor accepts and abides by those policies. When we were satisfied with the agreement, we asked Cornell University Counsel to review the document (as well as the repository policies, described below), and they had no objections. Both documents are available on the DataStaR project website (http://datastar.mannlib.cornell.edu/).

We considered multiple sources and examples in developing the DataStaR repository policies document. The TRAC checklist had the most significant impact on the final document, but we also adapted parts of the data management policy used by the Cornell University Geospatial Information Repository [CUGIR work group 2005]. The DataStaR policies document includes a mission statement, and addresses the following categories of issues:

  • Updates and amendments to the policies and related documents;
  • A general overview describing DataStaR's intended use, benefits of using DataStaR, statements of who may use DataStaR and what content may be deposited, and statements regarding intellectual property rights;
  • Collection development policy;
  • Data and metadata management policy addressing acceptable file formats, required metadata, updates to data sets, exposure of data and metadata to search engines and harvesters, restricted access data, access and delivery options, data and access integrity, policies and responsibilities related to the publication of data to external repositories, and withdrawal of content from DataStaR;
  • Digital preservation commitment, including the OAIS information model as applied to DataStaR;
  • Terms of use for both depositors and content users, including a privacy policy and a statement of distribution liability;
  • Feedback policy.

As noted above, given the relatively early stage of development of the DataStaR platform, the system documentation currently serves as a list of system requirements and is not yet publicly available. In its current form, it provides an overview of the system architecture, with sections addressing ingest, access, metadata, publication to an external repository, administrative and maintenance actions, backup, hardware and software inventories and histories, testing and further development, and other functions. Supporting documents include the specification of minimum required metadata and a catalog of system errors and responses.

A summary of the TRAC requirements and their alignment or inclusion in the documents described here may offer some insight into which vehicles are most appropriate for addressing certain groups of requirements (Table 1A). Perhaps not surprisingly, Section A items from the TRAC checklist, those that pertain to organizational infrastructure, are largely addressed by policies (in DataStaR's case, the depositor agreement and repository policies). Section B (digital object management) and Section C (technologies, technical infrastructure and security) requirements are generally addressed in system documentation, but information may also be conveyed to users via repository policies, and even the depositor agreement.

Table 1A. Number (and percentage) of TRAC criteria from each section addressed by the depositor agreement, repository policies, and system documentation for DataStaR.

TRAC Section       Depositor agreement   Repository policies   System documentation/requirements
A (24 criteria)    6 (25%)               9 (38%)               3 (13%)
B (44 criteria)    4 (9%)                14 (32%)              30 (68%)
C (16 criteria)    0                     0                     6 (38%)

What is noteworthy about this exercise is that only 9 criteria were considered out of scope for the DataStaR project. Twenty-one were identified as requirements to be reconsidered should DataStaR become a core library service (Table 1B), and an effort was made to address nearly two-thirds (64%) of the TRAC criteria in the pilot phase of DataStaR. Requirements deferred to a later date mostly concerned longer-term organizational infrastructure and the policies and procedures for monitoring and managing change (to technology, repository policies, and so on), matters that aren't of central importance over the course of a three-year grant but would be over a longer time frame. We also deferred some items from the security section (C3) simply because the library as a whole is currently updating its strategies in this area, and we anticipate applying those strategies when they are fully developed. Items considered entirely out of scope are those that pertain to long-term preservation actions and infrastructure within the DataStaR repository itself, because DataStaR is not a long-term preservation repository.

Table 1B. TRAC criteria selected for reconsideration at the point DataStaR matures into a production system, and criteria identified as beyond the scope of DataStaR: 36% of all criteria were identified for later consideration or as out of scope; 64% were addressed in the pilot phase of the DataStaR project.

TRAC Section          Address at transition to production system   Not relevant to DataStaR
A (24 criteria)       11 (46%)                                      1 (4%)
B (44 criteria)       0                                             8 (18%)
C (16 criteria)       10 (63%)                                      0
Total (84 criteria)   21 (25%)                                      9 (11%)
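
As a quick check on the figures reported in Tables 1A and 1B, the following snippet (illustrative only) tallies the counts: of the 84 TRAC criteria, 21 were deferred to the transition to a production system and 9 were judged out of scope, leaving 54, or roughly 64%, addressed during the pilot phase.

    # Counts taken directly from Table 1B.
    criteria = {"A": 24, "B": 44, "C": 16}
    deferred = {"A": 11, "B": 0, "C": 10}
    out_of_scope = {"A": 1, "B": 8, "C": 0}

    total = sum(criteria.values())                                        # 84
    later_or_out = sum(deferred.values()) + sum(out_of_scope.values())    # 21 + 9 = 30
    addressed = total - later_or_out                                      # 54
    print(f"{addressed} of {total} criteria addressed ({addressed / total:.0%})")  # ~64%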

Conclusions and observations

The fact that so much of the TRAC checklist was applicable to the pilot phase of a staging repository demonstrates that this framework has significant value for repository planning and management, and that this value extends well beyond issues directly related to digital preservation. Nevertheless, we do have a few practical observations about our experience in applying the TRAC checklist to DataStaR, some of which suggest activities that would simplify the process for others.

Policy development could be made simpler. Early in our work, we examined the OpenDOAR policies tool [University of Nottingham 2007], an easy-to-use, form-based tool for creating repository policies related to the use of repository contents. Its scope was too narrow for our purposes, but it remains an excellent example of a tool that could simplify the development of other types of repository policies, including preservation policies.

Repository software could be more "preservation-aware", and preservation functions built into repository platforms could be better documented. Kaczmarek et al. [2006] take a step in the right direction by applying the TRAC checklist to the evaluation of repository software. It makes little sense for every user of a particular platform to have to repeat such an analysis; software developers (or external reviewers) could evaluate and document the relationship between TRAC criteria and software features. Smith and Moore's [2007] work to translate policies into machine-actionable rules will also be helpful in this area.
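
To illustrate what "machine-actionable" means in practice, the sketch below is our own simplified example (not Smith and Moore's rule language, and not DataStaR's implementation): a written policy, such as a list of acceptable file formats, is expressed as a check the repository software can enforce automatically at ingest. The format list and function names are hypothetical.

    from pathlib import Path

    # Hypothetical set of formats permitted by a repository's data management
    # policy; an actual policy would enumerate its own accepted formats.
    ACCEPTED_EXTENSIONS = {".csv", ".txt", ".xml", ".pdf"}


    def complies_with_format_policy(filename: str) -> bool:
        """Return True if the deposited file's format is permitted by policy."""
        return Path(filename).suffix.lower() in ACCEPTED_EXTENSIONS


    def ingest_file(filename: str) -> None:
        """Enforce the format policy automatically at deposit time."""
        if not complies_with_format_policy(filename):
            raise ValueError(f"{filename}: format not permitted by repository policy")
        # ... continue with ingest: checksums, metadata capture, and so on ...


    if __name__ == "__main__":
        ingest_file("survey_results.csv")   # acceptable format, ingest proceeds
        # ingest_file("model.exe")          # would raise ValueError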

Tabulating our progress with TRAC required two kinds of documentation, aimed at different audiences. We observed, in the process of examining other efforts to apply the TRAC checklist, that some organizations directed their efforts towards compiling evidence that reflected the content of the checklist itself. This makes sense if the primary goal is to satisfy auditors. We also observed other organizations translating the principles of the TRAC checklist into documents that appear to be intended for users of the repository. We believe both approaches are necessary. There is sufficient latitude in the TRAC checklist, in our view, that language aimed at certification alone doesn't convey fully to users the commitments of the organization managing the repository. Numerous statements in the TRAC checklist suggest specific information that must be conveyed to users, and in that spirit, we focused on developing documents primarily for users.

Finally, an interesting area, and one that we have yet to fully explore, is the implications for selecting and recommending repositories where DataStaR contributors can deposit their data. While our ideal would be to recommend only those repositories that can demonstrate some defined level of trustworthiness, in reality there may be few choices open to researchers at the time they are ready to deposit a data set. This is one of the drivers for our recommendation that researchers deposit data both in a discipline-specific repository (if one is available; it may or may not have an explicit preservation commitment) and in Cornell's institutional repository (eCommons@Cornell). While eCommons@Cornell does not currently make formal preservation commitments, planning is underway at Cornell University Library to establish a trustworthy repository. Depositing to eCommons@Cornell serves as a sort of insurance should a chosen disciplinary repository close.

In summary, while DataStaR is not a preservation repository and as such would not measure up to all of the standards applied in a formal audit, we've learned a great deal from our effort to understand and apply selected principles from the TRAC checklist. We have a clear understanding of DataStaR's role in a chain of preservation activities, and we're satisfied that we've given sufficient thought to the design and operation of a unique repository. Frameworks such as the TRAC checklist can be very profitably applied to the design and planning of diverse types of digital repositories.

Acknowledgements

The authors gratefully acknowledge the contributions of the DataStaR development team: Brian Caruso, Kathy Chiang, Jon Corson-Rikert, Brian Lowe, and Janet McCue. This material is based upon work supported by the National Science Foundation under Grant No. III-0712989. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Notes

1. An updated version of the OAIS, the candidate for submission to the ISO for review, is available for public examination at <http://cwe.ccsds.org/moims/docs/MOIMS-DAI/Draft%20Documents/OAIS-candidate-V2-markup.pdf>.

References

Allinson, Julie. 2006. OAIS as a reference model for repositories: An evaluation. UKOLN. <http://www.ukoln.ac.uk/repositories/publications/oais-evaluation-200607/Drs-OAIS-evaluation-0.5.pdf> (accessed 05/08/2009).

Ambacher, Bruce. 2006. Government archives and certification. Chapel Hill, NC, USA. <http://sils.unc.edu/events/2006jcdl/digitalcuration/Ambacher-JCDLWorkshop2006.pdf> (accessed 04/02/2009).

Ambacher, Bruce. 2008. Government archives and the digital repository audit checklist. Journal of Digital Information 8(2): 63-73.

Beagrie, Neil, Najla Semple, Peter Williams, and Richard Wright. 2008. Digital preservation policies study. JISC (accessed 11/6/2008).

Bishop, Libby. 2007. Moving data into and out of an institutional repository: Off the map and into the territory. IASSIST Quarterly 31, (3-4): 13-20.

Brandt, D. Scott. 2007. Librarians as partners in e-research: Purdue university libraries promote collaboration. College & Research Libraries News 68, (6), <http://www.ala.org/ala/mgrps/divs/acrl/publications/crlnews/2007/jun/partnerseresearch.cfm> (accessed 05/08/2009)

Center for Research Libraries. n.d. a. CRL - certification and assessment of Portico and HathiTrust. <http://www.crl.edu/content.asp?l1=13&l2=58&l3=181> (accessed 4/30/2009).
—. n.d. b. CRL - core requirements for digital archives. <http://www.crl.edu/content.asp?l1=13&l2=58&l3=162&l4=92> (accessed 4/20/2009).
—. n.d. c. Metrics for assessing and certifying. <http://www.crl.edu/content.asp?l1=13&l2=58&l3=162> (accessed 4/20/2009).

Consultative Committee for Space Data Systems. 2002. Reference model for an open archival information system (OAIS). Washington, D.C.: CCSDS Secretariat.

CUGIR work group. 2005. CUGIR data management and distribution policy. Albert R. Mann Library, Cornell University. <http://cugir.mannlib.cornell.edu/CUGIRSecurityAssessment.pdf> (accessed 05/08/2009).

Data-PASS. 2006. Data-PASS data deposit agreement, <http://www.icpsr.umich.edu/DATAPASS/pdf/deposit-agreement.pdf> (accessed 05/08/2009).

Digital Curation Centre and DigitalPreservationEurope. 2008. DRAMBORA interactive: Digital repository audit method based on risk assessment. <http://www.repositoryaudit.eu/> (accessed 6/12/2009).

Green, Ann G., and Myron P. Gutmann. 2007. Building partnerships among social science researchers, institution-based repositories and domain specific data archives. OCLC Systems & Services 23, (1): 35-53.

Hank, Carolyn, Helen Tibbo, and Heather Barnes. 2007. Building from trust: Using the RLG/NARA audit checklist for institutional repository planning and deployment. Society for Imaging Science and Technology Archiving Conference Proceedings. May 21-24, 2007. Arlington, VA.

HathiTrust. n.d. Accountability. <http://www.hathitrust.org/accountability> (accessed 4/30/2009).

Kaczmarek, Joanne, Patricia Hswe, Janet Eke, and Thomas Habing. 2006. Using the audit checklist for the certification of a trusted digital repository as a framework for evaluating repository software applications: A progress report. D-Lib Magazine 12(12). <doi:10.1045/december2006-kaczmarek> (accessed 4/30/2009).

Knight, G., and M. Hedges. 2007. Modeling OAIS compliance for disaggregated preservation services. International Journal of Digital Curation 2(1), <http://www.ijdc.net/ijdc/article/view/25/28> (accessed 08/01/2007).

Martinez-Uribe, Luis. 2007. Digital repository services for managing research data: What do Oxford researchers need? IASSIST Quarterly (IQ) 31, (3-4): 28-33. <http://www.iassistdata.org/publications/iq/iq31/iqvol313martinez.pdf> (accessed 6/10/2009).

McCue, Janet, Barbara Lust, Jon Corson-Rikert, Brian Lowe, Joy Paulson, Gail Steinhart, Frances Webb, and Elaine Westbrooks. 2007. SGER: Planning information infrastructure through a new library-research partnership [final report]. <http://hdl.handle.net/1813/7573> (accessed 4/30/2009).

McGovern, Nancy Y. ICPSR digital preservation policy framework. 2007. <http://www.icpsr.umich.edu/DP/policies/dpp-framework.html> (accessed 4/27/2009).

McNeill, Katherine. 2007. Interoperability between institutional and data repositories: A pilot project at MIT. IASSIST Quarterly 31, (3-4): 6-12.

Rice, Robin. 2007. DISC-UK DataShare project: Building exemplars for institutional data repositories in the UK. IASSIST Quarterly 31, (3-4): 21-7.

RLG-NARA Digital Repository Certification Task Force. 2007. Trustworthy repositories audit & certification: Criteria and checklist. <http://www.crl.edu/PDF/trac.pdf> (accessed 4/27/2009).

RLG-OCLC Working Group on Digital Archive Attributes. 2002. Trusted digital repositories: Attributes and responsibilities. Mountain View, CA: Research Libraries Group (RLG). <http://www.oclc.org/programs/ourwork/past/trustedrep/repositories.pdf> (accessed 4/27/2009).

Ross, Seamus, and Andrew McHugh. 2006. The role of evidence in establishing trust in repositories. D-Lib Magazine 12, (7/8) (Aug). <doi:10.1045/july2006-ross> (accessed 4/30/2009).

Schmidt, Lisa M. 2009. Preserving the H-net academic electronic mail lists. Society of American Archivists, SAA Campus Case Study - Case 11, <http://www.archivists.org/publications/epubs/campusCaseStudies/casestudies/Case11-Schmidt.pdf> (accessed 04/27/2009).

Smith, M., and R. W. Moore. 2007. Digital archive policies and trusted digital repositories. International Journal of Digital Curation 2, (1). <http://www.ijdc.net/ijdc/article/view/27/30> (accessed 08/01/2007).

Steinhart, G.S. 2007. DataStaR: An Institutional Approach to Research Data Curation. IASSIST Quarterly (IQ) 31, (3-4): 34-39. <http://www.iassistdata.org/publications/iq/iq31/iqvol313steinhart.pdf> (accessed 6/10/2009)

Steinhart, G. S., and B. J. Lowe. 2007. Data curation and distribution in support of Cornell University's Upper Susquehanna Agricultural Ecology Program. Chapel Hill, NC, <http://dspace.library.cornell.edu/handle/1813/7517> (accessed 05/24/2007).

Treloar, Andrew, David Groenewegen, and Cathrine Harboe-Ree. 2007. The data curation continuum: Managing data objects in institutional repositories. D-Lib Magazine 13(9). <doi:10.1045/september2007-treloar> (accessed 6/10/2009).

University of Nottingham. 2007. OpenDOAR policies tool - directory of open access repositories. <http://www.opendoar.org/tools/en/policies.php> (accessed 5/8/2009).

Vardigan, Mary, and Cole Whiteman. 2007. ICPSR meets OAIS: Applying the OAIS reference model to the social science archive context. Archival Science 7, (1) (03/20): 73-87.

Wolfe, Robert. 2007. PLEDGE policy list. MIT Libraries. <http://pledge.mit.edu/images/1/13/PLEDGEPolicies20070927.pdf> (accessed 05/06/2009).

Copyright © 2009 Gail Steinhart, Dianne Dietrich, and Ann Green

doi:10.1045/september2009-steinhart