Towards Transparent and Scalable OpenURL Quality Metrics
The development of link resolvers and the OpenURL framework over a decade ago paved the way for open-ended, context-sensitive linkage from databases, indexes, and abstracting services to the appropriate resources and services to meet library users' needs. Library patrons can now retrieve more comprehensively linked scholarly information than ever before. However, even with today's link resolver technology and the OpenURLs on which the link resolvers depend, following a reference link all the way to full-text can still frustrate library users all too often. In this article, the authors present research on why OpenURLs fail so frequently. They also describe a model for measuring the quality of OpenURL metadata, using a log processor and reporting software. The results of this study demonstrate the potential applicability of such a system as a scalable, stand-alone service for all libraries, OpenURL and full-text content providers, and link resolver vendors to evaluate and improve the completeness and consistency of their links.
Recognizing the need for open, context-sensitive linkage among secondary sources, catalogs, and primary sources in a hybrid library environment, Herbert Van de Sompel and his colleagues at the University of Ghent researched and developed the first link resolver in the late 1990s. The project, christened "Special Effects," or SFX, aimed to create an open linking framework in which links could be fashioned dynamically, based on relationships and agreements between information providers and libraries, to present scholarly resources to library users in the context of the entire collection available to them (Van de Sompel and Hochstenbach, 1999a; Van de Sompel and Hochstenbach, 1999b). As part of this effort, the research team sought to develop within the SFX framework "extended service-links" for scholarly information resources that would go beyond the traditional notion of reference links that simply connect metadata to the full-text content described by that metadata. What was needed, the team felt, were links that would take into account the context of the individual users who followed those links and the access limitations (or absence thereof) imposed by business agreements among information providers.
To point library users to the "appropriate copy" of a desired item, Van de Sompel, working two years later with Oren Beit-Arie from Ex Libris (USA), introduced the OpenURL (Beit-Arie et al., 2001). The OpenURL framework "provides a standardized format for transporting bibliographic metadata about objects between information services" (Van de Sompel and Beit-Arie, 2001). More specifically, an OpenURL carries the data about a bibliographic item from an information provider to a library's link resolver (an SFX server, for example). The link resolver compares the metadata embedded within the OpenURL against the library's holdings to determine which online and analog options to present to the user who initiated the search. The OpenURL is thus the hook that connects the database, index, or abstracting service holding the metadata to the appropriate services to meet library users' needs through the mediation of the library link resolver. "The OpenURL is the enabling mechanism for open, [context-sensitive] linking from resources" (Van de Sompel and Beit-Arie, 2001).
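In practice, an OpenURL is most often expressed as a Key/Encoded-Value (KEV) query string appended to the address of a library's link resolver. The following Python sketch illustrates the idea for a journal-article citation. It is a minimal illustration, not a full implementation of the standard: the resolver address is hypothetical, and a production system would follow the complete ANSI/NISO Z39.88 specification rather than this small subset of keys.

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, citation):
    """Assemble an OpenURL 1.0 query string in Key/Encoded-Value (KEV)
    form for a journal-article citation supplied as a dict of element
    names (e.g., "jtitle", "volume", "spage")."""
    params = {
        "url_ver": "Z39.88-2004",                       # OpenURL 1.0 version tag
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal metadata format
    }
    # The standard prefixes each referent element with "rft."
    params.update({f"rft.{k}": v for k, v in citation.items()})
    return resolver_base + "?" + urlencode(params)

# Hypothetical resolver address and sample citation, for illustration only.
url = build_openurl(
    "http://resolver.example.edu/openurl",
    {
        "genre": "article",
        "jtitle": "Library Trends",
        "date": "1997",
        "volume": "45",
        "issue": "3",
        "spage": "448",
        "atitle": "Building the Infrastructure of Resource Sharing",
    },
)
```

The resulting URL carries the citation metadata to whatever resolver the user's institution operates, which is what makes the linking both open and context-sensitive: the same source link resolves differently for users at different libraries.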
The development of the OpenURL framework was a much heralded milestone for providing access to full-text resources across discovery systems. It seemed to provide a crucial, and seemingly final, piece of the puzzle of how to provide cost-effective, appropriate services to early 21st-century library users, regardless of the source of the reference citations and other scholarly information used by researchers. Van de Sompel and Beit-Arie closed their 2001 paper on OpenURL optimistically: "It seems reasonable to conclude that the future of open linking in the web-based scholarly information environment looks bright" (Van de Sompel and Beit-Arie, 2001). Two years later, Jenny Walker of Ex Libris observed that the development of the SFX link server and OpenURLs had "revolutionized linking for libraries" (Walker, 2003, p. 87). The standardization of the OpenURL protocol by the National Information Standards Organization (NISO) in 2004 (as ANSI/NISO Z39.88) further reinforced this climate of optimism. Library users were able to retrieve faster, more reliable, and more comprehensively linked scholarly information than they ever had prior to the work of Van de Sompel et al. Indeed, this improved access has been a boon to library users and a number of commercial link resolvers have appeared on the market over the past ten years: SFX (now supported and sold by Ex Libris), EBSCO Linksource, Innovative Interfaces' WebBridge, Serials Solutions' 360 Link, WorldCat Link Manager, and others. In July 2010, Ex Libris celebrated the release of SFX 4.0, noting that the SFX link resolver had been deployed at 1800 institutions in 53 countries (Ex Libris, 2010), and, by one estimate, some one billion OpenURLs are now exchanged globally each year (HangingTogether Blog, 2009). But in spite of the development of link resolver technology and the OpenURLs on which they depend, following a reference link all the way to the full text frustrates users too often. 
The goal to provide fully reliable, seamless linkage between reference citations and full-text information has not been fully achieved.
OpenURL Metadata: The Issue of Quality
In summarizing the features of the prototype SFX link resolver, Van de Sompel and Hochstenbach remarked that: "Depending on the accuracy of the link-to-syntax provided by the primary publisher's system ... [link resolution] can lead directly to the full-text of the referred paper" (Van de Sompel and Hochstenbach, 1999b). At the time, most link-to-syntax could only manage to link to an appropriate table of contents. With the development and standardization of the OpenURL protocol, the precise linkage to an article's host serial, its year of publication, its volume and issue numbers, and the starting page of the article itself became possible. Possible, but not always achievable.
The dynamic reference linking model assumed that the citation metadata embedded in the OpenURLs would be inherently consistent and accurate. In practice, this has not always been the case. In fact, even as early as 1997, Clifford Lynch anticipated the unreliability of this metadata. Writing about linking from abstracting and indexing (A&I) databases to serials records in catalogs and union lists, he noted that "experience in practice has shown that the [linking] data are often inaccurate or incomplete" (Lynch, 1997, p. 459). Lynch assumed, however, that market pressures would force providers to improve the quality of their links. Miriam E. Blake and Frances L. Knudson, reflecting on the results of the first test instances of the OpenURL framework in 2002, identified a number of ways in which incomplete or inaccurate metadata embedded in the link-to-syntax of OpenURLs could lead to link failure: variation in ISSN for purposes of matching copies, errors or inconsistencies in the transcription of volume and issue information, incorrect page numbers for the referred article, and incorrect publication dates (Blake and Knudson, 2002). The authors called for increased consistency across databases, increased communication between primary and secondary publishers, increased awareness of bibliographic citation standards by authors, and increased outreach by librarians. This concern was not lost on the members of NISO's Committee AX, which was charged to produce the OpenURL standard. ProQuest's Todd Fegan, a member of that team, noted that the OpenURL standard "isn't perfect. It doesn't fix data discrepancies. It assumes that the metadata that is transported from one system can be properly interpreted and matched in a second system. Differing editorial policies, tagging rules, etc. ... are still problematic. Librarians need to understand that there will be errors" (Hendricks, 2003, p. 131).
Research and commentary published since the implementation of the OpenURL standard confirm Fegan's prediction (Machovec and Stockton, 2004; Beall, 2005; Livingston, Sanford, and Bretthauer, 2006; Wakimoto, Walker, and Dabbour, 2006). More recently, a report prepared for the United Kingdom Serials Group (UKSG), entitled "Link Resolvers and the Serials Supply Chain," examined some of the problems stemming from the lack of data standards, as well as inaccurate and incomplete metadata in OpenURLs. In conjunction with this report, the UKSG commissioned a survey to explore the link resolver and serials supply chain landscape. The results of the survey revealed the scale of the OpenURL metadata problem. Writing on behalf of Scholarly Information Strategies, James Culling noted that:
72% of respondents to the online survey either agreed or strongly agreed that a significant problem for link resolvers is the generation of incomplete or inaccurate OpenURLs by databases (for example, A&I products). OpenURLs may be broken on account of insufficient or incorrect metadata that leads to erroneous results in the link resolver's service menu or prevents the resolver from creating a sufficiently deep link to a target site. One librarian interviewed commented that his experience with some sources was so bad that he refused to enable OpenURL links from them as he did not wish to expose his end users to the problems (Culling, 2007, p. 33).
All of these observations and inquiries point either to problems with metadata issued by content providers or to the need to improve this metadata through a more qualitative evaluation. Most existing research does not, however, address the qualitative dimension of this metadata in the OpenURL context, and the UKSG report does not include specific recommendations for how to measure the extent of, and fix, the problems. Instead that report led to the creation of the joint UKSG/NISO Knowledge Bases and Related Tools (KBART) working group (http://www.uksg.org/kbart), which focuses on improving the metadata accuracy within the batch exchange of metadata from content providers to knowledge base vendors. The working group has released recommendations about how to streamline that exchange to minimize errors within link resolver knowledge bases.
Indeed, there has been no systematic research that focuses specifically on OpenURL quality and how to evaluate it. In their review of metadata quality studies published in the early '00s, Thomas R. Bruce and Diane I. Hillmann did indicate the need for metadata quality metrics, in general. The studies suggested a focus on completeness, provenance, accuracy, conformance to expectations, logical consistency and coherence, timeliness, and accessibility (Bruce and Hillmann, 2004). At about the same time, Baden Hughes reported on the design and implementation of a scalable, dynamically-adjustable infrastructure to support the qualitative assessment of the semantic and syntactic content of metadata within a specialized OAI sub-domain: the Open Language Archives Community, or OLAC (Hughes, 2004). The goals of the OLAC initiative were to: (1) establish a baseline against which future evaluative instances could be compared, (2) supply information to data providers, and (3) assess a set of domain-grounded controlled vocabularies. Hughes developed an algorithm that scored individual metadata records based on their adherence to best practice guidelines for Dublin Core metadata elements and codes and for controlled vocabularies specific to the OLAC domain. Two values contributed to the result: a "Code Existence Score" and an "Element Absence Penalty" (Hughes, 2004, p. 322). This metadata quality assessment algorithm was applied through an open source service that lies atop the OLAC Harvester and Aggregator (Simons, 2003; cited in Hughes, 2004, p. 323). This service allowed the developers to evaluate OLAC metadata quality from a number of perspectives: (1) by data provider, (2) across the whole community, and (3) between archives. Hughes concluded that:
While the lack of infrastructural support for qualitative assessment of metadata within the digital archives community is notable, we believe that the provision of tools which assist metadata creators and managers to understand the qualitative aspects of metadata are of critical importance. Such tools enable archive managers to identify specific areas for metadata enrichment activity, and hence derive the greatest return on investment ... We offer our approach and implementation to the broader digital libraries community in the hope that the model and implementation may benefit a larger range of institutional data providers, and ultimately, end-users (Hughes, 2004, p. 327).
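The two components of Hughes's algorithm can be sketched in Python along the following lines. The element names, sample record, and unit weights here are illustrative assumptions only; they are not the values OLAC used, and the actual algorithm also checked adherence to best-practice guidelines beyond simple presence or absence.

```python
def score_record(record, required_elements, coded_elements):
    """Toy version of the two scoring components Hughes describes:
    a reward for elements carrying a controlled-vocabulary code
    ("Code Existence Score") and a penalty for missing elements
    ("Element Absence Penalty"). Weights are illustrative."""
    code_existence = sum(
        1 for e in coded_elements if record.get(e, {}).get("code")
    )
    absence_penalty = sum(1 for e in required_elements if e not in record)
    return code_existence - absence_penalty

# Invented sample record: "date" is absent, while "subject" and
# "language" carry controlled-vocabulary codes.
record = {
    "subject": {"code": "linguistics", "value": "Linguistics"},
    "language": {"code": "en"},
    "title": {"value": "Field notes"},
}
score = score_record(
    record,
    required_elements=["title", "subject", "language", "date"],
    coded_elements=["subject", "language"],
)
# Two coded elements (+2) and one absent element (-1) yield a score of 1.
```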
It is our intention in the current study to follow the lead of Baden Hughes and his associates and report on similar models and their implementation for the qualitative evaluation of OpenURL metadata. OpenURL link failures stem from some combination of: (1) incomplete, inconsistent, or inaccurate citation metadata; (2) inaccurate link resolver knowledge base holdings; or (3) inaccurate link-to-syntax, dependent on the local context. Trainor and Price (2010) summarize the difficulty of identifying the cause of OpenURL link failure: "It remains to be seen whether or not generalizations can be made. It is certainly true, however, that for particular combinations of source, resolver, knowledge base, and target, some components are more at fault than others" (p. 18).
These errors reveal a missing component in the original OpenURL model: in shifting responsibility for the accuracy and maintenance of the linking function of the URL from the content provider to the link resolver, accountability for the accuracy of the citation metadata was overlooked. It is to the frequent inadequacies and inaccuracies of the citation metadata, and to the need for an evaluative component in the reference linking model (a service that can be deployed between the OpenURL source and the link resolver), that we now turn our attention, leaving issues associated with OpenURL link-to-syntax for future research.
Measuring OpenURL Quality: L'Année philologique
Our investigation began in earnest in 2008, thanks to a planning grant from the Andrew W. Mellon Foundation, secured by Eric Rebillard, a Professor of History and Classics at Cornell University. This grant supported research to improve the linking out of the online reference database Rebillard edits, L'Année philologique. Among the issues Cornell librarians examined was the quality of the OpenURLs the database offers to its users. As managing editor of L'Année philologique, Rebillard was aware that too often the links sent from L'Année philologique did not successfully resolve to the requested resource. Why not? From the perspective of the OpenURL provider, this question is difficult to answer. When users click on a link embedded in an OpenURL from a provider such as L'Année philologique, the provider has no way of knowing where the transactions will end up, because open-ended linking in a context-sensitive model is designed to present users with institutionally-specific digital and analog options (i.e., the "appropriate copy"). Since full-text electronic content is usually locked up behind authentication barriers, systematic feedback regarding link failures is impossible. Did a link fail because of incomplete or incorrect metadata transmitted from the database? Or did the problem lie in the link resolver's knowledge base, the link-to-syntax used by the link resolver, or instability at the content provider's end of the network?
In the grant-funded study, we explored the feasibility of developing a fully automated OpenURL evaluation process, based on the Hughes model (Chandler, 2009). We built a system, including a log processor and reporting software, to analyze and score the OpenURLs in L'Année philologique and give each a "quality" rating, based on the presence or absence of certain citation metadata elements. These scores allowed Rebillard to see precisely where the OpenURLs in L'Année differed from the norm and help him to target metadata improvement efforts in a cost-effective manner. Further, we hoped to use this investigation of OpenURL quality in a single database as a proof of concept to demonstrate the applicability of such a system as a stand-alone service for all libraries, OpenURL and full-text content providers, and link resolver vendors to evaluate and improve the completeness and consistency of their links.
In the experiment, we reviewed the 800,000 pre-computed, unique OpenURLs in the L'Année philologique repository. We discovered that our original aim to give quality points, as well as a pass/fail rating, to each element in the OpenURLs and then to total these points to derive an overall quality score for each OpenURL did not work well (Chandler, 2009, p. 2). How, for example, could we accurately score the element for an article or book chapter title ("atitle" or "title"), since titles are endlessly variable and it is nearly impossible to determine whether a particular pattern is correct? Moreover, this method did not effectively take into account the frequency with which particular elements appear in the link-to-syntax, which allows link resolvers to connect to the full-text providers. Without comparative weighting of these elements against other parts of the OpenURL, it is unclear how to prioritize efforts to correct and otherwise edit this metadata. Finally, the approach did not generate information about source-specific data patterns in the OpenURLs.
We decided to revise our approach to the evaluative model by computing three new values for each OpenURL element: the percentage of link-to-syntax strings that recommend or require the element, the percentage of all OpenURLs in a reference sample that contain the element, and the percentage of a given origin's OpenURLs that contain the element.
Using these values, we were able to develop and generate two reports on the OpenURLs found in L'Année: one for element frequency by genre and another listing string patterns within element values.
Along with source-specific data, the genre element report provides a benchmark, so that an OpenURL provider - in this case, L'Année philologique - can compare the frequency of these elements in its OpenURLs with that of its peers and competitors. Based on six months of all OpenURL activity in the Cornell University Library's link resolver, the most frequent genre type sought was the article (see Figure 1). For this reason, we developed and focused on that report first.
Figure 1: Genre frequency in OpenURLs sent to the Cornell Link Resolver, July - December 2008
Each report contains entries for nine elements. These are:
title [title of the journal or collection]
For each element, the report supplies two constants and one variable. The "% link-tos that recommend or require element" gives the percentage of link-to-syntax strings in the Cornell link resolver sample that recommend or require the inclusion of the element in question. The "% of all openurls that contain element" gives the percentage of OpenURLs in the Cornell link resolver sample in which the element in question appeared. These two constants provide the benchmark for the target OpenURL element. The "% of this origin's openurls that contain element" gives the percentage of OpenURLs from the source (or "origin") examined that contain the target element. The report for the genre article in the 800,000 OpenURLs from L'Année philologique appears in Figure 2.
Figure 2: Article Element Report for L'Année philologique.
The report shows that title, spage, and volume are the most frequently recommended / required and among the most frequently contained elements for the genre article in OpenURLs. The report also reveals that although the elements title and volume are almost always present in OpenURLs found in L'Année, the element spage never appears. The pervasive absence of this element clearly indicates that OpenURLs sent to link resolvers from this database have a lower chance of successful resolution to full-text than those from other databases. Moreover, given that 94% of all OpenURL providers in the Cornell sample include this element, it seems reasonable to assume that the editors of L'Année philologique should be able to find a way to include it in their OpenURLs as well.
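The per-origin percentages in such a report can be derived with a simple tally over parsed OpenURL logs. The following Python sketch is not the project's actual log processor; the origin labels and sample records are invented for illustration, and a real implementation would first parse the raw OpenURL strings into element dictionaries.

```python
def element_report(element, openurls, origin):
    """Compute two of the report figures described above for one element:
    the share of all OpenURLs in the sample that contain it, and the
    share of OpenURLs from a single origin that contain it. Each OpenURL
    is represented as a dict holding an 'origin' key plus its metadata
    elements."""
    def pct(urls):
        return round(100 * sum(1 for u in urls if element in u) / len(urls))

    from_origin = [u for u in openurls if u["origin"] == origin]
    return {
        "% of all openurls that contain element": pct(openurls),
        "% of this origin's openurls that contain element": pct(from_origin),
    }

# Invented sample: two OpenURLs from a hypothetical origin "aph" that
# omit spage, and two from other sources that include it.
sample = [
    {"origin": "aph", "title": "AJP", "volume": "XCIX"},
    {"origin": "aph", "title": "CQ", "volume": "LII"},
    {"origin": "other", "title": "JRS", "volume": "92", "spage": "1"},
    {"origin": "other", "title": "CP", "volume": "97", "spage": "113"},
]
report = element_report("spage", sample, origin="aph")
# spage appears in 2 of 4 OpenURLs overall, but in 0 of 2 from "aph".
```

Comparing the two percentages for each element is what surfaces anomalies such as the missing spage element discussed above.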
The element patterns report, which lists string patterns within element values, is based on the same six-month data set from the Cornell link resolver. In contrast to the genre report's emphasis on element frequency, this report probes more deeply into the string patterns found within the individual element values. For each OpenURL element included in the report, we created a handful of regular expressions for identifying the particular string patterns present in the data. In the volume element patterns report, for example, we established six pattern categories: five regular expression matches, plus an "other" category for those patterns that do not match any of the regular expressions (see Figure 3). For L'Année philologique, we discovered that 50% of the occurrences of the volume element in the database's OpenURLs contained Roman numerals. This common characteristic of the volume element pattern in L'Année contrasts sharply with the patterns for the same OpenURL element found in the six-month data set from the Cornell link resolver. In the latter case, none of the volume elements contained Roman numerals. In fact, 99% of the volume elements in the six-month data set contain an Arabic number only. This information is valuable for the editors of L'Année philologique, because it indicates that their use of the volume element differs from the norm and that OpenURL link resolvers may not be optimized to handle this string pattern. Although the use of Roman numerals in this element is "legal," it could and probably does lead to errors when passed along in the link-to-syntax to the full-text content provider at the other end of the OpenURL exchange.
Figure 3: Volume Element String Pattern Report for L'Année philologique.
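Classification of this kind requires only a small, ordered set of regular expressions, with an "other" bucket for strings that match none of them. The patterns in the Python sketch below are illustrative stand-ins, not the six categories actually used in the study.

```python
import re
from collections import Counter

# Illustrative pattern categories for the volume element; the study's
# actual regular expressions are not reproduced here.
VOLUME_PATTERNS = [
    ("arabic number only", re.compile(r"^\d+$")),
    ("roman numerals", re.compile(r"^[IVXLCDM]+$")),
    ("number range", re.compile(r"^\d+-\d+$")),
    ("number with suffix", re.compile(r"^\d+[A-Za-z]+$")),
    ("contains 'vol'", re.compile(r"vol", re.IGNORECASE)),
]

def classify_volumes(values):
    """Tally volume strings by the first pattern each one matches,
    falling back to an 'other' bucket for unmatched strings."""
    counts = Counter()
    for v in values:
        for label, rx in VOLUME_PATTERNS:
            if rx.search(v):
                counts[label] += 1
                break
        else:
            counts["other"] += 1
    return counts

# Invented sample values, one per category.
counts = classify_volumes(["45", "XCIX", "12-13", "101B", "Vol. 7", "n/a"])
```

Run per origin, a tally like this exposes deviations from the norm, such as the Roman-numeral volumes that distinguish L'Année philologique from the rest of the Cornell sample.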
Thus, we were able to supply the OpenURL provider (in this case, L'Année philologique) with data about the characteristics and quality of its OpenURLs. This work constitutes a promising first step in the development of a method to generate a systematic evaluation of OpenURLs for any and all providers. In the final report on the work supported by the Mellon grant, the project team concluded that: "we now have an easy to comprehend, scalable OpenURL quality model," and that:
the implementation of such a system as a stand alone service would have significant value to libraries, OpenURL providers, link resolver vendors, full text content providers, and most important of all, library patrons, by filling a critical gap in the OpenURL protocol: objective, empirical, transparent feedback for supply chain participants (Chandler, 2009, pp. 5-6).
The literature on reference linking experiments in the late 1990s and early 2000s is filled with discussion about two dichotomies: (1) static vs. dynamic linking and, (2) closed vs. open linking frameworks. In other words, the research of that period highlighted by Van de Sompel's development of the SFX link resolver focused on how to break out of the proprietary linking silos that existed at the time. Reflecting on the experimental work of that era, one could argue that solving the "appropriate copy problem" obscured, paradoxically, a critical and inherently positive feature of closed linking frameworks: the ability to determine first hand whether or not a link successfully resolves. The evolution of the architecture of reference linking from static to dynamic and from closed to open knocked down the proprietary linking silos and swept libraries forward, but at a cost. And it is this unforeseen and as yet unaccounted-for cost that librarians have been hearing sporadic complaints about ever since - complaints that are symptomatic of users' frustrated expectations. As Van de Sompel and Hochstenbach observed over a decade ago: "When using a library solution, the expectations of the net-traveler are inspired by his hyperlinked Web-experiences. To such a user, it is not comprehensible that secondary sources, catalogues, and primary sources, that are logically related, are not functionally linked" (Van de Sompel and Hochstenbach, 1999a). With the advent of OpenURLs, such a user may still discover that although these resources are functionally linked, the links don't always function.
Buoyed by the success of the grant-funded Cornell experiment, a proposal to expand this research into the wider industry community was reviewed and approved by the NISO Business Information Topic Committee in late 2009. The proposal called for the appointment of an OpenURL Quality Metrics Working Group, otherwise known as IOTA (Improving OpenURLs Through Analytics, http://openurlquality.niso.org). The working group includes representatives from libraries, content providers, and link resolver vendors, with the goal of investigating the feasibility of creating industry-wide, transparent and scalable metrics for evaluating and comparing the quality of OpenURL implementations across content providers.
The authors would like to recognize the contributions of Elaine Westbrooks, Jonathon Shultz, Rick Silterra, David Banush, Pete Hoyt, and Eric Rebillard for their work on the L'Année philologique OpenURL Quality Project. We would also like to thank the Mellon Foundation for funding the grant that made this research possible.
References
Beall, Jeffrey. 2005. Metadata and Data Quality Problems in the Digital Library. Journal of Digital Information. 6(3).
 Blake, Miriam E. and Knudson, Frances L. 2002. Metadata and Reference Linking. Library Collections, Acquisitions, and Technical Services. 26(3), pp. 219-230.
 Bruce, Thomas R. and Hillmann, Diane I. 2004. The Continuum of Metadata Quality: Defining, Expressing, Exploiting. In Metadata in Practice. Ed. Diane I. Hillmann and Elaine L. Westbrooks. Chicago: American Library Association, pp. 238-256.
 Chandler, Adam. 2009. Results of L'Année philologique online OpenURL Quality Investigation: Mellon Planning Grant Final Report. http://metadata.library.cornell.edu/oq/files/200902%20lannee-mellonreport-openurlquality-final.pdf.
 Culling, James. 2007. Link Resolvers and the Serials Supply Chain: Final Project Report for UKSG, p. 33. http://www.uksg.org/sites/uksg.org/files/uksg_link_resolvers_final_report.pdf.
 Ex Libris. 2010. A Landmark for Scholarly Linking: Ex Libris Announces the Release of SFX 4.0. Press release, July 19, 2010. http://www.librarytechnology.org/ltg-displaytext.pl?RC=14897.
 Hendricks, Arthur. 2003. The Development of the NISO Committee AX's OpenURL Standard. Information Technology and Libraries. 22(3), pp. 129-133.
 Hughes, Baden. 2004. Metadata Quality Evaluation: Experience from the Open Language Archives Community. In Digital Libraries: International Collaboration and Cross-Fertilization. Ed. Zhaoneng Chen et al. Berlin: Springer-Verlag, 2004, pp. 320-329.
 Livingston, Jill; Sanford, Deborah; and Bretthauer, Dave. 2006. A Comparison of OpenURL Link Resolvers: The Results of a University of Connecticut Libraries Environmental Scan. Library Collections, Acquisitions, and Technical Services. 30(3/4), pp. 179-201.
 Lynch, Clifford A. 1997. Building the Infrastructure of Resource Sharing: Union Catalogs, Distributed Search, and Cross-Database Linkage. Library Trends 45(3), pp. 448-461.
 Machovec, George and Stockton, Melissa. 2004. OpenURL Link Resolvers: A Practical Approach with Gold Rush. Technical Services Quarterly. 21(4), pp. 1-16.
National Information Standards Organization. 2004 (R2010). ANSI/NISO Z39.88: The OpenURL Framework for Context-Sensitive Services.
Trainor, Cindi and Price, Jason. 2010. Digging into the Data: Exposing the Causes of Resolver Failure. Library Technology Reports. 46(7), pp. 15-26.
 Van de Sompel, Herbert and Hochstenbach, Patrick. 1999a. Reference Linking in a Hybrid Library Environment. Part 1: Frameworks for Linking. D-Lib Magazine. 5(4). doi:10.1045/april99-van_de_sompel-pt1.
 Van de Sompel, Herbert and Hochstenbach, Patrick. 1999b. Reference Linking in a Hybrid Library Environment. Part 2: SFX, A Generic Linking Solution. D-Lib Magazine. 5(4). doi:10.1045/april99-van_de_sompel-pt2.
Wakimoto, Jina Choi; Walker, David S.; and Dabbour, Katherine S. 2006. The Myths and Realities of SFX in Academic Libraries. The Journal of Academic Librarianship. 32(2), pp. 127-136.
 Walker, Jenny. 2003. OpenURL and SFX Linking. The Serials Librarian. 45(3), pp. 87-100.