Donald W. King
Carol Hansen Montgomery, Ph.D.
An October 2002 D-Lib Magazine article by the authors described the changes in operational costs at Drexel University's W.W. Hagerty Library associated with the migration to a (mostly) all-electronic journal collection. The present article presents the user perspective, asking whether the migration to the electronic collection has affected the number of journal readings, the outcomes of reading, and information-seeking and reading patterns. Key findings are that the amount of reading remains high; outcomes from reading continue to be favorable, particularly from library-provided articles; while 42 percent of faculty reading is from library-provided articles, faculty still rely heavily on personal subscriptions; most library-provided reading is from electronic articles; and readers spend much less time locating and obtaining library-provided articles when they are available electronically.
In 1998, Drexel University's W. W. Hagerty Library embarked on a project to migrate to an electronic journal collection. Since Drexel became one of the first universities to begin a fundamental shift to electronic journals, the U.S. Institute for Museum and Library Services (IMLS) extended funding to Drexel to perform a comparative analysis of the library's operational costs and impact associated with print and electronic collections and to develop a model for use by other libraries. A recent D-Lib Magazine article primarily described these costs, usage metrics provided by vendors or publishers and counts from an in-library server, and included some data from a readership survey of Drexel faculty and doctoral students [Montgomery & King, 2002].
The present article reports the results of a comprehensive analysis of a readership survey covering the number of journal readings, outcomes from reading and information-seeking, and reading patterns following implementation of the nearly exclusive electronic journal collection.
When the migration to the electronic collection began in 1998, the collection input consisted of 1,710 print titles and 200 electronic titles (after many expensive core titles had been cancelled) and, by 2002, the collection input included 8,600 unique electronic titles and 370 print titles. The electronic collection actually comprised 13,500 titles, some of which were duplicated among "package" agreements. These titles are distributed as follows:
Individual Subscriptions (266 titles), which are almost always purchased from a subscription agent (e.g., Wiley titles, specialty design arts titles).
Publishers' Packages (2,500 titles), which may or may not be part of a consortium "deal," and are acquired by purchase through a subscription agent, the consortium or from the publisher directly (e.g., ScienceDirect, Kluwer titles).
Aggregator Journals (480 titles), which come from vendors that provide access to different publishers' journals. So far, the aggregators only add content; they do not drop it. These collections started as full-text content and added searching (e.g., JSTOR, MUSE).
Full-Text Database Journals (10,200 titles), which provide access to electronic journals from different publishers but do not offer title- or issue-level access (except ProQuest). Examples are WilsonSelect and Lexis/Nexis. Titles are added or removed regularly according to the database vendor's contracts with publishers, and current issues often carry an embargo of six months or more.
There is considerable overlap among the journals in these collections and between the full-text database journals and the two other types.
Number of readings is only one measure of journal usage. The Hagerty Library has maintained careful records of usage data provided by vendors and publishers. The staff also developed their own usage observations from "clicks" to journal titles on their server. Over the years, staff also kept reshelving records from the library's current journals and bound volume collection. The readership survey has an advantage over those methods in providing a common use metric for the electronic collection, print current journals and bound volumes. The survey also provides metrics of the outcomes of reading and of information-seeking and reading patterns. Each of these measures of use has some well-known flaws, but together they provide a useful picture concerning electronic journal use.
The readership survey of faculty and doctoral students conducted in May/June 2002 was based on a questionnaire design that has been applied in nearly 25,000 survey responses since 1977 [Tenopir & King, 2000]. The paper-based questionnaire was distributed by campus mail.
At the time of the survey, the Drexel population included 496 full-time faculty, 342 doctoral students, 2,200 masters students and about 11,000 undergraduate students. The self-administered questionnaire was distributed to the entire faculty with 91 responses (18% response rate) and to the 342 doctoral students with 104 responses (30% response rate). (A few of the 104 responses were from masters students, thought to be enrolled in the doctoral program.) The survey responses from faculty and doctoral students reasonably reflect the respective populations sampled, although responses from the College of Arts and Sciences for both faculty and doctoral students appear low, perhaps because of the relatively lower priority of the journal literature in relation to books and primary sources for scholarship in the humanities and social sciences.
The survey included questions about such things as the number of scholarly articles read in the past month, number of personal subscriptions, college or school of the respondent, degrees earned, and other demographic data. The key component of the questionnaire dealt with a critical incident: the most recent reading of a scholarly article. Critical incident questions included: the form of the article read; time spent reading; how the respondent found out about the article (i.e., browsing various forms, online searches, cited in a publication, someone told, etc.); the source of the article (i.e., personal electronic or print subscription, library electronic or print subscription, various forms of separate copies, etc.); the amount of time spent browsing or searching, locating and obtaining, photocopying, printing, and so on; and the purposes for which the article was read and the ways the reading affected those purposes. The critical incident method provided a means of observing reading not only from non-library sources, but also from three library collection services (i.e., electronic journals, current journals, and bound volumes). Reading was defined as going beyond the table of contents, title and abstract to the body of the article.
For some estimates, the 195 responses were treated as simple random sample observations, subdivided by faculty and doctoral students. For example, the average number of faculty readings was estimated to be 16 per person per month, or 197 readings per year. The confidence interval for this estimate is 197 ± 41 readings per person per year at the 95% level. The figure for doctoral students is 248 ± 40 readings per student per year at the 95% level of confidence.
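The confidence-interval calculation described above can be sketched as follows. This is a minimal illustration of the standard simple-random-sample formula, not the authors' actual computation; the sample values and the annualization by a factor of 12 are assumptions for the example.

```python
import math
import statistics

def annual_reading_ci(monthly_readings, z=1.96):
    """Estimate annual readings per person with a 95% confidence interval.

    monthly_readings: one observation per respondent (readings last month).
    Returns (annual_mean, half_width); the interval is mean ± half_width.
    """
    n = len(monthly_readings)
    mean_monthly = statistics.mean(monthly_readings)
    # Standard error of the mean from the sample standard deviation.
    se = statistics.stdev(monthly_readings) / math.sqrt(n)
    # Scale the monthly figures to per-year estimates.
    return 12 * mean_monthly, z * 12 * se

# Hypothetical sample: readings reported last month by 8 respondents.
sample = [0, 5, 10, 12, 16, 20, 30, 40]
mean_year, half = annual_reading_ci(sample)
```

With a real sample of 91 faculty responses, the same formula yields intervals of the kind reported in the text (e.g., 197 ± 41 readings per person per year).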
The critical incident observation method is potentially biased because the sampled population is now the set of readings done, not the set of readers. Each sampled critical-incident reading has a different probability of selection: the most recent reading of a respondent who reads a great deal has a higher probability of entering the sample than the last reading of someone who reads little. One way to address this problem is to post-stratify responses into ranges of amount of reading and base estimates on stratified random samples in which the total amount of reading in each stratum represents the total population of readings for that stratum. In this way, one can account for differences in estimates between frequent and infrequent readers and their readings. As it turns out, the estimates using "raw" critical-incident readings are very close to the post-stratified estimates, and therefore the raw estimates are used in this article. The confidence interval for the proportion of faculty readings from the library electronic collection is 29% ± 8% at the 95% level of confidence, and the doctoral student estimate is 58% ± 8% at the 95% level of confidence.
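The post-stratification idea described above can be sketched in a few lines. The stratum cut points (5 and 20 readings per month) and the sample data are hypothetical; the point is only that each stratum's within-stratum proportion is weighted by that stratum's share of total readings.

```python
def post_stratified_proportion(responses):
    """Post-stratified estimate of the proportion of readings from the library.

    responses: list of (monthly_readings, from_library) tuples, one per
    respondent's critical-incident reading.
    """
    # Group respondents into reading-amount strata (hypothetical cut points).
    strata = {"low": [], "mid": [], "high": []}
    for readings, from_library in responses:
        if readings <= 5:
            strata["low"].append((readings, from_library))
        elif readings <= 20:
            strata["mid"].append((readings, from_library))
        else:
            strata["high"].append((readings, from_library))

    total_readings = sum(r for r, _ in responses)
    estimate = 0.0
    for group in strata.values():
        if not group:
            continue
        # Weight = this stratum's share of all readings in the sample.
        weight = sum(r for r, _ in group) / total_readings
        # Within-stratum proportion of library-provided readings.
        prop = sum(1 for _, lib in group if lib) / len(group)
        estimate += weight * prop
    return estimate
```

Because frequent readers contribute more readings, their strata receive proportionally more weight, which corrects the over-representation of heavy readers' incidents in the raw sample.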
Scholarly Journal Reading And Outcomes of Reading
The average number of readings by Drexel faculty and doctoral students is given in Table 1, along with the average time spent per article read and annual average time spent reading per person.
The figures for amount of reading and time spent reading by faculty are similar to those observed elsewhere [Tenopir & King, 2000]. These averages are all lower for faculty than for doctoral students, a phenomenon also observed in earlier studies at the University of Tennessee and Johns Hopkins University. As shown later, Drexel faculty also follow different information-seeking and reading patterns from doctoral students.
While the amount of reading by faculty is similar to that observed elsewhere [Tenopir & King, 2000], the distribution of reading among faculty is quite varied, ranging from no reading in the last month (3 of the 91 respondents) to 100 readings in the last month. The distribution of doctoral student reading is similar, also ranging from no reading to 100 readings in the last month.
The faculty and doctoral students who read a great deal do so because of the usefulness and value of information obtained from the articles. Both faculty and doctoral students indicate that over half of their reading is done for the principal purpose of conducting primary research (see Table 2 below).
We also asked the respondents to rate how important the information content was in achieving the principal purposes for which the reading was done. Ratings were from 1 (not at all important) to 7 (absolutely essential). As shown, primary research and writing tended to have the highest rating of importance.
Our findings indicate that library-provided articles, over 70 percent of which are electronic, tend to be more useful and valuable than articles obtained from other sources. Library-provided articles consistently had higher ratings of importance for all purposes: the overall importance rating of library-provided articles read by faculty was 5.7 versus 5.2 for non-library readings, and for doctoral students the comparison was 5.4 versus 4.8.
Another indicator of reading outcome is the way the information affected the principal purpose for which the reading was done. Table 3 gives examples of the ways reading affected primary research done by faculty and doctoral students.
The most common ways readings affected research were inspiring new thinking, improving the research results, and narrowing, broadening or changing the focus of the research. These proportions also tend to be higher for articles provided by the library. For example, 81 percent of all readings from library-provided articles inspired new thinking versus 54 percent of readings from other sources.
A useful assessment of the value of scholarly journal information is what readers are willing to pay for it as measured by the time they spend reading. As mentioned above, faculty are estimated to spend 130 hours per year reading this information, and doctoral students 210 hours. We found, however, that the average time spent reading library-provided articles is somewhat higher than that spent reading articles from other sources (42 minutes per reading versus 38 minutes).
We also looked at whether respondents had, in the past two years, received any awards or special recognition for their research or other profession-related contributions. About 52 percent of faculty and 33 percent of doctoral students had received such recognition and those who did tend to read more than those who did not (see Table 4).
Thus, award recipients tend to read more than others. Award recipients whose last reading was of a library-provided article also average more readings than those who used non-library sources: faculty who received awards averaged 234 readings per year from library articles versus 191 readings from other sources, and award-winning doctoral students averaged 301 and 273 readings, respectively. Again, library-provided articles appear to correlate with success, and most of these articles are from electronic sources.
One of the most heartening outcomes of the survey is that readers report they spend much less time locating and obtaining library-provided articles when they are available electronically. They no longer need to go to the library, but can obtain these articles from their offices, homes or elsewhere. On the other hand, when readers browse personal subscriptions, studies suggest that print versions require less reader time.
Use And Readings of the Drexel Hagerty Library Electronic Collection
From responses to the critical incident survey questions, we estimate that about 41.5 percent of faculty readings and 75.7 percent of doctoral student readings are provided by the Drexel Library. Thus, faculty average about 82 readings per year and doctoral students average 188 readings from library-provided articles. Clearly, doctoral students rely on the library much more than faculty do (more is said about this in the next section). We also established the form of the library-provided articles as shown in Table 5.
Most of the library-provided articles are from the electronic journal collection, and the proportions are similar for faculty and doctoral students.
The average number of readings by faculty from the library electronic collection is 57 per person. Thus, the 496 faculty members account for about 28,300 readings, although about 20 percent of faculty never use the library electronic collection. Nearly 90 percent of the 342 doctoral students use this collection. Doctoral students average about 144 readings per year from the electronic journal collection; masters students average about 55 such readings, based on the few masters student responses.
Doctoral students are estimated to have a total of 49,200 electronic collection readings and the 2,200 masters students about 120,600 such readings. While we do not have survey results for undergraduate students, we estimate that they may have about 101,000 readings from the electronic collection based on corresponding library collection use observed at the University of Tennessee and Johns Hopkins University. This yields an estimate of about 299,000 readings from the Drexel Hagerty Library electronic journal collection.
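The aggregate estimate above is simply per-capita annual readings multiplied by group headcounts, plus the extrapolated undergraduate figure. The sketch below reproduces that arithmetic with the rounded per-capita rates from the text, so the total comes out near, not exactly, the reported 299,000.

```python
# (headcount, annual electronic-collection readings per person), from the survey.
groups = {
    "faculty":  (496, 57),
    "doctoral": (342, 144),
    "masters":  (2200, 55),   # rounded rate; the text reports 120,600 total
}

# Undergraduate figure extrapolated from other institutions, as stated in the text.
undergraduate_estimate = 101_000

total = sum(n * rate for n, rate in groups.values()) + undergraduate_estimate
```

The computed total is within a few hundred readings of the article's figure of about 299,000; the gap comes entirely from rounding the per-capita rates.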
The "use" data provided by 90 percent of vendors and publishers indicate that the electronic collection has an annual use of 400,000, and the "click use" data is estimated to be about 160,000. Thus, our readership estimate of 299,000 appears to be partially validated since it is thought that the number 400,000 is probably too high and the click data too low. While subject to the precision and accuracy issues of survey research, one advantage of readership estimates is that they provide a common metric across all three library collection services. Comparison of the various annual use metrics is summarized in Table 6 below.
Use of current journals is estimated to be 33,000 readings, compared with about 15,000 issues re-shelved, and 11,000 readings of the bound volume collection, compared with 9,000 volumes re-shelved. Exit surveys in other libraries [Griffiths & King, 1993] have shown that a single current periodical issue has an average of 3.2 readings per issue re-shelved, and a bound volume averages about 1.2 readings per volume re-shelved. Thus, the estimated readings of these two collection services appear to be reasonably valid. With these reading estimates, we were able to validate estimates of the library's cost per reading of the three library collection services as reported previously [Montgomery & King, 2002].
Information Seeking And Reading Patterns
The proportion of reading by source is given in Table 7 below.
Faculty members continue to read from their personal collections (i.e., from personal subscriptions). They hold an average of 3.6 subscriptions per faculty member, close to the number observed elsewhere for scientists in universities [Tenopir & King, 2000]. They continue to read these personal subscriptions heavily (about 25 readings per subscription), but mostly for browsing and current awareness, or keeping up with the literature. Doctoral students rely much less on personal subscriptions: they average only 2.5 subscriptions and read much less from them (about 13 readings per subscription), so only about 14 percent of doctoral students' readings are from this source. None of the personal subscription readings by faculty or doctoral students are reported to be from electronic subscriptions, although some reading from separate copies is electronic.
The formats of articles read are mostly print journals and electronic articles (not necessarily from journals). Table 8 shows that faculty sometimes photocopy the articles they read in print (28% of them), but more often print out the electronic versions of articles they read (68% of them). Although about 30 percent of the electronic readings are done on screen, these readings tend to be much shorter in duration, and more often for the purpose of keeping current, than readings from printouts. Doctoral students tend to photocopy more and print out less, printing just over one-half of the electronic articles they read.
As shown in Table 9, faculty discover about one-half of the articles they read from browsing (mostly from their personal subscriptions). Doctoral students rely more on online searching, citations in another publication or another person telling them about the articles. Table 10 shows that faculty reading is mostly from articles published recently, reflecting, in part, their habit of browsing personal subscriptions. The age of articles read by faculty is very similar to that observed for scientists at the University of Tennessee [Tenopir & King, 2002].
The age of articles read is an important issue with the new electronic collection because most of the collection is less than five years old. Thus, one important question is whether older materials will continue to be read, particularly since older articles tend to come from the library, whose articles readers found more useful and valuable. Generally, the source of articles read varies with the age of the article, as shown in Table 11. Proportionally, the library is used much more frequently as the age of the article increases. Of the library-provided articles read, nearly 80 percent of those up to five years old were electronic, but even 45 percent of those over five years old were in electronic format. Thus, older articles are being obtained from electronic sources such as JSTOR, even though archives are still a relatively small part of the electronic collection.
One purpose of the readership survey was to determine whether the migration to the electronic collection has had an effect on amount of reading, outcomes from reading, and information-seeking and reading patterns. Results suggest that:
On balance, the electronic collection appears to be well read, with highly favorable outcomes.
The results reported here are part of long-term, ongoing research comparing journal formats. As this research has progressed, a framework of metrics was developed for evaluating the three collection services: (1) electronic journals; (2) current journals; and (3) bound journals (see [King et al., 2002]). The framework consists of five specific measures: input cost of resources, output quantities and attributes, usage (amount of use and factors affecting use), outcomes of reading, and domain measures of the characteristics of the service environment. Derived measures include service performance (i.e., how well the service performs in terms of relationships between service input and output), effectiveness (i.e., the effect of the service on use), the impact of the services, and cost and benefit comparisons of electronic and print collections and services.
Upcoming results will include comparison of information-seeking and readership patterns for scientists using pre-electronic, evolving and advanced electronic library collections at the University of Tennessee and Drexel University.
[Griffiths & King, 1993] Griffiths, J. M. and King, D. W. (1993). Special Libraries: Increasing the Information Edge. Washington, DC: Special Libraries Association.
[King et al., 2002] King, D. W., Boyce, P., Montgomery, C. H. and Tenopir, C. (2002). "Library Economic Measures: Examples of the Comparison of Electronic and Print Journal Collections and Collection Services." Library Trends, Winter 2003. In press.
[Montgomery & King, 2002] Montgomery, C. H. and King, D. W. (2002). "Comparing Library and User-Related Costs of Print and Electronic Journal Collections." D-Lib Magazine, 8:10. <http://www.dlib.org/dlib/october02/montgomery/10montgomery.html>.
[Tenopir & King, 2000] Tenopir, C. and King, D. W. (2000). Towards Electronic Journals: Realities for Scientists, Librarians, and Publishers. Washington, DC: Special Libraries Association.
[Tenopir & King, 2002] Tenopir, C. and King, D. W. (2002). "Reading Behaviour and Electronic Journals." Learned Publishing, 15, 259-265.
Copyright © Donald W. King and Carol Hansen Montgomery, Ph.D.