D-Lib Magazine
The Magazine of Digital Library Research

January/February 2015
Volume 21, Number 1/2


Enabling Living Systematic Reviews and Clinical Guidelines through Semantic Technologies

Laura Slaughter
The Interventional Centre, Oslo University Hospital (OUS), Norway

Christopher Friis Berntsen
Internal Medicine Department, Innlandet Hospital Trust and MAGICorg, Norway

Linn Brandt
Internal Medicine Department, Innlandet Hospital Trust and MAGICorg, Norway

Chris Mavergames
Informatics and Knowledge Management Department, The Cochrane Collaboration, Germany

DOI: 10.1045/january2015-slaughter





In clinical medicine, secondary research that produces systematic reviews and clinical practice guidelines is key to sound decision-making and quality care. Machine-readable primary study publications, namely the methods and results of published human clinical trials, can greatly improve the process of summarizing and synthesizing knowledge in medicine. In this short introduction to the problem, we provide a brief review of the literature on various efforts to produce semantic technologies for sharing and reusing content from clinical investigations (RCTs and other primary clinical studies). Using an illustrative case, we outline some of the metadata that needs to be captured in order to achieve initial automation in the authoring of systematic reviews and clinical guidelines. In addition, we list desiderata that we believe are needed to reduce the time and costs of maintaining these documents. These include linking provenance information to a much longer scientific investigation lifecycle, one that incorporates a single study's role all the way through its use in clinical guideline recommendations for patient treatments.

Keywords: Systematic Reviews, Clinical Guidelines, Evidence-based Medicine, Ontologies, Enhanced Publication, Semantic Web Standards


1 Introduction

The importance of secondary research in clinical medicine cannot be ignored in discussions concerning digital content and machine-readable publications. Sound healthcare decision-making depends on efforts that bridge clinical research knowledge to recommendations about what actions to take in clinical practice. The main issue at hand is that the work of summarizing and synthesizing human clinical studies is a time-consuming and expensive process that can benefit from new semantic technologies and data sharing. Supporting this notion, Bechhofer et al. 2009 [1] introduced the idea of Research Objects (RO) as a solution for exchanging reusable, machine-readable scientific publications. They wrote about this issue and explained that "the information used to support clinical decisions is not dynamically linked to the cumulative knowledge of the best practice from research and audit."

Maintaining systematic reviews and clinical guidelines with the latest available knowledge involves treating them as "living publications" rather than static documents. This approach to disseminating knowledge in a format that is easily consumed and shared is compatible with semantic web technologies, linked data and ontologies. The flow of knowledge from "upstream" (primary publications) to "downstream" (clinical guidelines and decision support systems) must be coordinated through innovative workflow tools. In this position paper, we address the specific semantic technologies needed to realize the "living systematic review" proposed by Elliott et al. [2] and the dynamic updating of guidelines proposed by Vandvik et al. [3]. The concept of "living" refers to enhancing the accuracy and utility of health evidence by developing persistent, dynamic digital publications that are easily adjustable and transportable. Unlike other conceptualizations of "living documents" [4], the idea is not centered on the traditional paper-based structuring of scientific reports. It concerns the dissemination of the living data and information (e.g. the meta-analysis) that form the basis of formally written, frozen documents.

In this position paper, we have created a list of desiderata: our view of the necessary next steps towards the development of new "living systematic review and clinical guideline" technologies. First, we present a short introduction to the process of authoring and maintaining systematic reviews and clinical guidelines. We then briefly review the most substantial efforts and communities we have identified, including general initiatives such as the open Research Objects community and nanopublications, as well as specifically biomedical, health-informatics-centered work. As part of our evaluation process, we used an illustrative case to find relevant classes and concepts in available biomedical ontologies. We then discuss the desiderata in more detail. We argue that biomedical and life sciences efforts towards machine-readable scientific publications also need to consider the extended life of primary studies, specifically their role in secondary research and the flow of knowledge to other information systems in medicine, such as clinical decision support systems. We discuss the need for "threading" [5] data and information across publications. Although it has been proposed that the entire workflow producing these publications can be automated [6], there is still a need for human involvement. For instance, humans must assess the quality of primary studies to avoid bias in the meta-analyses, and must author guideline recommendations about what clinical actions/treatments to take in specific circumstances. Current systems are essentially "workflow support tools", and the desiderata reflect the semantic technology contributions necessary to improve these tools.


1.1 Authoring and Maintaining Systematic Reviews and Clinical Practice Guidelines

1.1.1 Translating Research Evidence into Clinical Practice Recommendations

As advocates of evidence-based medicine (EBM) have professed for more than two decades, clinicians should strive to base their decisions on the best available evidence [7]—[9]. To answer questions about the relative effectiveness of treatment and prevention interventions, compiling systematic reviews from a series of well-conducted randomized controlled trials (RCTs) is the preferred methodology for synthesizing such a body of evidence [10]. Meta-analyses may, if applied appropriately, strengthen an inference about the efficacy of a treatment or prevention intervention, due to the added statistical power yielded by pooling results from individual studies [11]. For questions about prognosis and prevalence, observational study designs such as cohort or cross-sectional studies may be apt, and may form the basis of systematic reviews.

A widespread way of translating research evidence into actionable treatment recommendations is the development of clinical practice guidelines (CPGs) [9], [12]. CPG recommendations are ideally informed by existing or ad hoc systematic reviews [13], or in their absence, by single RCTs, observational studies, or at times physiological studies or random clinical observations, in descending order of methodological strength [10].

During the last one and a half decades, the process of systematically appraising evidence when developing treatment recommendations has received much attention, notably through the GRADE working group [14], [15] and the work of the Cochrane Collaboration [16]. Importantly, treatment recommendations may vary across clinical and geographical settings due to differences in therapeutic traditions, epidemiological situations or availability of interventions [17]. Possible differences in patient preferences should also be considered when issuing treatment recommendations, especially when the presumed benefit of a treatment balances closely against its potential negative consequences for the patient [15].

1.1.2 Evidence Retrieval Process

Although the same summarized evidence may give rise to different recommendations in different settings, a thorough process of evidence retrieval is nevertheless at the core of the evidence synthesis process, whether its end-point is a systematic review or a CPG recommendation. This remains a highly labor-intensive process, involving extensive literature searches in a range of databases. For a typical systematic review of the effect of a therapeutic intervention, this would involve a search for reports of clinical trials in, at a minimum, PubMed, Embase and the Cochrane Central Register of Controlled Trials (CENTRAL), and possibly more specialized databases, depending on the topic [18].

A basic search in these databases would typically consist of a Boolean OR clause combining all known synonyms, possible spellings and database-specific controlled vocabulary terms for the relevant population, which would then be combined in an AND clause with a similar clause for the relevant therapeutic intervention [18]. Optionally, this may be combined in a similar fashion with terms for the comparators and outcomes of interest, although this is generally considered to lower the sensitivity of the search and is therefore not applicable for systematic reviews [18], [19]. This search approach is commonly referred to as a PICO search, an acronym for population, intervention, comparator, outcome, which is a common way to phrase a precise clinical question [10], [20]. The PICO elements reflect the design of a clinical trial, where a population defined by certain inclusion and exclusion criteria is allocated into two (or more) groups (the intervention and comparator groups), and the outcomes of interest are measured independently for each group.

An inherent problem of the above-mentioned search approach is that the free-text search strings applied may not be reflected in the title or abstract sections. Terms for the population and intervention may be hidden inside non-searchable text (such as the article PDFs resting on the individual journal providers' sites, outside the searchable databases) [20]. As the indexing process in the above-mentioned study databases is at least partly manual, articles are also prone to be over- or under-indexed with controlled vocabulary terms due to inter-indexer inconsistency [21]. Current automated algorithms for indexing articles are also imperfect [22].

Manual screening of large, unfocused search returns to identify the articles actually relevant to a review question or clinical question in a CPG may mean browsing hundreds to thousands of citations. Indeed, this has been identified as a significant obstacle to the timely updating of CPGs [23]—[25]. Although numerous efforts have been made to develop search filters retrieving relevant studies with greater precision [26]—[31], this inevitably involves a trade-off between controlling the amount of noise in the search output and the risk of missing a relevant study due to the properties of the filter. Other efforts to ease the process of retrieving relevant, methodologically sound studies include the PLUS database maintained by McMaster University, where screeners manually pre-appraise studies, including only studies fitting certain methodological criteria [32].

1.1.3 Evidence Appraisal

Whether for creating a CPG recommendation or for performing systematic reviews or meta-analyses, a thorough knowledge of the methodological strengths and weaknesses of a study is needed to make an inference about the generalizability of its findings [10], [15]. A study lacking certain design qualities, such as randomization, may be prone to bias, weakening its predictive value or even leading to altogether wrong conclusions [35]. Erroneous application of statistical methods or missing data could also lead to invalid conclusions [33]. For systematic reviews, meta-analyses and CPGs, the validity of conclusions or recommendations is no better than the studies they are based upon.

After the screening process, information about studies deemed relevant is generally extracted manually using data extraction forms [16]. At the very least, to even consider including a study in an evidence summary, a reviewer or CPG developer would need to know some basic metadata about it. The study population and interventions must match those of the review or clinical question [34], and the design of the study must be stated clearly to allow a judgment about its methodological quality.

For the analysis itself, reviewers need to be able to extract information about results, including outcome data, computed estimates, statistical uncertainties, and the statistical methods applied. Ideally the reviewers would also have access to the raw data collected in the studies, to verify the calculations and to pool data across studies. Judgments about whether to finally include or exclude studies from reviews and meta-analyses are complex, and should be made systematically, according to pre-specified criteria, and in duplicate [34].
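To make this extraction step reusable, the extracted elements themselves could be captured as structured data rather than spreadsheet cells. A minimal sketch in RDF Turtle, in which all property names are our assumptions and the counts are placeholders, not figures from any actual study:

```turtle
# Hypothetical data extraction record for a two-arm trial.
# Property names are illustrative; counts are placeholders.
:Extraction-1 a :DataExtractionRecord ;
    :fromStudy :SomePrimaryStudy ;
    :hasArmData [ :arm :InterventionArm ;
                  :participants 100 ;    # placeholder
                  :events 10 ] ,         # placeholder
                [ :arm :ComparatorArm ;
                  :participants 100 ;    # placeholder
                  :events 15 ] ;         # placeholder
    :extractedBy :Reviewer-1 ;
    :verifiedInDuplicateBy :Reviewer-2 .
```

Recording who extracted and who verified each record would also support the requirement that extraction be done in duplicate.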

1.1.4 Current Practices for Keeping Systematic Reviews and Clinical Practice Guidelines Updated

Partly due to the burden of literature monitoring, practices for updating CPGs and systematic reviews vary greatly. Some groups advocate updating at fixed intervals [35], while others have tried smaller-scale literature-monitoring strategies to identify studies with certain properties ("signal studies") that could impact the conclusions of a review in the static form in which it was last published [36]. Yet other groups aim towards living guidelines [3], [24] or living systematic reviews [2] that are updated dynamically as new evidence is identified.


1.2 Background

The Human Studies Database Project (HSDP) is part of more extensive work that has been ongoing for many years to create the infrastructure needed for institutions to share the designs (protocols) of clinical studies. Sim et al. [37] write about the efforts of the project, which is a multi-institutional collaboration. They postulate that federated searching over locally controlled databases is the most feasible data sharing approach. Based on this assumption, they propose the Ontology of Clinical Research (OCRe) as the common model defining the concepts to be queried across the various local databases [20]. This ontology attempts to model eligibility criteria, study design types, study outcomes and variables, scientific query (hypothesis), and analysis in general. OCRe is formalized in OWL 2. OCRe is not only a typology for annotating study data; it also uses OWL logical axioms for reasoning, providing decision support for critiquing studies, specifically to determine risk of bias. Current work focuses on deploying OCRe in a workflow support system and study results repository, the AHRQ's Systematic Review Data Repository (SRDR), and on extending the OCRe model to include clinical study results data. The group is also working towards data sharing between the SRDR and the Cochrane Collaboration's Central Register of Studies.

One of the main limitations of OCRe and the HSDP is that they lack some of the classes important for supporting the evidence appraisal process described above and for maintaining systematic reviews and CPGs. There also appears to be a "gap" in the reasoning needed about statistical methods. Another drawback is that the ontology still needs evaluation in practical applications (e.g. real-life systems for authoring protocols) and adoption by developers of tools for the dissemination/reuse of this data.

There have been various proposals for linking external data, such as data from health records, to the hypotheses, methods, and/or results of clinical studies. Sim et al. [20] have, in addition, discussed the lifecycle of the clinical study in relation to the "learning health system" concept [38]. In a learning health system, data and information extracted from care documentation (the Electronic Health Record) are reused as a basis for improving quality of care and for forming the basis of clinical trials. Connections can be made between "study and clinical phenotypes that can be matched and integrated in the process of care".

The Clinical Data Interchange Standards Consortium's (CDISC) Operational Data Model (ODM) is a format for the interchange of clinical research study data and metadata. The ODM model includes metadata describing study setup, operation/procedures, and analysis. A recent initiative to convert ODM to RDF has resulted in a set of schemas that are used in various projects, such as EHR4CR [57]. Projects such as EHR4CR, as well as i2b2, have used ODM for interoperability between information systems. Their main goal is to support physicians conducting clinical studies using data from electronic health records. Meineke et al. [58] and Ganslandt et al. [59] discuss these goals, as well as the process of building support tools such as databases of annotated clinical data from health records that then form the basis for clinical studies. ODM is an interesting candidate for metadata associated with primary clinical studies that could be used when authoring systematic reviews and guidelines.

A much earlier paper, by Chalmers and Altman in 1999 [5], introduced the concept of the "threaded electronic publication" to highlight discrepancies between clinical protocols as documented for ethics committees and the protocol procedures as written in the published methodology sections of articles. The CrossMark tool was developed to demonstrate the "threading" described in the Chalmers and Altman paper, "linking a trials registration record and all articles reporting the protocol or findings of the trial, published in different journals or by different publishers in a single thread" [39].

The Biotea Project [40] has created a repository of annotated PubMed Central articles as RDF triples, making SPARQL searches of the full-text content possible. In conjunction with Bio2RDF and BioPortal ontologies, this group has already explored and tested search interfaces with semantic browsing of content [41]—[43]. Scientific publications are seen as an interface to the Web of Data [42].

The open Research Objects (RO) community has focused mainly on the life sciences, where a growing number of life scientists are "calling for models and tools that can be used to organize, package and share research investigations". Belhajjame et al. 2014 [44] describe a suite of ontologies that represent study methods and data, and to some extent the reported results. Research Objects are containers that aggregate various investigation-related resources into one sharable unit. They provide contextual information related to the experiments/studies, and the authors stress that scientific thinking must be captured as the study progresses, dynamically and iteratively through different hypotheses and designs. The ontologies used by ROs include the Open Archives Initiative Object Reuse and Exchange (OAI-ORE) model, annotation ontologies, the W3C standard Provenance Ontology (PROV-O), and the Research Object Core Ontology, which is based on ORE but has additional aggregation structures. The use of ROs for primary clinical studies is not currently discussed in the literature, even though the problem of maintaining clinical knowledge was presented in an earlier publication introducing the concept [1]. The RO Digital Library (RODL) and its interfaces are centered on life sciences work, but these may be reused and linked to further clinical knowledge if the approach is widely adopted in human clinical studies.

Nanopublications [45], [46] were proposed as a means of specifying key scientific statements or facts that occur in publications, such as "malaria is transmitted by mosquitoes". These statements are represented as RDF triples and carry additional annotation statements that contain authorship information and place the nanopublication in the context of the longer, full publication (provenance metadata). Examples of nanopublications are found in the Huntington's Disease case study [47], [48] and contain statements that enable life scientists to expose the results of their analyses as scientific assertions. This use of nanopublications is relevant since it involves asserting the results of studies, a component needed to realize living systematic reviews and clinical guidelines.
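As a sketch of how a study finding might be exposed this way, the following shows a nanopublication's three-part structure (assertion, provenance, publication info) in TriG-style named graphs. The graph layout follows the general nanopublication pattern, but the specific resource names and the assertion itself are illustrative, not drawn from a published nanopublication:

```trig
@prefix np:   <http://www.nanopub.org/nschema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .

:Head { :NP1 a np:Nanopublication ;
            np:hasAssertion :Assertion ;
            np:hasProvenance :Provenance ;
            np:hasPublicationInfo :PubInfo . }

# The scientific claim itself, as a single machine-readable statement.
:Assertion { :MalariaTransmission :transmittedBy :Mosquito . }

# Where the claim comes from: the publication asserting it.
:Provenance { :Assertion prov:wasDerivedFrom :SomePrimaryPublication . }

# Who created and published this nanopublication.
:PubInfo { :NP1 prov:wasAttributedTo :SomeCurator . }
```

A living systematic review could then consume such assertions directly, with the provenance graph carrying the link back to the source publication.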


2 Illustrative Case

In addition to the ontologies related to the projects presented in the background section, there are a number of relevant biomedical ontologies that can serve as useful, reusable resources. In keeping with the idea of reuse that is stressed in the RO and nanopublication communities, we began to sketch an illustrative case for evaluative purposes. We look at the data needed for the evidence synthesis and appraisal process, and discuss the representation of the extracted elements from a single primary study. Currently, during the authoring of systematic reviews and guidelines, authors carry out their work manually. A human being opens a PDF file, reads, and locates text to copy and paste from the primary study into a spreadsheet for further use in reviews or meta-analyses.

For our working example, we have chosen a study that we have found to be well-conducted in terms of methodological rigor, providing extensive information on study design and statistical methods applied. The study by Kahn et al. [49], on the efficacy of compression stockings for preventing the development of post-thrombotic syndrome in patients having suffered a lower-extremity deep vein thrombosis, was recently published in a high-impact medical journal. We chose this study first and foremost because of the quite complete data presented in the paper. We have extracted the results and metadata from the paper, and entered it into a structured, machine-readable format using publicly available ontologies. This illustrative case shows some of the tedious nature of the work to extract metadata from primary studies.

We do provide some RDF code in the discussion. The publication Kahn et al. [49] is represented as :Pub-Kahn14. We skip the RDF representation of the bibliographic information to save space for the issues related to machine-readable metadata for the study design and analysis. The representation of the study data is not exhaustive, but we intend to provide examples from each of the PICO domains.

The key types of information necessary to determine whether a primary study is relevant to a systematic review or guideline are its PICO elements: population, intervention, comparison, and outcome. Overall classes are found in the PICO ontology [50]; however, this work is still in the development stage, so we have sought out additional ontologies that might be usable.
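As a sketch of how a complete PICO question might be represented, the clinical question behind our illustrative case could read as follows. The property names under the pico: prefix are our assumptions, not published terms; the MeSH codes are the ones used elsewhere in this paper:

```turtle
# In patients with deep vein thrombosis (P), do elastic compression
# stockings (I), compared with placebo stockings (C), prevent
# post-thrombotic syndrome (O)? Property names are illustrative.
:Question-1 a pico:ClinicalQuestion ;
    pico:population   :DeepVeinThrombosisPatients ;
    pico:intervention mesh:D053828 ;    # Stockings, Compression
    pico:comparator   :PlaceboStockings ;
    pico:outcome      mesh:D054070 .    # Postthrombotic Syndrome
```

A structured question of this form could be matched directly against study annotations, rather than against free-text synonyms.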

P (population) is a class in a number of biomedical ontologies. Eligible population is found in OCRe and is defined as "The collection of screened organisms that satisfies all eligibility criteria for a study." OCRe also defines a class for Study population, as well as various classes for subpopulations. Population is also found in the Ontology for Biomedical Investigations (OBI) and the ACGT master ontology. The Ontology of Biological and Clinical Statistics includes Group assignment, a process class for assigning individuals to specific roles.

Knowing the population is essential, but authors of systematic reviews and guidelines also need to know the inclusion and exclusion criteria of the study. Inclusion and exclusion criteria are factors that determine whether a patient is eligible to be assigned to a study group. These are often medical conditions or specific demographic characteristics, as well as reasons that would prohibit participation. In our example, one inclusion criterion is a medical condition: in Kahn et al., an eligible individual must have a deep vein thrombosis (DVT), the body location of the DVT must be "in the popliteal or more proximal deep leg veins", and this must be confirmed with a diagnostic test, namely ultrasound within the past 14 days. OCRe, as stated above, contains Eligible population, but we could not identify classes to explicitly represent inclusion/exclusion. The current representation of study population in OCRe is not very granular, allowing only a free-text label to be attached. In our view this will need to be augmented to achieve the searchability goals we outline above, at least by allowing the association of controlled vocabulary terms with the population definition, and ideally by also supporting the definition of inclusion and exclusion criteria. The semantics of study eligibility criteria are complex, as discussed by Ross et al. [51], but even a minor amount of structured vocabulary identifying the population as such might increase search precision. In this case, the MeSH or SNOMED code for deep vein thrombosis might suffice.

We were, however, able to identify these relevant classes in the ACGT master ontology: EligibilityCriterion, ExclusionCriterion, and InclusionCriterion. In this case, the most important machine-readable information that needs to be conveyed is: :Pub-Kahn-Population hasInclusionCriteria :DeepVeinThrombosis, and the full representation would include an assertion connecting deep vein thrombosis to being a criterion for inclusion in the study. We would also include coding from ICD, SNOMED or MeSH for the "deep vein thrombosis" concept. A more advanced representation would indicate the body location, which for this study must be "in the popliteal or more proximal deep leg veins", and the diagnostic confirmation required for eligibility ("ultrasound confirmed"). Representing this information is more difficult, since it requires more complex reified statements.
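A minimal Turtle sketch using the ACGT classes named above could look as follows; the property names and the way the paper's free-text eligibility details are attached are our assumptions:

```turtle
# Inclusion criterion for Kahn et al.: ultrasound-confirmed DVT in the
# popliteal or more proximal deep leg veins. Property names are illustrative.
:Pub-Kahn-Population :hasInclusionCriterion :DVTCriterion .
:DVTCriterion a acgt:InclusionCriterion ;
    :condition :DeepVeinThrombosis ;   # would carry an ICD/SNOMED/MeSH code
    :bodyLocation "popliteal or more proximal deep leg veins" ;
    :diagnosticConfirmation "ultrasound within the past 14 days" .
```

Even this shallow structure would let a search distinguish studies that enrolled DVT patients from studies that merely mention DVT.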

For the I (intervention) and C (comparison) from PICO, the comparison is often made against a placebo, as in our illustrative case. The study intervention is "active 30—40 mmHg graduated elastic compression stockings (ECS), exchanged every 6 months" and is compared with "placebo ECS with identical appearance but less than 5 mmHg compression at the ankle". The PICO ontology has these classes available, and we could include metadata using the MeSH coding of the intervention:

mesh:D053828 a pico:Intervention .

Outcomes can be divided into "primary outcomes" and "secondary outcomes". For the Kahn et al. [49] study, the primary outcome is the presence of "post-thrombotic syndrome" (PTS). Other outcome information is a mixed bag of related concepts that includes the analysis methods, the various clinical scales/diagnostic tools used to determine the presence of a primary outcome medical condition (like PTS), adverse events, and quality of life. Some necessary classes could be found scattered across various ontologies, including, for example, the Adverse Event Reporting Ontology. In the following example, HazardRatio is a statistical outcome measure used to express the relative chance of event-free survival during the study period, and GinsbergScale is a diagnostic tool.

:PTS has_outcome_measure :HazardRatio .
:PTS is_assigned_outcome :PrimaryOutcome .
:PTS has_snomed_code snomed:C20283957 .
:PTS has_assessment_method :GinsbergScale .

The classes in the OCRe ontology let us characterize the study design as a double-blinded randomized controlled trial. We might have added more characteristics, such as it being a multi-centre study, but we limit this example due to space constraints:

@prefix ocre: <http://purl.org/net/OCRe/OCRe.owl/> .

:Pub-Kahn14 a ocre:Study ;
    hasDesign ocre:ParallelGroupStudyDesign ;
    hasBlinding ocre:SubjectBlinded ;
    hasBlinding ocre:InvestigatorBlinded ;
    hasAllocationScheme ocre:BlockRandomization ;
    hasRandomizationScheme ocre:CentralRandomization .

Finally, we outline the dependent (outcome) variables and statistical methods used for arriving at the final summary statistics. This is not an exhaustive description of the above-mentioned study, as several baseline variables (covariates) and secondary outcomes were described in the Kahn et al. paper. For simplicity, we limit this example to the primary outcome, whereas more outcomes and greater detail could easily be added.

:PostThromboticSyndrome a ocre:studyPhenomenon .
:PostThromboticSyndrome a ocre:CD .
:PostThromboticSyndrome ocre:hasConceptId "D054070" .
:GinsbergScale a ocre:AssessmentMethodSpecification .
:GinsbergScale ocre:hasDescription "The Ginsberg PTS scoring scale is..." .
:PrimaryAnalysis a ocre:CoxRegression .

What we found was that even though existing ontologies contain some of the necessary classes and concepts, no single one is currently satisfactory as a standard for representing the study data needed for evidence extraction and secondary synthesis. The ontologies we identified are somewhat lacking in entities suitable for reporting final results, and only to a varying extent suitable for describing the statistical methods applied to arrive at them. Notably, the Cox regression method used in the study by Kahn et al. to derive a hazard ratio is not represented in the STATO ontology, although STATO generally describes a broad range of statistical methods. Classes suitable for representing aggregate statistics such as odds ratios, risk ratios, hazard ratios or mean differences from clinical studies are uncommon (although the Ontology of Biological and Clinical Statistics has a 'relative risk' class); such outcome data are mostly simple literals. Easy identification of these result data would ease data extraction for secondary use.
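The payoff of such representations would be that screening queries could be run directly against structured study metadata instead of free-text abstracts. A hypothetical SPARQL sketch, reusing the illustrative terms from this section (all property names other than the OCRe class and MeSH code are assumptions):

```sparql
# Find studies enrolling DVT patients with a compression-stocking
# intervention, returning any reported hazard ratios.
SELECT ?study ?outcome ?measure
WHERE {
  ?study a ocre:Study ;
         :hasIntervention mesh:D053828 ;   # Stockings, Compression
         :hasPopulation ?pop .
  ?pop   :hasInclusionCriterion/:condition :DeepVeinThrombosis .
  ?study :reportsOutcome ?outcome .
  ?outcome :hasOutcomeMeasure ?measure .
}
```

A query of this kind could replace much of the manual screening of large, unfocused keyword search returns described in Section 1.1.2.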


3 Desiderata

The living systematic review and the living clinical guideline are currently in a nascent state of development. Institutions that produce and store these secondary publications of clinical knowledge are aware of the many benefits of using semantic technologies and linked data. We have created a list of desiderata that will serve as a "roadmap" and a set of topics for further investigation.


3.1 Ontology Development

As with the RO ontology suite described for the life sciences [44], enabling the needed environments for sharing and reuse of human clinical studies will require additional ontology resources that are not currently available.

A PICO Ontology

The proposed "PICO ontology" [50] is being developed by Cochrane, pioneers in systematic reviews and evidence synthesis. It represents study hypotheses and clinical questions in the standardized form of Population, Intervention, Comparator, and Outcome(s), and aims to cover classes not represented in OCRe. It will also be the basis for "threading" from systematic reviews to clinical guideline recommendations, as well as to the underlying study report data, comparisons and analyses, and will allow connections to study data in centralized registers such as clinicaltrials.gov. These "threads" are essential.
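A sketch of such threading links, with hypothetical property names, might look like:

```turtle
# "Threading" from a guideline recommendation back through the review
# comparison to the primary study and its trial registration.
# All property names are illustrative assumptions.
:Recommendation-X :basedOnComparison :Review-Comparison-1 .
:Review-Comparison-1 :addressesQuestion :PICO-Question-1 ;
    :includesStudy :Pub-Kahn14 .
:Pub-Kahn14 :hasRegistryRecord :TrialRegistration-1 .   # e.g. a clinicaltrials.gov entry
```

When a new study is threaded onto the same PICO question, every review comparison and recommendation downstream of it becomes identifiable for updating.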

"Results" and "Analysis" Ontologies

An ontology for representing the results of clinical studies needs to be developed, along with a "statistical-methods-analysis" ontology that includes "risk of bias" models. As discussed in previous sections, the results of primary studies should be sharable and interoperable, as should the data and methods. A "statistical methods-analysis plus risk-of-bias ontology", or simply an "analysis ontology", covers the analysis of the data and helps address bias: simply crunching data into an "automated review" [6] is not reasonable without human assessment of methodological quality. Neither of these ontologies exists in an implementable form, but related projects provide a foundation.
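To make the gap concrete, a results-ontology entry for our illustrative case might capture a hazard ratio as structured data rather than as a free-text literal. All class and property names here are hypothetical, and the numbers are placeholders, not results from Kahn et al.:

```turtle
# A structured effect estimate: point estimate plus confidence interval,
# linked to the outcome, the compared arms, and the analysis that produced it.
:Result-1 a :HazardRatioResult ;
    :forOutcome :PTS ;
    :comparing :ActiveECS, :PlaceboECS ;
    :pointEstimate 1.00 ;        # placeholder value
    :ciLowerBound  0.80 ;        # placeholder value
    :ciUpperBound  1.25 ;        # placeholder value
    :confidenceLevel 0.95 ;
    :derivedBy :CoxRegressionAnalysis-1 .
```

With estimates and uncertainties exposed this way, pooling for meta-analysis could consume the numbers directly instead of re-extracting them from PDFs.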

Ontologies that model aspects of clinical study results cover two areas of medicine specifically: cancer and neurosurgery. They were built for project demonstrations of semantic technology and model a subset of patient data and results. Earlier work on managing clinical trial data took place in an EU project that produced the ObTiMA system, "ontology-based management of clinical trials" [52]. This system is composed of two components, one for setting up clinical trials and the other for handling patient data during trials. Both components depend on an ontology (the ACGT Master Ontology) [53] that covers the research spectrum in the area of cancer trials. Zaveri et al. [54] built an ontology for reporting diagnostic study results in neurosurgery. Their computational diagnostic ontology contains the classes required to conduct SR-MA (Systematic Review and Meta-Analysis) of diagnostic studies, and assists in the standardized reporting of non-RCT diagnostic study publications.


3.2 A Much Longer "Lifecycle" in an Ecosystem

We have observed that many definitions of "publication lifecycle" found across the literature on machine-readable scientific publications are too limited. For our own needs, we propose to map out the current lifecycle of clinical study publications and extend it to the entire evidence ecosystem. This includes demonstrating how elements from the study protocol are carried through "threaded publications" [39], which include results and their role in secondary publications such as systematic reviews and meta-analyses; adding connections to clinical guidelines and decision support systems; and capturing relationships to clinical practice documentation. In addition, provenance needs to be linked throughout the extended lifecycle, using the recently created PROV-O (the PROV Ontology), which "provides a set of classes, properties, and restrictions that can be used to represent and interchange provenance information generated in different systems and under different contexts".
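This extended-lifecycle "threading" can be sketched with actual PROV-O terms (prov:wasDerivedFrom, prov:wasAttributedTo); the document URIs below are hypothetical. Each derivation link lets a reader trace a recommendation back through the review to the primary trial report.

```python
# Sketch of extended-lifecycle provenance. The PROV-O predicates are
# real W3C terms; the ex: document URIs are hypothetical.
PROV = "http://www.w3.org/ns/prov#"

chain = [
    # (subject, predicate, object)
    ("ex:recommendation/12", PROV + "wasDerivedFrom", "ex:systematic-review/7"),
    ("ex:systematic-review/7", PROV + "wasDerivedFrom", "ex:trial-report/49"),
    ("ex:recommendation/12", PROV + "wasAttributedTo", "ex:guideline-panel/3"),
]


def trace(entity, triples):
    """Follow wasDerivedFrom links from an entity back to its sources."""
    sources = []
    current = entity
    while True:
        nxt = [o for s, p, o in triples
               if s == current and p == PROV + "wasDerivedFrom"]
        if not nxt:
            return sources
        current = nxt[0]
        sources.append(current)


sources = trace("ex:recommendation/12", chain)
```

When a trial report is updated or retracted, the same links can be followed in the other direction to find every recommendation that needs re-examination.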

The work on Research Objects has had a great deal of input from the life sciences. Ontologies used in Research Objects, such as the Ontology for Biomedical Investigations (OBI), can be extended to let clinical research reuse data that combines traditional clinical data types with genomic technologies. A related project that might provide infrastructure resources is i2b2, "Informatics for Integrating Biology and the Bedside" [55].

We propose nanopublication throughout the lifecycle, and we need to identify where nanopublications will be most productive. This leads to further discussion of both credit and accountability. For instance, one way to provide an incentive for keeping guidelines up to date is to give credit to the authors writing recommendations by allowing them to nanopublish each recommendation individually.
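A nanopublication bundles three named graphs: the assertion itself, its provenance, and publication information [45]. The sketch below models that structure with plain Python dictionaries; the triples and identifiers are illustrative only. The attribution triple in the publication-info graph is what carries the per-recommendation credit proposed above.

```python
# Sketch of the three named graphs of a nanopublication [45],
# modeled as plain dicts. All identifiers are illustrative.
def make_nanopub(assertion, derived_from, author):
    """Bundle one assertion with its provenance and publication info."""
    return {
        # The claim itself, as a single triple.
        "assertion": [assertion],
        # Where the assertion came from.
        "provenance": [("assertion", "prov:wasDerivedFrom", derived_from)],
        # Who published it -- this triple carries the author's credit.
        "pubinfo": [("nanopub", "prov:wasAttributedTo", author)],
    }


nanopub = make_nanopub(
    assertion=("ex:compression-stockings", "ex:recommendedFor",
               "ex:prevention-of-post-thrombotic-syndrome"),
    derived_from="ex:systematic-review/7",
    author="ex:guideline-author/42",
)
```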


3.3 Intelligent Workflow Support for Secondary Research and Improved Search Tools

3.3.1 Minimum Contribution of the PICO, Results and Analysis Ontologies in the Process of Systematic Review and Clinical Practice Guideline Development

Even a rudimentary registration of metadata about a study report in a searchable database may improve the performance of the evidence-retrieval process. We propose that at the time of publication of a clinical trial report, machine-readable study properties using suitable ontologies that model PICO, statistics and methods/analysis, and results can be registered in the living systematic review workflow system, making it possible to do more precise searching and browsing of content related to the review.

Given the amount of resources medical journals spend on reviewing, proofreading and reformatting manuscripts before publication, we propose that they also register a limited amount of study data and metadata in a structured manner. If this were done using a standardized tool, it would not necessarily add to the workload of the publication process, as the authors of the report, being familiar with the contents of their work, could perform parts of this registration. Quality control of these data could, and should, then become an integral part of the proofreading and peer review process.

Journals may decide individually about the amount of content they are willing to share in freely searchable databases. If they are willing to make structured information about the PICO elements, study design/methods, and results available in the same fashion that abstracts presently are searchable, the precision of the evidence retrieval could improve significantly. The process of second-hand indexing of content would be bypassed, and the noise produced by wide free-text searches for PICO terms in title and abstract fields would diminish. The publication of the structured data and metadata could effectively be nanopublished or wrapped in a Research Object container. Likewise, a PICO and methods/analysis ontology could be applied to secondary medical literature, such as systematic reviews and meta-analyses, guidelines, editorials or comments where applicable.
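The retrieval gain from structured fields can be illustrated with a toy example: instead of matching free text in titles and abstracts, a query filters directly on registered PICO and design values. The records and field names below are hypothetical.

```python
# Toy illustration: precise retrieval over structured PICO fields,
# versus noisy free-text matching. Records and fields are hypothetical.
studies = [
    {"population": "adults with proximal DVT",
     "intervention": "compression stockings", "design": "RCT"},
    {"population": "adults with proximal DVT",
     "intervention": "anticoagulation", "design": "cohort"},
    {"population": "children with asthma",
     "intervention": "compression stockings", "design": "case report"},
]


def search(records, **criteria):
    """Return records whose structured fields match every criterion."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]


hits = search(studies, intervention="compression stockings", design="RCT")
```

A free-text search for "compression stockings" would return two of the three records; constraining the structured design field as well removes the non-RCT noise without any second-hand indexing step.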

In any case, a structured registration of clinical trial results including computed measures of statistical uncertainty, information of data processing, statistical methods and study design at the time of publication of a study report would significantly ease the data extraction process of systematic review and clinical practice guideline production. As outlined above, this may not necessarily add substantially to the workload of the publication process.

3.3.2 The Need for Tools to Manage Linked Clinical Study Data

Having tools to work with linked clinical data publications will promote adoption and widespread use, as it has for the ISA commons used by life scientists. We propose the development of tools, such as an enhanced "clinical trials management and reporting system", that will assist clinical trial planners in writing protocols and reporting data in a structured fashion, helping them contribute to a wider community instead of working in isolation. Search tools for federated search over linked data repositories should also be developed. Sim et al. [20] state that it would be better to improve search with "interactive visualizations of the scientific structure of human studies like the tools that biomedical researchers have for visual exploration and query of gene sequences, pathways, and protein structures." We agree that linking the various forms of clinical knowledge to other related sources requires search tools as well as visual analytics. This has been attempted by some groups; for instance, Saleem et al. [56] created a user interface for integrating PubMed publications with the Linked Cancer Genome Atlas dataset.
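A federated search tool of the kind proposed here would send one structured query to several linked data stores and merge the results, deduplicating by study identifier. The toy sketch below illustrates the idea with in-memory "repositories" and hypothetical study identifiers.

```python
# Toy sketch of federated search: one structured query against several
# repositories, with results merged and deduplicated by study id.
# Repository contents and identifiers are hypothetical.
def federated_search(repositories, predicate, value):
    """Query every repository and merge matching records, first-seen wins."""
    seen, merged = set(), []
    for repo in repositories:
        for record in repo:
            if record.get(predicate) == value and record["id"] not in seen:
                seen.add(record["id"])
                merged.append(record)
    return merged


review_store = [
    {"id": "ex:trial/1", "intervention": "compression stockings"},
]
trial_register = [
    {"id": "ex:trial/1", "intervention": "compression stockings"},
    {"id": "ex:trial/2", "intervention": "anticoagulation"},
]

hits = federated_search([review_store, trial_register],
                        "intervention", "compression stockings")
```

In a real system the repositories would be SPARQL endpoints rather than Python lists, but the merge-and-deduplicate step is the same.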


4 Limitations

We have not fully considered how the body of medical literature might be searched differently once more precise metadata and RDF triples are available as machine-readable publications. Widespread future adoption of the PICO and methods ontologies would greatly improve search functionality. However, even with only limited adoption of the suggested semantic web technologies, sharing structured extracted data across databases may allow the backlog of medical literature to be synthesized more efficiently and will speed up the authoring process. Efforts to share such data are currently made through the Cochrane Collaboration and the Systematic Review Data Repository (SRDR) of the US Agency for Healthcare Research and Quality.


5 Conclusions

Secondary research, involving summarizing and synthesizing existing studies, is essential to clinical medicine. The need to base patient care and treatment decisions on current medical knowledge drives the production of two key types of documents: systematic reviews and clinical practice guidelines. Systematic reviews synthesize protocol-based primary research studies, chiefly RCTs, for specific clinical questions, while practice guidelines generally summarize these clinical questions into documents that provide recommendations for practitioners to follow for a given condition or care topic. The authoring and maintenance of these documents is both costly and labor intensive. We have therefore looked towards related communities working on "enhanced publication", "executable papers", "nanopublications", and "research objects", in addition to the relevant healthcare informatics projects, to find the "missing elements" and topics for further discussion. There are already ontologies applicable to certain phases of the evidence-to-recommendation translation process, notably OCRe, but more work is needed, especially on metadata and models for study analysis and results. The development of new tools for reporting on and searching for structured data from clinical trials is also required to implement these technologies in practice.



[1] S. Bechhofer, J. Ainsworth, J. Bhagat, I. Buchan, P. Couch, D. Cruickshank, M. Delderfield, I. Dunlop, M. Gamble, C. Goble, D. Michaelides, P. Missier, S. Owen, D. Newman, D. De Roure, and S. Sufi, "Why Linked Data is Not Enough for Scientists," presented at the Sixth IEEE e—Science conference (e-Science 2010), 2010. http://doi.org/10.1016/j.future.2011.08.004

[2] J. H. Elliott, T. Turner, O. Clavisi, J. Thomas, J. P. T. Higgins, C. Mavergames, and R. L. Gruen, "Living Systematic Reviews: An Emerging Opportunity to Narrow the Evidence-Practice Gap," PLoS Med, vol. 11, no. 2, p. e1001603, Feb. 2014. http://doi.org/10.1371/journal.pmed.1001603

[3] P. O. Vandvik, L. Brandt, P. Alonso-Coello, S. Treweek, E. A. Akl, A. Kristiansen, A. Fog-Heen, T. Agoritsas, V. M. Montori, and G. Guyatt, "Creating clinical practice guidelines we can trust, use, and share: a new era is imminent," Chest, vol. 144, no. 2, pp. 381—389, Aug. 2013. http://doi.org/10.1378/chest.13-0746

[4] A. G. Castro, L. J. García-Castro, A. Labarga, O. Giraldo, C. Montaña, K. O'Neil, and J. A. Bateman, "Annotating Atomic Components of Papers in Digital Libraries: The Semantic and Social Web Heading towards a Living Document Supporting eSciences," in Database and Expert Systems Applications, S. S. Bhowmick, J. Küng, and R. Wagner, Eds. Springer Berlin Heidelberg, 2009, pp. 287—301. http://doi.org/10.1007/978-3-642-03573-9_24

[5] I. Chalmers and D. G. Altman, "How can medical journals help prevent poor medical research? Some opportunities presented by electronic publishing," The Lancet, vol. 353, no. 9151, pp. 490—493, Feb. 1999. http://doi.org/10.1016/S0140-6736(98)07618-1

[6] G. Tsafnat, A. Dunn, P. Glasziou, and E. Coiera, "The automation of systematic reviews," BMJ, vol. 346, p. f139, Jan. 2013. http://doi.org/10.1136/bmj.f139

[7] Evidence-Based Medicine Working Group, "Evidence-based medicine. A new approach to teaching the practice of medicine," JAMA J. Am. Med. Assoc., vol. 268, no. 17, pp. 2420—2425, Nov. 1992. http://doi.org/10.1001/jama.1992.03490170092032

[8] S. E. Straus, W. S. Richardson, P. Glasziou, and R. B. Haynes, Evidence-based medicine. How to practice and teach EBM. Edinburgh: Elsevier Churchill Livingstone, 2011. ISBN-10: 0702031275.

[9] M. B. Harrison, F. Legare, I. D. Graham, and B. Fervers, "Adapting clinical practice guidelines to local context and assessing barriers to their use," Cmaj, vol. 182, pp. E78—84, Feb. 2010. http://doi.org/10.1503/cmaj.081232

[10] G. Guyatt, D. Rennie, M. O. Meade, and D. J. Cook, Users' guide to the medical literature. Essentials of evidence-based clinical practice. New York: Jama Archives and Journals/McGraw Hill Medical, 2008. ISBN-10: 0071590382.

[11] J. J. Deeks, J. P. T. Higgins, and D. G. Altman, "Chapter 9: Analysing data and undertaking meta-analyses.," in Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011)., G. S. Higgins JPT, Ed. The Cochrane Collaboration, 2011.

[12] A. Qaseem, F. Forland, F. Macbeth, G. Ollenschlager, S. Phillips, and P. van der Wees, "Guidelines International Network: toward international standards for clinical practice guidelines," Ann Intern Med, vol. 156, pp. 525—31, Apr. 2012. http://doi.org/10.7326/0003-4819-156-7-201204030-00009

[13] R. Graham, M. Mancher, D. M. Wolman, S. Greenfield, and E. Steinberg, Clinical practice guidelines we can trust. National Academies Press, 2011.

[14] G. H. Guyatt, A. D. Oxman, H. J. Schunemann, P. Tugwell, and A. Knottnerus, "GRADE guidelines: a new series of articles in the Journal of Clinical Epidemiology," J Clin Epidemiol., vol. 64, pp. 380—2, Apr. 2011. http://doi.org/10.1016/j.jclinepi.2010.09.011

[15] G. H. Guyatt, A. D. Oxman, G. E. Vist, R. Kunz, Y. Falck-Ytter, P. Alonso-Coello, and H. J. Schunemann, "GRADE: an emerging consensus on rating quality of evidence and strength of recommendations," Bmj, vol. 336, pp. 924—6, Apr. 2008. http://doi.org/10.1136/bmj.39489.470347.AD

[16] J. P. T. Higgins and J. J. Deeks, "Chapter 7: Selecting studies and collecting data," in Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011), J. P. T. Higgins and S. Green, Eds. The Cochrane Collaboration, 2011.

[17] B. Fervers, J. S. Burgers, M. C. Haugh, J. Latreille, N. Mlika-Cabanne, L. Paquet, M. Coulombe, M. Poirier, and B. Burnand, "Adaptation of clinical guidelines: literature review and proposition for a framework and procedure," Int J Qual Health Care, vol. 18, pp. 167—76, Jun. 2006. http://doi.org/10.1093/intqhc/mzi108

[18] C. Lefebvre, E. Manheimer, and J. Glanville, "Chapter 6: Searching for studies." in Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011), G. S. Higgins JPT, Ed. The Cochrane Collaboration, 2011.

[19] T. Agoritsas, A. Merglen, D. S. Courvoisier, C. Combescure, N. Garin, A. Perrier, and T. V. Perneger, "Sensitivity and predictive value of 15 PubMed search strategies to answer clinical questions rated against full systematic reviews," J Med Internet Res, vol. 14, p. e85, 2012. http://doi.org/10.2196/jmir.2021

[20] I. Sim, S. W. Tu, S. Carini, H. P. Lehmann, B. H. Pollock, M. Peleg, and K. M. Wittkowski, "The Ontology of Clinical Research (OCRe): An informatics foundation for the science of clinical research," J Biomed Inf., Nov. 2013. http://doi.org/10.1016/j.jbi.2013.11.002

[21] M. E. Funk and C. A. Reid, "Indexing consistency in MEDLINE," Bull Med Libr Assoc, vol. 71, pp. 176—83, Apr. 1983. PMCID: PMC227138.

[22] A. J. Yepes, J. G. Mork, D. Demner-Fushman, and A. R. Aronson, "Comparison and combination of several MeSH indexing approaches," AMIA Annu Symp Proc, vol. 2013, pp. 709—18, 2013. PMCID: PMC3900212.

[23] E. Clark, E. F. Donovan, and P. Schoettker, "From outdated to updated, keeping clinical guidelines valid," Int J Qual Health Care, vol. 18, pp. 165—6, June 2006. http://doi.org/10.1093/intqhc/mzl007

[24] P. Alonso-Coello, L. Martinez Garcia, J. M. Carrasco, I. Sola, S. Qureshi, and J. S. Burgers, "The updating of clinical practice guidelines: insights from an international survey," Implement Sci, vol. 6, p. 107, 2011. http://doi.org/10.1186/1748-5908-6-107

[25] L. Martinez Garcia, I. Arevalo-Rodriguez, I. Sola, R. B. Haynes, P. O. Vandvik, and P. Alonso-Coello, "Strategies for monitoring and updating clinical practice guidelines: a systematic review," Implement Sci, vol. 7, p. 109, 2012. http://doi.org/10.1186/1748-5908-7-109

[26] R. B. Haynes and N. Wilczynski, "Finding the gold in MEDLINE: clinical queries," ACP J Club, vol. 142, pp. A8—9, Feb. 2005. http://doi.org/10.1136/ebm.10.4.101

[27] N. L. Wilczynski and R. B. Haynes, "Response to Corrao et al.,: Improving efficacy of PubMed clinical queries for retrieving scientifically strong studies on treatment," J Am Med Inf. Assoc, vol. 14, pp. 247—8, Apr. 2007. http://doi.org/10.1197/jamia.M2297

[28] K. A. McKibbon, N. L. Wilczynski, and R. B. Haynes, "Retrieving randomized controlled trials from medline: a comparison of 38 published search filters," Health Info Libr J, vol. 26, pp. 187—202, Sep. 2009. http://doi.org/10.1111/j.1471-1842.2008.00827.x

[29] C. Lokker, R. B. Haynes, N. L. Wilczynski, K. A. McKibbon, and S. D. Walter, "Retrieval of diagnostic and treatment studies for clinical use through PubMed and PubMed's Clinical Queries filters," J Am Med Inf. Assoc, vol. 18, pp. 652—9, Oct. 2011.

[30] N. L. Wilczynski, K. A. McKibbon, and R. B. Haynes, "Sensitive Clinical Queries retrieved relevant systematic reviews as well as primary studies: an analytic survey," J Clin Epidemiol, vol. 64, pp. 1341—9, Dec. 2011. http://doi.org/10.1016/j.jclinepi.2011.04.007

[31] N. L. Wilczynski, K. A. McKibbon, S. D. Walter, A. X. Garg, and R. B. Haynes, "MEDLINE clinical queries are robust when searching in recent publishing years," J Am Med Inf. Assoc, vol. 20, pp. 363—8, Apr. 2013. http://doi.org/10.1136/amiajnl-2012-001075

[32] B. J. Hemens and R. B. Haynes, "McMaster Premium Literature Service (PLUS) performed well for identifying new studies for updated Cochrane reviews," J Clin Epidemiol, vol. 65, pp. 62—72.e1, Jan. 2012. http://doi.org/10.1016/j.jclinepi.2011.02.010

[33] J. P. T. Higgins, D. G. Altman, and J. A. C. Sterne, "Chapter 8: Assessing risk of bias in included studies," in Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011), J. P. T. Higgins and S. Green, Eds. The Cochrane Collaboration, 2011.

[34] D. O'Connor, S. Green, and J. P. T. Higgins, "Chapter 5: Defining the review question and developing criteria for including studies," in Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011), J. P. T. Higgins and S. Green, Eds. The Cochrane Collaboration, 2011.

[35] P. Shekelle, M. P. Eccles, J. M. Grimshaw, and S. H. Woolf, "When should clinical guidelines be updated?," Bmj, vol. 323, pp. 155—7, Jul. 2001. http://doi.org/10.1136/bmj.323.7305.155

[36] N. Ahmadzai, S. J. Newberry, M. A. Maglione, A. Tsertsvadze, M. T. Ansari, S. Hempel, A. Motala, S. Tsouros, J. J. Schneider Chafen, R. Shanman, D. Moher, and P. G. Shekelle, "A surveillance system to assess the need for updating systematic reviews," Syst Rev, vol. 2, p. 104, 2013. http://doi.org/10.1186/2046-4053-2-104

[37] I. Sim, S. Carini, S. Tu, R. Wynden, B. H. Pollock, S. A. Mollah, D. Gabriel, H. K. Hagler, R. H. Scheuermann, H. P. Lehmann, K. M. Wittkowski, M. Nahm, and S. Bakken, "The Human Studies Database Project: Federating Human Studies Design Data Using the Ontology of Clinical Research," AMIA Summits Transl. Sci. Proc., vol. 2010, pp. 51—55, Mar. 2010. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3041546/

[38] C. P. Friedman, A. K. Wong, and D. Blumenthal, "Achieving a nationwide learning health system," Sci. Transl. Med., vol. 2, no. 57, p. 57cm29, Nov. 2010.

[39] D. Shanahan, "Threaded Publications: one step closer," BioMed Central blog, 31-Jan-2014.

[40] L. J. G. Castro, C. McLaughlin, and A. Garcia, "Biotea: RDFizing PubMed Central in support for the paper as an interface to the Web of Data," J. Biomed. Semant., vol. 4, no. Suppl 1, p. S5, Apr. 2013. http://doi.org/10.1186/2041-1480-4-S1-S5

[41] L. J. García Castro, O. X. Giraldo, and A. Garcia, "Using annotations to model discourse: an extension to the Annotation Ontology," pp. 13—22.

[42] L. J. García-Castro, A. G. Castro, and J. Gómez, "Conceptual Exploration of Documents and Digital Libraries in the Biomedical Domain," in Proceedings of the 5th International Workshop on Semantic Web Applications and Tools for Life Sciences, Paris, France, November 28-30, 2012, 2012, vol. 952.

[43] L. J. García-Castro, R. B. Llavori, D. Rebholz-Schuhmann, and A. G. Castro, "Connections across Scientific Publications based on Semantic Annotations," in Proceedings of the 3rd Workshop on Semantic Publishing, Montpellier, France, May 26th, 2013, vol. 994, pp. 51—62.

[44] K. Belhajjame, J. Zhao, D. Garijo, K. Hettne, R. Palma, Ó. Corcho, J.-M. Gómez-Pérez, S. Bechhofer, G. Klyne, and C. Goble, "The Research Object Suite of Ontologies: Sharing and Exchanging Research Data and Methods on the Open Web," arXiv:1401.4307 [cs], Jan. 2014.

[45] P. Groth, A. Gibson, and J. Velterop, "The anatomy of a nanopublication," Inf Serv. Use, vol. 30, no. 1—2, pp. 51—56, 2010. http://doi.org/10.3233/ISU-2010-0613

[46] P. Sernadela, E. van der Horst, M. Thompson, P. Lopes, M. Roos, and J. L. Oliveira, "A Nanopublishing Architecture for Biomedical Data," in 8th International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB 2014), J. Saez-Rodriguez, M. P. Rocha, F. Fdez-Riverola, and J. F. D. P. Santana, Eds. Springer International Publishing, 2014, pp. 277—284. http://doi.org/10.1007/978-3-319-07581-5_33

[47] E. Mina, M. Thompson, R. Kaliyaperumal, J. Zhao, K. M. Hettne, E. Schultes, and M. Roos, "Nanopublications for Exposing Experimental Data in the Life-sciences: A Huntington's Disease Case Study," in SWAT4LS, 2013.

[48] S. Magliacane and P. T. Groth, "Towards Reconstructing the Provenance of Clinical Guidelines," in Proceedings of the 5th International Workshop on Semantic Web Applications and Tools for Life Sciences, Paris, France, November 28-30, 2012, 2012, vol. 952. http://ceur-ws.org/Vol-952/paper_36.pdf

[49] S. R. Kahn, S. Shapiro, P. S. Wells, M. A. Rodger, M. J. Kovacs, D. R. Anderson, V. Tagalakis, A. H. Houweling, T. Ducruet, C. Holcroft, M. Johri, S. Solymoss, M. J. Miron, E. Yeo, R. Smith, S. Schulman, J. Kassis, C. Kearon, I. Chagnon, T. Wong, C. Demers, R. Hanmiah, S. Kaatz, R. Selby, S. Rathbun, S. Desmarais, L. Opatrny, T. L. Ortel, and J. S. Ginsberg, "Compression stockings to prevent post-thrombotic syndrome: a randomised placebo-controlled trial," Lancet, vol. 383, no. 9920, pp. 880—8, 2014. http://doi.org/10.1016/S0140-6736(13)61902-9

[50] C. Mavergames, S. Oliver, and L. Becker, "Systematic Reviews as an Interface to the Web of (Trial) Data: using PICO as an Ontology for Knowledge Synthesis in Evidence-based Healthcare Research," in Proceedings of the 3rd Workshop on Semantic Publishing, Montpellier, France, May 26th, 2013, 2013, vol. 994, pp. 22—26.

[51] J. Ross, S. Tu, S. Carini, and I. Sim, "Analysis of Eligibility Criteria Complexity in Clinical Trials," AMIA Summits Transl. Sci. Proc., vol. 2010, pp. 46—50, Mar. 2010.

[52] H. Stenzhorn, G. Weiler, M. Brochhausen, F. Schera, V. Kritsotakis, M. Tsiknakis, S. Kiefer, and N. Graf, "The ObTiMA system — ontology-based managing of clinical trials," Stud. Health Technol. Inform., vol. 160, no. Pt 2, pp. 1090—1094, 2010. http://doi.org/10.3233/978-1-60750-588-4-1090

[53] M. Brochhausen, A. D. Spear, C. Cocos, G. Weiler, L. Martín, A. Anguita, H. Stenzhorn, E. Daskalaki, F. Schera, U. Schwarz, S. Sfakianakis, S. Kiefer, M. Dörr, N. Graf, and M. Tsiknakis, "The ACGT Master Ontology and its applications—towards an ontology-driven cancer research and management system," J. Biomed. Inform., vol. 44, no. 1, pp. 8—25, Feb. 2011. http://doi.org/10.1016/j.jbi.2010.04.008

[54] A. Zaveri, J. Shah, S. Pradhan, C. Rodrigues, J. Barros, B. T. Ang, and R. Pietrobon, "Center of Excellence in Research Reporting in Neurosurgery — Diagnostic Ontology," PLoS ONE, vol. 7, no. 5, p. e36759, May 2012. http://doi.org/10.1371/journal.pone.0036759

[55] E. K. Johnson, S. Broder-Fingert, P. Tanpowpong, J. Bickel, J. R. Lightdale, and C. P. Nelson, "Use of the i2b2 research query tool to conduct a matched case-control clinical research study: advantages, disadvantages and methodological considerations," BMC Med. Res. Methodol., vol. 14, p. 16, 2014. http://doi.org/10.1186/1471-2288-14-16

[56] M. Saleem, "Big Linked Cancer Data: Integrating Linked TCGA and PubMed."

[57] J. Doods, F. Botteri, M. Dugas, F. Fritz, and EHR4CR WP7, "A European Inventory of Common Electronic Health Record Data Elements for Clinical Trial Feasibility." Trials 15 (2014): 18. http://doi.org/10.1186/1745-6215-15-18

[58] F. A. Meineke, S. Stäubert, M. Löbe, and A. Winter, "A Comprehensive Clinical Research Database Based on CDISC ODM and i2b2." Studies in Health Technology and Informatics 205 (2014): 1115—19. http://doi.org/10.3233/978-1-61499-432-9-1115

[59] T. Ganslandt, S. Mate, K. Helbing, U. Sax, and H. U. Prokosch, "Unlocking Data for Clinical Research — The German i2b2 Experience." Applied Clinical Informatics 2, no. 1 (2011): 116—27. http://doi.org/10.4338/ACI-2010-09-CR-0051


About the Authors


Laura Slaughter (PhD) received both her MLS and PhD from the College of Information Studies at the University of Maryland, College Park. She defended her dissertation in 2002 on the topic of "Semantic Relationships in Health Consumer Questions and Physicians' Answers: A Basis for Representing Medical Knowledge and for Concept Exploration Interfaces." She spent two years at Columbia University as a postdoc in the Department of Biomedical Informatics. She is currently employed at Oslo University Hospital, Oslo, Norway. Her research expertise is within healthcare informatics, and covers areas including: knowledge representation, controlled vocabulary & ontologies, semantic web, intelligent information systems, and clinical decision support.


Christopher Friis Berntsen (MD, BA) is a hospital physician specializing in internal medicine at Lovisenberg Hospital in Oslo, Norway. He is also part-time employed as a researcher at Sykehuset Innlandet Hospital Trust in Gjøvik, Norway, through the MAGIC (Making GRADE the Irresistible Choice) research and innovation program, focusing on the optimization of clinical practice guideline updating processes and evidence retrieval strategies.


Linn Brandt (MD) is a hospital physician specializing in internal medicine at Diakonhjemmet Hospital in Oslo, Norway. She is also part-time employed as a researcher at Sykehuset Innlandet Hospital Trust in Gjøvik, Norway, through the MAGIC (Making GRADE the Irresistible Choice) research and innovation program, focusing on the digitizing of clinical practice guidelines and their use in clinical decision support systems.


Chris Mavergames (MLIS) is Head of Informatics and Knowledge Management, The Cochrane Collaboration. Chris is charged with leading and improving Cochrane's information, technology, and knowledge management systems. Chris' background is in information science and knowledge management with an MLIS degree from Long Island University in New York, USA. He has more than fourteen years' experience working with XML and related technologies, database management systems, web development technologies, metadata schema and controlled vocabularies and, most recently, linked data and semantic web technologies. Chris began his career at CNN and Time Inc., in the relatively early days of digital and online publishing. He went on to become Director of Multimedia Resources at the New York City College of Technology, and then spent a year working on a digitisation project at the British Library, St. Pancras, London. Chris joined The Cochrane Collaboration in 2006 and took over leadership of the Web Team in 2009. In addition to his work with the Web Team, he is an author in the Cochrane Airways Group and a member of the Information Retrieval Methods Group.

transparent image