
Analyzing and Assessing Research: Bibliometrics and its Drawbacks


By Adèle Paul-Hus and Held Barbosa De Souza

SUMMARY

Publications, citations, impact factor: what role do they have in research evaluation? Bibliometrics is a research method that uses scholarly publications to analyze and assess academic activity. This article aims to identify the main limitations of bibliometric indicators and the repercussions of using them to assess researchers.

Researchers are constantly under assessment, either explicitly, as when they apply for research funding, or unknowingly, as when the research performance of institutions is ranked. A key component of these assessments involves compiling indicators of research impact and productivity based on the researchers’ academic publication record.

Definitions

Bibliometrics is the quantitative analysis of the scholarly publications of an individual researcher, a research group or institution, or even a scientific journal. Scholarly publications and citations are used as bibliometric indicators of scientific output and impact.

Citations are the key concept behind bibliometric measures. The underlying idea is that researchers’ influence (or impact) within the academic community can be measured by the number of citations their work receives in a given time period.

The impact factor of a journal is calculated by dividing the number of citations received by the articles it published in a given period (two or five years) by the total number of articles it published in that same period. Journal impact factors are published annually in the Journal Citation Reports, using data from the Web of Science.
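As a concrete illustration, here is a minimal sketch of the two-year calculation in Python; the function and the figures are hypothetical, chosen only to show the arithmetic:

    # Minimal sketch of a two-year journal impact factor (hypothetical figures).
    def impact_factor(citations_received: int, articles_published: int) -> float:
        """Citations received in a given year to the articles a journal
        published in the two preceding years, divided by the number of
        articles published in those two years."""
        return citations_received / articles_published

    # E.g., 480 citations in 2015 to the 200 articles a journal
    # published in 2013-2014 yield an impact factor of 2.4.
    print(impact_factor(480, 200))  # 2.4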

Application

Bibliometric data can be used for a number of purposes:

  • Identify the most important or influential journals in a given field
  • Assess the productivity of an individual researcher, research group, or institution
  • Measure the impact of an article, a researcher, or a research institution
  • Measure the collaborations of an individual researcher, research group, or institution
  • Monitor developments in a research field or on a research subject

Assessment

Though bibliometrics attracts numerous criticisms, it remains a key component of research evaluation. However, bibliometrics is no substitute for peer review as a means of assessing publication quality. Indicators that consider other types of inputs, such as grants and other funding, and outputs, such as patents and number of graduates, are also important in assessing the research activity of institutions or individual researchers.

Data sources

Many bibliometric indicators are based on citations. But citation numbers can vary depending on the source used: the numbers are directly tied to the coverage of the database that compiles the citations. Each source has its strengths and limitations. Below is a brief overview of the main sources of bibliometric data:

Web of Science

The Web of Science database, a pioneer in the domain, indexes the core journals in every field, with coverage reaching back to the early 20th century. However, it is biased toward publications in English, and publications in other languages are underrepresented. Its coverage of conference proceedings and books is also relatively limited, a significant drawback in fields where journal articles are not the dominant mode of communication (ACUMEN, 2014).

Scopus

Scopus, a newer player in the market, shares many of the objectives, and limitations, of Web of Science. It covers a larger number of journals and conferences, but over a shorter time frame (starting in 1995). Scopus also exhibits a certain overrepresentation of the publications of its owner, the commercial publisher Elsevier (ACUMEN, 2014).

Google Scholar

Of the sources discussed here, Google Scholar appears to have the most extensive coverage of academic documents and a less pronounced linguistic bias. It may also be more effective in measuring the impact of articles in the very short term, owing to the indexing lag of both Web of Science and Scopus. However, the absence of quality control in Google Scholar means that multiple versions of a single document may be indexed, and Google Scholar data have been shown to be vulnerable to falsification (López-Cózar, Robinson-García and Torres-Salinas, 2014). Another difference is that all types of documents can be indexed, whether or not they have undergone peer review (ACUMEN, 2014).

Overall, Web of Science and Scopus are the best choices, given Google Scholar’s serious limitations in terms of data reliability. Another point to consider is that Google Scholar is the only free source; Web of Science and Scopus are available by subscription only (ACUMEN, 2014).

Whatever the choice, always mention the data source used when compiling bibliometric indicators, because each source has different strengths and limitations.

Perverse effects of bibliometrics

The widespread use of bibliometric indicators in research assessment has drawn criticism from numerous researchers because of its many drawbacks, which are easier to understand in light of the origins of bibliometrics.

In the beginning, bibliometrics was used primarily by librarians to manage scientific journal collections. Its use spread with the creation of the Science Citation Index (SCI) in 1963, which later became Web of Science. The SCI gathered not only traditional bibliographic information (author, title, journal name), but also the references cited, making it possible to compile citation measures.

According to Yves Gingras, sociologist of science and director of UQAM’s Observatoire des sciences et technologies (OST), the primary perverse effect associated with the rise of SCI pertains to scientific journals’ impact factor. Today, impact factor has become a promotional tool for journals, and many adopt ethically questionable strategies designed not to improve the quality of their publication but rather to boost its impact factor.

Journals’ impact factors are also erroneously treated as a measure of the quality of the individual articles they publish. In fact, citations to articles published in a given journal follow a Pareto distribution: around 20% of the articles receive 80% of the citations (Gingras, 2014). Publishing an article in a high-impact-factor journal is thus no guarantee of citations.
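To make that arithmetic concrete, here is a small Python sketch using made-up citation counts; the numbers are invented for illustration, not drawn from any real journal:

    # Hypothetical citation counts for 10 articles from a single journal.
    citations = [120, 80, 15, 10, 8, 5, 4, 3, 2, 1]

    # Share of all citations captured by the most-cited 20% of articles.
    top = sorted(citations, reverse=True)[: len(citations) // 5]
    share = sum(top) / sum(citations)
    print(f"Top 20% of articles receive {share:.0%} of the citations")  # 81%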

Evaluation is the very basis of the scientific process; when it goes wrong, the culprit is often poorly designed indicators or the inappropriate use of impact factors. University rankings are a clear example of this phenomenon. According to Gingras (2014), university rankings combine very different indicators in a subjective manner to produce rankings that are often little more than marketing tools.

Productivity assessments put researchers under great pressure to publish. The “race against the clock” pushing researchers to publish more and more may be connected to the growing number of retractions in scientific publications, as recently exemplified by the retraction of 43 papers by major publisher BioMed Central. While no causal link has been demonstrated between the pressure to publish and scientific error (which includes both unintentional errors and willful falsification of data), the number of retractions has been shown to have increased substantially in recent years (Zhang and Grieneisen, 2012).

Furthermore, the primary indicator used to assess researchers should be the number of citations, not the total number of publications (ACUMEN, 2014, p. 14). Moreover, since the 1990s, the correlation between a journal’s impact factor and the number of citations its articles receive has been weakening (Lozano, Larivière and Gingras, 2012). What, then, are the factors that truly influence the number of citations? Find out in an article on this very question, “10 Tips for Increasing the Visibility of Your Publications.”

 

 

Adèle Paul-Hus

Held Barbosa de Souza

Held Barbosa de Souza is a librarian at ÉTS. She holds a master’s degree in Information Sciences from Université de Montréal; her thesis examined the contribution of postdoctoral fellows to the advancement of knowledge.


