Can we count on journal metrics?

How do you rank science? How do you rate scientists? What kudos do you give their papers, and what metrics do you attach to a paper's impact? These questions are as old as the scientific literature itself, yet no one has resolved them. Independent organisations and publishers have tried, with measures such as the ISI Impact Factor. Academics wary of the prominent journals and the prominent researchers getting all the "gold stars" have attempted to overturn such metrics and devise their own, in the form of the H-index. But getting the measure of metrics is difficult, especially in today's climate.
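For readers unfamiliar with how the H-index works, here is a minimal sketch of the calculation: a researcher has index h if h of their papers have each been cited at least h times. The citation counts below are invented for illustration; real services compute this from curated citation databases.

```python
def h_index(citations):
    """Return the H-index for a list of per-paper citation counts:
    the largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break  # counts are sorted, so no later paper can qualify
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
```

Note how the measure deliberately ignores both the long tail of rarely cited papers and the exact citation count of the top papers, which is precisely why its proponents prefer it to journal-level metrics.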

In the current journal market, institutional purchasing is severely constrained by the economic climate. For publishers outside the coterie of the three or four best-known houses, establishing prestige and validating the research articles within their pages is critical but difficult. In this survival of the fittest, the pressure applies equally to journals funded by traditional subscription, open access, and other models.

Researchers, for their part, want to publish in journals that will give their science and their team the most prominence, and so give them more pulling power when it comes to the next grant application or research assessment. Librarians and researchers, including those in specialist niches, have attempted to apply pressure on the way funding bodies, governments, and companies rely on the standard metric.

Impact Factors are a double-edged sword, of course. If yours is high, you will be happy, whether you are an author or a publisher. But if it is low, the situation is difficult to remedy, and without gaming the system there are few ways for important work from researchers who lack prominence or who work in niche areas to have a big impact.

Institutions have recognised the problems and the biases to some extent and have begun to evaluate a title's significance beyond the conventional approach. Perhaps a system like PeerIndex might be extended to researchers, their papers, and journals in some way. Indeed, some publishers have devised their own systems, e.g. Elsevier with Scopus, and online scientific communities are beginning to find ways to rank research papers in a manner similar to social bookmarking sites.

For some institutions and countries, following the Impact Factor is nevertheless obligatory. A recent paper by Larsen and von Ins investigates the Impact Factor, and others have looked at how publishing the right thing in the right place can affect careers (Segalla). There is a whole growth area in the research literature on assessing the Impact Factor and other metrics. Instances where the Impact Factor seems not to work particularly well have been reported recently. The Scientist explains how a single paper in a relatively small journal boosted that journal's position in the league tables so that it overtook one of the most prominent and well-known journals, but only for the short period while that paper was topical and being widely cited.
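The distortion described in The Scientist follows directly from the two-year Impact Factor arithmetic: citations received in a given year to items published in the previous two years, divided by the number of citable items published in those two years. A small denominator makes the ratio fragile, as this sketch with made-up numbers shows.

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year Impact Factor: citations this year to the previous two
    years' items, divided by the number of citable items in those years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A small journal: 100 citable items over two years, 150 citations.
print(impact_factor(150, 100))   # 1.5

# The same journal after one topical paper attracts 1,000 extra citations.
print(impact_factor(1150, 100))  # 11.5
```

A large journal with thousands of citable items would barely register the same 1,000 extra citations, which is why one viral paper can vault a niche title past an established one, and why the effect evaporates once the paper drops out of the two-year window.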

There are many academics arguing for a change in the way papers and journals are assessed, among them Cameron Neylon who is hoping that the scientific community can build an alternative for the diverse measurement of research.

Unfortunately, there seems to be no simple answer to the problem of assessing research impact. Indeed, what is needed is some kind of ranking algorithm that can determine which of the various alternative impact factors systems would have the greatest…well…impact…

Larsen, P., & von Ins, M. (2010). The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index. Scientometrics, 84(3), 575-603. DOI: 10.1007/s11192-010-0202-z


Segalla, M. (2008). Publishing in the right place or publishing the right thing: journal targeting and citations' strategies for promotion and tenure committees. European Journal of International Management, 2(2). DOI: 10.1504/EJIM.2008.017765

4 thoughts on “Can we count on journal metrics?”

  1. Does a higher impact mean higher visibility for your article? A high Impact Factor does not in any way indicate that more people have read your article. What is important is counting both the citations and the visibility of a scientific article.

  2. The saddest thing is that sometimes, in third-world countries, journal papers are written just for the sake of adding more publications to your name. That's what I see around me.

  3. When I worked in journals, the back and forth of the refereeing process was a big part of the job. Usually, however, two independent referees lambasting some nonsensical paper in detail was enough for the Editor to reject the paper without feeling that their integrity was being compromised by subjectivity, or that an “interloper” with valid data was being overlooked. One subjective comment from one referee did not scuttle the paper. If one referee was positive and the other negative, a third referee was consulted to adjudicate, and if that came to a rejection, the author still had the right to appeal to the Editorial Board.

    I assume something akin to this approach is still used; it seems to weed out interlopers with nonsensical data quite well, regardless of their concerns regarding referee objectivity. Maybe there are alternative systems that should be explored, such as preprint crowdsourcing, and maybe that would spot the fraudsters even more effectively.

  4. It seems to me that many journals have a culture of accepting certain papers because the reviewers, being from the same research area, are always supportive. If, however, you are an interloper and come up with something that goes against current wisdom, it is very hard to get published. One subjective dislike from a referee scuttles the paper. I agree that a lot of papers should be rejected, but subjective comments from referees should not be enough to reject science papers. I believe Editors should demand that a reviewer give detailed, objective reasons for rejection. It is easy to express dislike, but objective reasons allow the authors to understand their error or provide a rebuttal.

