PLoS ONE Impact Factor and Page Rank

As far as I am aware, the online, interactive, open access journal PLoS ONE (Public Library of Science ONE) has not yet been given any love by ISI, the Thomson Reuters company that doles out impact factors to journal publishers.

Impact Factor is strange. It shares some characteristics with the so-called pagerank value adopted by the Google search engine to rank websites (more on that later). As with pagerank (PR), a journal's impact factor (IF), or indeed an individual paper's impact factor, is all about inbound links. For Google PR, an inbound link means just that: a URL on another site pointing to the page being evaluated. The analogous phenomenon for impact factor is a citation in another research paper.

The impact factor of a journal is calculated based on a rolling two-year period. It can be viewed as the average number of citations in a year given to those papers in a journal that were published during the two preceding years.
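As a quick sketch, with made-up numbers, that calculation is just a ratio:

```python
# Sketch of the two-year impact factor: citations received this year
# to items published in the previous two years, divided by the number
# of those items. The figures below are invented for illustration.
def impact_factor(citations_this_year: int, items_prev_two_years: int) -> float:
    return citations_this_year / items_prev_two_years

# A hypothetical journal: 320 citations in 2009 to the 200 papers
# it published across 2007 and 2008.
print(impact_factor(320, 200))  # → 1.6
```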

For PR, links/citations from pages that themselves have a high rank are weighted more heavily than inbound links from lowly ranked pages. So, a link to your page from a high-PR page on a “trusted” domain, such as many .edu or .gov pages, is worth more in terms of your pagerank than one from some smalltown .com or .biz site. By analogy, a citation in a popular and well-read paper in Nature, Science, Cell, or PNAS, say, is going to give you more impact kudos than one in a less well-known journal, because more people will read and ultimately cite it. Indeed, IF is calculated only on the basis of citations in the limited number of journals that ISI Thomson actually indexes (about 15% of the world’s journals).

Why does any of this matter? Well, reputation and rank are just as important in the world of search engine results pages (SERPs) as they are in scientific citation. A high pagerank usually means you will be near the top of the SERPs for your keywords, which may be reflected in higher traffic to your site and, if you’re a commercial concern, more sales. Likewise, a high impact factor means more prestige for a publisher or author, and more prestige usually translates into a greater chance of a successful grant application, tenure, or some other worthy outcome for a researcher scrabbling their way along the laboratory bench.

So, back to PLoS ONE. In the last release of ISI’s impact factors it wasn’t given one. In search engine parlance, it’s presumably still “sandboxed”. It’s in that limbo between first appraisal and public listing, just as happens with new websites when Google first spiders their content. They will usually have to wait several weeks, sometimes months, before they show any green in Google’s PR toolbar.

Back in April, when we were anticipating PLoS ONE’s first show on the impact factor tables published in June, Pedro Beltrão on Public Rambling did an analysis of the journal’s presence on Scopus and guesstimated its impact factor. The blog suggested that PLoS ONE would get an IF about half that for PLoS Computational Biology. Seemingly, it didn’t happen.

Beltrão also points out an oft-argued problem with IF:

I think the IFs of the journal where a paper is published is a very poor measure of a papers importance. Although it is probably a good measure of the relative value of a journal (within a given field) we should be striving to pick what we read based on the value of a paper instead of the journal.

This is akin to assessing all the content on a website as good simply because the homepage has a high pagerank, which is not always wise: a domain’s homepage may rank highly, like that of Twitter, but the quality of each individual page within that domain can vary considerably.

Incidentally, Google plays a tough game with websites and so-called blackhat SEOs (search engine optimisation people), where the green that shows in the toolbar pagerank tool may actually be very different from the pagerank it uses in its algorithm to order entries in the SERPs. A site may look like it has a low PR, but this value may have been lowered with a view to publicly penalising the site for whatever reason, whereas the actual ranking for the site’s keywords may not change despite that drop. Imagine if ISI were tweaking impact factors for its own ends or because of lobbyists…perish the thought.

Chris Surridge, former managing editor of PLoS ONE, had much to say about impact factors too on the PLoS blog some time ago:

It doesn’t take a great deal of thought to see why the ‘worth’ of a paper isn’t well assessed by the Impact Factor of the journal in which it is published. Impact Factors are essentially the average number of citations for papers in a particular journal. Problem is citations aren’t normally distributed across those papers making the power of that average to predict the likely citations of an individual paper very low. As a rule of thumb 80% of a journals impact factor is determined by 20% of the papers published.
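A quick numerical sketch (invented figures) shows why that average is misleading when citations are skewed:

```python
# Invented citation counts for ten papers in a hypothetical journal:
# two heavily cited papers dominate, most are barely cited.
citations = [120, 80, 5, 3, 2, 2, 1, 1, 0, 0]

mean = sum(citations) / len(citations)        # what an IF-style average reports
ordered = sorted(citations)
median = (ordered[4] + ordered[5]) / 2        # a "typical" paper

top_two_share = (120 + 80) / sum(citations)   # the 80/20 effect
print(mean, median, round(top_two_share, 2))  # → 21.4 2.0 0.93
```

The journal-wide average of 21.4 citations says almost nothing about the typical paper, which picked up just 2.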

Of course, as with anyone whose Google PR has dropped or has never been lifted out of the sandbox, all this theorising and the admonitions about a poorly set-up system usually vanish once a high impact factor or pagerank is achieved. At that point, the evaluation becomes so much more important, and publishers, authors, and webmasters alike, whatever colour their hats, will be pleased to flaunt their shiny new rank.

PLoS Medicine discussed the impact factor game way back in June 2006, suggesting that “measures such as these will become outmoded as the Internet allows for users to interact more directly with published articles.”

Well, it’s July 2009, and it doesn’t look like things have moved on too far, yet. Impact factors are still being discussed at all levels, and the number of visitors continually hitting Sciencebase looking for the PLoS ONE impact factor would suggest that it’s not just of fleeting interest. Indeed, early this year some wags had a wager (pizza prize) regarding the PLoS ONE impact factor, as well as some criticism of the quality of the individual papers in the journal and perhaps a hint as to why ISI has not yet ranked it.

More seriously, Erik Postma of the School of Biological, Earth and Environmental Sciences at the University of New South Wales, Sydney, Australia, has pointed out what he considers a serious flaw in the determination of journal impact factors. Ironically enough, his paper, “Inflated Impact Factors? The True Impact of Evolutionary Papers in Non-Evolutionary Journals”, was published in PLoS ONE.

Postma’s abstract begins, not without some irony, I assume:

Amongst the numerous problems associated with the use of impact factors as a measure of quality are the systematic differences in impact factors that exist among scientific fields.

He concludes that, “while journal impact factors may thus indeed provide a meaningful qualitative measure of impact, a fair quantitative comparison requires a more sophisticated journal classification system, together with multiple field-specific impact statistics per journal.” It’s the same kind of argument one hears from whitehat, as opposed to blackhat SEOs, disgruntled by their lowly Google pagerank.

Deepak Singh on “business|bytes|genes|molecules” recently agreed that it doesn’t make sense to use a metric (the impact factor) that attempts to measure the average citations to a whole journal, but, “unfortunately no alternatives have been presented, and in our world you always need some metrics”.

Chronobiologist Bora Zivkovic puts the issue perhaps more bluntly than anyone else in a post on his personal blog to which he alerted me this month. Zivkovic is Online Discussion Expert for PLoS.

Everyone and their grandmother knows that Impact Factor is a crude, unreliable and just wrong metric to use in evaluating individuals for career-making (or career-breaking) purposes.

He adds that despite this, many institutions (or rather, their bureaucrats – scientists would abandon it if their bosses would) cling to impact factor anyway.

Alternatives to impact factor are being attempted, and in today’s online world of social bookmarking, forums, and preprint archives it is not without irony that the Google pagerank model may offer a new approach. A version of PageRank has recently been proposed as a replacement for the traditional Institute for Scientific Information (ISI) impact factor: instead of simply counting the total citations of a journal, the importance of each citation is determined in a PageRank fashion, based on incoming links.
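To illustrate the idea (a toy sketch with an invented citation graph, not how any real ranking service computes its scores), each journal’s rank flows to the journals it cites, so a citation from a highly ranked journal counts for more:

```python
# Toy PageRank over a made-up citation graph: each key cites the
# journals in its value list. Purely illustrative; real services use
# far larger graphs and extra normalisation.
def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for node, cited in graph.items():
            # A node shares its rank among the journals it cites;
            # a node citing nothing spreads its rank evenly instead.
            targets = cited if cited else nodes
            share = damping * rank[node] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]})
# C's single citation comes from the highly ranked B, lifting C above A.
print(sorted(ranks, key=ranks.get, reverse=True))  # → ['B', 'C', 'A']
```

Under plain citation counting, A and C would tie (one inbound citation each from this graph’s other nodes is not what decides it); here the *source* of each citation matters.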

Essentially, we all want our work to be loved and we all want everyone else to know that our work is loved. (Un)fortunately, love is an ephemeral thing and like impact factors and pagerank just how much you get does not necessarily depend on you, but on those who are giving it.

The PLoS Medicine Editors (2006). The Impact Factor Game. PLoS Medicine, 3 (6). DOI: 10.1371/journal.pmed.0030291

Post script Perhaps one of the reasons so many people seem to be searching for “PLoS ONE impact factor” is that they want to know whether it is a valuable journal to which they should submit their research. That is a judgement I cannot make for you, although others do seem to have opinions about it.

15 thoughts on “PLoS ONE Impact Factor and Page Rank”

  1. As of January this year, Thomson Reuters is listing PLoS One and thus it will have an IF soon.

    Personally, I think this is deplorable as the IF is negotiated, irreproducible and not mathematically sound. Even if it were not negotiable, reproducible and mathematically sound, any journal rank is still nonsense as where you publish doesn’t matter – what you publish matters. Journal rank is bogus, opium for the scientists.

  2. If PLoS ONE is not getting any impact factor (IF) from ISI, then there should not be any respite for this journal till it gets an IF. I cannot accept that it deserves a similar impact to the other PLoS journals.

  3. Seeing the 80/20 rule rear its head above put me in mind of Clay Shirky’s TED talk, which I just saw yesterday. Basically, he says that institutions (like ISI) are happy to get 80% of the rank from 20% of the journals, but a non-institutional ranking (via aggregated user activity on academic bookmarking sites such as Mendeley, for example) could capture all the ranking input without incurring the cost ISI does to do it as an institution.

    In summary, ISI’s impact factor is now an anachronism. We just don’t need them anymore. You can get better ranking without having a whole organization devoted to compiling it and making people wait 2 years for the privilege of paying to see it.

  4. Article-level metrics – a variety of scores and other bits of information attached to each article in the publications of the Public Library of Science – could shift attention from journals to articles, particularly for the academic bean counters anxious to find a convenient and low-cost way of ranking academics, so says Richard Smith in the British Medical Journal.

  5. The main objective of a scientific paper is to be useful to other scientists and to be cited if it brings useful information. PubMed gives you a very good tool to survey your garden of interest, but a lot of journals are for their subscribers only. I think that PLoS ONE is an interesting medium for very high-quality science, without the burden of page limits.
    The members of the star system will go elsewhere with their IF metrics. Free science, funded by public money, will expand everywhere, especially in the poor countries that cannot afford subscriptions to every journal.

  6. That the IF is still around is a farce and a disgrace to any and all scientists. When I show third-term science students how it is negotiated and ‘calculated’, they stare at me in disbelief. They all ask: ‘why is it still around?’ and shake their heads when I explain the reality of science today.

  7. @Andrew You’re right, of course, Andrew: journals, scientists, and others all have their own internal indicators of the worthiness of any given research team, paper, or journal. But those who need to number crunch, such as funders, need something that ticks boxes and looks statistical to feed into algorithms and charts…

    @Steve Interesting point, yes pagerank is definitely a black box, in fact I think it is two black boxes, the actual PR used by Google and the toolbar PR. We do need a consistent and open way to get a quick evaluation of citation trends though. Hirsch is good too, of course.

  8. I never enable the PageRank button in my Google Toolbar. I don’t need to know a website’s PR, and neither, I believe, do people who know exactly what they are looking for.

    This applies to IF, too. People who need IF in fact don’t know exactly what’s good or bad by themselves. When they look for something they don’t know what exactly they are looking for. Thus a general scale for reference is necessary.

    David, you were once a journal editor. You must have had a scale for your journal to determine which submissions should be accepted or rejected. Once an editor from IOP gave a lecture at my university. He said IOP had a graded quality scale for submissions, from “Q1” to “Q5”. Such scales implemented by peer-reviewed journals may be implicit, but they define the very real thing in question, namely how ‘good’ the science in a paper is, rather than intermediate factors such as citation numbers or citation rates.

  9. There was an editorial in PNAS earlier this year (doi:10.1073/pnas.0903307106) discussing Eigenfactors and total number of cites. It isn’t all that surprising to note that PNAS wants to put a positive spin on its own position in the journal rankings using Eigenfactors.

    It’s interesting to note that there is apparently a straight line correlation between total citations and Eigenfactor. This being the case you have to question whether Eigenfactors are really adding anything useful to the mix. Which brings me to another point, while IFs are flawed – I won’t argue about that – they have one very important advantage: anyone can see how they work. The calculation of Eigenfactors (like that of Google pagerank) is all a bit black box.

    Finally like any statistical measurement you have to be aware of what you’re comparing. Physics and Chemistry, Apples and Oranges…

    My favourite measure of any is definitely the (Hirsch) h-index, but you still have to compare like with like or it’s worthless.

Comments are closed.
