
How many more times?

Posted by rpg on 14 September, 2009

…what dreams may come
When we have shuffled off this mortal coil,
Must give us pause

Thomson, in a commentary in the Journal of the American Medical Association, reckon there ain’t nowt wrong with the Journal Impact Factor:

The impact factor has had success and utility as a journal metric due to its concentration on a simple calculation based on data that are fully visible in the Web of Science. These examples based on citable (counted) items indexed in 2000 and 2005 suggest that the current approach for identification of citable items in the impact factor denominator is accurate and consistent.

Well, they would say that.

And they might well be right, and you and I and Thomson Reuters might argue the point endlessly. But there are a number of problems with any citation-based metric, and a pretty fundamental one was highlighted (coincidentally?) in the same issue of JAMA.
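As a refresher, the 'simple calculation' being defended here is a two-year ratio: citations received in a given year to items the journal published in the previous two years, divided by the number of 'citable items' it published in those two years. A minimal sketch, with an entirely invented journal:

```python
def impact_factor(citations, citable_items):
    # citations: citations received in year Y to items published in Y-1 and Y-2
    # citable_items: number of 'citable' items (articles, reviews) from Y-1 and Y-2
    return citations / citable_items

# Hypothetical journal: 2,450 citations in 2008 to its 2006-2007 output,
# which comprised 980 citable items.
print(impact_factor(2450, 980))  # 2.5
```

Note that the fight Thomson is picking is all about the denominator: which items get counted as 'citable'.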

Looking at three general medical journals, Abhaya Kulkarni at the Hospital for Sick Children in Toronto (shout out to my friend Ricardipus) and co. found that three different ways of counting citations come up with three very different numbers.

Cutting to the chase, Web of Science counted about 20% fewer citations than Scopus or Google Scholar. The reasons for this are not totally clear, but are probably due to the latter two being of wider scope (no pun intended). Scopus, for example, looks at ~15,000 journals compared with Web of Science's ~10,000. Why? The authors say that Web of Science 'emphasized the quality of its content coverage', which in English means it doesn't look at non-English publications, or those from outside the US and (possibly) Europe, or other citation sources such as books and conference proceedings. And that's before we even start thinking about minimal citable units; or non-citable outputs; or whether blogs should count as one-fiftieth of a peer-reviewed paper.
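To see how coverage alone can open a gap like that, here's a toy example: one article, one fixed set of citing documents, and each database counting only the citations that come from sources it indexes. All of the source names below are invented, and the gap is exaggerated for effect.

```python
# Toy example: each database counts only citations from sources it indexes.
citing_sources = [
    "J Med A", "J Med B", "Revista C",      # Revista C: non-English journal
    "Conf Proc D", "Textbook E", "J Med F",
]

coverage = {
    "Web of Science": {"J Med A", "J Med B", "J Med F"},
    "Scopus": {"J Med A", "J Med B", "Revista C", "Conf Proc D", "J Med F"},
    "Google Scholar": set(citing_sources),  # indexes nearly everything
}

for database, indexed in coverage.items():
    count = sum(source in indexed for source in citing_sources)
    print(f"{database}: {count} citations")
# Web of Science: 3 / Scopus: 5 / Google Scholar: 6 -- the same article,
# three different counts, driven purely by what each database can see.
```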

Presumably some of the discrepancy is due to removal of self-cites, which strikes me as being just as unfair: my own output shouldn't count for less simply because I'm building on it. It's also difficult to know how to deal with the mobility of scientists: do you only look at the last author? or the first? I don't know how you make that one work at all, to be honest.
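To make the ambiguity concrete, here's a naive self-cite filter under three possible rules. The author names are invented, and I have no idea which rule, if any, a given database actually applies:

```python
# A naive self-citation filter: flag a citation as a "self-cite" depending on
# which rule you pick. None of these rules is official; all are hypothetical.
def is_self_cite(citing_authors, cited_authors, rule="any"):
    cited = set(cited_authors)
    if rule == "any":        # any shared author at all
        return bool(set(citing_authors) & cited)
    if rule == "first":      # only the citing paper's first author counts
        return citing_authors[0] in cited
    if rule == "last":       # only the citing paper's last author counts
        return citing_authors[-1] in cited
    raise ValueError(rule)

paper = ["Alice", "Bob", "Carol"]   # invented author list
citing = ["Dave", "Alice"]          # Alice reappears, but not as first author

print(is_self_cite(citing, paper, "any"))    # True
print(is_self_cite(citing, paper, "first"))  # False
print(is_self_cite(citing, paper, "last"))   # True -- the rule you pick
# changes which citations get thrown away
```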

That aside, I think curation of citation metrics is necessary: Kulkarni et al. report that fully two percent of citations in Google Scholar didn’t, actually, cite what they claimed to. That is a worrying statistic when you realize that people’s jobs are on the line. You have to get this right, guys.

But it’d be nice if we could all agree on the numbers to start with.


