Faculty of 1000

Post-publication peer review

More than a number (in my little red book)

Posted by rpg on 31 July, 2009

Shirley Wu kicked off an interesting conversation on FriendFeed yesterday, reporting on a discussion that questioned the ‘quality’ of our old friend PLoS One. Now there’s a debate that’s going to go round and round, possibly generating rather more heat and noise than useful work.

The conversation, thanks to Peter Binfield, then turned to article-level metrics as a way of assessing ‘quality’, and that’s when I got interested.

Euan Adie wants to know,

Do [Nascent readers] think that article downloads stats should be put on academic CVs?

and provides a very cogent argument as to why paying too much attention to single article metrics is not necessarily a good idea.

I’m not saying that download stats aren’t useful in aggregate or that authors don’t have a right to know how many hits their papers received but they’re potentially misleading (& open to misinterpretation) and surely that’s not the type of data we want to be bandying about as an impact factor replacement?

Now, I’m paying attention to this not just because Faculty of 1000 is a stakeholder in this sort of discussion, but also because of Science Online London. This is the successor to last year’s blogging conference, and this year I’m on the organizing committee. Together with Victor Henning of Mendeley and Ginny Barbour of PLoS, I’m running a session on… article-level metrics.

Which means it’s going to get interesting, as there are strong feelings on both sides—and maybe I should trawl through the attendee list and make sure they are all briefed.

Bring it on.

I’m going to say that I agree with Euan that download stats are a bit suss, actually. Just like citations, they don’t tell you how the article is being used; they don’t tell you if the user (or the citer) thinks this paper is good or bad. As Euan puts it,

A download counter can’t tell if the person visiting your paper is a grad student looking for a journal club paper, a researcher interested in your field or… somebody who typed in an obscure porn related search that turned up unconnected words in the abstract.

(That reminds me of a story my boss in Cambridge told me. He was an editor on a particularly well-respected cell biology journal, and one day they found that one article was getting massively hit, and it was all search-engine traffic. On examining the referrer stats it turned out that the paper was all about our friend C. elegans. The cause of all the traffic was not a sudden upsurge in global nematode research, but rather the juxtaposition of the words ‘sex’ and ‘anal’ in the abstract. Quite.)

The problem with the Impact Factor, and also with article-level metrics, is that there is no indicator of quality. About the only automated system that has any hope of doing this is the kind of network analysis that Johan Bollen is doing. Briefly, these network maps show you how people are actually using papers by following users’ click histories.

[Figure: funky network connections]
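To give a rough flavour of the idea, here is a minimal sketch in Python (using networkx) of turning reading sessions into a co-usage network. The session data, article IDs and weighting scheme are invented purely for illustration; Bollen’s actual work uses vast clickstream logs and far more sophisticated measures.

```python
# Toy sketch of clickstream-based co-usage mapping (illustrative only, not
# Bollen's actual method or data): if a reader views two articles in the same
# session, we add or strengthen an edge between them. Heavily-travelled paths
# then stand out in the network, independently of citation counts.
from collections import Counter
from itertools import combinations

import networkx as nx

# Hypothetical reading sessions: each list is the sequence of article IDs one
# user clicked through.
sessions = [
    ["pone.0001", "pone.0002", "pone.0003"],
    ["pone.0002", "pone.0003"],
    ["pone.0001", "pone.0003", "pone.0004"],
]

# Count how often each pair of articles co-occurs within a session.
pair_counts = Counter()
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        pair_counts[(a, b)] += 1

# Build a weighted, undirected co-usage graph.
G = nx.Graph()
for (a, b), weight in pair_counts.items():
    G.add_edge(a, b, weight=weight)

# Rank articles by how central they are to readers' click paths.
centrality = nx.degree_centrality(G)
for article, score in sorted(centrality.items(), key=lambda x: -x[1]):
    print(article, round(score, 2))
```

The point of such a map is that it reflects what readers actually do with papers, rather than how often the papers are formally cited.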

In any event, I suspect that people are going to have to learn to be more sophisticated in their use of metrics/indicators, and the tyranny of the impact factor will be shaken. It starts here.


2 Responses to “More than a number (in my little red book)”

  1. Bob O'Hara said

    I’m still waiting for someone to define “quality” specifically enough that it can be measured. I guess I’ll be waiting some time.

    TBH, I think we can do a much better job of describing citations and downloads, but we need to look at them and how they vary, rather than pulling another equation from our rear ends. I have some ideas about how to do this, but need some funding. Would F1000 like to sponsor a PhD student? 🙂

