Faculty of 1000

Post-publication peer review

Archive for July, 2009

More than a number (in my little red book)

Posted by rpg on 31 July, 2009

Shirley Wu kicked off an interesting conversation on Friendfeed yesterday, reporting on a discussion that questioned the ‘quality’ of our old friend PLoS One. Now there’s a debate that’s going to go round and round, possibly generating rather more heat and noise than useful work.

The conversation, thanks to Peter Binfield, then turned to article-level metrics as a way of assessing ‘quality’, and that’s when I got interested.

Euan Adie wants to know,

Do [Nascent readers] think that article downloads stats should be put on academic CVs?

and provides a very cogent argument as to why paying too much attention to single article metrics is not necessarily a good idea.

I’m not saying that download stats aren’t useful in aggregate or that authors don’t have a right to know how many hits their papers received but they’re potentially misleading (& open to misinterpretation) and surely that’s not the type of data we want to be bandying about as an impact factor replacement?

Now, I’m paying attention to this not just because Faculty of 1000 is a stakeholder in this sort of discussion, but also because of Science Online London. This is the successor to last year’s blogging conference, and this year I’m on the organizing committee. Together with Victor Henning of Mendeley and Ginny Barbour of PLoS, I’m running a session on… article-level metrics.

Which means it’s going to get interesting, as there are strong feelings on both sides—and maybe I should trawl through the attendee list and make sure they are all briefed.

Bring it on.

I’m going to say that I agree with Euan that download stats are a bit suss, actually. Just like citations, they don’t tell you how the article is being used; they don’t tell you if the user (or the citer) thinks this paper is good or bad. As Euan puts it,

A download counter can’t tell if the person visiting your paper is a grad student looking for a journal club paper, a researcher interested in your field or… somebody who typed in an obscure porn related search that turned up unconnected words in the abstract.

(That reminds me of a story my boss in Cambridge told me. He was an editor on a particularly well-respected cell biology journal, and one day they found that one article was getting massively hit, and it was all search engine traffic. On examining the referrer stats, it turned out that the paper was all about our friend C. elegans. The cause of all the traffic was not a sudden upsurge in global nematode research, but rather the juxtaposition of the words ‘sex’ and ‘anal’ in the abstract. Quite.)

The problem with the Impact Factor, and also with article-level metrics, is that neither carries any indicator of quality. About the only automated system that has any hope of capturing this is the kind of network analysis that Johan Bollen is doing. Briefly, these network maps show you how people are using papers by following users’ click histories.

[Image: funky network connections]
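For the curious, here’s a toy sketch of the general idea, entirely my own invention rather than Bollen’s actual pipeline: take users’ click histories, count how often pairs of articles are viewed back to back, and use those counts as edge weights in a usage network.

```python
from collections import Counter

# Hypothetical click histories: each inner list is one user's session,
# recorded as the sequence of (made-up) article IDs they viewed.
sessions = [
    ["pone.0001", "pone.0042", "pone.0007"],
    ["pone.0042", "pone.0007"],
    ["pone.0001", "pone.0042"],
]

# Count how often two articles are viewed back to back; those counts
# become edge weights in a usage network linking related articles.
edges = Counter()
for session in sessions:
    for a, b in zip(session, session[1:]):
        edges[frozenset((a, b))] += 1

for pair, weight in edges.most_common():
    print(" -- ".join(sorted(pair)), weight)
```

The real thing obviously works at a vastly larger scale and with cleverer statistics, but the raw material is the same: who looked at what, and what they looked at next.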

In any event, I suspect that people are going to have to learn to be more sophisticated in their use of metrics/indicators, and the tyranny of the impact factor will be shaken. It starts here.



Who are you?

Posted by rpg on 28 July, 2009

Who are we?

We’re the Faculty of 1000. We specialize in post-publication peer review. What this means is that we publish brief summaries of what our Faculty think are interesting, exciting, or otherwise noteworthy published articles in the biomedical literature. That’s the ‘post-publication’ and ‘review’ bits.

The Faculty consists of about 5,000 senior scientists and clinicians around the world. They are respected and authoritative within their specialties and disciplines. That’s the ‘peer’ bit. They are assisted by ‘Associate’ Faculty: less senior people (say, experienced post-docs) within the Faculty Member’s group. Associate Faculty are crucial to our scanning project, which I should talk about in a future blog entry.

I am F1000’s ‘Information Architect’. Essentially, I run the web side of the F1000 service. Until March 2009 I was an active research scientist, and you can find some (outdated, whoops) information about me on my personal website, and follow some random bloggy goodness at Nature Network.

Feel free to email me—richard.grant at f1000.com—or leave a comment here, if you have any feedback. I promise to read it, even if I can’t respond immediately. You can also find us on Twitter (@f1000).

Currently I’m the only one writing here and on Twitter, but I’m hoping to get more of the team on board soon.

Housekeeping

Look, we all know it’s a jungle out here. I’d love to read your comments here, and see you following us on Twitter. But we need to keep the spammers at bay, so when you comment, if it’s your first time you’ll go into the approval queue. Subsequently, if you are legit, your comments should appear straightaway. Sorry about the inconvenience.

I recommend you read our Policy document too, especially with regard to commenting. Nothing too unusual in there, but it keeps the lawyers happy. It’s likely to develop a little as time goes on. If you’re unsure about anything, please ask here.

See you around…


Somewhere over the rainbow

Posted by rpg on 27 July, 2009

Somewhere in the depths of PLoS One an article lurks…

Liz Allen and her friends at the Wellcome performed an interesting study on papers that came out of labs at least partially funded by Wellcome grants. What they did was to figure out where each of the nearly 700 papers ended up being published, and then look at bibliometric indicators (cough impact factors cough).

Nothing new there, really. The clever bit was to persuade a ‘college’ of experts (I hate that word, sorry) to review the papers, and then compare this with the bibliometric indicators… and with the Faculty of 1000. Funnily enough, highly-rated papers did not necessarily get cited highly (so the Impact Factor doesn’t help here), but the F1000 ratings agreed pretty well with their college of experts, and were also able to predict citation rates to some extent.
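Just to make that kind of comparison concrete, here is the sort of quick rank-correlation check you might run on data like this. The numbers are made up for illustration; this is not the study’s actual analysis.

```python
from scipy.stats import spearmanr

# Made-up scores for six imaginary papers, purely for illustration:
expert_scores = [8, 6, 9, 4, 7, 5]        # expert 'college' ratings
f1000_scores  = [6, 6, 9, 3, 8, 4]        # F1000 ratings
citations     = [120, 15, 90, 8, 60, 30]  # citation counts

# Rank correlation asks: do two metrics order the papers similarly?
rho_experts, p_experts = spearmanr(f1000_scores, expert_scores)
rho_cites, p_cites = spearmanr(f1000_scores, citations)

print(f"F1000 vs expert ratings: rho = {rho_experts:.2f} (p = {p_experts:.3f})")
print(f"F1000 vs citations:      rho = {rho_cites:.2f} (p = {p_cites:.3f})")
```

A high rho in the first comparison and a middling one in the second would be the toy-data version of what the Wellcome study found.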

We were pretty stoked about this paper, I can tell you, and we hope to have a chat with Liz next month and show her some interesting stuff we’ve been up to. It plugs directly into this:

Tools that link expert peer reviews of research paper quality and importance to more quantitative indicators, such as citation analysis would be valuable additions to the field of research assessment and evaluation.

I’ll write more about that nearer the time. Or even, given the bonkers-mad workflow I’ve got going on, after the time. Until then, you can check out the more in-depth analysis and a fascinating discussion over at my personal blog.
