Faculty of 1000

Post-publication peer review

Posts Tagged ‘Indicators’

On the run-04Feb10

Posted by rpg on 5 February, 2010

What was that? I think it was the sound of a week flashing past.

I keep saying things like “We’ve got a brand new website… but you can’t see it yet.” This must be quite frustrating. Truth is, the dev team are working very hard (and specs have changed and changed again, but let’s not go there now) and a lot of stuff has to come together all at once. There’s actually no point showing you what we have at the moment because it’d all be “ignore that, we’re changing it” and “the design is going to be different from this” and “oh, yeah, we know about that bug”.

But I can tell you that the new search is very funky and we all like it, and that the new design is very spiffy (hang on, I did that already). On Monday we’re going to work out once and for all what we can deliver and work to that. So far, the ‘definite’ list contains the new design (both what it looks like and functionally), the improved search, comments on evaluated articles and RSS. There are a heap of other behind-the-scenes changes too. Then after we go live we can add on all the other things that are on the backlog, so you will see new things appear as we keep building and tweaking and rolling out new features.

I spent some more time on our journal rankings this week. The critical thing appears to be the timing: as I’ve said before, most of our evaluations are published quite quickly after the original article appears. We get around 90% of all evaluations within about three months of the publication date. So for yearly journal league tables we want to capture as many evaluations as possible while keeping our stats up-to-date and relevant. The issue is that if we took April, say, as the cut-off for the previous year, we’d give the journals that publish more of their stuff towards the beginning of the year an unfair advantage. So we’re going to implement a rolling cut-off, with a provisional ‘current’ ranking, and publish the official f1000 stats somewhere around May each year, which gives us four months to collect evaluations for each original article.
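
For the curious, here’s a minimal sketch of how a rolling cut-off like that might be computed. The field names and the example data are made up for illustration; this isn’t our production code.

```python
from datetime import date, timedelta

# Minimal sketch of a rolling cut-off (illustrative only, not our actual code).
# For a given ranking year, an evaluation counts towards a journal's total only
# if it appeared within a fixed window of the original article's publication date.

WINDOW = timedelta(days=120)  # roughly the four-month collection window

def journal_ranking(evaluations, ranking_year):
    """evaluations: iterable of dicts with (invented) keys 'journal',
    'article_published', 'evaluation_published' and 'score'."""
    totals = {}
    for ev in evaluations:
        article_date = ev["article_published"]
        if article_date.year != ranking_year:
            continue  # article belongs to a different ranking year
        if ev["evaluation_published"] - article_date > WINDOW:
            continue  # evaluation arrived after the rolling cut-off
        totals[ev["journal"]] = totals.get(ev["journal"], 0) + ev["score"]
    # Highest-scoring journals first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Quick check: the first evaluation falls inside the window, the second doesn't.
evals = [
    {"journal": "Journal A", "article_published": date(2009, 3, 1),
     "evaluation_published": date(2009, 5, 1), "score": 6},
    {"journal": "Journal A", "article_published": date(2009, 3, 1),
     "evaluation_published": date(2009, 9, 1), "score": 9},
]
print(journal_ranking(evals, 2009))  # [('Journal A', 6)]
```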

However, the big news this week is that we welcomed Sarah Greene into the office. This is part of the move to bring f1000 and The Scientist closer together: f1000 is going to start seeding The Scientist‘s scientific content, and use it to help build a community around the service.

As part of this, my own role is changing. I’m going to move away from web development (although I’ll still have input into the design and user experience), which will free me up to be more editorial/journalistic. I’ll still be running the Twitter feed and Facebook page and wittering about things that catch my eye in f1000 (perhaps even more so). There’ll also be the ‘special projects’, such as the rankings, federated comments and various research collaborations. I guess Eva will still be wanting me to make logos for her too.

And finally

The belated coverage of a small number of researchers’ disenchantment with the peer review process is still making waves this week. Cameron Neylon gives his own take on the matter at his blog. I’m not at all sure that I agree with his analysis, having had my own manuscripts subjected to both what I might call ‘good’ and ‘bad’ review. I think too many people view peer review as a stamp on the ‘rightness’ of a paper, rather than a technical check that the experiments and controls were done correctly and that the literature has been read.

Cameron has also been having a go at Nature Communications. The argument hinges on the Creative Commons licences they ask authors to choose. You can sign up and join the conversation at Nature Network.

With that, have a great weekend. And sorry, no cytoskeletal porn this week. Maybe next time.

Richard


Posted in Friday afternoon, Website | Comments Off on On the run-04Feb10

You can’t always get what you want

Posted by rpg on 18 October, 2009

I was at the Internet Librarian International on Thursday and Friday of last week. Not the sort of conference I’m used to attending but as we were sponsoring it we had a speaking slot, and I seemed the obvious choice!

Rather than talk about f1000 or give a ‘corporate’ presentation I talked about the Journal Impact Factor, about alternative metrics, about the difficulties in assessing the literature, discovering quality, and locating what’s important. (This is actually what we’re in the business of doing, but aside from the branding I only mentioned what we do in passing. As it turns out, this was appreciated by the audience and the organizers, and we stood out from the crowd because of it!)

I may have mentioned ‘Web 3.0’ a couple of times. As I see it, Web 1 (which it was never known as) was typified by information flowing from a more or less authoritative source to many consumers. Web 2.0 was where you got people producing and sharing information between themselves, on a more level playing field, as I told the Research Information Network back in May:


rpg at RIN, May ’09

And yeah, the Web is not the internet, and ‘Web 2.0’ was happening off-Web as it were for years before anyone thought of the term: through bulletin boards, Usenet etc. The wonders of marketing.

Web 3, I think, is when we figure out how to use all this funky technology that enables peer-to-peer, all the tagging and RSS and search and everything, and actually start finding stuff. To be more precise: all the power of web 2.0 gets brought to us where we are in useful, digestible chunks. A guy can dream, can’t he?

That’s what we’re trying to achieve at f1000, in our small corner: to find the important literature (currently biology and medicine, although that might expand) and to bring what you’re interested in to you, in the way you want to see it. We’re not there yet, and the new site won’t hit it straight off, but we have lots of ideas and times are beginning to look exciting.

Anyway, the talk seemed to be well-received (I got spontaneous applause!) and Tom was on hand to record it. I spent yesterday afternoon trimming the video and inserting (some of) my slides as cutaways. And then about four hours uploading them because the internets don’t seem to reach south of the river…

Here’s me from Thursday (in two parts):

Enjoy!

Posted in Conferences, Literature, Metrics | 3 Comments »

Big Bad John

Posted by rpg on 28 August, 2009

I’ve been remiss.

I should have talked a bit about the events of last Saturday: truth is I was struck by a stomach bug on Tuesday night and have been a little bit out of things. If you’re interested, there is a video of the ‘Fringe Frivolous‘ event of the Friday evening and lots and lots of photos on Flickr.

Martin Fenner has summarized all the blog links he could find, in a kind of citation-stealing Annual Review way. Yeah, we talked about indicators and metrics in the session with Victor and Ginny Barbour (PLoS), saying among other things that usage data and network metrics and our own F1000 factor aren’t necessarily replacements for the journal impact factor: rather, they’re all complementary, and tell you different things.

I’ll actually be talking about that a bit more at two upcoming meetings. The first is Internet Librarian International in London, 15-16 October; the second is the XXIX Annual Charleston Conference, 4-7 November in Charleston, SC. That’s actually going to be a hell of a trip, as it’s my youngest’s birthday on the 5th. She’ll be ten: I’m going to miss it, but there should be fireworks on the Friday or Saturday night that we can see.

Interestingly, a chap from Thomson collared me on Saturday after our session. As someone remarked to me later, this was quite a scoop: apparently Thomson don’t usually bother with small fish. I wonder if we spooked them?

Talking of which, there’s a fascinating paper about the ‘Matthew Effect’ in arXiv (‘the arXiv’?), The impact factor’s Matthew effect: a natural experiment in bibliometrics. Turns out (surprise, surprise) that papers published in high impact factor journals garner twice as many citations as the identical paper in a low IF journal. I don’t know if that’s because more people read high IF journals or because there truly is the impression that papers in them must be better, or what. Either way, I’d just like to say…

broken glass

Posted in Indicators, Metrics | Comments Off on Big Bad John

More than a number (in my little red book)

Posted by rpg on 31 July, 2009

Shirley Wu kicked off an interesting conversation on Friendfeed yesterday, reporting on a conversation that questioned the ‘quality’ of our old friend PLoS One. Now there’s a debate that’s going to go round and round, possibly generating rather more heat and noise than useful work.

The conversation, thanks to Peter Binfield, then turned onto article level metrics as a way of assessing ‘quality’, and that’s when I got interested.

Euan Adie wants to know,

Do [Nascent readers] think that article downloads stats should be put on academic CVs?

and provides a very cogent argument as to why paying too much attention to single article metrics is not necessarily a good idea.

I’m not saying that download stats aren’t useful in aggregate or that authors don’t have a right to know how many hits their papers received but they’re potentially misleading (& open to misinterpretation) and surely that’s not the type of data we want to be bandying about as an impact factor replacement?

Now, I’m paying attention to this not just because Faculty of 1000 is a stakeholder in this sort of discussion, but also because of Science Online London. This is the successor to last year’s blogging conference, and this year I’m on the organizing committee. Together with Victor Henning of Mendeley and Ginny Barbour of PLoS, I’m running a session on… article level metrics.

Which means it’s going to get interesting, as there are strong feelings on both sides—and maybe I should trawl through the attendee list and make sure they are all briefed.

Bring it on.

I’m going to say that I agree with Euan that download stats are a bit suss, actually. Just like citations, they don’t tell you how the article is being used; they don’t tell you if the user (or the citer) thinks this paper is good or bad. As Euan puts it,

A download counter can’t tell if the person visiting your paper is a grad student looking for a journal club paper, a researcher interested in your field or… somebody who typed in an obscure porn related search that turned up unconnected words in the abstract.

(That reminds me of a story my boss in Cambridge told me. He was an editor on a particularly well-respected cell biology journal, and one day they found that one article was getting massively hit, and it was all search engine traffic. On examining the referrer stats it turned out that the paper was all about our friend C. elegans. The cause of all the traffic was not a sudden upsurge in global nematode research, but rather the juxtaposition of the words ‘sex’ and ‘anal’ in the abstract. Quite.)

The problem with the Impact Factor, and also with article level metrics, is that there is no indicator of quality. About the only automated system that has any hope of doing this is the kind of network analysis that Johan Bollen is doing. Briefly, these network maps show you how people are using papers by following a user’s click history.

funky network connections
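
If you’re wondering what ‘following a user’s click history’ might look like in practice, here’s a toy sketch: count how often readers move from one article to the next within a session and treat those counts as weighted edges in a network. The sessions and article IDs below are invented, and this is only a cartoon of the approach, not Bollen’s actual method.

```python
from collections import Counter

# Toy clickstream network (purely illustrative; not Bollen's actual pipeline).
# Each session is the ordered list of articles one reader viewed; consecutive
# views become a weighted edge between the two articles.

def clickstream_edges(sessions):
    edges = Counter()
    for session in sessions:
        for src, dst in zip(session, session[1:]):
            if src != dst:
                edges[(src, dst)] += 1
    return edges

sessions = [                       # invented browsing sessions
    ["paper-A", "paper-B", "paper-C"],
    ["paper-A", "paper-C"],
    ["paper-B", "paper-C", "paper-A"],
]
for (src, dst), weight in clickstream_edges(sessions).most_common():
    print(f"{src} -> {dst}  (weight {weight})")
```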

In any event, I suspect that people are going to have to learn to be more sophisticated in their use of metrics/indicators, and the tyranny of the impact factor will be shaken. It starts here.

Posted in Indicators | 2 Comments »

Somewhere over the rainbow

Posted by rpg on 27 July, 2009

Somewhere in the depths of PLoS One an article lurks…

Liz Allen and her friends at the Wellcome performed an interesting study on papers that came out of labs at least partially funded by Wellcome grants. What they did was figure out where each of the nearly 700 papers ended up being published, and then look at bibliometric indicators (cough impact factors cough).

Nothing new there, really. The clever bit was to persuade a ‘college’ of experts (I hate that word, sorry) to review the papers, and then compare this with the bibliometric indicators… and with the Faculty of 1000. Funnily enough, highly-rated papers did not necessarily get cited highly (so the Impact Factor doesn’t help here), but the F1000 ratings agreed pretty well with the college of experts, and were also able to predict citation rates to some extent.
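
To give a flavour of the kind of comparison involved (with entirely invented numbers, not the Wellcome data), a rank correlation between expert scores, F1000 ratings and citation counts might be computed like this:

```python
# Entirely invented numbers, just to show the shape of the comparison
# (requires scipy: pip install scipy).
from scipy.stats import spearmanr

expert_scores = [4, 2, 5, 3, 1, 4, 2]        # hypothetical expert ratings
f1000_ratings = [6, 3, 8, 4, 3, 6, 3]        # hypothetical F1000 scores
citations     = [40, 5, 60, 20, 2, 35, 8]    # hypothetical citation counts

rho_f1000, _ = spearmanr(expert_scores, f1000_ratings)
rho_cites, _ = spearmanr(expert_scores, citations)
print(f"experts vs F1000 ratings: rho = {rho_f1000:.2f}")
print(f"experts vs citations:     rho = {rho_cites:.2f}")
```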

We were pretty stoked about this paper, I can tell you: and we hope to have a chat with Liz next month and show her some interesting stuff we’ve been up to. It plugs directly into this:

Tools that link expert peer reviews of research paper quality and importance to more quantitative indicators, such as citation analysis would be valuable additions to the field of research assessment and evaluation.

I’ll write more about that nearer the time. Or even, given the bonkers-mad workflow I’ve got going on, after the time. Until then, you can check out the more in-depth analysis and a fascinating discussion over at my personal blog.

Posted in Indicators | 6 Comments »