Faculty of 1000

Post-publication peer review

Posts Tagged ‘citations’

You can’t always get what you want

Posted by rpg on 18 October, 2009

I was at Internet Librarian International on Thursday and Friday of last week. Not the sort of conference I’m used to attending, but as we were sponsoring it we had a speaking slot, and I seemed the obvious choice!

Rather than talk about f1000 or give a ‘corporate’ presentation, I talked about the Journal Impact Factor, about alternative metrics, about the difficulties in assessing the literature, discovering quality, and locating what’s important. (This is actually what we’re in the business of doing, but aside from the branding I only mentioned what we do in passing. This was appreciated by the audience and organizers, as it turns out, and we stood out from the crowd because of it!)
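For anyone who hasn’t met it, the two-year Impact Factor is nothing more exotic than a ratio. A minimal sketch, with made-up numbers rather than anything from Thomson’s actual data:

```python
# Two-year Journal Impact Factor, in outline: citations received this year
# to items the journal published in the previous two years, divided by the
# number of 'citable items' it published in those two years.
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

# Illustrative numbers only, not a real journal:
print(round(impact_factor(12000, 400), 2))  # -> 30.0
```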

I may have mentioned ‘Web 3.0’ a couple of times. As I see it, Web 1 (which it was never known as) was typified by information flowing from a more or less authoritative source to many consumers. Web 2.0 was where you got people producing and sharing information between themselves, on a more level playing field, as I told the Research Information Network back in May:


rpg at RIN, May ’09

And yeah, the Web is not the internet, and ‘Web 2.0’ was happening off-Web, as it were, for years before anyone thought of the term: through bulletin boards, Usenet and so on. The wonders of marketing.

Web 3, I think, is when we figure out how to use all this funky technology that enables peer-to-peer, all the tagging and RSS and search and everything, and actually start finding stuff. To be more precise: all the power of Web 2.0 gets brought to us, where we are, in useful, digestible chunks. A guy can dream, can’t he?

That’s what we’re trying to achieve at f1000, in our small corner: to find the important literature (currently biology and medicine, although that might expand) and to bring what you’re interested in to you, in the way you want to see it. We’re not there yet, and the new site won’t hit it straight off, but we have lots of ideas and times are beginning to look exciting.

Anyway, the talk seemed to be well received (I got spontaneous applause!) and Tom was on hand to record it. I spent yesterday afternoon trimming the video and inserting (some of) my slides as cutaways, and then spent about four hours uploading them, because the internets don’t seem to reach south of the river…

Here’s me from Thursday (in two parts):

Enjoy!


Posted in Conferences, Literature, Metrics | 3 Comments »

Where the streets have no name

Posted by rpg on 4 August, 2009

Alejandro brings my attention to ScienceWatch’s list of most-cited institutions in science.

This is the list of the ‘top’ twenty institutions out of just over four thousand. For some value of ‘top’, he says, snarkily. Now, we know there are serious problems with citation metrics, but essentially they’re all we’ve got to go on, so it’s not a bad list.

The Most-Cited Institutions Overall, 1999-2009 (Thomson)

Rank  Institution                 Papers   Citations   Citations per paper
   1  HARVARD UNIV                95,291   2,597,786   27.26
   2  MAX PLANCK SOCIETY          69,373   1,366,087   19.69
   3  JOHNS HOPKINS UNIV          54,022   1,222,166   22.62
   4  UNIV WASHINGTON             54,198   1,147,283   21.17
   5  STANFORD UNIV               48,846   1,138,795   23.31
   6  UNIV CALIF LOS ANGELES      55,237   1,077,069   19.50
   7  UNIV MICHIGAN               54,612     948,621   17.37
   8  UNIV CALIF BERKELEY         46,984     945,817   20.13
   9  UNIV CALIF SAN FRANCISCO    36,106     939,302   26.02
  10  UNIV PENN                   46,235     931,399   20.14
  11  UNIV TOKYO                  68,840     913,896   13.28
  12  UNIV CALIF SAN DIEGO        40,789     899,832   22.06
  13  UNIV TORONTO                55,163     861,243   15.61
  14  UCL                         46,882     860,117   18.35
  15  COLUMBIA UNIV               43,302     858,073   19.82
  16  YALE UNIV                   36,857     833,467   22.61
  17  MIT                         35,247     832,439   23.62
  18  UNIV CAMBRIDGE              43,017     811,673   18.87
  19  UNIV OXFORD                 40,494     766,577   18.93
  20  UNIV WISCONSIN              50,016     760,091   15.20
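The last column is just the ratio of the two before it, and ranking on that ratio rather than on raw citations shuffles the ordering considerably. A quick back-of-the-envelope sketch, using a handful of rows from the table above:

```python
# (institution, papers, citations) taken from a few rows of the table above.
rows = [
    ("HARVARD UNIV",             95291, 2597786),
    ("MAX PLANCK SOCIETY",       69373, 1366087),
    ("UNIV CALIF SAN FRANCISCO", 36106,  939302),
    ("UNIV TOKYO",               68840,  913896),
]

# Citations per paper is simply citations / papers; sorting on that ratio
# rather than on total citations gives a rather different 'top' list.
for name, papers, citations in sorted(rows, key=lambda r: r[2] / r[1], reverse=True):
    print(f"{name:26s} {citations / papers:6.2f} citations per paper")
```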

Or is it?

Because as you know, we give the articles evaluated at F1000 a score. And it has not escaped our notice that once you start doing such a thing, you can start asking interesting questions. Admittedly we only look at biology and medicine (so far…), but according to this Excel spreadsheet I’ve just opened we have over five thousand unique institutions in our database. Hmm… I wonder if we might be doing anything with that?

Rankings
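The sort of question I mean is mostly just aggregation, done over evaluations rather than citations. A toy sketch of the idea, with invented scores and institutions rather than anything from our actual database or schema:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (institution, article score) pairs; the names and scores are
# invented for illustration, not taken from the F1000 database.
evaluations = [
    ("Univ A", 9.0), ("Univ A", 6.0),
    ("Univ B", 6.0), ("Univ B", 3.0),
    ("Univ C", 6.0),
]

scores_by_institution = defaultdict(list)
for institution, score in evaluations:
    scores_by_institution[institution].append(score)

# Rank institutions by the mean score of their evaluated articles,
# rather than by how many citations they have racked up.
ranked = sorted(scores_by_institution.items(), key=lambda kv: mean(kv[1]), reverse=True)
for institution, scores in ranked:
    print(f"{institution}: mean score {mean(scores):.2f} over {len(scores)} evaluated articles")
```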

And talking of authors, I’d like to take this opportunity to give a shout-out to my friend Åsa, whose recent work on inhibiting protein synthesis in secondary pneumonia was evaluated on F1000 Medicine (and who might one day get a nonymous blog, cough).

Posted in Competition, Indicators, Journals | 4 Comments »

More than a number (in my little red book)

Posted by rpg on 31 July, 2009

Shirley Wu kicked off an interesting conversation on FriendFeed yesterday, reporting on a discussion that questioned the ‘quality’ of our old friend PLoS One. Now there’s a debate that’s going to go round and round, possibly generating rather more heat and noise than useful work.

The conversation, thanks to Peter Binfield, then turned to article-level metrics as a way of assessing ‘quality’, and that’s when I got interested.

Euan Adie wants to know,

Do [Nascent readers] think that article downloads stats should be put on academic CVs?

and provides a very cogent argument as to why paying too much attention to single article metrics is not necessarily a good idea.

I’m not saying that download stats aren’t useful in aggregate or that authors don’t have a right to know how many hits their papers received but they’re potentially misleading (& open to misinterpretation) and surely that’s not the type of data we want to be bandying about as an impact factor replacement?

Now, I’m paying attention to this not just because Faculty of 1000 is a stakeholder in this sort of discussion, but also because of Science Online London. This is the successor to last year’s blogging conference, and this year I’m on the organizing committee. Together with Victor Henning of Mendeley and Ginny Barbour of PLoS, I’m running a session on… article-level metrics.

Which means it’s going to get interesting, as there are strong feelings on both sides—and maybe I should trawl through the attendee list and make sure they are all briefed.

Bring it on.

I’m going to say that I agree with Euan that download stats are a bit suss, actually. Just like citations, they don’t tell you how the article is being used; they don’t tell you if the user (or the citer) thinks this paper is good or bad. As Euan puts it,

A download counter can’t tell if the person visiting your paper is a grad student looking for a journal club paper, a researcher interested in your field or… somebody who typed in an obscure porn related search that turned up unconnected words in the abstract.

(That reminds me of a story my boss in Cambridge told me. He was an editor on a particularly well-respected cell biology journal, and one day they found that one article was getting massively hit, and it was all search engine traffic. On examining the referrer stats it turned out that the paper was all about our friend C. elegans. The cause of all the traffic was not a sudden upsurge in global nematode research, but rather the juxtaposition of the words ‘sex’ and ‘anal’ in the abstract. Quite.)

The problem with the Impact Factor, and also with article-level metrics, is that there is no indicator of quality. About the only automated system that has any hope of providing one is the kind of network analysis that Johan Bollen is doing. Briefly, these network maps show you how people are using papers by following users’ click histories.

funky network connections
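I won’t reproduce Bollen’s methodology here, but the flavour of it is easy to sketch: treat articles viewed one after another in the same session as linked, count those links, and you have a usage network to map. The sessions and paper IDs below are invented:

```python
from collections import Counter

# Each session is the ordered list of articles one user clicked through;
# these are invented IDs, not real clickstream data.
sessions = [
    ["paperA", "paperB", "paperC"],
    ["paperB", "paperC"],
    ["paperA", "paperC", "paperD"],
]

# Count how often two papers are viewed back to back. Heavily weighted edges
# suggest papers that are used together, whether or not they cite each other.
edges = Counter()
for session in sessions:
    for a, b in zip(session, session[1:]):
        edges[tuple(sorted((a, b)))] += 1

for (a, b), weight in edges.most_common():
    print(f"{a} -- {b}: viewed together {weight} time(s)")
```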

In any event, I suspect that people are going to have to learn to be more sophisticated in their use of metrics/indicators, and the tyranny of the impact factor will be shaken. It starts here.

Posted in Indicators | 2 Comments »