Faculty of 1000

Post-publication peer review

Archive for the ‘Metrics’ Category

On the run-29Jan10

Posted by rpg on 29 January, 2010

Vitek quotes a Polish proverb,

If you’re going to fall off a horse, make it a big one.

In that vein, take a look at this graph (don’t look too closely; it’s deliberately a tiny bit obscure):

[Graph of ffJ and Journal Impact Factor]

What I’ve been doing this week is mostly hacking away in Perl at some of the information in our database. As you may know, each evaluation in f1000 has a score associated with it, based on the rating given to an article by Faculty Members. We’ve redone the scoring and I’ve worked out a way of combining those scores, or ‘article factors’ as we’ve taken to calling them, for each journal that is represented at f1000. This gives us a ‘journal factor’, ffJ. It’s our answer to the journal Impact Factor, in fact; and the graph above shows the top 250 journals according to our ratings (in blue), with the corresponding Impact Factor in red.
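For the terminally curious, here’s the general shape of that calculation, in (what else?) Perl. To be clear, this is a toy sketch and not our actual code: the data are invented, and I’ve pretended that a journal factor is a plain sum of its article factors, which is rather simpler than the real recipe.

#!/usr/bin/perl
use strict;
use warnings;

# Toy sketch: roll per-article scores ('article factors') up into a
# per-journal score (the 'journal factor', ffJ). The data and the
# simple summation are inventions for illustration only.

my @evaluations = (                  # [ journal, article factor ]
    [ 'Nature',              9.6 ],
    [ 'Nature',              4.8 ],
    [ 'Genome Research',     6.4 ],
    [ 'Nature Neuroscience', 8.0 ],
);

my %ffJ;
$ffJ{ $_->[0] } += $_->[1] for @evaluations;

# Print journals ranked by descending journal factor.
for my $journal (sort { $ffJ{$b} <=> $ffJ{$a} } keys %ffJ) {
    printf "%-25s %6.2f\n", $journal, $ffJ{$journal};
}

In real life the scores come straight out of the database rather than a hard-coded list, and you’d probably want some kind of normalization so that a journal can’t climb the table on sheer volume of evaluated articles; for how we actually do it, you’ll have to wait for the paper.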

You’ll notice right away that there isn’t a one-to-one correlation, and of course we’d expect that (the Impact Factor has serious problems, which I’ve talked about previously). I’m currently analysing our data a bit more deeply, and I’ll be writing a paper with that analysis, as well as talking about it at the March ACS conference in San Francisco.
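If you want to play along at home, one simple way to put a number on ‘how well do the two rankings agree?’ is Spearman’s rank correlation. Here’s a sketch with made-up scores; it isn’t our analysis, and it ignores tied ranks for simplicity.

#!/usr/bin/perl
use strict;
use warnings;

# Spearman's rank correlation between two journal rankings.
# The scores below are made up for illustration; ties are ignored.

my @ffJ = ( 12.1, 9.3,  8.7, 6.2, 5.9, 4.4 );    # our journal factors
my @jif = ( 31.4, 9.8, 14.1, 4.3, 6.0, 2.9 );    # the Impact Factors

# Convert a list of scores to 1-based ranks (1 = highest score).
sub ranks {
    my @values = @_;
    my @order  = sort { $values[$b] <=> $values[$a] } 0 .. $#values;
    my @rank;
    $rank[ $order[$_] ] = $_ + 1 for 0 .. $#order;
    return @rank;
}

my @r1 = ranks(@ffJ);
my @r2 = ranks(@jif);

my $n      = @ffJ;
my $sum_d2 = 0;
$sum_d2 += ( $r1[$_] - $r2[$_] )**2 for 0 .. $n - 1;

# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
my $rho = 1 - 6 * $sum_d2 / ( $n * ( $n**2 - 1 ) );
printf "Spearman's rho = %.3f\n", $rho;    # 1 = identical rankings

A rho of 1 would mean the two rankings agree perfectly; the toy numbers above give about 0.89, i.e. similar, but by no means one-to-one.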

Last Friday evening I went down the Driver with a couple of the dev team and a bunch of people from places as varied as BioMed Central, Nature Publishing Group, Mendeley and Mekentosj. We talked about what we’re variously calling cross- or federated commenting. On the whole we’ve decided it’s a good idea, and we simply have to figure out how to do it. What this implies, of course, is that we’re actually going to allow user comments at f1000—and indeed that’s the plan. I’m looking forward to rolling out this functionality to you, not least because when people want to raise questions about articles evaluated on f1000, they’ll be able to.

While we’re on the mythical new site, we asked another web designer what they could come up with for us. And for the first time, all of us who saw the design liked it. So hopefully we’ll be able to get that implemented real soon now and I’ll be able to start showing you what you’re going to get, you lucky people. (Rumours that someone said “It looks like a proper website!” are completely unfounded.)

Interesting reviews

A couple of things you may have missed.

First, the (possible) mechanism behind photophobia in migraines. Turns out that people who are functionally blind, but still sensitive to circadian input and pupillary light reflexes, are susceptible to photophobia. Work in rats published in Nature Neuroscience implicates a (previously uncharacterized) multisynaptic or heavily networked retinal pathway.

In Biology, the problem of de novo assembly of DNA sequence reads into sensible contigs from massively parallel sequencing technologies has been addressed. This, if it works, would bring exciting concepts such as personal genomics that little bit closer. The paper is in Genome Research (subscription required) and you can read the evaluation for free.

And finally

Faculty of 1000 is big in Italy—or at least on Facebook. My post on the recycling of integrins drew an excited response from one Grazia Tamma, who was then mocked mercilessly by her brother!

Hang in there, Grazia; science is great and the cytoskeleton rocks.

Posted in Friday afternoon, Indicators, Metrics | 4 Comments »

You can’t always get what you want

Posted by rpg on 18 October, 2009

I was at the Internet Librarian International on Thursday and Friday of last week. Not the sort of conference I’m used to attending, but as we were sponsoring it we had a speaking slot, and I seemed the obvious choice!

Rather than talk about f1000 or give a ‘corporate’ presentation, I talked about the Journal Impact Factor, about alternative metrics, and about the difficulties in assessing the literature, discovering quality, and locating what’s important. (This is actually what we’re in the business of doing, but aside from the branding I only mentioned what we do in passing. This was appreciated by the audience and organizers, as it turns out, and we stood out from the crowd because of it!)

I may have mentioned ‘Web 3.0’ a couple of times. As I see it, Web 1 (which it was never known as) was typified by information coming from a more or less authoritative source to many consumers. Web 2.0 was where you got people producing and sharing the information on a more level playing field; between themselves, as I told the Research Information Network back in May:


[Video: rpg at RIN, May ’09]

And yeah, the Web is not the internet, and ‘Web 2.0’ was happening off-Web, as it were, for years before anyone thought of the term: through bulletin boards, Usenet, etc. The wonders of marketing.

Web 3, I think, is when we figure out how to use all this funky technology that enables peer-to-peer, all the tagging and RSS and search and everything, and actually start finding stuff. To be more precise: all the power of Web 2.0 gets brought to us, where we are, in useful, digestible chunks. A guy can dream, can’t he?

That’s what we’re trying to achieve at f1000, in our small corner: to find the important literature (currently biology and medicine, although that might expand) and to bring what you’re interested in to you, in the way you want to see it. We’re not there yet, and the new site won’t hit it straight off, but we have lots of ideas and times are beginning to look exciting.

Anyway, the talk seemed to be well-received (I got spontaneous applause!) and Tom was on hand to record it. I spent yesterday afternoon trimming the video and inserting (some of) my slides as cutaways. And then about four hours uploading them because the internets don’t seem to reach south of the river…

Here’s me from Thursday (in two parts):

Enjoy!

Posted in Conferences, Literature, Metrics | 3 Comments »

How many more times?

Posted by rpg on 14 September, 2009

…what dreams may come
When we have shuffled off this mortal coil,
Must give us pause

Thomson, in a commentary in the Journal of the American Medical Association, reckon there ain’t nowt wrong with the Journal Impact Factor:

The impact factor has had success and utility as a journal metric due to its concentration on a simple calculation based on data that are fully visible in the Web of Science. These examples based on citable (counted) items indexed in 2000 and 2005 suggest that the current approach for identification of citable items in the impact factor denominator is accurate and consistent.

Well, they would say that.

And they might well be right, and you and I and Thomson Reuters might argue the point endlessly. But there are a number of problems with any citation-based metric, and a pretty fundamental one was highlighted (coincidentally?) in the same issue of JAMA.

Looking at three general medical journals, Abhaya Kulkarni at the Hospital for Sick Children in Toronto (shout out to my friend Ricardipus) and co. found that three different ways of counting citations come up with three very different numbers.

Cutting to the chase, Web of Science counted about 20% fewer citations than Scopus or Google Scholar. The reasons for this are not totally clear, but are probably due to the latter two being of wider scope (no pun intended). Scopus, for example, looks at ~15,000 journals compared with Web of Science’s ~10,000. Why? The authors say that Web of Science ‘emphasized the quality of its content coverage’, which in English means it doesn’t look at non-English publications, or those from outside the US and (possibly) Europe, or other citation sources such as books and conference proceedings. And that’s before we even start thinking about minimal citable units; or non-citable outputs; or whether blogs should count as one-fiftieth of a peer-reviewed paper.

Presumably some of the discrepancy is due to removal of self-cites, which strikes me as being just as unfair: my own output shouldn’t count for less simply because I’m building on it. It’s also difficult to know how to deal with the mobility of scientists: do you only look at the last author? Or the first? I don’t know how you make that one work at all, to be honest.

That aside, I think curation of citation metrics is necessary: Kulkarni et al. report that fully two percent of citations in Google Scholar didn’t actually cite what they claimed to. That is a worrying statistic when you realize that people’s jobs are on the line. You have to get this right, guys.

But it’d be nice if we could all agree on the numbers to start with.


Posted in Indicators, Journals, Literature, Metrics | 3 Comments »

Big Bad John

Posted by rpg on 28 August, 2009

I’ve been remiss.

I should have talked a bit about the events of last Saturday: truth is I was struck by a stomach bug on Tuesday night and have been a little bit out of things. If you’re interested, there is a video of the ‘Fringe Frivolous‘ event of the Friday evening and lots and lots of photos on Flickr.

Martin Fenner has summarized all the blog links he could find, in a kind of citation-stealing Annual Review way. Yeah, we talked about indicators and metrics in the session with Victor and Ginny Barbour (PLoS), saying among other things that usage data and network metrics and our own F1000 factor aren’t necessarily replacements for the journal impact factor: rather, they’re all complementary, and tell you different things.

I’ll actually be talking about that a bit more at two upcoming meetings. The first is Internet Librarian International in London, 15-16 October; the second is the XXIX Annual Charleston Conference, 4-7 November in Charleston SC. That’s actually going to be a hell of a trip, as it’s my youngest’s birthday on the 5th. She’ll be ten: I’m going to miss it, but there should be fireworks on the Friday or Saturday night that we can see.

Interestingly, a chap from Thomson collared me on Saturday after our session. As someone remarked to me later, this was quite a scoop: apparently Thomson don’t usually bother with small fish. I wonder if we spooked them?

Talking of which, there’s a fascinating paper about the ‘Matthew Effect’ in arXiv (‘the arXiv’?), The impact factor’s Matthew effect: a natural experiment in bibliometrics. Turns out (surprise, surprise) that papers published in high impact factor journals garner twice as many citations as the identical paper in a low IF journal. I don’t know if that’s because more people read high IF journals or because there truly is the impression that papers in them must be better, or what. Either way, I’d just like to say…

[Image: broken glass]

Posted in Indicators, Metrics | Comments Off on Big Bad John