Faculty of 1000

Post-publication peer review


Private investigations

Posted by rpg on 20 January, 2010

One of the really great things about science is its potential for self-correction. If you have an hypothesis, a result (strange or otherwise), or a set of data, it can be tested by anyone. This is encouraged, in fact: when you publish you’re not just saying ‘look how clever I am’ but also ‘here’s something new! Can you do it too?’. This philosophy is diametrically opposed to that behind Creationism, say; or homeopathy. In those belief systems whatever the High Priest says is of necessity true, and experiment must bend around it until the results fit.

This means that, in science, a finding or publication that people get very excited about at the time can be shown to be wrong—whether through deliberate fraud or experimental sloppiness (although the boundary between the two can be fuzzy), or simply because we, as scientists, are wiser now than we were then. This happens, and it’s normal and part of the process. We should welcome it; indeed, my friend Henry Gee has claimed that everything Nature publishes is wrong, or at least provisional.

So what we have to do is be completely open about this, no matter how embarrassing it is for the journal that published the work in the first place.

You know where I’m going with this, don’t you?

It was Derek Lowe who first alerted me to a paper published in Science last year, with the ome-heavy title Reactome Array: Forging a Link Between Metabolome and Genome. This was flagged as a ‘Must Read’ (free link) back in November because, according to our reviewer Ben Davis,

If this worked it could be marvellous, superb.

However, as Ben said in his evaluation,

this work should be read with some caveats. Try as we might, my group, as well as many colleagues, and I have tried to determine the chemistry described […] In my opinion, this is a work that deserves a “Must Read” rating and I strongly encourage the reader to read the source material and reach their own conclusions.

And as Derek points out, Science published an ‘Editorial expression of concern’, noting a request for evaluation of the original data and records by officials at the authors’ institutions, as well as mentioning it on their blog. Heavy. As soon as I saw this, I let our Editorial team know we might have a problem, and we published a note to warn our readers that the work described in the paper was suspect.

Today we published a dissent to the evaluation from Michael Gelb, who says

There are many reactions shown that seem unusual and controversial […] My colleagues and I have tried to decipher the chemistry shown in Figure 1 of the main text and in the supplemental material. Many of the indicated reactions seem highly unlikely to occur, and the NMR data showing that some of the structures that were made are confusing and controversial.

We’ve also published a follow-up from Ben:

I agree wholeheartedly with the sentiments expressed in the Dissenting Opinion. The chemistry presented in this paper and in the online SI has varied in its description and content worryingly over the last 2 months.

and, rather tellingly,

as yet no chemical samples or key reagents have yet been made generally available.

(One of the usual conditions of publishing in reputable journals is that you make reagents available to other scientists, so that they can repeat your work. Failing to honour this commitment is not playing by the rules.)

It’ll be interesting to see when, not if, the original paper is retracted; and by whom.

And this, people, is the self-correcting wonder of science. Remember this, next time someone starts rabbiting about scientific conspiracies, or sends you their new theory of general relativity, or anything else that sounds crazy. It probably is.




On being systematic

Posted by rpg on 16 November, 2009

Over on another blog, Darren Saunders asks what an Associate Faculty Member (AFM) is.

There was some sales training on this subject last week and I sat in, so I should know. I’ve also been re-writing the FAQs for the new f1000 website and have just realized that there isn’t an FAQ relating to AFMs there, either. (Meta: how many times does a question have to be asked before it becomes ‘frequent’?) So, let’s have a stab at explaining it.

As you may or may not know, what Faculty of 1000 does is publish short reviews of the scientific (currently biology and medicine) literature. How this works is through our eponymous Faculty of over 5000 top scientists and medics, all over the world. These people are at principal investigator level or higher. When they read a paper that they consider interesting, important, or otherwise worthy of wider recognition, they write a review (or ‘evaluation’), assign a score (or rating) to the original article, and submit it to our editorial team (usually via a web interface). The piece is then edited in the usual way, coded to appropriate sections (i.e. sub-disciplines), and published on the website at f1000biology.com or f1000medicine.com, depending on the specialty of the contributing Faculty Member.

This system has been pretty successful for a few years now, and we know that people really like the service (because they tell us!). It lets scientists and medics see very quickly what’s happening in their own field, and rapidly get at what’s considered important in other communities (whether simply out of interest or because they’re moving into unfamiliar territory). Identifying important papers quickly, and gauging the opinion of a field easily, are not trivial tasks: f1000 is intended to help everyone, from students through to vice-chancellors, achieve this.

Critically, the choice of articles to review is left entirely to the Faculty, and articles may come from any journal. Any journal: even the Harvard Business Review. Naturally a high proportion of articles come from the usual suspects—Nature, Cell, NEJM, etc.—although about 80% come from ‘second tier’ or less popular journals (he says, desperately avoiding the ‘I’ word). You might expect this, seeing as certain journals review editorially before a paper goes anywhere near peer review, and actually are quite successful at it.

In a sense, we don’t care about the provenance of the articles reviewed at f1000. If they’re good, we want to know (and ‘good’ means 1-2% of the current eligible literature). However, there are a lot of journals publishing good stuff, and how do we know we’re scanning the right ones if we’re just leaving it to serendipitous reading by the Faculty?

Enter the Associate Faculty. Currently about a thousand Faculty Members have one or more Associates: less senior members of their lab or practice (which can mean anything from a post-doc to a PI in their own right). Once a month we send these Associates a table of contents from two journals: one general, one ‘specific’; both self-nominated. The Associate checks the table against their own reading, and selects articles that they have already read that they will review. They also let us know if there are any articles that they think should be reviewed but that they will not do themselves: these then go into a ‘pot’ which we send (a couple of weeks later) to Associates who haven’t committed to producing a review that month.

When the Associate commits to reviewing an article, it’s pretty much between them and their Faculty Member as to how it’s handled. Sometimes the Associate will do the bulk of the writing, other times the Faculty Member will. In either case, the full Faculty Member has to approve the evaluation and has final say—they are the corresponding author.

We cover, at the last count, about 660 journals in this fashion. We’ve asked the Faculty to tell us what journals they think should be scanned in this scheme, and eventually we’ll be covering over a thousand different journals. This does not mean that we won’t be evaluating articles outwith this ‘core’ of journals: Faculty Members have complete freedom to evaluate papers regardless of where they are published. Our Associate Faculty help them identify the good stuff, and we help them to choose by providing the tables of contents with a selection system (somewhat arcane, but we are working on it). The buzzphrase is ‘systematic and comprehensive’: we’re certainly systematic and are working on the comprehensive.

Hope that clears some things up for Darren.



You can’t always get what you want

Posted by rpg on 18 October, 2009

I was at the Internet Librarian International conference on Thursday and Friday of last week. Not the sort of conference I’m used to attending, but as we were sponsoring it we had a speaking slot, and I seemed the obvious choice!

Rather than talk about f1000 or give a ‘corporate’ presentation I talked about the Journal Impact Factor, about alternative metrics, about the difficulties in assessing the literature, discovering quality, and locating what’s important. (This is actually what we’re in the business of doing, but aside from the branding I only mentioned what we do in passing. This was appreciated by the audience and organizers, as it turns out: and we stood out from the crowd because of it!)

I may have mentioned ‘Web 3.0’ a couple of times. As I see it, Web 1 (which it was never known as) was typified by information coming from a more or less authoritative source to many consumers. Web 2.0 was where you got people producing and sharing the information between themselves, on a more level playing field, as I told the Research Information Network back in May:


[Video: rpg at RIN, May ’09]

And yeah, the Web is not the internet, and ‘Web 2.0’ was happening off-Web, as it were, for years before anyone thought of the term: through bulletin boards, Usenet, etc. The wonders of marketing.

Web 3, I think, is when we figure out how to use all this funky technology that enables peer-to-peer, all the tagging and RSS and search and everything, and actually start finding stuff. To be more precise: all the power of web 2.0 gets brought to us where we are in useful, digestible chunks. A guy can dream, can’t he?

That’s what we’re trying to achieve at f1000, in our small corner. To find the important literature (currently biology and medicine, although that might expand) and to bring what you’re interested in to you, in the way you want to see it. We’re not there yet, and the new site won’t hit it straight off, but we have lots of ideas and times are beginning to look exciting.

Anyway, the talk seemed to be well-received (I got spontaneous applause!) and Tom was on hand to record it. I spent yesterday afternoon trimming the video and inserting (some of) my slides as cutaways. And then about four hours uploading them because the internets don’t seem to reach south of the river…

Here’s me from Thursday (in two parts):

Enjoy!


Modern way

Posted by rpg on 7 August, 2009

You might have noticed that I’ve been tweeting random recent evaluations. I do this a couple of times a day (well, that’s the plan, at least), simply highlighting stuff that I find interesting, without having the time to write a proper post about the original articles. (This, by the way, is what I find to be one of the greatest things about Twitter. And with CoTweet I can go through my archive if ever I want to follow up on something that caught my attention.)

I try to mix stuff from the medicine and the biology sites, and squeeze in a little comment or teaser (plus a link to the paper itself or abstract on PubMed). I aim for evaluations of articles in ‘obscure’ journals, and recently-published work.

The Lovén reflex is “a reaction in which a local dilation of vessels accompanies a general vasoconstriction, e.g. when the central end of an afferent nerve to an organ is suitably stimulated, its efferent vasomotor fibers remaining intact, a general rise in blood pressure occurs together with a dilation of the vessels of the organ”.

Just now, I saw something that I simply cannot do justice to in a tweet. It’s not a new paper—in fact it’s possibly the oldest paper on the site—but it is in a very obscure (to me) journal.

The paper, whose title translates into English as

On the vasodilation of arteries as a consequence of nerve stimulation

was written by one Christian Lovén, who died in 1904. It describes vasodilation and vasoconstriction in rabbits as a consequence of nervous stimulation. Faculty Member Wilfrid Jänig, at the Institute of Physiology, Kiel, selected the paper—143 years after it was published.

And that’s interesting because I’ve been re-writing the About pages for the website, and one of the things I was looking at this morning was how long it takes us to publish evaluations, compared with the original article publication dates (according to PubMed). I’m going to talk about that a little next week, but I’d just like to say now that a datum at 52 thousand days really skews my stats.
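To see just how much, here’s a minimal sketch in Python—with made-up lag figures, not our actual data—of what a single 143-year-old outlier does to an average:

```python
# A toy illustration (invented numbers, not f1000's real data) of how
# one ancient paper distorts summary statistics. Lags are days between
# an article's publication and its evaluation appearing on f1000.
from statistics import mean, median

lags = [14, 21, 30, 45, 60, 90, 120, 52_000]  # hypothetical lags, in days

print(f"mean lag:   {mean(lags):7.1f} days")    # dragged way up by the outlier
print(f"median lag: {median(lags):7.1f} days")  # barely notices it
```

The median shrugs the Lovén paper off entirely, while the mean gets hauled up by a couple of orders of magnitude.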



More than a number (in my little red book)

Posted by rpg on 31 July, 2009

Shirley Wu kicked off an interesting conversation on Friendfeed yesterday, reporting on a discussion that questioned the ‘quality’ of our old friend PLoS One. Now there’s a debate that’s going to go round and round, possibly generating rather more heat and noise than useful work.

The conversation, thanks to Peter Binfield, then turned to article-level metrics as a way of assessing ‘quality’, and that’s when I got interested.

Euan Adie wants to know,

Do [Nascent readers] think that article downloads stats should be put on academic CVs?

and provides a very cogent argument as to why paying too much attention to single article metrics is not necessarily a good idea.

I’m not saying that download stats aren’t useful in aggregate or that authors don’t have a right to know how many hits their papers received but they’re potentially misleading (& open to misinterpretation) and surely that’s not the type of data we want to be bandying about as an impact factor replacement?

Now, I’m paying attention to this not just because Faculty of 1000 is a stakeholder in this sort of discussion, but also because of Science Online London. This is the successor to last year’s blogging conference, and this year I’m on the organizing committee. Together with Victor Henning of Mendeley and Ginny Barbour of PLoS, I’m running a session on… article-level metrics.

Which means it’s going to get interesting, as there are strong feelings on both sides—and maybe I should trawl through the attendee list and make sure they are all briefed.

Bring it on.

I’m going to say that I agree with Euan that download stats are a bit suss, actually. Just like citations, they don’t tell you how the article is being used; they don’t tell you if the user (or the citer) thinks this paper is good or bad. As Euan puts it,

A download counter can’t tell if the person visiting your paper is a grad student looking for a journal club paper, a researcher interested in your field or… somebody who typed in an obscure porn related search that turned up unconnected words in the abstract.

(That reminds me of a story my boss in Cambridge told me. He was an editor on a particularly well-respected cell biology journal, and one day they found that one article was getting massively hit, and it was all search-engine traffic. On examining the referrer stats, it turned out that the paper was all about our friend C. elegans. The cause of all the traffic was not a sudden upsurge in global nematode research, but rather the juxtaposition of the words ‘sex’ and ‘anal’ in the abstract. Quite.)

The problem with the Impact Factor, and also with article-level metrics, is that there is no indicator of quality. About the only automated system that has any hope of doing this is the kind of network analysis that Johan Bollen is doing. Briefly, these network maps show you how people are using papers by following a user’s click history (there’s a toy sketch of the idea below the image).

[Image: funky network connections]
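Here’s a deliberately simplified sketch of the general idea—my own toy illustration with invented article IDs, not Bollen’s actual pipeline—in which consecutive article views within a session become weighted edges in a usage graph:

```python
# A toy clickstream-to-network sketch (invented data and IDs; not
# Bollen's actual method): each pair of consecutively viewed articles
# in a session adds weight to a directed edge between them.
from collections import Counter

# Hypothetical sessions: ordered lists of articles one user clicked through.
sessions = [
    ["pmid:111", "pmid:222", "pmid:333"],
    ["pmid:222", "pmid:333"],
    ["pmid:111", "pmid:222"],
]

edges = Counter()
for session in sessions:
    for src, dst in zip(session, session[1:]):  # consecutive click pairs
        edges[(src, dst)] += 1

# Heavily weighted edges suggest papers that are *used* together,
# whether or not either one formally cites the other.
for (src, dst), weight in edges.most_common():
    print(f"{src} -> {dst}: viewed in sequence {weight} time(s)")
```

The point is that the signal comes from behaviour rather than citation: the graph shows what readers actually do with papers, not just what authors say about them.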

In any event, I suspect that people are going to have to learn to be more sophisticated in their use of metrics/indicators, and the tyranny of the impact factor will be shaken. It starts here.


Somewhere over the rainbow

Posted by rpg on 27 July, 2009

Somewhere in the depths of PLoS One an article lurks…

Liz Allen and her friends at the Wellcome performed an interesting study on papers that came out of labs that were at least partially funded by Wellcome grants. What they did was to figure out where each of the nearly 700 papers ended up being published, and then look at bibliometric indicators (cough impact factors cough).

Nothing new there, really. The clever bit was to persuade a ‘college’ of experts (I hate that word, sorry) to review the papers, and then compare this with the bibliometric indicators… and with the Faculty of 1000. Funnily enough, highly-rated papers did not necessarily get cited highly (so the Impact Factor doesn’t help here), but the F1000 ratings agreed pretty well with their college of experts, and were also able to predict citation rates to some extent.
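To give a feel for what ‘agreed pretty well’ might look like in practice—a hedged sketch with invented scores, emphatically not the study’s data or code—you could rank-correlate the different assessments:

```python
# A hypothetical illustration (made-up numbers, not the Wellcome
# study's data): Spearman rank correlation between f1000 ratings,
# expert scores, and citation counts for the same set of papers.
from scipy.stats import spearmanr

f1000_ratings = [3, 6, 4, 8, 9, 3, 6, 10, 4, 8]       # hypothetical
expert_scores = [2, 5, 5, 7, 9, 4, 6, 9, 3, 7]        # hypothetical
citations     = [1, 40, 3, 25, 80, 9, 12, 60, 2, 30]  # hypothetical

rho_experts, p_experts = spearmanr(f1000_ratings, expert_scores)
rho_cites, p_cites = spearmanr(f1000_ratings, citations)

print(f"ratings vs experts:   rho = {rho_experts:.2f} (p = {p_experts:.3f})")
print(f"ratings vs citations: rho = {rho_cites:.2f} (p = {p_cites:.3f})")
```

A high rho on the first line and a middling one on the second is the shape of the study’s result: good agreement with the experts, and partial ability to predict citations.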

We were pretty stoked about this paper, I can tell you: and we hope to have a chat with Liz next month and show her some interesting stuff we’ve been up to. It plugs directly into this:

Tools that link expert peer reviews of research paper quality and importance to more quantitative indicators, such as citation analysis would be valuable additions to the field of research assessment and evaluation.

I’ll write more about that nearer the time. Or even, given the bonkers-mad workflow I’ve got going on, after the time. Until then, you can check out the more in-depth analysis and a fascinating discussion over at my personal blog.
