Faculty of 1000

Post-publication peer review

Archive for August, 2009

Big Bad John

Posted by rpg on 28 August, 2009

I’ve been remiss.

I should have talked a bit about the events of last Saturday: truth is I was struck by a stomach bug on Tuesday night and have been a little bit out of things. If you’re interested, there is a video of the ‘Fringe Frivolous’ event of the Friday evening and lots and lots of photos on Flickr.

Martin Fenner has summarized all the blog links he could find, in a kind of citation-stealing Annual Review way. Yeah, we talked about indicators and metrics in the session with Victor and Ginny Barbour (PLoS), saying among other things that usage data, network metrics and our own F1000 factor aren’t necessarily replacements for the journal impact factor: rather, they’re all complementary, and tell you different things.

I’ll actually be talking about that a bit more at two upcoming meetings. The first is Internet Librarian International in London, 15-16 October; the second is the XXIX Annual Charleston Conference, 4-7 November in Charleston, SC. That’s actually going to be a hell of a trip, as it’s my youngest’s birthday on the 5th. She’ll be ten: I’m going to miss it, but there should be fireworks on the Friday or Saturday night that we can see.

Interestingly, a chap from Thomson collared me on Saturday after our session. As someone remarked to me later, this was quite a scoop: apparently Thomson don’t usually bother with small fish: I wonder if we spooked them?

Talking of which, there’s a fascinating paper about the ‘Matthew Effect’ in arXiv (‘the arXiv’?), The impact factor’s Matthew effect: a natural experiment in bibliometrics. Turns out (surprise, surprise) that papers published in high impact factor journals garner twice as many citations as the identical paper in a low IF journal. I don’t know if that’s because more people read high IF journals or because there truly is the impression that papers in them must be better, or what. Either way, I’d just like to say…

broken glass

Posted in Indicators, Metrics | Comments Off on Big Bad John

Friday I’m in love

Posted by rpg on 21 August, 2009

I’ve been struggling to get some ‘About’ pages in shape for the new site, and all of a sudden Broad has got four wickets, and things are looking a lot more exciting for England.

But over the last few weeks we (that is F1000, not the England XI) have been getting some very flattering emails and I should drag myself away from these distractions (Katich has gone! 109 for 6!) and tell you about them.

Certain publishers have been writing to us, asking us to evaluate their journals. For example:

…we are enquiring if any of our medical journals could be considered for selection to the Faculty of 1000 journal selection. I have written a few paragraphs describing what we as a publisher of scholarly medical journals wish to achieve. Following this I have listed a selection of our longest running journals we hope you will consider, including the descriptors you may find useful. I have also attached a spreadsheet with similar descriptors of our other journals.

We are a relatively small open access publishing company that specializes in peer-reviewed scientific and medical journals that are made freely available to researchers, academics, professionals and organizations engaged in science and medicine. We pride ourselves in only producing medical and scientific journals of the highest quality.

Now the thing is, this is double-edged. On the one hand it’s great that people want to be evaluated by the Faculty of 1000, and it validates what we’re doing to some extent. On the other, it’s not quite how we work.

To date, at least, our Faculty (currently about five thousand in number across Biology and Medicine, with two and a half thousand assistants or ‘Associate’ Faculty) choose what they evaluate. In their own reading, they choose what is important, and write about it. They are independent. This is important, so I’m going to put it on a line all by itself:

We do not influence the Faculty members’ choice of papers.

So while it’s all right for publishers to write to us (and really, we appreciate it. We want to increase our coverage), we will not interfere with the (111-7! A follow-on looming!) independence of the Faculty.

What is happening, however, is that we have a ‘scanning’ project. Using the goodwill and skill of our Associate Faculty, we email out tables of contents of around 500 different, ‘non-obvious’ journals. The AFMs then scan these eToCs for important and interesting papers, and will write evaluations (in conjunction with their Faculty Member) on anything that takes their fancy. We do not impose a quota: if there is nothing of note in a particular issue, we don’t get an evaluation.

So I guess the message to publishers is, if your journal really is important in a particular community, then (a) make sure it has online ToCs and (b) get back to us later this year, when you’ll be able to propose journals to be included in the scanning project. We aim to increase the number of journals that we scan quite substantially—by about 50 every couple of months.

We’re hoping that this will completely lay to rest the rumours that we only evaluate stuff from ‘top-tier’ journals. We may not be comprehensive yet, but we’re definitely making progress on ‘systematic’.

Now, back to the cricket, and preparing for Science Online London.

Posted in f1000, Journals | Comments Off on Friday I’m in love

Don’t stop me now

Posted by rpg on 13 August, 2009

Excuse me while I indulge in some shameless company promotion here.

Scientific research is a pretty fast-moving place to be. In some places I’ve worked, there were unofficial competitions to notice a hot paper and send it round to your colleagues. Not sure what the prize was, to be honest: a golden computer mouse or something.

Seriously, if there’s hot news, you want to hear about it pretty quickly, especially if it’s directly related to your own research. So one of the potential criticisms of F1000 is that there is a delay between research being published and you hearing about it. That’s pretty fair, actually: someone has to read the paper and write about it; then our editorial team have to check it over, and then it gets published to the website.

But how long does that all take? It’s reasonably difficult to get real numbers retrospectively, but I had a go.

Time to publication of first evaluation—Medicine

Time to publication of first evaluation—Biology

These are pretty interesting data. First, it looks like we’re publishing evaluations before the papers themselves are published. This is simply because we had to take the publication dates as given by PubMed, and sometimes (ofttimes?) the PubMed date falls after the date the article actually appeared.

The main point however is that we publish the first evaluation of a paper pretty rapidly after its publication. Integrating the area under the graph, we can estimate that half of all the first evaluations go live within about a month of the original publication date. It’s a bit slower for Medicine, and Medicine is slower overall, but I’m impressed.
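That ‘half within a month’ figure is just a cumulative-count calculation. A minimal sketch, assuming made-up weekly counts of first evaluations (the real F1000 numbers aren’t reproduced here):

```python
# Made-up weekly counts of first evaluations, bucketed by weeks
# elapsed since the original article's publication (illustrative
# numbers only, not the real F1000 data).
weekly_counts = [60, 50, 45, 40, 35, 30, 25, 20, 15, 10, 10, 10]

total = sum(weekly_counts)
cumulative = 0
median_week = None
for week, count in enumerate(weekly_counts, start=1):
    cumulative += count
    if cumulative >= total / 2:
        median_week = week  # half of all first evaluations are live by now
        break

print(f"Half of first evaluations go live within ~{median_week} weeks")
```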

(There are, as always, exceptions.)

And it’s important to remember that selection by F1000 predicts citation rates, beating the impact factor by two years or more.

That’s fast.

Here’s a closer look, just for gits and shiggles. These are the combined Medicine and Biology counts, zooming in on that first month or two.

Rapid evaluation by f1000

Look at that! We’re hot! And we’re fast!

F1000 is Ferrari-fast!

Posted in f1000 | 2 Comments »

Modern way

Posted by rpg on 7 August, 2009

You might have noticed that I’ve been tweeting random recent evaluations. I do this a couple of times a day (well, that’s the plan, at least), simply highlighting stuff that I find interesting, without having the time to write a proper post about the original articles. (This, by the way, is what I find to be one of the greatest things about Twitter. And with CoTweet I can go through my archive if ever I want to follow up on something that caught my attention.)

I try to mix stuff from the medicine and the biology sites, and squeeze in a little comment or teaser (plus a link to the paper itself or abstract on PubMed). I aim for evaluations of articles in ‘obscure’ journals, and recently-published work.

The Lovén reflex is “a reaction in which a local dilation of vessels accompanies a general vasoconstriction, e.g. when the central end of an afferent nerve to an organ is suitably stimulated, its efferent vasomotor fibers remaining intact, a general rise in blood pressure occurs together with a dilation of the vessels of the organ”

Just now, I saw something that I simply cannot do justice to in a tweet. It’s not a new paper—in fact it’s possibly the oldest paper on the site—but it is in a very obscure (to me) journal.

The paper, of which the English translation of the title is

On the vasodilation of arteries as a consequence of nerve stimulation

is written by one Christian Lovén, who died in 1904. It describes vasodilation and vasoconstriction in rabbits as a consequence of nervous stimulation. Faculty Member Wilfrid Jänig, at the Institute of Physiology, Kiel, selected the paper—143 years after it was published.

And that’s interesting because I’ve been re-writing the About pages for the website, and one of the things I was looking at this morning was how long it takes us to publish evaluations, compared with the original article publication dates (according to PubMed). I’m going to talk about that a little next week, but I’d just like to say now that a datum at 52 thousand days really skews my stats.
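For the curious, here’s a toy illustration (with invented delay figures) of why one 52,000-day datum wrecks a mean but barely troubles a median:

```python
# Invented publication-to-evaluation delays in days; the 52,000-day
# entry stands in for the 143-year-old Lovén paper.
delays = [10, 14, 20, 25, 30, 35, 45, 60, 90, 52_000]

mean_delay = sum(delays) / len(delays)

ordered = sorted(delays)
mid = len(ordered) // 2
median_delay = (ordered[mid - 1] + ordered[mid]) / 2  # even-length list

print(f"mean:   {mean_delay:.1f} days")    # dragged sky-high by one datum
print(f"median: {median_delay:.1f} days")  # barely notices it
```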

Posted in f1000, Literature | 1 Comment »

Half The Lies You Tell Ain’t True

Posted by rpg on 6 August, 2009

You’ve probably seen all the fuss over Wyeth and the ghost-writing of medical articles, along with the associated smugness of certain commentators. According to my contacts in the medical comms industry, the practice as such is nothing new, and there are very, very strong guidelines. The creative outrage we’re seeing is really rather misplaced:

Well, this is 1998 information and back then, things were a LOT slacker and this kind of thing did go on. The last 5 years have seen a big change and the policy that Wyeth now has is pretty much in line with everyone else


Pharma has sorted this out and anyone behaving like that gets fired. In fact, not stating the source of funding for writing invokes the OIG Federal Anti-kickback Statute, and that is two years in chokey.

{REDACTED} have EXAMS on compliance and anyone breaching compliance in a way that results in negative press for us or our clients is fired.

Teapot, there’s a storm a-brewin’.

Anyway, I didn’t want to talk about that, except it’s a nice hook on which to hang examples of a different-but-similar kind of spin.

Via John Graham-Cumming (go sign his Turing petition) I found this wonderful, wonderful site that shows you just how medical comms, pharma companies, eco-terriers, homeopaths, publishers, bloggers, GPs, PR agencies, newspapers, organic interest groups and in fact just about anyone can lie to you without really lying. It’s precisely the sort of thing that Ben Goldacre tries to get the numpty public (and let’s face it, most scientists/medics) to understand, except with pretty graphs and funky webby clicknology.

2845 ways to spin the risk uses an interactive animation to show exactly how drugs, interventions, whatever can be made to look good, bad or indifferent, simply by displaying the same data in different ways.


Play with the animation for a while, then go read the explanations. The whole ‘relative risk/absolute risk/number needed to treat’ thing is pretty well explained, along with bacon butties:

Yet another way to think of this is to consider how many people would need to eat large bacon sandwiches all their life in order to lead to one extra case of bowel cancer. This final quantity is known as the number needed to treat (NNT), although in this context it would perhaps better be called the number needed to eat. To find the NNT, simply express the two risks (with and without whatever you are interested in) as decimals, take the smaller from the larger and invert: in this case we get 1/(0.06 – 0.05) = 100. Now the risks do not seem at all remarkable.
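The bacon-sandwich arithmetic from that quote, written out as a quick sketch:

```python
def number_needed_to_treat(baseline_risk, exposed_risk):
    """How many people must be exposed to produce one extra case."""
    return 1 / abs(exposed_risk - baseline_risk)

# Lifetime bowel-cancer risks from the quote: 5% without the daily
# bacon sandwich, 6% with it.
nnt = number_needed_to_treat(0.05, 0.06)
print(round(nnt))  # 100 lifelong sandwich-eaters per extra case
```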

Mmm. Bacon.

Interesting tidbits abound:

One of the most misleading, but rather common, tricks is to use relative risks when talking about the benefits of a treatment, for example to say that “Women taking tamoxifen had about 49% fewer diagnoses of breast cancer”, while harms are given in absolute risks – “the annual rate of uterine cancer in the tamoxifen arm was 30 per 10,000 compared to 8 per 10,000 in the placebo arm”. This will tend to exaggerate the benefits, minimise the harms, and in any case make it unable to compare them. This is known as ‘mismatched framing’

which is quite intriguing, but then we find that it

was found in a third of studies published in the British Medical Journal.
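To make ‘mismatched framing’ concrete, here’s a sketch that reframes the quoted tamoxifen harm figures in the same relative terms as the benefit:

```python
# Tamoxifen figures from the quote: benefit framed relatively
# ("49% fewer diagnoses"), harm framed absolutely (30 vs 8 uterine
# cancers per 10,000 women).
harm_treated = 30 / 10_000
harm_placebo = 8 / 10_000

# Reframe the harm the way the benefit was framed:
relative_harm_increase = (harm_treated - harm_placebo) / harm_placebo

print(f"Relative increase in uterine cancer: {relative_harm_increase:.0%}")
```

Framed this way the harm suddenly sounds alarming, which is exactly the point: the same numbers, presented differently, tell very different stories.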


It’s a splendid breakdown of all the things we need to understand when, for example—oh I don’t know—interpreting clinical trial results, perhaps; and I certainly haven’t got to grips with it all yet.

I should, but it’s lunchtime and I now fancy some bacon…

Posted in Communication, Statistics | Comments Off on Half The Lies You Tell Ain’t True

Where the streets have no name

Posted by rpg on 4 August, 2009

Alejandro brings my attention to ScienceWatch’s list of most-cited institutions in science.

This is the list of the ‘top’ twenty institutions out of just over four thousand. For some value of ‘top’, he says snarkily. Now, we know there are serious problems with citation metrics, but essentially they’re all we’ve got to go on, so it’s not a bad list.

The Most-Cited Institutions Overall, 1999-2009 (Thomson)

Rank  Institution                Papers    Citations    Citations per Paper
1     HARVARD UNIV               95,291    2,597,786    27.26
2     MAX PLANCK SOCIETY         69,373    1,366,087    19.69
3     JOHNS HOPKINS UNIV         54,022    1,222,166    22.62
4     UNIV WASHINGTON            54,198    1,147,283    21.17
5     STANFORD UNIV              48,846    1,138,795    23.31
6     UNIV CALIF LOS ANGELES     55,237    1,077,069    19.50
7     UNIV MICHIGAN              54,612      948,621    17.37
8     UNIV CALIF BERKELEY        46,984      945,817    20.13
9     UNIV CALIF SAN FRANCISCO   36,106      939,302    26.02
10    UNIV PENN                  46,235      931,399    20.14
11    UNIV TOKYO                 68,840      913,896    13.28
12    UNIV CALIF SAN DIEGO       40,789      899,832    22.06
13    UNIV TORONTO               55,163      861,243    15.61
14    UCL                        46,882      860,117    18.35
15    COLUMBIA UNIV              43,302      858,073    19.82
16    YALE UNIV                  36,857      833,467    22.61
17    MIT                        35,247      832,439    23.62
18    UNIV CAMBRIDGE             43,017      811,673    18.87
19    UNIV OXFORD                40,494      766,577    18.93
20    UNIV WISCONSIN             50,016      760,091    15.20
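A quick sanity check of the citations-per-paper column, using two rows from the table:

```python
# Papers and citations for two rows of the ScienceWatch list.
rows = {
    "HARVARD UNIV": (95_291, 2_597_786),
    "MAX PLANCK SOCIETY": (69_373, 1_366_087),
}

for name, (papers, citations) in rows.items():
    print(f"{name}: {citations / papers:.2f} citations per paper")
```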

Or is it?

Because as you know, we give the articles evaluated at F1000 a score. And it has not escaped our notice that once you start doing such a thing, you can start asking interesting questions. Admittedly we only look at biology and medicine (so far…), but according to this Excel spreadsheet I’ve just opened we have over five thousand unique institutions in our database. Hmm… I wonder if we might be doing anything with that?


And talking of authors I’d like to take this opportunity to shout out to my friend Åsa, whose recent work on inhibiting protein synthesis in secondary pneumonia was evaluated on F1000 Medicine (and who might one day get a nonymous blog cough).

Posted in Competition, Indicators, Journals | 4 Comments »