Faculty of 1000

Post-publication peer review

Archive for the ‘Journals’ Category

Adrift in an ocean of trash talk

Posted by stevepog on 10 February, 2010

My lesson for today: Don’t argue with an oceanographer about our responsibility for cleaning up the Great Garbage Patch. In fact, don’t argue with an oceanographer about anything marine-based, and don’t call someone an oceanographer who isn’t one (apologies to the inspirational Annie Crawley).

Credit: Slate Magazine

I made the mistake of saying that an article in Slate by Nina Shen Rastogi was wrongly titled, as I believed it should be asking how we can clean up the patch, not WHETHER we should bother.

Miriam Goldstein, chief scientist of Seaplex (@seaplexscience on Twitter), the Scripps Institution of Oceanography/Project Kaisei expedition to measure plastic in the North Pacific Gyre, replied:

Actually I agree w headline. Open-ocean cleanup EXTREMELY expensive/technically challenging. Need to carefully consider cost/benefit.

The humbling part wasn’t in being dissed in under 140 characters for my lack of knowledge but in seeing what the important issues are when it comes to a massive area of trash that can’t just be cleared up with a few sweeps by a barge.

Like the Slate article’s author, I imagined the patch as a large mound of floating rubbish, spinning endlessly whirlpool-style with no plughole to drain out of. I had read of banking-fortune heir David de Rothschild’s headline-grabbing voyage on a yacht made of reclaimed plastic bottles, taking in the North Pacific Gyre on a route from San Francisco to Sydney (a project delayed partly by the extremely ambitious task of building such a boat).

But the idea that the Patch really isn’t a patch at all will take some getting used to. Perhaps there’s a word in another language that would do it better justice (and one less reminiscent of cute 80s dolls would bring the message home better anyway).

As Miriam said, cost and benefit are obvious considerations when looking at possible clean-up efforts. As Rastogi put it in Slate, “despite the oft-repeated claim that the Great Pacific Garbage Patch is ‘twice the size of Texas,’ we don’t really know the exact size of the Patch or how much garbage it contains.” (To Americans, Texas must seem really large; to Canadians, Australians, Russians and the rest, it’s kind of small.)

So committing x billion dollars to cleaning up an area of unknown mass and size could be essentially fruitless. Commenters on the article made the wise point that cutting the trash off at its source (drains, business waste overflows, garbage dumps, discarded material from boats and so on) was the only way to significantly reduce the Patch in the long term.

Just as more scientists are presenting sensible, future-focused approaches to managing climate change (see the original papers, later reviewed on f1000 Biology, from Lawler and Tear et al. for a solid review and from Graham and McClanahan et al. on coral reef ecosystem stability), Project Kaisei and other organisations are working on strategic responses to the issue, such as recycling retrieved waste and using large nets that snare bigger pieces of trash while leaving marine creatures unharmed.

So arguing with an ocean scientist isn’t a good idea, and hopefully government decision-makers can come to that same conclusion.

Posted in Communication, f1000, Journals, Science | 8 Comments »

Private investigations

Posted by rpg on 20 January, 2010

One of the really great things about science is its potential for self-correction. If you have an hypothesis, a result (strange or otherwise), a set of data, it can be tested by anyone. This is encouraged, in fact: when you publish you’re not just saying ‘look how clever I am’ but also ‘here’s something new! Can you do it too?’. This philosophy is diametrically opposed to that behind Creationism, say, or homeopathy. In those belief systems whatever the High Priest says is of necessity true, and experiment must bend until the results fit.

This means that, in science, a finding or publication that people get very excited about at the time can later be shown to be wrong, whether through deliberate fraud, experimental sloppiness (although the boundary between the two can be fuzzy), or simply because we, as scientists, are wiser now than we were then. This happens, and it’s normal and part of the process. We should welcome it; indeed, my friend Henry Gee has claimed that everything Nature publishes is wrong, or at least provisional.

So what we have to do is be completely open about this, no matter how embarrassing it is for the journal that published the work in the first place.

You know where I’m going with this, don’t you?

It was Derek Lowe who first alerted me to a paper published in Science last year, with the ome-heavy title Reactome Array: Forging a Link Between Metabolome and Genome. This was flagged as a ‘Must Read’ (free link) back in November because, according to our reviewer Ben Davis,

If this worked it could be marvellous, superb.

However, as Ben said in his evaluation,

this work should be read with some caveats. Try as we might, my group, as well as many colleagues, and I have tried to determine the chemistry described […] In my opinion, this is a work that deserves a “Must Read” rating and I strongly encourage the reader to read the source material and reach their own conclusions.

And as Derek points out, Science published an ‘Editorial expression of concern’, noting a request for evaluation of the original data and records by officials at the authors’ institutions, as well as mentioning it on their blog. Heavy. As soon as I saw this, I let our Editorial team know we might have a problem, and we published a note warning our readers that the work described in the paper was suspect.

Today we published a dissent to the evaluation from Michael Gelb, who says

There are many reactions shown that seem unusual and controversial […] My colleagues and I have tried to decipher the chemistry shown in Figure 1 of the main text and in the supplemental material. Many of the indicated reactions seem highly unlikely to occur, and the NMR data showing that some of the structures that were made are confusing and controversial.

We’ve also published a follow-up from Ben:

I agree wholeheartedly with the sentiments expressed in the Dissenting Opinion. The chemistry presented in this paper and in the online SI has varied in its description and content worryingly over the last 2 months.

and, rather tellingly,

as yet no chemical samples or key reagents have yet been made generally available.

(One of the usual conditions of publishing in reputable journals is that you make reagents available to other scientists so that they can repeat your work. Failing to honour this commitment is not playing by the rules.)

It’ll be interesting to see when, not if, the original paper is retracted; and by whom.

And this, people, is the self-correcting wonder of science. Remember this, next time someone starts rabbiting about scientific conspiracies, or sends you their new theory of general relativity, or anything else that sounds crazy. It probably is.


Posted in Journals, Literature, Science | 3 Comments »

“What about the oceans?” Climate change reversal scheme has its doubters

Posted by stevepog on 16 December, 2009

With most of the attention of the science media, the green movement and world leaders focused on Copenhagen and climate change right now, it would be remiss of us not to mention a new evaluation, which looks at one of numerous papers promising new ways to tackle the greenhouse effect.

The reviewer, Robie Macdonald, from the Institute of Ocean Sciences in Canada, looked at a paper in Geophysical Research Letters that discussed stratospheric geoengineering as a way to curtail greenhouse gas emissions.

Macdonald said:

This paper estimates the costs of putting sufficient aerosols into the stratosphere to slow down or reverse global warming. For possibly as little as several billions of dollars per year, one might cool the planet, stall or reverse ice melting, thwart sea-level rise, and increase the terrigenous sink for CO2 through enhanced primary production.

Macdonald quite rightly has issues with this proposed technique: on the one hand it could help produce planet-cooling sulfate aerosols, but on the other, as Ars Technica also reported, “it would also produce more droughts and worsen ozone depletion. And, crucially, it would do nothing to reverse ocean acidification”.

At a time when the media is not quite sure which side of the climate change ‘debate’ to be on, and newspapers are running unchecked stories denying that climate change exists alongside comment pieces from everyone from an unqualified former vice-presidential candidate (Sarah Palin) to anti-skeptics (George Monbiot), stratospheric geoengineering could take off as the next big thing in climate change reversal (if there could ever be such a beast).

By the way, the suggested methods of using military aircraft and artillery shells to save the planet sound a little too Armageddon for my liking.

Posted in Communication, f1000, Journals, Random, Science | 3 Comments »

How many more times?

Posted by rpg on 14 September, 2009

…what dreams may come
When we have shuffled off this mortal coil,
Must give us pause

Thomson, in a commentary in the Journal of the American Medical Association, reckon there ain’t nowt wrong with the Journal Impact Factor:

The impact factor has had success and utility as a journal metric due to its concentration on a simple calculation based on data that are fully visible in the Web of Science. These examples based on citable (counted) items indexed in 2000 and 2005 suggest that the current approach for identification of citable items in the impact factor denominator is accurate and consistent.

Well, they would say that.

And they might well be right, and you and I and Thomson Reuters might argue the point endlessly. But there are a number of problems with any citation-based metric, and a pretty fundamental one was highlighted (coincidentally?) in the same issue of JAMA.
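For anyone who hasn’t seen it written down, the ‘simple calculation’ Thomson is defending is easy to sketch. The figures below are invented for illustration:

```python
# Journal Impact Factor for year Y: citations received in Y by items
# published in Y-1 and Y-2, divided by the number of "citable items"
# published in Y-1 and Y-2 -- the denominator whose accuracy the
# Thomson commentary defends.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

# A hypothetical journal: 2,400 citations in 2008 to its 2006-07 content,
# of which 800 items were counted as citable.
print(impact_factor(2400, 800))  # → 3.0
```

The arithmetic itself is trivial; the arguments are all about what gets counted in that denominator.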

Looking at three general medical journals, Abhaya Kulkarni at the Hospital for Sick Children in Toronto (shout out to my friend Ricardipus) and colleagues found that three different ways of counting citations come up with three very different numbers.

Cutting to the chase, Web of Science counted about 20% fewer citations than Scopus or Google Scholar. The reasons for this are not totally clear, but are probably due to the latter two having wider scope (no pun intended). Scopus, for example, looks at ~15,000 journals compared with Web of Science’s ten thousand. Why? The authors say that Web of Science ‘emphasized the quality of its content coverage’, which in English means it doesn’t look at non-English publications, or those from outside the US and (possibly) Europe, or other citation sources such as books and conference proceedings. And that’s before we even start thinking about minimal citable units; or non-citable outputs; or whether blogs should count as one-fiftieth of a peer-reviewed paper.
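To put the scale of that disagreement in concrete terms, here is a minimal sketch. The counts are invented; only the rough 20% gap comes from the paper:

```python
# Invented citation counts for the same set of articles in three databases,
# illustrating the ~20% shortfall in Web of Science that Kulkarni et al. found.
counts = {
    "Web of Science": 8_000,
    "Scopus": 10_000,
    "Google Scholar": 10_100,
}

baseline = counts["Scopus"]
for database, n in counts.items():
    gap = 100 * (n - baseline) / baseline
    print(f"{database}: {n} citations ({gap:+.1f}% vs Scopus)")
```

Twenty percent is not a rounding error: for any metric built on these counts, the choice of database moves the answer.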

Presumably some of the discrepancy is due to the removal of self-cites, which strikes me as being just as unfair: my own output shouldn’t count for less simply because I’m building on it. It’s also difficult to know how to deal with the mobility of scientists: do you only look at the last author, or the first? I don’t know how you make that one work at all, to be honest.

That aside, I think curation of citation metrics is necessary: Kulkarni et al. report that fully two percent of citations in Google Scholar didn’t, actually, cite what they claimed to. That is a worrying statistic when you realize that people’s jobs are on the line. You have to get this right, guys.

But it’d be nice if we could all agree on the numbers to start with.


Posted in Indicators, Journals, Literature, Metrics | 3 Comments »

Friday I’m in love

Posted by rpg on 21 August, 2009

I’ve been struggling to get some ‘About’ pages in shape for the new site, and all of a sudden Broad has got four wickets, and things are looking a lot more exciting for England.

But over the last few weeks we (that is F1000, not the England XI) have been getting some very flattering emails and I should drag myself away from these distractions (Katich has gone! 109 for 6!) and tell you about them.

Certain publishers have been writing to us, asking us to evaluate their journals. For example:

…we are enquiring if any of our medical journals could be considered for selection to the Faculty of 1000 journal selection. I have written a few paragraphs describing what we as a publisher of scholarly medical journals wish to achieve. Following this I have listed a selection of our longest running journals we hope you will consider, including the descriptors you may find useful. I have also attached a spreadsheet with similar descriptors of our other journals.

We are a relatively small open access publishing company that specializes in peer-reviewed scientific and medical journals that are made freely available to researchers, academics, professionals and organizations engaged in science and medicine. We pride ourselves in only producing medical and scientific journals of the highest quality.

Now the thing is, this is double-edged. On the one hand it’s great that people want to be evaluated by the Faculty of 1000, and it validates what we’re doing to some extent. On the other, it’s not quite how we work.

To date, at least, our Faculty (currently about five thousand in number across Biology and Medicine, with two and a half thousand assistants or ‘Associate’ Faculty) choose what they evaluate. In their own reading, they choose what is important, and write about it. They are independent. This is important, so I’m going to put it on a line all by itself:

We do not influence the Faculty members’ choice of papers.

So while it’s all right for publishers to write to us (and really, we appreciate it. We want to increase our coverage), we will not interfere with the (111-7! A follow-on looming!) independence of the Faculty.

What is happening, however, is that we have a ‘scanning’ project. Using the goodwill and skill of our Associate Faculty, we email out the tables of contents of around 500 different, ‘non-obvious’ journals. The AFMs then scan these eToCs for important and interesting papers, and write evaluations (in conjunction with their Faculty Member) of anything that takes their fancy. We do not impose a quota: if there is nothing of note in a particular issue, we don’t get an evaluation.

So I guess the message to publishers is: if your journal really is important in a particular community, then (a) make sure it has online ToCs and (b) get back to us later this year, when you’ll be able to propose journals for inclusion in the scanning project. We aim to increase the number of journals we scan quite substantially, by about 50 every couple of months.

We’re hoping that this will completely lay to rest the rumours that we only evaluate stuff from ‘top-tier’ journals. We may not be comprehensive yet, but we’re definitely making progress on ‘systematic’.

Now, back to the cricket, and preparing for Science Online London.

Posted in f1000, Journals | Comments Off on Friday I’m in love

Where the streets have no name

Posted by rpg on 4 August, 2009

Alejandro brings my attention to ScienceWatch’s list of most-cited institutions in science.

This is the list of the ‘top’ twenty institutions out of just over four thousand. For some value of ‘top’, he says snarkily. Now, we know there are serious problems with citation metrics, but essentially it’s all we’ve got to go on, so it’s not a bad list.

The Most-Cited Institutions Overall, 1999-2009 (Thomson)

Rank  Institution               Papers   Citations   Citations per paper
1     HARVARD UNIV              95,291   2,597,786   27.26
2     MAX PLANCK SOCIETY        69,373   1,366,087   19.69
3     JOHNS HOPKINS UNIV        54,022   1,222,166   22.62
4     UNIV WASHINGTON           54,198   1,147,283   21.17
5     STANFORD UNIV             48,846   1,138,795   23.31
6     UNIV CALIF LOS ANGELES    55,237   1,077,069   19.50
7     UNIV MICHIGAN             54,612   948,621     17.37
8     UNIV CALIF BERKELEY       46,984   945,817     20.13
9     UNIV CALIF SAN FRANCISCO  36,106   939,302     26.02
10    UNIV PENN                 46,235   931,399     20.14
11    UNIV TOKYO                68,840   913,896     13.28
12    UNIV CALIF SAN DIEGO      40,789   899,832     22.06
13    UNIV TORONTO              55,163   861,243     15.61
14    UCL                       46,882   860,117     18.35
15    COLUMBIA UNIV             43,302   858,073     19.82
16    YALE UNIV                 36,857   833,467     22.61
17    MIT                       35,247   832,439     23.62
18    UNIV CAMBRIDGE            43,017   811,673     18.87
19    UNIV OXFORD               40,494   766,577     18.93
20    UNIV WISCONSIN            50,016   760,091     15.20
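As a quick sanity check, the citations-per-paper figures can be recomputed from the first two data columns (the papers and citations counts). A few rows from the table above:

```python
# Recompute citations per paper from the table's papers and citations
# columns, confirming the ratio column is consistent with the raw counts.
rows = [
    ("HARVARD UNIV",       95_291, 2_597_786),
    ("MAX PLANCK SOCIETY", 69_373, 1_366_087),
    ("UCL",                46_882,   860_117),
]
for name, papers, citations in rows:
    print(f"{name}: {citations / papers:.2f} citations per paper")
```

The ratios come out at 27.26, 19.69 and 18.35 respectively, matching the table.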

Or is it?

Because as you know, we give the articles evaluated at F1000 a score. And it has not escaped our notice that once you start doing such a thing, you can start asking interesting questions. Admittedly we only look at biology and medicine (so far…), but according to this Excel spreadsheet I’ve just opened we have over five thousand unique institutions in our database. Hmm… I wonder if we might be doing anything with that?


And talking of authors, I’d like to take this opportunity to shout out to my friend Åsa, whose recent work on inhibiting protein synthesis in secondary pneumonia was evaluated on F1000 Medicine (and who might one day get a nonymous blog, cough).

Posted in Competition, Indicators, Journals | 4 Comments »