Faculty of 1000

Post-publication peer review

Author Archive

Think you knew librarians?

Posted by Callum Anderson on 15 April, 2010

Librarians can sometimes suffer unfairly from stereotypes. But footage like that below suggests there could be much more to your average librarian than might initially meet the eye.

I certainly wouldn’t like to be on the receiving end of a late fine from any of these warrior librarians!

See how long you can stifle your giggles. And rest assured in the knowledge that the book-cart-drill-team contest will be held again this year at the ALA annual conference.

Posted in Conferences, f1000, Random | 2 Comments »

Flattery to deceive

Posted by Callum Anderson on 14 April, 2010

Is orange juice a new superfood? Perhaps in some situations it can benefit the body. But the label ‘superfood’ often conceals effects that are negligible in vivo.

A paper by Husam Ghanim, Chang Ling Sia, Manish Upadhyay, Kelly Korzeniewski, Prabhakar Viswanathan, Sanaa Abuaysheh, Priya Mohanty and Paresh Dandona at the State University of New York at Buffalo (evaluated by our wonderful Faculty of course) suggests that consuming orange juice alongside a fatty, high-carbohydrate meal could limit the adverse effects of all that junk food.

On a slightly related note – while writing this post I was directed by RPG towards a list of The 40 Deadliest Fast Food Meals – and I wonder how much orange juice we might have to drink to alleviate the effects of the top entry: the artery-clogging, 1,300-calorie, 38-grammes-of-saturated-fat Baconator Triple from Wendy’s!

Right – back to more serious pontification now.

The paper hinges on a comparison of orange juice, water and a glucose drink consumed alongside a fatty, high-carbohydrate meal, looking at the subsequent production of reactive oxygen species by polymorphonuclear cells, measures of cytokine and endotoxin activation in mononuclear cells, and plasma levels of endotoxin and matrix metalloproteinase.

Bruce Bistrian of the Beth Israel Deaconess Medical Center says in his evaluation

Orange juice reduced the oxidative stress and prevented the formation of pro-inflammatory components, including the increase in plasma endotoxin, compared to either water or glucose. Somewhat surprisingly, there was no increase in plasma glucose with orange juice as found with the meal plus water or the meal plus glucose, despite the substantial carbohydrate and caloric load.

And he added

it is likely that the authors’ suggestion that the mechanism for the antiinflammatory actions was due to the flavonoids naringenin and hesperidin present in orange juice is correct.

So the flavonoids in orange juice may be preventing inflammation after an unhealthy meal, in short limiting the damage.

However, I would not go as far as to suggest that orange juice is particularly brilliant in this respect, especially as the highest concentrations of hesperidin are found in the white parts and peel of oranges, which do not provide a particularly appetising juice. Furthermore, this article suggests that grapefruit provides a significantly higher concentration of naringenin than orange.

But criticism aside, the mention of flavonoids in this paper got me thinking more generally about these so-called superfoods, and then more specifically about a press release I saw doing the rounds recently concerning rhubarb. Scientists are inherently aware that test tube or laboratory work does not always transfer into the real world, and the rhubarb press release is a good example of why.

Rhubarb was christened a new superfruit by some sections of the media because of its high concentration of polyphenols. In test tube studies these chemicals scavenge free radicals and show other benefits when used at high concentrations, and they may also reduce the risk of cancer or heart disease through mechanisms that are currently undefined. I would be very surprised, however, if these benefits transfer effectively from in vitro to in vivo. Put simply, the concentrations of ingested polyphenols that reach the body are usually extremely low, and in many cases may be too low to make any real difference.

A recently published paper by Balz Frei entitled Controversy: What are the True Biological Functions of Superfruit Antioxidants? highlights further problems when flavonoids in particular find their way into the body. He says

Flavonoids are poorly absorbed into blood and rapidly eliminated from the body; thus, flavonoids have low eventual biological availability.

So really, despite foods containing high levels of helpful chemicals, once ingested the concentration of many of these so-called ‘super’ chemicals still lags far behind that of more common cellular antioxidants.
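
To see why low bioavailability matters so much, here is a purely illustrative one-compartment sketch (all numbers are invented and absorption is treated as instantaneous for simplicity, so this is not taken from the Frei paper): a small absorbed fraction combined with rapid elimination keeps plasma levels low and short-lived, which is exactly the kind of profile Frei describes.

```python
from math import exp, log

# Purely illustrative: low oral bioavailability plus rapid elimination keeps
# plasma flavonoid levels low and short-lived. All numbers are invented and
# absorption is treated as instantaneous for simplicity.
dose_umol = 500.0        # hypothetical ingested dose
F = 0.05                 # hypothetical fraction absorbed (poor absorption)
Vd_litres = 40.0         # hypothetical volume of distribution
half_life_h = 2.0        # hypothetical (rapid) elimination half-life
k = log(2) / half_life_h

for t in (0, 2, 6, 12):  # hours after ingestion
    conc = F * dose_umol / Vd_litres * exp(-k * t)
    print(f"t = {t:2d} h  ->  ~{conc:.2f} umol/L")
# The peak here is only ~0.6 umol/L, and it is largely gone within half a day.
```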

So eating rhubarb is not going to affect chemical levels for particularly long, because these unique chemicals simply don’t hang around in the body. And this is why I really like the paper by Ghanim et al.: Ghanim and his team acknowledge the limited bioavailability of flavonoids and test them in a situation where their effect is clearly measurable against the high-calorie meal.

Perhaps I am being too harsh here? In the rhubarb press release, Dr Nikki Jordan-Mahy does admit that the real application of her research lies away from ‘Superfoods’. She says

But if we can extract the polyphenols they may be useful in helping to fight cancer along with chemotherapy.

And this point hits the nail on the head: we need to be thinking about how to extract and concentrate these chemicals to make them worthwhile, and in the meantime the mainstream media needs to understand that positive laboratory tests do not always signify benefits in vivo.

Posted in f1000, Journalism, Medicine, Press Releases | Comments Off on Flattery to deceive

Migraines, magnets and a vocabulary straight from science fiction

Posted by Callum Anderson on 30 March, 2010

Whenever Richard Lipton releases a new migraine study I always receive it with interest: partly because his work is pretty cutting edge and he leads an excellent team, but partly because migraine science can sometimes sound like an encounter of the third kind in a science fiction novel.

Migraines are typically defined as one-sided, throbbing headaches, often accompanied by other symptoms, ranging from a non-specific ‘aura’ to zigzagging or flashing lights, a specific smell, nausea or sensitivity to light or sound.

According to some research, migraines affect close to 12% of the world’s population.

Well, Professor Lipton and his team at Albert Einstein College of Medicine have certainly not let us science fiction fans down this time either, publishing a randomised, double-blind, parallel-group, sham-controlled trial in which a hand-held transcranial magnetic stimulation device is used to treat the migraine.

Image by Andy Field (Hubmedia) via Flickr

The interesting thing about this study is that it tests a “handheld device” capable of alleviating migraine aura. So in theory, the study could lead to the development of a product that allows sufferers to treat themselves at home rather than rely on a clinician. This is especially useful for something like a migraine, which tends to develop and subside rapidly.

The study randomized participants by computer: approximately half (99) were given a sham stimulation device and the others (102) the sTMS device.
Those who used the real device had less pain, fewer recurring headaches and were less likely to need medication. Of 164 patients who treated at least one attack with the real or sham stimulation devices, 39 percent of those who used the real device reported no pain after two hours, compared with 22 percent of those who used the sham device.

Here are the results

37 patients did not treat a migraine attack and were excluded from outcome analyses. 164 patients treated at least one attack with sTMS (n=82) or sham stimulation (n=82; modified intention-to-treat analysis set). Pain-free response rates after 2 h were significantly higher with sTMS (32/82 [39%]) than with sham stimulation (18/82 [22%]), for a therapeutic gain of 17% (95% CI 3-31%; p=0.0179). Sustained pain-free response rates significantly favoured sTMS at 24 h and 48 h post-treatment. Non-inferiority was shown for nausea, photophobia, and phonophobia. No device-related serious adverse events were recorded, and incidence and severity of adverse events were similar between sTMS and sham groups.
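
As a rough sanity check (my own back-of-envelope arithmetic, not anything from the paper), the reported therapeutic gain and confidence interval can be reproduced from the quoted counts using a simple normal approximation for a difference of two proportions; the trial itself may well have used a different method.

```python
from math import sqrt

# Back-of-envelope check of the reported therapeutic gain, using the counts
# quoted above and a simple normal approximation for the difference of two
# proportions. The trial's own statistics may have been computed differently.
stms_pain_free, stms_n = 32, 82   # sTMS group, pain-free at 2 h
sham_pain_free, sham_n = 18, 82   # sham group, pain-free at 2 h

p1 = stms_pain_free / stms_n
p2 = sham_pain_free / sham_n
gain = p1 - p2                    # therapeutic gain

se = sqrt(p1 * (1 - p1) / stms_n + p2 * (1 - p2) / sham_n)
ci_low, ci_high = gain - 1.96 * se, gain + 1.96 * se

print(f"therapeutic gain: {gain:.0%}")                       # ~17%
print(f"approximate 95% CI: {ci_low:.0%} to {ci_high:.0%}")  # roughly 3% to 31%
```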

It remains unclear when (or even if) this treatment will come onto the market, or how much it will cost; Neuralieve don’t have any pricing details on their website.

I am tentatively guessing that a technology in its infancy like this won’t be particularly affordable for a good few years. But more trials bring more corporate interest, and the potential market for something like this could be as large as the aforementioned 12% of the population, which globally represents a huge number of people.

Posted in f1000, Medicine | Comments Off on Migraines, magnets and a vocabulary straight from science fiction

Placebo experimentation and LUTS

Posted by Callum Anderson on 23 March, 2010

A couple of interesting evaluations have made their way past my desk this week, both from Faculty of 1000 Medicine. The first evaluation is of a very interesting paper, originally published in German, which reports the results of a questionnaire. The title of the paper is Uncontrolled placebo experimentation in a university hospital, and the results certainly shocked me (I wonder if they would have the same effect on a practising physician?).

So what percentage of practitioners do you think would admit to regularly treating with placebos in a ‘university hospital’?

The paper reported that 72% of participants admitted to regularly using placebos.

And this despite only 62% of the same group believing that placebos worked “often” (as opposed to 3% “forever” and 35% “sometimes”).

All those with medical knowledge are aware that placebos do work, and often work better than drugs with active compounds. This paper had me digging through a 2008 copy of the BMJ and the paper [subscription required] entitled What is the placebo worth? in which David Spiegel put forward the case that the most significant part of the placebo is the doctor-patient interaction.

He says

Perhaps the ratcheting down of the time that doctors spend with patients and our modern overemphasis on procedures is “penny wise and pound foolish.” Patients might respond better to real as well as placebo interventions if they were associated with a good doctor-patient relationship.

So perhaps placebo treatment has a place in medicine for some conditions. Spiegel specifically notes that a patient with a condition such as irritable bowel syndrome might be best treated by a doctor with an empathetic ear and time to listen to their story.

I think we still have a tremendous amount to learn about placebos, and studies such as the one conducted by Bernateck et al. imply that, despite this obvious lack of understanding, their use in clinical situations is relatively commonplace. With more research, and a better understanding of placebos, they could represent a very useful alternative treatment: perhaps in cases where the clinician has reason to reduce the daily dose of certain pharmacological treatments, or simply where a psychophysiological approach is the more sensible option.

However, a hospital setting is not the right place for experimentation, especially when the results of this survey suggest an assumption amongst medical practitioners that patients typically exaggerate symptoms.

Keeping on the same track – well, sticking to medicine at least – another important paper, evaluated in Faculty of 1000 by Julian Wan, concerns an interesting and ubiquitous clinical situation: women presenting with lower urinary tract symptoms (LUTS). This randomized controlled trial seeks to establish whether any improvements to current diagnostic procedures can be made.

A recent trend in general practice has been for patients to present to their doctor much earlier than was previously typical. This makes it very difficult for the doctor to diagnose the condition without sending off for formal testing or culture, and as a result antibiotics are typically prescribed on symptoms alone.

This paper looked at 309 non-pregnant women aged 18-70, all presenting with LUTS (dysuria, nocturia, cloudy or foul-smelling urine, etc.). The women were randomized into five management approaches: empirical antibiotics, delayed empirical antibiotics, targeted antibiotics, dipstick result or midstream urine analysis. It certainly covers an appealing research topic, especially as LUTS represent a very common clinical situation.

As Faculty of 1000 member Julian Wan says

For many practitioners, it is common practice to simply prescribe by symptom without formal testing or culture. There is surprisingly little published about this very common approach, and no large scale randomized trial based on symptom relief and ‘delayed’ antibiotic prescribing.

In a nutshell, the conclusion is that, from the perspective of alleviating symptoms, there is no advantage in sending out routine midstream urine samples. The approach put forward is delayed empirical prescription to reduce antibiotic use, and targeting with a dipstick test rather than sending samples off to a lab.

What I really like about this paper is that it draws surprising conclusions, but uses a substantial evidence-based study to support them. A study like this certainly makes me think that there is still plenty of research to be done into other common conditions. Medicine can sometimes be prone to treating according to the status quo, and perhaps with more evidence-based research we could learn to treat more effectively?

Posted in f1000, Medicine | 1 Comment »

Publish or perish – a question of ethics

Posted by Callum Anderson on 15 March, 2010

I got a very strong sense of deja vu when leafing through PLoS Biol recently. I was sure I had seen something very similar to Jeffrey Shaman’s paper Absolute Humidity and the Seasonal Onset of Influenza in the Continental United States before.

A quick check on PubMed proved me right. I found the following, published two months earlier, in PLoS Curr Influenz:

Absolute Humidity and the Seasonal Onset of Influenza in the Continental US
Jeffrey Shaman,* Virginia Pitzer,† Cecile Viboud,‡ Marc Lipsitch,§ and Bryan Grenfell

PubMed ID 20066155
http://www.ncbi.nlm.nih.gov/pubmed/20066155

Because this was PLoS, I was also able to print the full paper and compare. I couldn’t find any substantive differences between the two papers; in fact they were exactly the same except for a reshuffling of the author order and an abbreviation in the title.

A quick check back on PLoS Biol shows that someone else has spotted the discrepancy. A comment attached to the article begins with the following

Compare, published in PLoS Currents influenza (dec 18th)
Absolute Humidity and the Seasonal Onset of Influenza in the Continental US
Jeffrey Shaman,* Virginia Pitzer,† Cecile Viboud,‡ Marc Lipsitch,§ and Bryan Grenfell

PubMed ID 20066155

with (and not cited, if I am not mistaken)

Absolute Humidity and the Seasonal Onset of Influenza in the Continental United States (23 february 2010)

Jeffrey Shaman1*, Virginia E. Pitzer2,3,4, Cécile Viboud2, Bryan T. Grenfell2,4,5, Marc Lipsitch6,7,8

When this poster commented, only one of the articles was listed in PubMed. A search for “Absolute humidity” on PubMed today, however, returns both versions of the paper.

A PLoS spokesperson had answered the comment in less than 3 hours (perhaps they anticipated something being said). Their official line was as follows

PLoS Biology is fully aware of the authors’ submission to PLoS Currents referenced above. PLoS Currents is a website for immediate, open communication and discussion of new scientific data, analyses, and ideas in a critical research area. The work is screened by experts, but is not subject to in-depth peer review…

Our policy until now (February, 2010) has been to allow resubmission of PLoS Currents content to another PLoS journal. However, the decision to include Currents in PubMed (and PubMed Central) has caused us to reconsider the status of content communicated via Currents, relative to other journals.

I am certainly not convinced by this argument. I have personal experience of getting journals indexed in PubMed, and it is not something that happens immediately; the typical process takes eight to twelve weeks, and PLoS Curr Influenz had already been accepted by PubMed in 2009. The accepted date on the re-submitted paper in PLoS Biol was January 20, 2010.

Worse still, the received date of the paper at PLoS Biol was September 10, 2009. PLoS Curr Influenz did not even accept the duplicate paper until December 18, 2009.

The dates simply don’t add up. A journal doesn’t just email PubMed and expect to see its content listed the next day, and feigning innocence makes PLoS look at worst deceitful and at best incompetent. If PLoS was aware that the paper had been submitted to both journals, and was aware that PLoS Curr Influenz would be listed on PubMed, they should have made a full disclosure on the paper subsequently published in PLoS Biol.

Now, I am very much in favour of rapid communication journals; I think they represent an excellent platform for publishing cutting-edge research. But a distinction between these and traditionally peer-reviewed journals must be drawn somewhere. Should a publication like this really be submitting content to PubMed when its editorial policy allows re-submission to other PLoS journals? PLoS have been having their cake and eating it for a long time now. In a world where publication stats are frequently used to judge the worth of a researcher, are the authors here benefiting twice from the same paper? And PubMed has a very clear policy on duplicate articles, which PLoS should know about.

So why didn’t they do it? Why didn’t they tell PubMed that they would be knowingly supplying duplicate articles? Well I do have a theory [snip-snip – F1000 Lawyers]… But it would be much better to see what you think.

Posted in Journalism, Literature | 24 Comments »

Effects of a continuous work-flow on residents in the intensive care unit

Posted by Callum Anderson on 11 March, 2010

It is well known that the average physician in training will be expected to work more than a few 24-hour shifts. It is also well known that sleep deprivation affects performance (by how much? Now that’s the real question, but I digress).

I read a paper, evaluated by Faculty of 1000 member Samuel Ajizian, in which a team of scientists led by R Sharpe at the University of British Columbia have studied the effects of a continuous work-flow on residents in the intensive care unit.

Although monitoring the performance of sleep-deprived workers is a popular research subject (see here, here and here), there are a couple of nice aspects to this study. Firstly, the use of a realistic total patient simulation is novel in this type of study; much of the previous body of research has focussed on more general cognitive testing and surgical simulators. The participants also assessed their own performance at various stages of the study, allowing the researchers to see whether the physicians could notice a drop in performance for themselves.

The simulation included performing advanced cardiac life support scenarios and management of a simulated critically ill patient.

I could spot a potential problem with this paper, however, and I was not surprised to read the following statement

For the advanced cardiac life support scenarios, the mean number of major errors committed… decreased during the study period.

We have what looks like (and is subsequently judged to be) a significant ‘learning effect’, and I am surprised the team couldn’t find a way to mitigate it, perhaps by demonstrating the simulation and training the physicians before observation began.

Results from the patient management scenario followed expectation

The mean number of errors went up from 0.92 +/- 0.90 in the first session to 1.58 +/- 0.79 in the fourth session (p = 0.9).

Another interesting aspect of this study was that Sharpe and his team asked the physicians to conduct subjective self-assessments at various stages. Compare the following result for the personally assessed global score (again from patient management) with the number of errors shown above

mean global score decreased from 56.8 ± 14.6 to 49.6 ± 12.6 (p = .02).

A discrepancy between the mean number of errors and the mean global score can be seen here, strongly suggesting that our ability to judge personal performance is not nearly as accurate at hour 20 as it is at hour 1.

Although I wouldn’t like to make any serious judgements from this paper alone, I think there is significant room for further research in this area, especially using realistic patient simulation.

Ajizian also noted in his evaluation that it would be interesting to [repeat this study and allow the residents unlimited access to their preferred caffeine source].

Doing so might provide us with some better data, especially as we tend to work under the influence of caffeine even when we aren’t particularly tired. I wonder whether performance levels might correlate with the intensity or the type of stimulant used. Perhaps coffee offers a greater mental boost for physicians than tea? Any anecdotal evidence is welcome below.

Sharpe R, Koval V, Ronco JJ, Dodek P, Wong H, Shepherd J, Fitzgerald JM, Ayas NT. The impact of prolonged continuous wakefulness on resident clinical performance in the intensive care unit: a patient simulator study. Crit Care Med 2010; 38:766-70.

Posted in f1000, Medicine | Comments Off on Effects of a continuous work-flow on residents in the intensive care unit

Have we overlooked the drinking cup?

Posted by Callum Anderson on 3 March, 2010

One of the things I love about scientific knowledge is that it is always in a state of flux. Theories are constantly being amended, rejected or confirmed by the community. In short, there is always room for more research regardless of how well trodden the ground may be.

In this vein, I read an interesting paper, evaluated by Faculty of 1000 member Phil Fischer (link to evaluation free for 3 months).

Simonne Rufener and a team based jointly at the University of Bern, Swiss Tropical Institute in Basel and Institute of Aquatic Sciences and Technology at Dübendorf have published a field study which expands significantly on previous research.

Every year, some 1.6 million people die due to diarrhoea associated with contaminated drinking water. In countries without running water, where drinking water must be collected at source, plenty of research has shown that the water is often contaminated at various stages before consumption, even if the source is relatively free from contamination (see here, here, and here).

The paper hypothesizes that in-house recontamination of drinking water after treatment is a significant, often overlooked problem in the developing world. The team visited 81 households in Bolivia and took 347 water samples from current sources, treated water, transport vessels and drinking vessels. By looking at levels of E. coli at various stages of the water’s journey, Rufener and the team were able to show that disinfection at source, or even at home prior to consumption, did not effectively reduce bacteria levels. In fact, the paper makes the point that even after home-based water treatment such as boiling or SODIS

Only 36% of the treated water samples were free from E. coli

The real conclusion to acknowledge is that disinfecting water at source or at home will remain a relatively ineffective intervention while the majority of drinking vessels are still contaminated with E. coli. In the future, we may find solutions which combine water-source interventions with effective hygiene education to help reduce levels of bacteria in the drinking cup itself.

Rufener S et al. J Health Popul Nutr 2010; 28:34-41.

Posted in f1000, Medicine | Comments Off on Have we overlooked the drinking cup?

Spending too long on the couch

Posted by Callum Anderson on 28 February, 2010

Couch potatoes beware! Or so says Faculty of 1000 member Paul Pagel in his evaluation of a paper studying links between television viewing time and mortality in Australia.

Television is the sedentary activity of choice for many of us in the developed world. And plenty of studies have already demonstrated a relationship between television viewing time and various disorders such as cardio-metabolic risk, diabetes and weight gain.

This particular study into the television viewing habits of 8800 adults by Professor David Dunstan and his team is valuable due to the large sample size and the length of time the observation ran.

By studying a large sample (8800) of adults aged over 25 for a median period of 6.6 years, the researchers were able to get results on a big enough scale to draw some powerful conclusions.

Let’s look at the numbers.

• A total of 284 deaths were reported during the follow-up period
• Of these, 87 were related to cardiovascular disease

After appropriate adjustments for age, gender, exercise and body habitus were made, the authors were able to determine that the likelihood of cardiovascular-related mortality increased with each one-hour increment of television viewing per day. As viewing time passed four hours per day, the risk was significantly increased.

See the results for yourself

After adjustment for age, sex, waist circumference, and exercise, the hazard ratios for each 1-hour increment in television viewing time per day were 1.11 (95% confidence interval [CI], 1.03 to 1.20) for all-cause mortality, 1.18 (95% CI, 1.03 to 1.35) for CVD mortality, and 1.09 (95% CI, 0.96 to 1.23) for cancer mortality. Compared with a television viewing time of <2 h/d, hazard ratios were […] for ≥2 to <4 h/d and […] for ≥4 h/d. For CVD mortality, corresponding hazard ratios were 1.19 (95% CI, 0.72 to 1.99) and 1.80 (95% CI, 1.00 to 3.25).
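
One way to read those per-hour figures (my own interpretation, not a calculation from the paper) is that, if the hazard ratio is log-linear in viewing time, the per-hour ratio compounds multiplicatively across extra hours, as in the sketch below.

```python
# Illustrative only: assuming the per-hour hazard ratio compounds
# multiplicatively (a log-linear exposure term, which is my assumption,
# not the authors' model specification).
per_hour_hr_cvd = 1.18   # reported CVD hazard ratio per 1-hour increment

for extra_hours in (1, 2, 4):
    implied_hr = per_hour_hr_cvd ** extra_hours
    print(f"{extra_hours} extra hour(s)/day -> implied HR ~ {implied_hr:.2f}")

# Four extra hours per day gives an implied HR of roughly 1.9, broadly in
# line with the 1.80 reported for the >=4 h/d category versus <2 h/d.
```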

Dunstan and his team at Baker IDI are also careful to note in this paper that television watching itself is not necessarily the danger. Rather it is the prolonged sitting time associated with watching television. I found this paper particularly interesting because it runs against the grain, and insists that as well as promoting healthy initiatives such as exercise and lifestyle modifications, we should also be looking at methods of reducing sedentary activities. What do you think? Is it more important to encourage a healthy lifestyle or discourage an unhealthy one, or is a balance of the two necessary?

Posted in f1000 | 2 Comments »

Branching out

Posted by Callum Anderson on 22 February, 2010

We all know that given the right conditions, forests are prone to grow, but until now, it has been very difficult to define what can be considered normal re-growth (i.e. as a forest ages), and what might be considered exceptional growth.

In a paper appearing in PNAS, a team led by Sean McMahon and Geoffrey Parker, both of the Smithsonian Institution, have been able to uncover evidence for a recent increase in forest growth. Sean and his team were able to access a dataset of biomass measurements from 55 temperate forest plots collected over 22 years.

By putting together weather and CO2 data collected over 100 years with the biomass levels recorded over the last 22 years, Sean and his team were able to conclude that 1987-2009 represents a period of exceptional growth in the forests sampled. The interesting thing about this study is that it was perhaps the first to compare large datasets with predicted growth, calculated using the Monod function.
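
For readers unfamiliar with it, the Monod function is a simple saturating curve; a minimal sketch of its general form is below (the parameter values are invented for illustration, and the exact parameterisation used in the PNAS paper may differ).

```python
import numpy as np

def monod(x, a_max, k):
    """Monod (saturating) curve: rises steeply at first, then levels off
    towards the asymptote a_max; k is the half-saturation constant, the x
    value at which the curve reaches a_max / 2."""
    return a_max * x / (k + x)

# Invented numbers, purely to show the shape of an 'expected growth' curve.
stand_age = np.linspace(0, 150, 151)                       # years
expected_biomass = monod(stand_age, a_max=300.0, k=40.0)   # e.g. Mg/ha

# Observed biomass sitting consistently above a curve like this is the kind
# of signal the authors flag as exceptional growth.
print(expected_biomass[[10, 50, 100]])
```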

Once they realised that the biomass was growing much more quickly than expected, McMahon, Parker and the rest of the team attempted to hypothesize why this might be the case.

According to the team, the most likely factors affecting biomass are

1. A small increase in mean temperature over the last 100 years
2. Longer growing seasons over the same period
3. Increased atmospheric CO2 from 1970-2009

Yet again we see evidence suggesting that climate change (note the small ‘c’s) can have a wide effect on the ecosystem. Although this study is limited to the forests around Maryland, McMahon and Parker believe that the phenomenon is representative of the Eastern deciduous forest biome as a whole.

Faculty of 1000 member Richard Houghton has also noted in his evaluation of the article that the findings here are contrary to results of previous studies. He says

The results are in sharp contrast to the study by Caspersen et al. {1} that found no evidence for increased rates of growth from forest inventory data across five eastern states. On the other hand, Caspersen et al. analyzed growth over the decade ending in the early to mid 1990s, while McMahon et al. analyzed data obtained between 1987 and 2009. Perhaps the different findings help to define ‘recent’ as post-1990.

So it looks as though, as well as growing at a higher rate than expected, the forests in question confined that exceptional growth almost entirely to the period between the mid-1990s and 2009. Should we therefore be looking at the 1990s as an ecological turning point in terms of biomass growth?

Posted in f1000 | Comments Off on Branching out