Faculty of 1000

Post-publication peer review

Archive for the ‘Statistics’ Category

It had to be you

Posted by rpg on 7 January, 2010

One of the biggest problems facing authors of scientific papers is the ordering of the author list. In my own field, the person who did the most work (or who had the bright idea, &c.) would tend to go first, and the person running the lab would take the prestigious last author position. (My own experience is one of being completely shafted—but I’ll save that story for another day.) Other disciplines do things differently, ranging from completely random to strictly alphabetical.

A statistician (or possibly biologist) friend of mine who writes at Nature Network published a paper yesterday in Nature Precedings entitled Do not log-transform count data. He and his coauthor have come up with a novel way of assigning authorship, of which I heartily approve:

The order of the authors was determined by the result of the South Africa – England cricket ODI on the 27th September 2009, which England won by 22 runs.

How do you solve this problem? I might offer a small prize for the most innovative method, provided it has actually been used for a published paper.


Posted in Literature, Statistics | 14 Comments »

No definite link between cannabis use and suicide: our review

Posted by stevepog on 7 December, 2009

We’ve published an interesting review (aren’t they all though?) of a study that found no clear association between marijuana use and suicide risk, in what our reviewer Wayne Hall from the University of Queensland, Australia, described as “the largest and best controlled prospective study of the relationship to date”.

It’s a tough topic to tackle, especially at a time when celebrity deaths, marijuana usage and suicide are so closely linked by the tabloid media (Marilyn Monroe being the newest revelation) and when fears that teenage brains are being destroyed by cannabis are high on the news agenda. On the other hand, when a highly respected scientist such as Professor David Nutt was vilified by the government for his outspoken views on drugs policy, the media generally showed support for the sacked professor while still being skeptical of his evidence-based comments (such as cannabis use being safer than horse riding).

Hall looked at the paper, Cannabis and suicide: longitudinal study, by Allebeck, Price et al. from Cardiff University, UK, published in the British Journal of Psychiatry. It went beyond previous small cross-sectional studies by asking whether the cannabis/suicide relationship survives once pre-existing differences in suicide risk between young people who become regular cannabis users and their peers who do not are taken into account.

In the study, more than 50,000 Swedish men aged 18-20 were followed up for 33 years using death registers to identify those who had died from suicide.

Hall says:

As in previous studies, self-reported cannabis use at conscription was positively related to suicide (odds ratio [OR]=1.62, 95% confidence interval [CI] 0.65-2.07) but this association was no longer significant when plausible potential confounders, such as problematic behavior during childhood, intelligence, alcohol abuse, parental psychiatric disorder, other drug use, and psychiatric diagnosis at conscription were statistically controlled for by logistic regression (OR=0.88, 95% CI 0.65-1.20).
The selection of confounders to control for did not affect the finding that the OR was no longer significant after adjustment for confounders. This study strongly suggests that the modest association observed between regular cannabis use and suicide in cross sectional studies reflects the fact that young people who are at marginally higher risk of suicide are more likely to become regular cannabis users than their peers.

And it’s the last point that is most pressing: those at a slightly higher suicide risk are more likely to become regular pot smokers, not the other way round. If you look at the citation rates on Google Scholar for “cannabis and suicide”, many academics seem to support the view that the two are strongly linked.
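That confounding story is easy to demonstrate with a toy simulation. The Python sketch below uses entirely invented numbers and a single made-up confounder, not the Swedish conscript data: a shared risk factor that makes regular cannabis use more likely and independently raises suicide risk produces a crude odds ratio well above 1, which falls back towards 1 once the confounder enters the logistic regression.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000  # roughly the size of the conscript cohort, but the data here are simulated

# One invented confounder (think "problematic behaviour during childhood") that
# makes regular cannabis use more likely AND independently raises suicide risk.
confounder = rng.binomial(1, 0.2, n)
cannabis = rng.binomial(1, 0.05 + 0.15 * confounder)
suicide = rng.binomial(1, 0.002 + 0.004 * confounder)  # cannabis itself has no effect here

# Crude model: cannabis only.
crude = sm.Logit(suicide, sm.add_constant(cannabis)).fit(disp=False)
# Adjusted model: cannabis plus the confounder.
adjusted = sm.Logit(suicide, sm.add_constant(np.column_stack([cannabis, confounder]))).fit(disp=False)

print("crude OR for cannabis:   ", round(float(np.exp(crude.params[1])), 2))
print("adjusted OR for cannabis:", round(float(np.exp(adjusted.params[1])), 2))
# Typically the crude OR comes out around 1.5 and the adjusted OR close to 1,
# the same qualitative pattern Hall describes for the real study.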

Allebeck and Price’s earlier paper, published in 1990 at this study’s 15-year mark, even stated “the proportion of suicides increased sharply with the level of cannabis consumption”: their new study clarified that the “association was eliminated after adjustment for confounding” and the link was better explained by markers of psychological and behavioural problems.

No doubt long-term studies such as this will lend more weight to the Nutt debate and, if they are given adequate publicity, hopefully help to cut down on biased anti-drug journalism.

Posted in f1000, Journalism, Statistics | Comments Off on No definite link between cannabis use and suicide: our review

Take the red poll!

Posted by stevepog on 25 September, 2009

Posted in f1000, Statistics | Comments Off on Take the red poll!

Half The Lies You Tell Ain’t True

Posted by rpg on 6 August, 2009

You’ve probably seen all the fuss over Wyeth and the ghost-writing of medical articles, along with the associated smugness of certain commentators. According to my contacts in the medical comms industry, the practice as such is nothing new, and there are very, very strong guidelines. The creative outrage we’re seeing is really rather misplaced:

Well, this is 1998 information and back then, things were a LOT slacker and this kind of thing did go on. The last 5 years have seen a big change and the policy that Wyeth now has is pretty much in line with everyone else

and

Pharma has sorted this out and anyone behaving like that gets fired. In fact, not stating the source of funding for writing invokes the OIG Federal Anti-kickback Statute, and that is two years in chokey.

{REDACTED} have EXAMS on compliance and anyone breaching compliance in a way that results in negative press for us or our clients is fired.

Teapot, there’s a storm a-brewin’.

Anyway, I didn’t want to talk about that, except it’s a nice hook on which to hang examples of a different-but-similar kind of spin.

Via John Graham-Cumming (go sign his Turing petition) I found this wonderful, wonderful site that shows you just how medical comms, pharma companies, eco-terriers, homeopaths, publishers, bloggers, GPs, PR agencies, newspapers, organic interest groups and in fact just about anyone can lie to you without really lying. It’s precisely the sort of thing that Ben Goldacre tries to get the numpty public (and let’s face it, most scientists/medics) to understand, except with pretty graphs and funky webby clicknology.

‘2845 ways to spin the risk’ uses an interactive animation to show exactly how drugs, interventions, whatever can be made to look good, bad or indifferent, simply by displaying the same data in different ways.

[Screenshot of the animation’s ‘chances’ display]

Play with the animation for a while, then go read the explanations. The whole ‘relative risk/absolute risk/number needed to treat’ thing is pretty well explained, along with bacon butties:

Yet another way to think of this is to consider how many people would need to eat large bacon sandwiches all their life in order to lead to one extra case of bowel cancer. This final quantity is known as the number needed to treat (NNT), although in this context it would perhaps better be called the number needed to eat. To find the NNT, simply express the two risks (with and without whatever you are interested in) as decimals, take the smaller from the larger and invert: in this case we get 1/(0.06 – 0.05) = 100. Now the risks do not seem at all remarkable.

Mmm. Bacon.
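If you want to check that arithmetic yourself, here is the same calculation as a couple of lines of Python; the 5% and 6% lifetime risks are the site’s bacon-sandwich figures, and the function name is just my own label for it.

def number_needed_to_treat(risk_without, risk_with):
    """Invert the absolute risk difference (the quote's 'number needed to eat')."""
    return 1 / abs(risk_with - risk_without)

# Lifetime bowel cancer risk: about 5% without a daily bacon sandwich, 6% with one.
print(number_needed_to_treat(0.05, 0.06))  # 100.0 lifelong bacon eaters per extra case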

Interesting tidbits abound:

One of the most misleading, but rather common, tricks is to use relative risks when talking about the benefits of a treatment, for example to say that “Women taking tamoxifen had about 49% fewer diagnoses of breast cancer”, while harms are given in absolute risks – “the annual rate of uterine cancer in the tamoxifen arm was 30 per 10,000 compared to 8 per 10,000 in the placebo arm”. This will tend to exaggerate the benefits, minimise the harms, and in any case make it unable to compare them. This is known as ‘mismatched framing’

which is quite intriguing, but then we find that it

was found in a third of studies published in the British Medical Journal.

Ouch.
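To see how much work the framing is doing, here is a short Python sketch that re-expresses the uterine-cancer harm from the tamoxifen quote (30 versus 8 cases per 10,000 per year) in both absolute and relative terms; the benefit side can’t be converted the same way because the quote only gives the relative figure.

# The harm from the tamoxifen quote, framed two ways.
harm_tamoxifen = 30 / 10_000   # annual uterine cancer rate in the tamoxifen arm
harm_placebo = 8 / 10_000      # annual rate in the placebo arm

absolute_increase = harm_tamoxifen - harm_placebo   # 0.0022, i.e. 22 extra cases per 10,000
relative_risk = harm_tamoxifen / harm_placebo       # 3.75
relative_increase = (relative_risk - 1) * 100       # 275% more uterine cancer

print(f"{absolute_increase * 10_000:.0f} extra cases per 10,000 women per year")
print(f"{relative_increase:.0f}% more uterine cancer")

# '275% more uterine cancer' sounds alarming; '22 extra cases per 10,000 per year'
# sounds modest. Quote benefits one way and harms the other and you get the
# 'mismatched framing' described above.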

It’s a splendid breakdown of all the things we need to understand when, for example—oh I don’t know—reading clinical trial results, perhaps; and I certainly haven’t got to grips with it all yet.

I should, but it’s lunchtime and I now fancy some bacon…

Posted in Communication, Statistics | Comments Off on Half The Lies You Tell Ain’t True