Bad Science Doesn’t Allow Comments That Contradict Author?

Update: Ben Goldacre has now published Linda Ward’s commentary. See comments below for more details.

Ben Goldacre writes the blog Bad Science, where he discusses problems he sees with science. He has also authored two books, Bad Science and Bad Pharma. I am usually supportive of those who try to make sense of science and “correct” misconceptions for the general public.

For his efforts, Goldacre should be applauded. However, he and David Spiegelhalter published a head-scratching commentary about bicycle helmets in the BMJ, and Goldacre posted a short discussion and a link to the commentary on his blog. I found their take on the topic misinformed and uncritical of anti-helmet advocates. This is a topic someone like Goldacre should take head on, but instead he seems content to regurgitate anti-helmet rhetoric.

In April 2014, Linda Ward tried to post a comment to Bad Science detailing much of the evidence ignored by Goldacre and Spiegelhalter. Her comment was never published, and I don’t know why.

Here is her comment in full.

Several population-level helmet law studies have controlled for background trends and included both head and non-head injuries, and they show the effect of the legislation on hospital admissions for cycling head injuries to be far from minimal:

– Carr/MUARC (1995), Victoria, Australia
– Hendrie (1999), Western Australia
– Povey (1999), New Zealand
– Scuffham (2000), New Zealand
– Karkhaneh (2006), Alberta, Canada
– Walter (2011), New South Wales, Australia

The head injury results in all of these population-level longitudinal studies, and the AIS3/4 head/brain injury results in the Carr study, are consistent with the (hospital control) results of the Thompson Cochrane review and of the Attewell and Elvik meta-analyses of case-control studies.

Two factors are likely to be responsible for the minimal effect found by Dennis: collinearity (between ‘time’ and ‘time since intervention’), and a tiny number of pre-law data points for Ontario (30% of the 1994 injuries; the law came in Oct 1995) and British Columbia (19% of the 1994 injuries; the law came in Sep 1996).

Dennis et al. cite the Scuffham and Walter studies as being “limited by sample size or methodological quality”. However, both the Scuffham and Walter analyses took baseline trends into account and had (more than) adequate sample sizes. Macpherson claimed that the Povey and Scuffham analyses, and a preliminary (1992) MUARC study by Cameron, “failed to include a concurrent control group in the analysis”; however, all three analyses used cyclist non-head injuries as concurrent control groups. (Povey’s and Scuffham’s analyses also included non-cyclist injuries.) Dennis also cites the preliminary 1992 Cameron/MUARC study; both Macpherson and Dennis have apparently overlooked the (1995) Carr/MUARC study (four years of post-law data), which superseded the (1992) Cameron study (one year of post-law data).

This (2013) paper debunks the Fyhri and Walker risk compensation claims, and the Robinson safety in numbers claims.

With respect to the 85/88% in the “Seattle” study, Guy Chapman states that “nothing approaching this has ever been observed in a real cyclist population and the case and control groups were completely different”. By “real cyclist population” and “completely different” “case and control groups”, it seems that Guy may mean population-level longitudinal studies and hospital vs population controls. I am not aware of any studies using population controls; it would be helpful if Guy were to cite the studies he is talking about (and a reference for his claim, on a Wikipedia talk page last year, that “50% of cyclist deaths in London are due to crushing by goods vehicles at junctions, cause of death being abdominal trauma”).

Guy states that “substituting co-author Rivara’s own street count data in the 1989 study, instead of their assumed value, makes the effect vanish into the statistical noise”, but does not provide any references. Struggling to understand how one could (validly) “substitute” “Rivara’s own street count data” into a case-control study (and finding no helmet studies with Rivara as first author in PubMed), I forced myself to have a look at the (truly dreadful) cyclehelmets site. Guy’s claim that substituting “Rivara’s own” data “makes the effect vanish into the statistical noise” seems to refer to the site’s claim that “Of 4,501 child cyclists observed cycling around Seattle, just 3.2% wore helmets. This is not statistically different from the 2.1% of the hospital cases who were wearing helmets”. The required sample size to detect a difference (with 80% power) between 2.1% and 3.2% is 3,346 in EACH group; the cyclehelmets site states that there were 135 cases. The effect does not “vanish into the statistical noise”: it is (statistical) rubbish to claim, on the basis of such a grossly inadequate sample size (less than 1/20th of the number of cases required for such a comparison), that the lack of a statistically significant effect is (real) evidence that there is no effect.
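Ward’s required sample size can be checked with the standard two-proportion z-test formula. This is my own reconstruction of the arithmetic, not code from her comment; the function name and parameterisation are mine.

```python
import math
from statistics import NormalDist  # stdlib; provides inv_cdf for normal quantiles

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group to detect a difference between two
    proportions with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
    z_beta = NormalDist().inv_cdf(power)            # ~0.84
    pbar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

print(n_per_group(0.021, 0.032))  # → 3346, matching Ward's figure
```

With only 135 cases available, the comparison was indeed far below the sample size needed for a non-significant result to carry any evidential weight.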

I am still wondering what Guy means by “assumed value”; it would be helpful if Guy could explain how the case-control study “assumed” helmet wearing prevalence.

It is the BHRF (cyclehelmets) site, not the Cochrane review, that is disgraceful: the site also misrepresents the results of the Carr, Hendrie, Povey, Scuffham, Karkhaneh, Walter, Attewell, and Elvik studies, as well as the results of the Australian (Victorian, New South Wales, and South Australian) participation surveys (see the above Olivier/ACRS link).

My current ‘favourite’ example is the claim that “Helmeted cyclists have about the same percentage of head injuries (27.4%) as unhelmeted car occupants and pedestrians (28.5%). Wearing a helmet seems to have no discernible impact on the risk of head injury.” The reference cited is “Serious injury due to land transport accidents, Australia, 2003-04”. As a BHRF “editorial board” member, maybe Guy can explain how it is possible to draw such a conclusion from a report that does not contain any information as to what the head injury rates were prior to the helmet legislation?

(The BHRF: a perfect teaching case for how NOT to ‘do’ epidemiology?)

As a demonstration that Linda actually submitted her comment, here are screenshots.

[Screenshots: BadScience1, BadScience2, BadScience3]


3 thoughts on “Bad Science Doesn’t Allow Comments That Contradict Author?”

  1. I followed up with an email to Dr Goldacre in May (2014), but received no reply.

    Yet a comment (critical of helmet legislation) posted in August (2014) has been published.

    Bad Science? Extremely bad form.

  2. Oh the melodrama. Nobody’s censoring you. Your comment was caught in the spam filter, and I’m busy with work.

    Also, the piece you’re talking about was a BMJ editorial, and their Rapid Responses section is open to all (though they have staff to monitor spam!).

    • I don’t get why you’re describing this as melodrama. As I mentioned, Linda submitted her response and could see it when logged in. However, it was never posted publicly, and she contacted you directly to ask why not. Since she received no response from you, she asked for my advice and I agreed to post it to my blog. I’ll take you at your word that her message was caught in your spam folder; however, there’s nothing in her comment that looks like spam to me.
