I started this blog late last year, in part, as a response to the plethora of misinformation about bicycle helmets on the internet. I believe much of the problem is statistical: many people either misunderstand or misuse statistical methods, data, or both.
As an academic, the dilemma I found myself in was how to address criticism that originates outside the peer-review environment. When criticism is published in a peer-reviewed journal, it is usually possible to respond in a Letter to the Editor (or a similar format). For what it's worth, the strangely named Bicycle Helmet Research Foundation has an editorial board but no clear avenue for responding to its criticisms; there isn't even a comments section on its posts.
What follows is an email I wrote to Paul Jakma, the author of a blog post critical of one of my papers (he mentions he made a similar response here). As I pointed out to him, much of what he wrote was factually incorrect, and he misunderstood either our analysis or the data used. To date, he has not corrected his post and, in that time, has left a comment pointing to Chris Rissel's rejoinder to our paper while ignoring our response. He also seems to have ignored (or is unaware of) our paper showing that the benefit of helmet legislation was maintained over the following two decades, contrary to the "apparent detrimental impact on safety in terms of head injury rates" he claims in his post.
I could be wrong, but he doesn't seem interested in getting his facts right about our research. I have published my response so that, ultimately, readers can make up their own minds about the truth.
Email to Paul Jakma (19 January 2012)
I am confused by some of the comments on your blog and on the referring websites.
- “Injury rates are seasonal, and they have only very limited data (less than a year) on pre-law rates”
Your first point is clearly true, and it makes monthly injury counts more variable than yearly counts. However, we accounted for that using the X11 method, which adjusts for seasonal and abnormal patterns, as stated in the paper. Your second point is not true: we used head and limb injury counts for the 18 months before the law and the 18 months after. This gives us plenty of time points (n = 36 months) to estimate any trend effects. Inspection of the deviance residuals (not published) indicated we accounted for any seasonal or abnormal variability.
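For intuition, here is a toy sketch (my illustration, not from the paper) of the basic idea behind seasonal adjustment: estimate each calendar month's average deviation from the overall mean and remove it. X11 is considerably more sophisticated than this (it uses moving averages and handles abnormal observations), and the counts below are simulated, not our data.

```python
# Crude stand-in for seasonal adjustment of monthly injury counts.
# The paper used the X11 method, which is far more sophisticated;
# this toy version just removes each calendar month's average
# deviation from the overall mean. All numbers are simulated.

def seasonally_adjust(counts):
    """counts: list of monthly values, index 0 = January of year 1."""
    overall = sum(counts) / len(counts)
    # Average deviation from the overall mean for each calendar month.
    seasonal = []
    for m in range(12):
        vals = counts[m::12]
        seasonal.append(sum(vals) / len(vals) - overall)
    # Subtract each month's seasonal component from its observations.
    return [c - seasonal[i % 12] for i, c in enumerate(counts)]

# Three years of fake monthly counts: a flat level of 100 plus a
# purely hypothetical seasonal pattern (summer peak, winter dip).
season = [0, 0, 5, 10, 15, 20, 20, 15, 10, 5, 0, -10]
raw = [100 + season[m] for _ in range(3) for m in range(12)]
adjusted = seasonally_adjust(raw)
# With a perfectly repeating pattern, the adjusted series is flat.
```

Once the seasonal pattern is removed, what remains is the trend (plus noise), which is what the pre-law versus post-law comparison is actually about.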
- “there appears to be a significant benefit only over a short-period of time”
This point seems to have floated around the blogosphere and is a complete myth. We explicitly state the following in the paper on page 2069.
The tendency towards stability in post-law trends with the inclusion of additional years of data suggests that either 18 months is not sufficient follow up time to accurately detect trends or that the trends shown represent temporally localised changes that did not persist beyond the analysis period. Based on the original analysis there is some evidence that the initial improvement in head injury rates diminished over the 18 months following legislation as shown by the increasing post-law head to limb injury ratios in Fig. 4. Alternatively, the longer term post-law trends being closer to parallel for head and limb injury rates (equivalent to a post-law horizontal line in Fig. 4A and C) supports the idea that the legislation attributable improvement was maintained.
This paragraph seems to be routinely ignored by those who've commented on the paper. We chose 18 months post-law to balance the usable information available pre-law. This is the right approach statistically, so that the information from before or after the law doesn't dominate the analysis. However, the use of more post-law data demonstrated that the benefit was maintained.
- “the helmet law has managed to turn a decreasing head injury rate into an increasing head injury rate”
This ties in with my previous point that long-term trends (>18 months post-law) were flat after the law. Also, note that head injuries dropped from 1288 in the pre-law period to 866 post-law, while arm injuries took only a slight dip from 1158 to 1062.
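As a back-of-envelope check (my illustration here, not a calculation from the paper), those counts correspond to roughly a one-third drop in head injuries against a far smaller dip in arm injuries:

```python
# Percentage drops implied by the pre-law vs post-law counts
# quoted above (head: 1288 -> 866, arm: 1158 -> 1062).

def pct_drop(before, after):
    return 100 * (before - after) / before

head_drop = pct_drop(1288, 866)   # about 32.8%, "almost a third"
arm_drop = pct_drop(1158, 1062)   # about 8.3%, "a slight dip"
```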
- Equal exposure assumption
Your statement is pretty much verbatim from our paper. What we didn't discuss, and in retrospect probably should have, is that the assumption seemed to work well for arm injuries and less so for leg injuries. The arms' proximity to the head is likely the reason, i.e., injuries to body parts near each other are more correlated than those farther apart. Further, the monthly change in the head/arm ratio is 0.997 (95% CI: 0.978-1.016) before the law and 1.006 (95% CI: 0.96-1.06) afterwards. These estimates are virtually textbook examples of no effect/difference (ratio estimates near 1 with tight confidence intervals). Also, for the record, this ratio drops drastically after the law (0.725, 95% CI: 0.539-0.974).
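The published estimates come from a regression model fitted to the monthly series. As a much cruder sanity check (my sketch, not the paper's method), one can compare the aggregate head-to-arm ratio after the law with the ratio before it, attaching a 95% confidence interval on the log scale under an independent-Poisson-counts assumption. Because this ignores the trend structure, it will not reproduce the published 0.725 (0.539-0.974) exactly, but it points the same way:

```python
import math

# Crude step-change estimate: (post-law head/arm ratio) divided by
# (pre-law head/arm ratio), using the aggregate counts quoted above.
# CI assumes four independent Poisson counts; the paper's published
# estimate comes from a regression model, so the numbers differ.
head_pre, arm_pre = 1288, 1158
head_post, arm_post = 866, 1062

ratio = (head_post / arm_post) / (head_pre / arm_pre)
se_log = math.sqrt(1 / head_pre + 1 / arm_pre
                   + 1 / head_post + 1 / arm_post)
lo = math.exp(math.log(ratio) - 1.96 * se_log)
hi = math.exp(math.log(ratio) + 1.96 * se_log)
# ratio is about 0.73, with the whole interval below 1:
# head injuries fell relative to arm injuries when the law came in.
```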
- Comments about risk compensation
Many of your arguments seem to be based on #2 and #3, but there are additional reasons I don't agree with this argument. The primary reason for comparing head to limb injuries over time is that changes in the cycling environment or in cycling rates would affect both. I'd recommend reading Povey et al. (1999), as they do a better job of explaining this. The idea is that if risk compensation were happening due to more helmet wearing, all injuries would increase (as you point out); instead, head injuries dropped by almost a third while arm injuries were essentially flat. It has also been suggested the drop is due to fewer people cycling, but then limb injuries would have dropped as well; instead, they dipped only slightly. Further, any combination of risk compensation or cycling-rate fluctuations would be accounted for by limb injuries (the same holds for distances cycled). It is true we did not explicitly account for cyclist behaviour or types of cycling, but a person cannot separate their head from their limbs when they get on a bike. So, if a person changes their cycling behaviour after the law, it will affect both.
- Comments about fatalities not being included
As I mentioned in my previous email, hospitalisation data can include some fatalities but will not include all; simply put, Australian data isn't collected that way. I've got a few points about that. First, we used hospitalisation data uniformly before and after the law. If, in fact, helmets turn fatalities into survivable injuries as suggested, you would expect an increase in head injuries post-law (which would make it more difficult to find a helmet law benefit). As mentioned above, long-term trends for head injuries were flat post-law. Second, fatalities are rare, as I mentioned, and would thus have little influence on the trends in serious bicycle injuries (hospitalisations and fatalities). In another paper, we found only 39 bicycle-related deaths from all causes over a five-year period (2000/01 - 2004/05) (S. Chong et al., Accident Analysis and Prevention 42 (2010) 290-296). Any cycling-related death is unacceptable, in my opinion, but the inclusion or exclusion of fatalities has little influence on our analysis.
Hospitalised head and limb injuries are quite serious. Most bicycle injuries probably go unreported, and most that are reported do not lead to a hospitalisation (many that seem serious on the surface probably only go to the ED and don't get captured as a hospitalisation). Hospitalised injuries are the ones that can have life-long detrimental effects. In my opinion, a drop of a third in those injuries after a policy change is a clear benefit.
You may have strong views about helmets or helmet laws, but we feel that we have been transparent in our analysis and have given reasonable justification for all aspects of our methods including their limitations. Throughout the process we aimed to be objective and I believe we’ve succeeded on that account.