Plotting Data That Doesn’t Exist is a Bad Idea

I came across a blog post recently that gives someone’s take on bicycle helmet legislation in New Zealand (links here and here). The post includes the plot below, created by Chris Gillham on his anti-helmet advocacy website http://www.cycle-helmets.com (the NZ analysis is here). Gillham is also an editorial board member of the Bicycle Helmet Research Foundation.

[Figure: newzealandhelmetsv6 — Gillham’s plot of NZ cyclist numbers and estimated injuries per 100,000 cyclists]

The figure shows the number of cyclists in New Zealand declining around the time of the helmet law in 1994 and remaining roughly flat thereafter, while Gillham’s estimate of injuries per 100,000 cyclists increased over the same period.

This figure (and others) is discussed on the Wikipedia page for Bicycle Helmets in New Zealand. It states

Australian journalist Chris Gillham [19] compiled an analysis of data from Otago University and the Ministry of Transport, showing a marked decline in cycling participation immediately following the helmet law introduction in 1994. At the same time as the number of cyclists aged over 5 years approximately halved, the injury rate approximately doubled. Noting both the decline in numbers and increase in injury rate preceded the law’s introduction at the start of 1994, possibly attributable to the fact that heavy promotion of helmets had been ongoing in the lead-up to the law’s introduction. This phenomenon of just helmet promotion leading to a reduction in cycling has been witnessed in several countries.[20] See Figure 2.

The problem here is that much of the cycling participation data shown simply does not exist.

The link takes you to Reports and Fact Sheets from the NZ Household Travel Survey. In the Cycling Fact Sheet, Tables 5-7 include historical data from past surveys taken during the years 1989/90, 1997/98 and 2003-2012. So, there are no data prior to 1989, from 1991 to 1996, or from 1999 to 2002. Importantly, there is no cycling participation data in a 6-year window around the NZ helmet law date of 1 January 1994. Here’s a plot that accurately represents the NZ cycling participation survey data.

[Figure: NZ_Participation — cycling participation from the NZ Household Travel Survey, plotted only for the years actually surveyed]

Although there are only two surveys prior to 2003, Gillham’s plot contains data for each year. Where did this new data come from? Gillham states

This pop-up table is based on the Ministry of Transport surveys of 5yo+ cycling participation as a percentage of population, as displayed above, with trends smoothed to compensate for the irregularity of the survey timeline.

I find this result quite curious. There are only two surveys, and therefore only two points, before 2003. Fitting a linear model requires the estimation of an intercept, a slope and additional parameters for any changes in the initial linear pattern. Starting with 1988, I count two changes in the trend up to 1997 (could be three, but the 1991 data is hard to see). Therefore, four parameters would need to be estimated to reproduce this “smooth” plot.

To estimate the parameters of a linear model, the number of data points must be at least the number of parameters. Since there are only two data points, only two parameters can be estimated, i.e., a slope and an intercept. Therefore, the “trend” given in Gillham’s plot is impossible given the data available. Note the red line (injuries per 100,000 cyclists) also relies on the cycling participation data, so it is incorrect as well.
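To make the identifiability problem concrete, here is a minimal sketch (with hypothetical participation values, since the point does not depend on the actual numbers) showing that a piecewise-linear trend with an intercept, a slope and two slope changes cannot be estimated from the two pre-2003 surveys:

```python
import numpy as np

# The only two pre-2003 surveys (1989/90 and 1997/98); participation values
# are hypothetical placeholders purely for illustration.
years = np.array([1989.5, 1997.5])
participation = np.array([18.0, 12.0])

# Design matrix for a piecewise-linear trend: intercept, slope, and two slope
# changes (knot locations around 1991 and at the 1994 law are assumptions).
t = years - years[0]
X = np.column_stack([
    np.ones_like(t),                         # intercept
    t,                                       # slope
    np.maximum(t - (1991.0 - years[0]), 0),  # slope change around 1991
    np.maximum(t - (1994.0 - years[0]), 0),  # slope change at the 1994 law
])

beta, residuals, rank, sv = np.linalg.lstsq(X, participation, rcond=None)
print(rank)  # 2: only two of the four parameters are identifiable from two
             # data points, so the four-parameter "smoothed" trend cannot be
             # estimated from these surveys alone.
```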

There is also a problem with computing injuries per cyclist based on these surveys. The Household Travel Surveys capture cycling for transport, so they would not reflect any changes in off-road cycling such as mountain biking. It might be reasonable to compute injuries per cyclist for those involved in a motor vehicle collision, but doing so for injuries from non-motor vehicle collisions would be tenuous at best.

These issues are especially troubling because Gillham’s analysis forms part of the research knowledge base on Wikipedia where, I believe, most laypeople and media types get their information. Note that many of the contributors to the Wikipedia page are editorial board members of the anti-helmet organisation Bicycle Helmet Research Foundation. There have been discussions about limiting their negative and undue influence, but nothing has come of it.

So, why was there a drop in cycling between the first and second surveys?

It’s really hard to tell: there are only two time points, and with only one pre-law time point there is no way to assess changes relative to the helmet legislation. It could very well be that the decline in cycling participation started long before the helmet law. One NZ research article notes cycling participation in NZ declined steadily from 1986 onwards.

The website for the 1997/98 Travel Survey actually discusses the changes in on-road cycling since the 1989/90 survey.

Between 1989/90 and 1997/98, on-road cycling has decreased by 19 percent, with the largest decrease among school-age children and teenagers. Other countries have also seen large reductions in cycling (for example, cycling in Great Britain has fallen by 20 percent over the same period*). Once an almost universal mode of transport for school children, concern about safety has seen cycling to school become less popular. However, there has been an increase in cycling, particularly longer trips, among the 20-24 age group.

* Source: “Road Accidents Great Britain 1998 The Casualty Report” (September 1999), Department of the Environment, Transport and the Regions, United Kingdom.

Since Great Britain does not have any form of helmet legislation and its cycling participation declined at a rate similar to NZ’s, the evidence does not support the hypothesis that the NZ helmet law deterred cycling.

Why I Started This Blog

I started this blog late last year, in part, as a response to the plethora of misinformation about bicycle helmets that exists on the internet. I believe much of the problem is statistical, in that many people either don’t understand or misuse statistical methods and/or data.

As an academic, the dilemma I found myself in was how to address criticism that originates outside the peer-review environment. When criticism is published in a peer-reviewed journal, it is usually possible to respond in a Letter to the Editor (or similar format). For what it’s worth, the strangely named Bicycle Helmet Research Foundation has an editorial board but no clear avenue for responding to their criticisms — there is not even a comments section on their posts.

What follows is an email I wrote to Paul Jakma, the author of a blog post that was critical of one of my papers (he mentions he made a similar response here). As I pointed out to him, much of what he wrote was factually incorrect and he misunderstood either our analysis or the data used. To date, he has not corrected his post and, in that time, has left a comment pointing to Chris Rissel’s rejoinder to our paper while ignoring our response. He also seems to have ignored (or is unaware of) our paper showing the benefit of helmet legislation was maintained over the following two decades and there was no “apparent detrimental impact on safety in terms of head injury states” as he states in his post.

I could be wrong, but he doesn’t seem interested in getting his facts correct about our research. I have published my response so that ultimately the reader can make up their mind about the truth.

Email to Paul Jakma (19 January 2012)

I am confused by some of the comments on your blog and on the referring websites.

  1. “Injury rates are seasonal, and they have only very limited data (less than a year) on pre-law rates”

Your first point is clearly true, and seasonality makes monthly injury counts more variable than yearly counts. However, we accounted for that using the X11 method, which adjusts for seasonal and abnormal patterns, as stated in the paper. Your second point is not true – we used head and limb injury counts 18 months before the law and 18 months afterwards. This gives us plenty of time points (n=36 months) to estimate any trend effects. Inspection of the deviance residuals (not published) indicated we accounted for any seasonal or abnormal variability.
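For readers unfamiliar with seasonal adjustment, here is a rough illustrative sketch on simulated monthly counts. The paper used the X11 method; as a simple stand-in (not the procedure used in the paper), statsmodels’ seasonal_decompose strips a 12-month seasonal pattern from a monthly series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Simulated monthly injury counts (hypothetical data, 36 months) with a
# 12-month seasonal cycle, purely to illustrate what seasonal adjustment does.
rng = np.random.default_rng(0)
months = pd.date_range("1989-07-01", periods=36, freq="MS")
seasonal_pattern = 20 * np.sin(2 * np.pi * months.month.to_numpy() / 12)
counts = pd.Series(100 + seasonal_pattern + rng.normal(0, 5, 36), index=months)

# Decompose into trend, seasonal and residual components, then remove the
# seasonal component. (The paper used X11, a more elaborate procedure.)
decomp = seasonal_decompose(counts, model="additive", period=12)
adjusted = counts - decomp.seasonal
print(adjusted.round(1).head())
```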

  2. “there appears to be a significant benefit only over a short-period of time”

This point seems to have floated around the blogosphere and is a complete myth. We explicitly state the following in the paper on page 2069.

The tendency towards stability in post-law trends with the inclusion of additional years of data suggests that either 18 months is not sufficient follow up time to accurately detect trends or that the trends shown represent temporally localised changes that did not persist beyond the analysis period. Based on the original analysis there is some evidence that the initial improvement in head injury rates diminished over the 18 months following legislation as shown by the increasing post-law head to limb injury ratios in Fig. 4. Alternatively, the longer term post-law trends being closer to parallel for head and limb injury rates (equivalent to a post-law horizontal line in Fig. 4A and C) supports the idea that the legislation attributable improvement was maintained.

This paragraph seems to be routinely ignored by those who’ve commented on the paper. We chose 18 months post-law to balance the usable information available pre-law. This is the right approach statistically, so that neither the pre-law nor the post-law information dominates the analysis. However, the use of more post-law data demonstrated the benefit was maintained.

  3. “the helmet law has managed to turn a decreasing head injury rate into an increasing head injury rate”

This ties in with my previous point that long term trends (>18 months post law) were flat after the law. Also, note that head injuries dropped from 1288 in the pre-law period to 866 in the post-law period, while arm injuries took only a slight dip from 1158 to 1062.

  4. Equal exposure assumption

Your statement is pretty much verbatim from our paper. What we didn’t discuss, and probably should have in retrospect, is that the assumption seemed to work pretty well for arm injuries and less so for leg injuries. The arm’s general proximity to the head is likely the reason, i.e., injuries to body parts near each other are more correlated than those farther apart. Further, the monthly change in the head/arm ratio before the law is 0.997 (95% CI: 0.978-1.016) and 1.006 (95% CI: 0.96-1.06) afterwards. These estimates are virtually textbook examples of no effect/difference (ratio estimate near 1 with tight confidence intervals). Also, for the record, the head/arm ratio itself drops drastically after the law (0.725, 95% CI: 0.539-0.974).

  5. Comments about risk compensation

Many of your arguments seem to be based on #2 and #3, but there are additional reasons I don’t agree with this argument. The primary reason for comparing head to limb injuries over time is that changes in the cycling environment or cycling rates would affect both. I’d recommend reading Povey et al. (1999) as they do a better job explaining this. The idea is that if risk compensation is happening due to more helmet wearing, all injuries would increase (as you point out), yet head injuries dropped by almost a third while arm injuries were flat. It has also been suggested the drop is due to fewer people cycling, but that would mean limb injuries would drop as well, whereas they only dipped slightly. Further, any combination of risk compensation or cycling rate fluctuations would be accounted for by limb injuries (the same is true for distances cycled). It is true we did not explicitly account for cyclist behaviour or types of cycling, but a person cannot separate their head from their limbs when they get on a bike. So, if a person changes their cycling behaviour after the law, it will affect both.

  6. Comments about fatalities not being included

As I mentioned in my previous email, hospitalisation data can include some fatalities but will not include all. Simply put, Australian data isn’t collected that way. I’ve got a few points about that. First, we used hospitalisation data uniformly before and after the law. If, in fact, helmets turn fatalities into survivable injuries as suggested, you would find an increase in head injuries post-law (which would make it more difficult to find a helmet law benefit). As mentioned above, long term trends for head injuries were flat post-law. That doesn’t necessarily mean the point is untrue; fatalities are rare, as I mentioned, and would thus have little influence on the trends in serious bicycle injuries (hospitalisations and fatalities). In a different paper, we found only 39 bicycle-related deaths from all causes in a five year period (2000/01 – 2004/05) (S. Chong et al. / Accident Analysis and Prevention 42 (2010) 290–296). Any cycling-related death is unacceptable, in my opinion, but the inclusion or exclusion of fatalities has little influence on our analysis.

Hospitalised head and limb injuries are quite serious. Most bicycle injuries probably go unreported, and most that are reported do not lead to a hospitalisation (most that seem serious on the surface probably only go to the ED and don’t get captured as a hospitalisation). These injuries are the ones that can have life-long detrimental effects. In my opinion, a drop of a third in those injuries after a policy change is a clear benefit.

You may have strong views about helmets or helmet laws, but we feel that we have been transparent in our analysis and have given reasonable justification for all aspects of our methods including their limitations. Throughout the process we aimed to be objective and I believe we’ve succeeded on that account.

Regards,

Jake

Effect Sizes, Overtaking Distance for Cyclists and Mandatory Helmet Legislation

I came across an interesting comment in an article by Alan Davies on The Urbanist. Davies discusses Ian Walker’s overtaking distance study, which found a statistically significant association between helmet wearing and motor vehicle overtaking distance. In line with a study I published last year, Davies did not find the results convincing with regard to helmets, but found other factors, like vehicle size and the cyclist’s position in the lane, more important.

As expected, there were several comments defending Walker’s conclusions among those with anti-helmet views. A comment left by RidesToWork (who I believe is Bicycle Helmet Research Foundation editorial board member Dorothy Robinson) states

Many readers of this blog appear keen to dismiss the arguments than the difference in passing room might make a difference. Yet they don’t apply the same critical thinking to claims about helmet laws – such as Figure 2 of http://www.cyclehelmets.org/1228.html The effect, if there is one, is nothing like as clear as the effect of helmet wearing on passing distance.

I found this comment dubious, in part, because it links to an unfounded and misleading criticism of one of my articles, but also because it’s quite a bold statement comparing the results from two very different studies. Walker’s analysis was a 2×5 ANOVA for motor vehicle overtaking distance while cycling in the UK, with 2 levels of helmet wearing and 5 levels of distance to the kerb. Our study used Poisson regression to model the rate of head and limb injury hospitalisations in NSW in the three year period surrounding the helmet law of 1991.

Note the Figure 2 mentioned in the comment has been dubiously manipulated through rescaling and shifting two time series so they overlap to produce a seemingly null effect. This seems to be a common tactic by anti-helmet advocates as I’ve discussed before here and here.

Regarding Robinson’s claim, the question is: how can someone claim the results from one study are more “clear” than those of another, disparate study?

One method is to compare their effect sizes (ES) on some common scale. It is well known that large sample sizes can make unimportant differences statistically significant (the opposite is true for small sample sizes, i.e., important differences may fail to reach statistical significance). Take the one sample z-test for the population mean \mu, for example. The right-tailed p-value converges to 0 as n tends to infinity for any fixed \bar{x}>\mu_0 and \sigma, i.e.,

\lim\limits_{n\rightarrow\infty}\mathcal{P}\left(Z>\sqrt{n}\dfrac{\bar{x}-\mu_0}{\sigma}\right)=0

As a result, a significant p-value is possible even when \bar{x} is infinitesimally close to \mu_0. For that reason, it is important not to overpower a study by choosing an excessively large sample size (Walker did just that in his study, computing sample size based on 98% power instead of the usual 80% or 90%). Effect sizes are essentially the observed test statistic without the influence of sample size (conversely, they can be used to compute sample size a priori).
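To make this concrete, here is a minimal sketch of the limit above, using hypothetical values of \bar{x}, \mu_0 and \sigma: the standardised difference stays tiny while the p-value shrinks purely because n grows.

```python
import numpy as np
from scipy import stats

# Hypothetical values: sample mean barely above the null mean, sigma = 15,
# so the standardised difference is only about 0.007.
xbar, mu0, sigma = 100.1, 100.0, 15.0

for n in [100, 10_000, 1_000_000, 100_000_000]:
    z = np.sqrt(n) * (xbar - mu0) / sigma
    p = stats.norm.sf(z)  # right-tailed p-value
    print(f"n = {n:>11,}  z = {z:7.2f}  p = {p:.3g}")

# The difference never changes, yet the p-value heads to 0 as n increases,
# which is why a tiny, unimportant effect can be "statistically significant".
```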

Jacob Cohen defined the difference in sample means divided by the sample standard deviation as an effect size for the two sample t-test, i.e.,

d=\dfrac{\bar{x}_1-\bar{x}_2}{s}.

Cohen further proposed operationally defined values of d\in\{0.2,0.5,0.8\} as small, medium and large effect sizes, respectively [1]. The reasoning given by Cohen regarding these values is

My intent was that medium ES represent an effect likely to be visible to the naked eye of a careful observer… I set small ES to be noticeably smaller than medium but not so small as to be trivial, and I set large ES to be the same distance above medium as small was below it.
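As a minimal sketch of the definition above (with hypothetical overtaking distances, in cm, purely for illustration), Cohen’s d with a pooled standard deviation can be computed as follows:

```python
import math
import statistics

def cohens_d(sample1, sample2):
    """Cohen's d for two samples using the pooled standard deviation."""
    n1, n2 = len(sample1), len(sample2)
    mean1, mean2 = statistics.fmean(sample1), statistics.fmean(sample2)
    var1, var2 = statistics.variance(sample1), statistics.variance(sample2)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical overtaking distances (cm) for two groups.
group_a = [118, 121, 117, 123, 120, 119]
group_b = [118, 120, 117, 122, 119, 119]
print(round(cohens_d(group_a, group_b), 2))  # ~0.26, a "small" effect on Cohen's scale
```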

The results from both Walker’s data and our study on mandatory helmet legislation can be converted to Cohen’s d. An F statistic with 1 numerator degree of freedom can be converted to Cohen’s d by

d=2\sqrt{\dfrac{F}{df_d}}

For helmet wearing, Walker reported F_{1,2313}=8.71, which corresponds to d=0.12. This categorizes helmet wearing as a trivial, and therefore unimportant, effect size by Cohen’s definition.
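The conversion is simple enough to check directly; a short sketch reproducing the d=0.12 figure from Walker’s reported F statistic:

```python
import math

def cohens_d_from_f(f_stat, df_denom):
    """Cohen's d from an F statistic with 1 numerator degree of freedom."""
    return 2.0 * math.sqrt(f_stat / df_denom)

# Walker's reported helmet effect: F(1, 2313) = 8.71
print(round(cohens_d_from_f(8.71, 2313), 2))  # 0.12, below Cohen's "small" threshold of 0.2
```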

It is also possible to convert to Cohen’s d for a linear model with a binary predictor x (takes on values 0 or 1) and corresponding coefficient \beta as

d=\dfrac{|\beta|}{se(\beta)}\sqrt{\dfrac{1}{n_1}+\dfrac{1}{n_2}}

where n_1 and n_2 are the group sample sizes. In our re-analysis of Walker’s data, the adjusted effect of helmet wearing is -0.058. There were 1206 and 1149 overtaking events without and with a helmet, respectively, which translates to d=0.16 (a more respectable value, although I doubt Walker or Robinson would ever agree with our results). Note that in my analysis, I did not use a square root transformation or remove 35 observations as Walker did, as these steps are unnecessary given the large sample size.
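Here is a sketch of that conversion. The standard error of the helmet coefficient is not reported in this post, so the se_beta value below is a hypothetical placeholder used only to show the arithmetic:

```python
import math

def cohens_d_from_coef(beta, se_beta, n1, n2):
    """Cohen's d for a binary predictor in a linear model."""
    return abs(beta) / se_beta * math.sqrt(1.0 / n1 + 1.0 / n2)

n_no_helmet, n_helmet = 1206, 1149   # overtaking events in each group
se_beta = 0.015                      # hypothetical standard error, not taken from the paper
print(round(cohens_d_from_coef(-0.058, se_beta, n_no_helmet, n_helmet), 2))  # ~0.16
```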

In our re-analysis of Walker’s paper, we also performed logistic regression to assess whether close overtaking was associated with helmet wearing as this is more relevant to cycling safety. Using the one meter rule as a cut point, we estimated a non-significant adjusted odds ratio of 1.13 (p=0.54). Odds ratios can also be transformed to Cohen’s d by

d=\dfrac{\sqrt{3}\,\ln(OR)}{\pi}

Using the adjusted odds ratio, we get d=0.07, which is again trivial by Cohen’s definition.

In our paper regarding mandatory helmet legislation, we reported a 27.5% or 31% decline in bicycle-related head injury relative to limb injury at the time of the NSW helmet law, depending on whether arm or leg injuries were used as the comparator. These results can also be written as rate ratios of 0.725 or 0.69 respectively. If we assume the rate ratio here is equivalent to an odds ratio, Cohen’s d is 0.18 and 0.20 respectively.

Our analysis made the implicit assumption that no cyclist wore helmets pre-law and all cyclists wore helmets post-law. If, instead, the comparison was relative to the approximate 55% increase in helmet wearing in NSW, we get rate ratios of 0.56 and 0.51 and transformed Cohen d’s of 0.32 and 0.37.
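Pulling the odds-ratio conversion together, here is a short sketch that reproduces the transformed d values quoted above (the rate ratios are treated as odds ratios, as assumed in the text):

```python
import math

def cohens_d_from_or(odds_ratio):
    """Cohen's d from an odds ratio via the logistic transformation."""
    return math.sqrt(3) * math.log(odds_ratio) / math.pi

estimates = [
    ("Walker close passing, adjusted OR", 1.13),
    ("NSW law, head vs arm injuries", 0.725),
    ("NSW law, head vs leg injuries", 0.69),
    ("NSW law assuming 55% uptake, head vs arm", 0.56),
    ("NSW law assuming 55% uptake, head vs leg", 0.51),
]

for label, or_value in estimates:
    print(f"{label}: d = {abs(cohens_d_from_or(or_value)):.2f}")
# Prints 0.07, 0.18, 0.20, 0.32 and 0.37, matching the values in the text.
```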

I also published a paper last year that proposed operationally defined effect sizes for odds ratios that did not require transformation to Cohen’s d or make distributional assumptions regarding effect sizes. These were 1.22, 1.86 and 3.00 for small, medium and large odds ratios (or, equivalently, 0.82, 0.54 and 0.33).

These results suggest the helmet wearing effect from Walker’s data is trivial and the effect of helmet legislation is about a medium effect size. Of course, these results are from one cyclist and from one Australian state, so over-generalizations should not be made without more data. However, note that Walker did repeat aspects of his first study comparing seven types of cyclists. The CASUAL type was the only one without a helmet, with a mean overtaking distance of 117.61cm. This was approximately dead center of the range of means (114.01cm – 122.12cm), suggesting the trivial helmet effect size from the original study was no coincidence.

  1. Cohen J (1992) A power primer. Psychological Bulletin 112: 155–159.