Effect Sizes, Overtaking Distance for Cyclists and Mandatory Helmet Legislation

I came across an interesting comment in an article by Alan Davies on The Urbanist. Davies discusses Ian Walker’s overtaking distance study, which found a statistically significant association between helmet wearing and motor vehicle overtaking distance. Consistent with a study I published last year, Davies did not find the results convincing with regard to helmets; he found other factors, such as vehicle size and the cyclist’s position in the lane, more important.

As expected, there were several comments defending Walker’s conclusions among those with anti-helmet views. A comment left by RidesToWork (who I believe is Bicycle Helmet Research Foundation editorial board member Dorothy Robinson) states

Many readers of this blog appear keen to dismiss the arguments than the difference in passing room might make a difference. Yet they don’t apply the same critical thinking to claims about helmet laws – such as Figure 2 of http://www.cyclehelmets.org/1228.html The effect, if there is one, is nothing like as clear as the effect of helmet wearing on passing distance.

I found this comment dubious, in part because it links to an unfounded and misleading criticism of one of my articles, but also because it makes quite a bold comparison between the results of two very different studies. Walker’s analysis was a 2×5 ANOVA for motor vehicle overtaking distance while cycling in the UK, with 2 levels of helmet wearing and 5 levels of distance to the kerb. Our study used Poisson regression to model the rate of head and limb injury hospitalizations in NSW in the three-year period surrounding the helmet laws in 1991.

Note the Figure 2 mentioned in the comment has been dubiously manipulated through rescaling and shifting two time series so they overlap to produce a seemingly null effect. This seems to be a common tactic by anti-helmet advocates as I’ve discussed before here and here.

Regarding Robinson’s claim, the question is how someone can claim the results from one study are more “clear” than those from a second, disparate study.

One method is to compare their effect sizes (ES) on some common scale. It is well known that large sample sizes can make unimportant differences statistically significant (the opposite is true for small sample sizes, i.e., important differences may not reach statistical significance). Take the one-sample z-test for the population mean \mu, for example. The right-tailed p-value converges to 0 as n tends to infinity for fixed values of \bar{x}>\mu_0 and \sigma, i.e.,

p = 1 - \Phi\left(\frac{\bar{x}-\mu_0}{\sigma/\sqrt{n}}\right) \rightarrow 0 \text{ as } n\rightarrow\infty

As a result, a significant p-value is possible even when \bar{x} is infinitesimally close to \mu_0. For that reason, it is important not to overpower a study by choosing an excessively large sample size (Walker did just that in his study, computing sample size based on 98% power instead of the usual 80% or 90%). Effect sizes are essentially the observed test statistic without the influence of sample size (conversely, they can be used to compute sample size a priori).
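To see this concretely, here is a minimal sketch in Python (the values are chosen for illustration, not taken from Walker’s study) of how the same fixed difference flips from non-significant to “significant” purely by increasing n:

```python
from math import erf, sqrt

def z_test_p_value(xbar, mu0, sigma, n):
    """Right-tailed p-value for a one-sample z-test."""
    z = (xbar - mu0) / (sigma / sqrt(n))
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))  # 1 - Phi(z)

# The same fixed difference of 0.01 (sigma = 1) at two sample sizes:
p_small = z_test_p_value(100.01, 100, 1, 100)         # n = 100
p_large = z_test_p_value(100.01, 100, 1, 10_000_000)  # n = 10 million
assert p_small > 0.4    # nowhere near significant
assert p_large < 0.001  # "significant" despite a trivial difference
```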

Jacob Cohen defined the difference in sample means divided by the pooled sample standard deviation as an effect size for the two-sample t-test, i.e.,

d = \frac{\bar{x}_1 - \bar{x}_2}{s}

Cohen further proposed operationally defined values of d\in\{0.2,0.5,0.8\} as small, medium and large effect sizes respectively[1]. The reasoning given by Cohen regarding these values is

My intent was that medium ES represent an effect likely to be visible to the naked eye of a careful observer… I set small ES to be noticeably smaller than medium but not so small as to be trivial, and I set large ES to be the same distance above medium as small was below it.

The results from both Walker’s data and our study on mandatory helmet legislation can be converted to Cohen’s d. An F statistic with 1 numerator degree of freedom can be converted to Cohen’s d by

d = 2\sqrt{\frac{F}{df_{error}}}

where df_{error} is the denominator (error) degrees of freedom. For helmet wearing, Walker reported F_{1,2313}=8.71 which corresponds to d=0.12. This categorizes helmet wearing as a trivial, and therefore unimportant, effect size by Cohen’s definition.
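The conversion is easy to check numerically; a short sketch using Walker’s reported values:

```python
from math import sqrt

def f_to_d(f_stat, df_error):
    """Convert an F statistic with 1 numerator df to Cohen's d."""
    return 2 * sqrt(f_stat / df_error)

d = f_to_d(8.71, 2313)  # Walker's reported F(1, 2313) = 8.71
assert round(d, 2) == 0.12
```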

It is also possible to convert to Cohen’s d for a linear model with a binary predictor x (taking values 0 or 1) and corresponding coefficient \beta as

d = \frac{\hat{\beta}}{\hat{\sigma}} = t\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}

where \hat{\sigma} is the residual standard deviation, t is the coefficient’s t statistic, and n_1 and n_2 are group sample sizes. In our re-analysis of Walker’s data, the adjusted effect of helmet wearing is -0.058. There were 1206 and 1149 overtaking events when not wearing or wearing a helmet respectively, which translates to d=0.16 (a more respectable value, although I doubt Walker or Robinson would ever agree with our results). Note that in my analysis, I did not use a square root transformation or remove 35 observations as Walker did, as these considerations are unnecessary given the large sample size.

In our re-analysis of Walker’s paper, we also performed logistic regression to assess whether close overtaking was associated with helmet wearing, as this is more relevant to cycling safety. Using the one meter rule as a cut point, we estimated a non-significant adjusted odds ratio of 1.13 (p=0.54). Odds ratios can also be transformed to Cohen’s d by

d = \ln(OR)\frac{\sqrt{3}}{\pi}

Using the adjusted odds ratio, we get d=0.07 which is again trivial by Cohen’s definition.
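This transformation (the standard logistic approximation) is straightforward to verify:

```python
from math import log, pi, sqrt

def odds_ratio_to_d(odds_ratio):
    """Cohen's d from an odds ratio: d = ln(OR) * sqrt(3) / pi."""
    return log(odds_ratio) * sqrt(3) / pi

d = odds_ratio_to_d(1.13)  # our adjusted odds ratio for helmet wearing
assert round(d, 2) == 0.07
```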

In our paper regarding mandatory helmet legislation, we reported a decline in bicycle related head injury relative to limb injury at the time of the NSW helmet law of 27.5% or 31% depending on whether arm or leg injuries were used as the comparator. These results can also be written as rate ratios of 0.725 or 0.69 respectively. If we assume the rate ratio here is equivalent to an odds ratio, Cohen’s d is 0.18 and 0.20 respectively.
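Applying the same odds-ratio transformation to these rate ratios reproduces the quoted values:

```python
from math import log, pi, sqrt

def rate_ratio_to_d(rate_ratio):
    """Cohen's d, treating a rate ratio like an odds ratio: |ln(RR)| * sqrt(3) / pi."""
    return abs(log(rate_ratio)) * sqrt(3) / pi

assert round(rate_ratio_to_d(0.725), 2) == 0.18  # head vs arm injuries
assert round(rate_ratio_to_d(0.69), 2) == 0.20   # head vs leg injuries
```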

Our analysis made the implicit assumption that no cyclist wore helmets pre-law and all cyclists wore helmets post-law. If, instead, the comparison was relative to the approximate 55% increase in helmet wearing in NSW, we get rate ratios of 0.56 and 0.51 and transformed Cohen d’s of 0.32 and 0.37.
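One way to reproduce these figures (my assumption about the arithmetic, not something stated explicitly in our paper) is to scale the observed log rate ratio by the 0.55 change in helmet wearing:

```python
from math import exp, log, pi, sqrt

def rescale_rate_ratio(rr_observed, uptake_change=0.55):
    """Scale an observed log rate ratio to a hypothetical 0% -> 100% change in helmet wearing."""
    return exp(log(rr_observed) / uptake_change)

def rate_ratio_to_d(rr):
    """Same transformation as used for odds ratios."""
    return abs(log(rr)) * sqrt(3) / pi

rr_arm = rescale_rate_ratio(0.725)
rr_leg = rescale_rate_ratio(0.69)
assert round(rr_arm, 2) == 0.56 and round(rr_leg, 2) == 0.51
assert round(rate_ratio_to_d(rr_arm), 2) == 0.32
assert round(rate_ratio_to_d(rr_leg), 2) == 0.37
```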

I also published a paper last year that proposed operationally defined effect sizes for odds ratios that did not require transformation to Cohen’s d or make distributional assumptions regarding effect sizes. These were 1.22, 1.86 and 3.00 for small, medium and large odds ratios (or, equivalently, their reciprocals 0.82, 0.54 and 0.33 for protective effects).

These results suggest the helmet wearing effect from Walker’s data is trivial and the effect of helmet legislation is about a medium effect size. Of course, these results are from one cyclist and from one Australian state. So, over-generalizations should not be made without more data. However, note that Walker did repeat aspects of his first study comparing seven types of cyclists. The CASUAL type was the only one without a helmet with a mean overtaking distance of 117.61cm. This was approximately dead center of the means (range: 114.01cm – 122.12cm) suggesting the trivial helmet effect size from the original study was no coincidence.

  1. Cohen J (1992) A power primer. Psychological Bulletin 112: 155–159.

More Misinformation from Freestylecyclists.org

I recently came across another excellent cycling article from Alan Davies at The Urbanist. In the article, Davies discusses claims that cycling accidents are on the rise. This is due to an increase in the cycling road toll in 2013. This may be a concern, but it’s impossible to establish a trend from one data point.

Davies briefly mentions helmet legislation, but notes it isn’t relevant to the current discussion (and I agree). However, in the comments, I found a few troubling responses regarding that topic. Strewth states

But we also know from analysis done in the 1990s that among cyclists, the decline in non-head injuries over this period was as great or greater than the decline in head injuries.

No citation or link is given to support this claim. This comment is strange since a previous study of mine estimates a 35% drop in cycling head injury hospitalizations with the NSW helmet law while arm and leg injuries dropped by only 11% and 6% respectively. A more comprehensive response was left by Linda Ward.

Another comment by Nik Dow states

A fact-based explanation is linked (see “detailed explanation”) http://www.freestylecyclists.org/do-helmet-laws-make-much-difference/ and covers the introduction of demerit points and ramping up of speed and red-light cameras.

The link takes you to Freestylecyclists.org, an anti-helmet organization I’ve discussed previously. My previous post pointed to misinformation presented by national spokesperson Alan Todd and the given link is more of the same.

The following plot of cycling and pedestrian fatalities from 1980–2006 in Australia is given, and the anonymous author concludes that declines in cycling deaths were mostly “due to massive ramping up of speed and red-light cameras, together with the introduction of demerit points.” I assume this conclusion rests on the pedestrian and cycling time series being placed on top of each other.


What is problematic here is that this is not an accurate representation of the fatality data (much of it can be found here). The author has apparently rescaled the pedestrian time series to get the two series to overlay. As I’ve discussed before, this is a strategy too often used to misrepresent the actual temporal relationships in data. In the comments, Davies also questions the accuracy of this figure.

Plotting both time series on the same graph is problematic here since pedestrian fatalities have historically dwarfed those for cyclists (in 1989 they were 501 and 98 respectively). One method to address this problem is to index the time series to a starting value. An advantage of this approach is you’re able to plot trends without distorting scales; however, a disadvantage is the actual data is not being presented and each data point is a comparison with some starting value.
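A minimal sketch of the indexing approach (the counts below are hypothetical, for illustration only):

```python
def index_to_base_year(series):
    """Express each value as a percentage of the first (base-year) value."""
    base = series[0]
    return [100 * value / base for value in series]

# Hypothetical fatality counts for illustration (not the actual data):
pedestrians = [500, 450, 420, 390]
cyclists = [100, 98, 60, 58]
assert index_to_base_year(pedestrians)[0] == 100.0
assert index_to_base_year(cyclists)[0] == 100.0  # both start at 100, so trends are comparable
```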

Below is such a plot for the period 1971 to 2013 for cycling and pedestrian fatalities in Australia (the vertical red lines represent the first and last helmet law dates in Australia).


This looks virtually nothing like the Freestylecyclists.org plot. Relative to 1971, pedestrian fatalities have steadily declined over the following 40 years, while cycling fatalities were flat up to 1990, followed by a substantial decline by 1992 and flat thereafter. This does not suggest declines in cycling fatalities are associated with general road safety improvements such as demerit points or speed cameras. Further, the lack of temporal agreement between cycling and pedestrian fatalities prior to 1990 raises questions regarding pedestrians as a suitable comparator for cyclists.

It is not appropriate to make decisions about trends from eye-balling a figure, so I fit an interrupted time series model to this data. The Poisson model I used was

\log(\mu_t) = \beta_0 + \beta_1 TIME_t + \beta_2 LAW_t + \beta_3 TIME_t \times LAW_t

where TIME is centered at 1991 and LAW is an indicator that takes on the value 1 for years 1991 onwards and 0 everywhere else. My results suggest no pre-1990 trend for cycling fatalities (p=0.84) and a 42% decline in cycling fatalities at 1991 (p<0.001). Residual plots indicate good overall fit, although the 2013 observation may exhibit high leverage.
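As an arithmetic check on the reported level change (the coefficient below is inferred from the quoted 42%, not taken from the fitted model):

```python
from math import exp, log

def percent_change(beta):
    """Percentage change in the rate implied by a Poisson regression coefficient."""
    return 100 * (exp(beta) - 1)

beta_law = log(1 - 0.42)  # a LAW coefficient consistent with a 42% decline at 1991
assert round(percent_change(beta_law)) == -42
```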


Something profound happened for cycling fatalities in Australia between 1990–1992. It is often argued helmet legislation deters cycling; however, this is an argument I largely reject due to conflicting evidence from data of low quality.[1] Still, this does not necessarily indicate helmet legislation is a causal factor in lowering cycling fatalities. Yet, this analysis does rule out the general road safety interventions proposed as a causal influence by Freestylecyclists.org.

  1. Olivier, J., Grzebieta, R., Wang, J.J.J. & Walter, S. (2013). Statistical Errors in Anti-Helmet Arguments. Australasian College of Road Safety Conference.