Effect Sizes, Overtaking Distance for Cyclists and Mandatory Helmet Legislation

I came across an interesting comment in an article by Alan Davies on The Urbanist. Davies discusses Ian Walker’s overtaking distance study, which found a statistically significant association between helmet wearing and motor vehicle overtaking distance. In line with a study I published last year, Davies did not find the results convincing with regard to helmets; he found other factors, such as vehicle size and the cyclist’s position in the lane, more important.

As expected, there were several comments from those with anti-helmet views defending Walker’s conclusions. A comment left by RidesToWork (who I believe is Bicycle Helmet Research Foundation editorial board member Dorothy Robinson) states:

Many readers of this blog appear keen to dismiss the arguments than the difference in passing room might make a difference. Yet they don’t apply the same critical thinking to claims about helmet laws – such as Figure 2 of http://www.cyclehelmets.org/1228.html The effect, if there is one, is nothing like as clear as the effect of helmet wearing on passing distance.

I found this comment dubious, in part because it links to an unfounded and misleading criticism of one of my articles, but also because it makes quite a bold statement comparing the results from two very different studies. Walker’s analysis was a 2×5 ANOVA of motor vehicle overtaking distance while cycling in the UK, with 2 levels of helmet wearing and 5 levels of distance to the kerb. Our study used Poisson regression to model the rate of head and limb injury hospitalizations in NSW in the three-year period surrounding the helmet laws in 1991.

Note that the Figure 2 mentioned in the comment has been dubiously manipulated by rescaling and shifting two time series so that they overlap, producing a seemingly null effect. This seems to be a common tactic among anti-helmet advocates, as I’ve discussed before here and here.

Regarding Robinson’s claim, the question is how someone can assert that the results from one study are more “clear” than those from another, very different study.

One method is to compare their effect sizes (ES) on some common scale. It is well known that large sample sizes can make unimportant differences statistically significant (and, conversely, small sample sizes can leave important differences statistically non-significant). Take the one-sample z-test for the population mean \mu, for example. The right-tailed p-value converges to 0 as n tends to infinity for fixed values of \bar{x} and \sigma, i.e.,

\lim\limits_{n\rightarrow\infty}\mathcal{P}\left(Z>\sqrt{n}\dfrac{\bar{x}-\mu_0}{\sigma}\right)=0

As a result, a significant p-value is possible even when \bar{x} is infinitesimally close to \mu_0. For that reason, it is important not to overpower a study by choosing an excessively large sample size (Walker did just that, computing his sample size based on 98% power instead of the usual 80% or 90%). An effect size is essentially the observed test statistic with the influence of sample size removed (conversely, it can be used to compute the required sample size a priori).
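To see this numerically (the values below are made up purely for illustration, not taken from either study), the right-tailed p-value can be computed in R for a fixed \bar{x} and \sigma as n grows:

xbar <- 10.1    # hypothetical sample mean
mu0 <- 10.0     # null value
sigma <- 1      # known standard deviation
n <- c(10, 100, 1000, 10000, 100000)
z <- sqrt(n) * (xbar - mu0) / sigma
data.frame(n = n, z = round(z, 2), p = signif(pnorm(z, lower.tail = FALSE), 3))

The same small difference of 0.1 goes from clearly non-significant to overwhelmingly significant purely by increasing n.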

Jacob Cohen defined the difference in sample means divided by the sample standard deviation as an effect size for the two sample t-test, i.e.,

d=\dfrac{\bar{x}_1-\bar{x}_2}{s}.

Cohen further proposed the operationally defined values d\in\{0.2,0.5,0.8\} as small, medium and large effect sizes respectively [1]. The reasoning given by Cohen regarding these values is:

My intent was that medium ES represent an effect likely to be visible to the naked eye of a careful observer… I set small ES to be noticeably smaller than medium but not so small as to be trivial, and I set large ES to be the same distance above medium as small was below it.
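As a minimal sketch of the calculation (simulated data only, with s taken to be the usual pooled standard deviation):

cohens_d <- function(x1, x2) {
  s_pooled <- sqrt(((length(x1) - 1) * var(x1) + (length(x2) - 1) * var(x2)) /
                     (length(x1) + length(x2) - 2))
  (mean(x1) - mean(x2)) / s_pooled
}
set.seed(1)
cohens_d(rnorm(100, mean = 0.5), rnorm(100))   # true effect is 0.5, i.e. "medium"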

The results from both Walker’s data and our study on mandatory helmet legislation can be converted to Cohen’s d. An F statistic with 1 numerator degree freedom can be converted to Cohen’s d by

d=2\sqrt{\dfrac{F}{df_d}}

For helmet wearing, Walker reported F_{1,2313}=8.71, which corresponds to d=0.12. This categorizes helmet wearing as a trivial, and therefore unimportant, effect size by Cohen’s definition.
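Plugging Walker’s reported values into the conversion above:

F_helmet <- 8.71     # Walker's F statistic for helmet wearing
df_denom <- 2313     # denominator degrees of freedom
2 * sqrt(F_helmet / df_denom)   # approximately 0.12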

It is also possible to convert to Cohen’s d for a linear model with a binary predictor x (takes on values 0 or 1) and corresponding coefficient \beta as

d=\dfrac{|\beta|}{se(\beta)}\sqrt{\dfrac{1}{n_1}+\dfrac{1}{n_2}}

where n_1 and n_2 are the group sample sizes. In our re-analysis of Walker’s data, the adjusted effect of helmet wearing is -0.058. There were 1206 and 1149 overtaking events when not wearing and wearing a helmet respectively, which translates to d=0.16 (a more respectable value, although I doubt Walker or Robinson would ever agree with our results). Note that in my analysis, I did not use a square root transformation or remove 35 observations as Walker did, since these steps are unnecessary given the large sample size.
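A sketch of this conversion in R is below; the group sizes are those reported above, but the standard error is a placeholder chosen only for illustration, not the value from our re-analysis:

d_from_coef <- function(beta, se, n1, n2) {
  (abs(beta) / se) * sqrt(1 / n1 + 1 / n2)
}
d_from_coef(beta = -0.058, se = 0.015, n1 = 1206, n2 = 1149)   # se is hypothetical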

In our re-analysis of Walker’s paper, we also performed logistic regression to assess whether close overtaking was associated with helmet wearing as this is more relevant to cycling safety. Using the one meter rule as a cut point, we estimated a non-significant adjusted odds ratio of 1.13 (p=0.54). Odds ratios can also be transformed to Cohen’s d by

d=\dfrac{\sqrt{3}log(OR)}{\pi}

Using the adjusted odds ratio, we get d=0.07 which is again trivial by Cohen’s definition.
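In R, this conversion is a one-liner:

d_from_or <- function(or) sqrt(3) * log(or) / pi   # logit-based conversion
d_from_or(1.13)   # approximately 0.07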

In our paper regarding mandatory helmet legislation, we reported a decline in bicycle-related head injury relative to limb injury at the time of the NSW helmet law of 27.5% or 31%, depending on whether arm or leg injuries were used as the comparator. These results can also be written as rate ratios of 0.725 and 0.69 respectively. If we assume a rate ratio here is equivalent to an odds ratio, Cohen’s d is 0.18 and 0.20 respectively.

Our analysis made the implicit assumption that no cyclists wore helmets pre-law and all cyclists wore helmets post-law. If, instead, the comparison is made relative to the approximate 55% increase in helmet wearing in NSW, we get rate ratios of 0.56 and 0.51 and transformed Cohen’s d values of 0.32 and 0.37.
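These figures can be reproduced roughly as follows; the rescaling step reflects my reading of the calculation (dividing the log rate ratio by the 0.55 increase in helmet wearing), so treat it as a sketch rather than the exact computation from the paper:

d_from_or <- function(or) sqrt(3) * log(or) / pi
rr <- c(0.725, 0.69)                # head vs limb injury rate ratios
abs(d_from_or(rr))                  # approximately 0.18 and 0.20
rr_rescaled <- exp(log(rr) / 0.55)  # approximately 0.56 and 0.51
abs(d_from_or(rr_rescaled))         # approximately 0.32 and 0.37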

I also published a paper last year that proposed operationally defined effect sizes for odds ratios that did not require transformation to Cohen’s d or make distributional assumptions regarding effect sizes. These were 1.22, 1.86 and 3.00 for small, medium and large odds ratios (or, equivalently, 0.82, 0.54 and 0.33).

These results suggest the helmet wearing effect from Walker’s data is trivial, while the effect of helmet legislation is around a medium effect size. Of course, these results come from a single cyclist and a single Australian state, so over-generalizations should not be made without more data. Note, however, that Walker did repeat aspects of his first study comparing seven types of cyclist. The CASUAL type was the only one without a helmet, with a mean overtaking distance of 117.61 cm. This was approximately dead center of the means (range: 114.01–122.12 cm), suggesting the trivial helmet effect size in the original study was no coincidence.

  1. Cohen J (1992) A power primer. Psychological Bulletin 112: 155–159.

New Zealand Helmet Law and Validity of a Regression Model

After my recent post regarding cycling fatalities in New Zealand, someone pointed me to a Wikipedia discussion regarding a peer-reviewed paper I co-authored that discussed methodological issues of papers assessing the NZ helmet law that became effective 1 January 1994.

There are criticisms of our paper from Dorothy Robinson, Richard Keatinge and Nigel Perry (all editorial board members of the anti-helmet organization Bicycle Helmet Research Foundation) regarding our assessment of a paper by Robinson (2001), which was in turn a criticism of a paper by Povey et al. (1999). In both papers, the ratio of head injuries to limb fractures was modelled over the period 1990–1996. Povey et al. found that increases in helmet wearing were associated with a decline in the log of the head/limb injury ratio for three age groups in non-motor vehicle accidents and for all ages in motor vehicle accidents.

Robinson criticized Povey and colleagues for their “failure to fit time trends in their model” and claimed the observed benefit was an “artefact”. Her analysis focused solely on adults in non-motor vehicle accidents, ignoring the data for children and for motor vehicle accidents (which are often the most severe). This is curious considering the NZ helmet law applies to on-road cycling, so cyclists’ interactions with motor vehicles are the more relevant here.

In our paper, we noted that although Povey et al did not appear to check the assumptions of their model, inspection of the residuals suggests their model was valid. On Wikipedia, Robinson (under the pseudonym Dorre) reiterates her earlier criticism, stating “Povey did not take time trends into account” and suggesting this as the reason for finding a helmet benefit. She then states “most people would expect a claim that the model is ‘valid’ to imply there is evidence of causation!” It is unclear to me why Robinson, who claims to be a statistician, would make such a statement (along with other such statements in her paper and on Wikipedia).

Let me explain. The actual model fit by Povey et al. (1999) is

log(HEAD_{i}/LIMB_{i})=\alpha + \delta(HELMET_{i}) + \epsilon_{i}

where \epsilon_{i} \hbox{ for } i=1,\dots,n are assumed to be independent, normally distributed random variables with mean 0 and constant variance \sigma^2. This is usually stated as

\epsilon_i \overset{iid}{\sim} N(0,\sigma^2)

A linear regression model is valid if the above assumptions imposed on the \epsilon_i's are reasonable. Contrary to what Robinson suggests, no linear or generalized linear model carries an assumption that time trends must be fitted. It is assumed the errors are serially independent, but that is not equivalent to fitting time trends. Additionally, a valid linear model does not imply a causal relationship between the independent and dependent variables; the same caveat applies to Robinson's contention, which is essentially that time caused the decline.

The assumptions of a linear model can be checked using the residuals. The residuals are the differences between the observed and fitted values, written mathematically as

e_i=log(HEAD_{i}/LIMB_{i})-\left(\hat{\alpha}+\hat{\delta}(HELMET_{i})\right)

where \hat{\alpha} and \hat{\delta} are the intercept and slope estimates obtained using the method of least squares.

Using the observed residuals, the normality assumption can be assessed with a normal quantile plot, the linearity and constant variance assumptions can be checked with a scatterplot of the residuals, and serial independence can be checked using the Durbin-Watson statistic or, graphically, the autocorrelation function.

There is nothing in the residual plots that suggests the model used by Povey et al is not valid.

[Figure: residual diagnostics for the Povey et al model: normal quantile plot, residual plot and autocorrelation function]

Below is the R code to construct these plots, and to perform the Durbin-Watson test.

ratio <- c(1.40, 1.09, 1.07, 0.94, 0.86, 0.83, 0.77)   # head/limb injury ratio, 1990-1996
helmet <- c(30, 36, 41, 43, 92, 93, 87)                # helmet wearing (%)
reg <- lm(log(ratio) ~ helmet)                         # model fitted by Povey et al
par(mfrow = c(1, 3))
qqnorm(residuals(reg)); qqline(residuals(reg))         # normality
plot(residuals(reg) ~ helmet, ylab = 'Residuals', main = 'Residual Plot')   # linearity, constant variance
acf(residuals(reg), main = 'Autocorrelation Function')                      # serial correlation
library(lmtest); dwtest(reg)                           # Durbin-Watson test

Another concept that seems lost in the criticism is that Povey and colleagues were testing an a priori hypothesis. As such, their model was hypothesis driven and pre-determined without letting the data influence modelling decisions. This is an important consideration if the researcher is to avoid spurious correlations.

It is a shame what has happened to the Wikipedia pages on bicycle helmets. Many of the contributors, including Robinson, Keatinge and Perry, have clear conflicts of interest and routinely offer a biased view of the available research. I do plan on discussing, at some point, the negative influence this group and its website have had on our understanding of cycling safety.

The comments Robinson, Keatinge and Perry have made about my paper with Joanna Wang are a prime example of their negative influence. Although their discussion began on Wikipedia, these critics appear unaware that Wikipedia has an actual page discussing the validity of a regression model. Its first sentence states:

In statistics, regression model validation is the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained from regression analysis, are in fact acceptable as descriptions of the data.

As discussed, model validity has nothing to do with any of their criticisms and, importantly, the assumptions made by Povey et al (1999) in their analysis appear to be valid.