More Misinformation from

I recently came across another excellent cycling article from Alan Davies at The Urbanist. In it, Davies discusses claims that cycling accidents are on the rise, claims prompted by an increase in the cycling road toll in 2013. That increase may be a concern, but it is impossible to establish a trend from one data point.

Davies briefly mentions helmet legislation, but notes it isn’t relevant to the current discussion (and I agree). However, in the comments I found a few troubling responses on that topic. One commenter, Strewth, states

But we also know from analysis done in the 1990s that among cyclists, the decline in non-head injuries over this period was as great or greater than the decline in head injuries.

No citation or link is given to support this claim. The comment is strange, since a previous study of mine estimated a 35% drop in cycling head injury hospitalizations with the NSW helmet law, while arm and leg injuries dropped by only 11% and 6% respectively. A more comprehensive response was left by Linda Ward.

Another comment by Nik Dow states

A fact-based explanation is linked (see “detailed explanation”) and covers the introduction of demerit points and ramping up of speed and red-light cameras.

The link takes you to an anti-helmet organization I’ve discussed previously. My previous post pointed to misinformation presented by its national spokesperson, Alan Todd, and the given link is more of the same.

The following plot of cycling and pedestrian fatalities in Australia from 1980-2006 is given, and the anonymous author concludes declines in cycling deaths were mostly “due to massive ramping up of speed and red-light cameras, together with the introduction of demerit points.” I assume this conclusion stems from the pedestrian and cycling time series being placed on top of each other.


What is problematic here is that this is not an accurate representation of the fatality data (much of it can be found here). The author has apparently rescaled the pedestrian time series to get the two series to overlay. As I’ve discussed before, this is a strategy too often used to misrepresent the actual temporal relationships in data. In the comments, Davies also questions the accuracy of this figure.

Plotting both time series on the same graph is problematic here since pedestrian fatalities have historically dwarfed those for cyclists (in 1989 they were 501 and 98 respectively). One method to address this problem is to index the time series to a starting value. An advantage of this approach is you’re able to plot trends without distorting scales; however, a disadvantage is the actual data is not being presented and each data point is a comparison with some starting value.
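As a minimal sketch of this indexing approach (using made-up counts, not the actual fatality data), each series is divided by its own starting value so both begin at 100:

```r
# Hypothetical fatality counts (illustrative only, not the actual data)
years <- 1989:1993
cyclists <- c(98, 80, 60, 55, 54)
pedestrians <- c(501, 450, 420, 400, 380)

# Index each series to its first value (1989 = 100)
index <- function(x) 100 * x / x[1]
cyclists_idx <- index(cyclists)
pedestrians_idx <- index(pedestrians)

# Both series now start at 100, so relative trends are directly
# comparable without rescaling one series to force an overlay
plot(years, cyclists_idx, type = "l", ylim = c(0, 110),
     xlab = "Year", ylab = "Indexed fatalities (1989 = 100)")
lines(years, pedestrians_idx, lty = 2)
```

Because each point is a percentage of the starting value, the scales of the two series no longer matter; what is lost is the absolute magnitude of each series.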

Below is such a plot for the period 1971 to 2013 for cycling and pedestrian fatalities in Australia (the vertical red lines represent the first and last helmet law dates in Australia).


This looks virtually nothing like the original plot. Relative to 1971, pedestrian fatalities steadily declined over the following 40 years, while cycling fatalities were flat up to 1990, fell substantially by 1992 and were flat thereafter. This does not suggest declines in cycling fatalities are associated with general road safety improvements such as demerit points or speed cameras. Further, the lack of temporal agreement between cycling and pedestrian fatalities prior to 1990 raises questions regarding pedestrians as a suitable comparator for cyclists.

It is not appropriate to make decisions about trends by eye-balling a figure, so I fit an interrupted time series model to this data. The Poisson model I used was

log(\mu_{i})=\beta_{0} + \beta_{1}(TIME_{i}) + \beta_{2}(LAW_{i})

where TIME is centered at 1991 and LAW is an indicator that takes the value 1 for years 1991 onwards and 0 otherwise. My results suggest no pre-1990 trend in cycling fatalities (p=0.84) and a 42% decline in cycling fatalities at 1991 (p<0.001). Residual plots indicate good overall fit, although the 2013 observation may exhibit high leverage.
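To show how such a model can be fit in R (on simulated counts, not the actual fatality data; the variable names TIME and LAW follow the text, and the simulated step size is my own choice):

```r
set.seed(1)
years <- 1971:2013
TIME <- years - 1991               # centered at 1991, as described
LAW  <- as.numeric(years >= 1991)  # 1 from 1991 onwards, 0 before

# Simulated counts: flat pre-law mean of ~80 with a step drop of ~40%
# at the law date (illustrative values only)
mu <- exp(log(80) + log(0.6) * LAW)
deaths <- rpois(length(years), mu)

# Poisson regression with a linear trend and a level change at the law
fit <- glm(deaths ~ TIME + LAW, family = poisson)
summary(fit)

# Exponentiating the LAW coefficient gives a rate ratio; a value of
# about 0.6 corresponds to a ~40% step decline at 1991
exp(coef(fit)["LAW"])
```

The rate-ratio interpretation is how a coefficient like the 42% decline in the text would be read off a fitted Poisson model: a LAW coefficient of b corresponds to a (1 - exp(b)) proportional drop.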


Something profound happened to cycling fatalities in Australia between 1990 and 1992. It is often argued helmet legislation deters cycling; however, this is an argument I largely reject due to conflicting evidence from data of low quality.[1] Still, this does not necessarily indicate helmet legislation is a causal factor in lowering cycling fatalities; it does, however, rule out the general road safety interventions proposed above as the causal influence.

  1. Olivier, J., Grzebieta, R., Wang, J.J.J. & Walter, S. (2013). Statistical Errors in Anti-Helmet Arguments. Australasian College of Road Safety Conference.

Why does anecdote trump evidence? The cost of helmets in Melbourne

While working on another research paper, I came across a discussion regarding bicycle helmet legislation that I read around the middle of last year. It originated with an article arguing against bicycle helmet legislation[1] and was followed by two responses[2,3]. This was then followed by a reply from the original authors[4].

I will not regurgitate the arguments here, but I believe the Biegler and Johnson response is outstanding and I highly recommend it be read by anyone with an interest in this topic. One issue I found curious was the advertised cost of a helmet. Biegler and Johnson state

Helmets retail for as low as A$5 while treatment for brain injury can run into millions.

A citation was given which directed the reader to a page on the Melbourne Bike Share website. Information on this page states

Free helmets are now available with our blue bikes.  Just leave the helmet with the bike upon completion of ride.  Easy! Go to our Gallery to view the video as to how to secure the free helmet onto the blue bike. Alternatively, helmets are available for just $5 at many retail outlets or vending machines at Southern Cross Station and Melbourne University. A limit of 2 helmets per customer applies.

A list of stores nearest each Melbourne Bike Share Station follows. If it’s not completely clear bicycle helmets meeting the Australian standard can be purchased for $5, here is a picture.


In a reply to Biegler and Johnson, Hooper and Spicer state

Biegler and Johnson also rely on the claim that the cost of purchasing cycle helmets is fairly marginal. However, quite aside from the fact that most helmets cost far more than the $5 quoted by these authors, it is important to realise that many people are unlikely to skimp when they buy helmets.

No citation or evidence is given to support this statement. I suppose it is possible cyclists could be turned off by a helmet being too inexpensive, but where is the evidence this is actually happening with the Melbourne Bike Share helmets? There’s also no evidence to support the authors’ claim that the “extra cost may well be prohibitive.”

I have a hard time believing a free or $5 helmet is prohibitive to anyone. Also, shouldn’t the lack of supportive evidence presented by Hooper and Spicer have been filtered out during the peer-review process? To paraphrase a famous quote, the presentation of anecdotal arguments against helmet legislation does not constitute a valid argument.

  1. Hooper C, Spicer J. Liberty or death; don’t tread on me. J Med Ethics 2012;38 (6):338–41.
  2. Biegler P, Johnson M. In defence of mandatory bicycle helmet legislation: response to Hooper and Spicer. J Med Ethics.
  3. Trégouët P. Helmets or not? Use science correctly. J Med Ethics.
  4. Hooper C, Spicer J. Bike helmets: a reply to replies. J Med Ethics.

New Zealand Helmet Law and Validity of a Regression Model

After my recent post regarding cycling fatalities in New Zealand, someone pointed me to a Wikipedia discussion regarding a peer-reviewed paper I co-authored that discussed methodological issues with papers assessing the NZ helmet law, which became effective 1 January 1994.

Our paper drew criticism from Dorothy Robinson, Richard Keatinge and Nigel Perry (all editorial board members of the anti-helmet organization Bicycle Helmet Research Foundation) over our critique of a paper by Robinson (2001), which was itself a criticism of a paper by Povey et al. (1999). In both papers, the ratio of head injuries to limb fractures was modelled over the period 1990-1996. Povey et al. found increases in helmet wearing were associated with declines in the log head/limb injury ratio for three age groups in non-motor vehicle accidents and for all ages in motor vehicle accidents.

Robinson criticized Povey and colleagues for “failure to fit time trends in their model” and claimed the observed benefit was an “artefact”. Her analysis focused solely on adults in non-motor vehicle accidents, ignoring the data for children and for motor vehicle accidents (which are often the most severe). This is curious considering the NZ helmet law applies to on-road cycling and, therefore, cyclist interactions with motor vehicles are the more relevant consideration here.

In our paper, we noted that although Povey et al did not appear to check the assumptions of their model, inspection of the residuals suggests their model was valid. On Wikipedia, Robinson (under the pseudonym Dorre) reiterates her earlier criticism, stating “Povey did not take time trends into account” and suggesting this as the reason for finding a helmet benefit. She then states “most people would expect a claim that the model is “valid” to imply there is evidence of causation!” It is unclear to me why Robinson, who claims to be a statistician, would make such a statement (and others like it in her paper and on Wikipedia).

Let me explain. The actual model fit by Povey et al. (1999) is

log(HEAD_{i}/LIMB_{i})=\alpha + \delta(HELMET_{i}) + \epsilon_{i}

where \epsilon_{i} \hbox{ for } i=1,\dots,n are assumed to be independent, normally distributed random variables with mean 0 and constant variance \sigma^2. This is usually stated as

\epsilon_i \overset{iid}{\sim} N(0,\sigma^2)

A linear regression model is valid if the above assumptions imposed on the \epsilon_i's are reasonable. Contrary to Robinson's suggestion, there is no assumption of fitting time trends in any linear or generalized linear model. It is assumed the errors are serially independent, but that is not equivalent to fitting time trends. Additionally, a valid linear model does not imply a causal relationship between the independent and dependent variables; this applies equally to Robinson's contention, which is essentially that time caused the decline.

The assumptions of a linear model can be checked using the residuals, which are the differences between the observed and fitted values. Mathematically, this is written as

e_{i}=log(HEAD_{i}/LIMB_{i})-\hat{\alpha} - \hat{\delta}(HELMET_{i})

where \hat{\alpha} and \hat{\delta} are the intercept and slope estimates obtained by the method of least squares.

Using the observed residuals, the normal assumption can be assessed using a normal quantile plot, the linearity and constant variance assumptions can be checked by a scatterplot of the residuals, and serial independence checked using the Durbin-Watson statistic or graphically using the autocorrelation function.

There is nothing in the residual plots that suggests the model used by Povey et al is not valid.


Below is the R code to construct these plots, and to perform the Durbin-Watson test.

# reg is the fitted model object, e.g. reg <- lm(log(head/limb) ~ helmet)
qqnorm(reg$res); qqline(reg$res)
plot(reg$res ~ helmet, ylab='Residuals', main='Residual Plot')
acf(reg$res, main='Autocorrelation Function')
library(lmtest); dwtest(reg)

Another concept that seems lost in the criticism is that Povey and colleagues were testing an a priori hypothesis. As such, their model was hypothesis driven and pre-determined without letting the data influence modelling decisions. This is an important consideration if the researcher is to avoid spurious correlations.
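As a quick illustration of why this matters (a simulation of my own construction, not from any of the papers discussed), letting the data choose among many candidate predictors will routinely "find" a significant one even when every predictor is pure noise:

```r
set.seed(42)
n <- 30   # observations
k <- 100  # candidate predictors, all pure noise
y <- rnorm(n)
X <- matrix(rnorm(n * k), nrow = n)

# p-value for the slope when each noise predictor is regressed on y
pvals <- apply(X, 2, function(x) summary(lm(y ~ x))$coefficients[2, 4])

# Screening many predictors and keeping the "best" one almost always
# yields a small p-value by chance, despite no real effect existing
min(pvals)
sum(pvals < 0.05)
```

A pre-specified, hypothesis-driven model like that of Povey et al avoids this multiplicity problem because the single test is chosen before seeing the data.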

It is a shame what has happened to the Wikipedia pages on bicycle helmets. Many of the contributors have clear conflicts of interest, like Robinson, Keatinge and Perry, and routinely offer a biased view of the available research. I plan to discuss the negative influence this group and their website have had on our understanding of cycling safety at some point.

The comments Robinson, Keatinge and Perry have made about my paper with Joanna Wang are a prime example of this negative influence. Although their discussion began on Wikipedia, these critics appear unaware that Wikipedia has an actual page discussing the validity of a regression model. Its first sentence states:

In statistics, regression model validation is the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained from regression analysis, are in fact acceptable as descriptions of the data.

As discussed, model validity has nothing to do with any of their criticisms and, importantly, the assumptions made by Povey et al (1999) in their analysis appear to be valid.