After my recent post regarding cycling fatalities in New Zealand, someone pointed me to a Wikipedia discussion of a peer-reviewed paper I co-authored that examined methodological issues in papers assessing the NZ helmet law, which came into effect on 1 January 1994.

There are criticisms of our paper from Dorothy Robinson, Richard Keatinge and Nigel Perry (all editorial board members of the anti-helmet organization Bicycle Helmet *Research* Foundation) regarding our criticisms of a paper by Robinson (2001), which was in turn a criticism of a paper by Povey et al. (1999). In both papers, the ratio of head injuries to limb fractures was modelled over the period 1990-1996. Povey et al. found that increases in helmet wearing were associated with a decline in the log of the head/limb injury ratio for three age groups in non-motor vehicle accidents and for all ages in motor vehicle accidents.

Robinson criticized Povey and colleagues for “failure to fit time trends in their model” and claimed that the observed benefit was an “artefact”. Her analysis focused solely on adults in non-motor vehicle accidents and ignored the data for children and for motor vehicle accidents (which are often the most severe). This is curious considering the NZ helmet law applies to on-road cycling and, therefore, cyclist interactions with motor vehicles are the more relevant here.

In our paper, we noted that although Povey et al. did not appear to check the assumptions of their model, inspection of the residuals suggests their model was valid. On Wikipedia, Robinson (under the pseudonym *Dorre*) reiterates her earlier criticism, stating “Povey did not take time trends into account” and suggesting this as the reason for finding a helmet benefit. She then states “most people would expect a claim that the model is ‘valid’ to imply there is evidence of causation!” It is unclear to me why Robinson, who claims to be a statistician, would make such a statement (and other such statements in her paper and on Wikipedia).

Let me explain. The actual model fit by Povey et al. (1999) is

$$\log(\text{ratio}_t) = \beta_0 + \beta_1\,\text{helmet}_t + \varepsilon_t$$

where the $\varepsilon_t$ are assumed to be independent, normally distributed random variables with mean 0 and constant variance $\sigma^2$. This is usually stated as

$$\varepsilon_t \overset{\text{iid}}{\sim} N(0, \sigma^2)$$

A linear regression model is *valid* if the above assumptions imposed on the $\varepsilon_t$’s are reasonable. There is no assumption of fitting time trends, as Robinson suggests, in any linear or generalized linear model. It is assumed the errors are serially independent, but that is not equivalent to fitting time trends. Additionally, a valid linear model does not imply a causal relationship between the independent and dependent variables; this would also hold for Robinson’s contention, which is essentially that time *caused* the decline.

The assumptions of a linear model can be checked using the *residuals*. The residuals are the differences between the observed and fitted values, written mathematically as

$$e_t = y_t - \hat{y}_t = y_t - (\hat{\beta}_0 + \hat{\beta}_1 x_t)$$

where $\hat{\beta}_0$ and $\hat{\beta}_1$ are the intercept and slope estimates obtained using the method of least squares.
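For completeness, these least squares estimates are the values minimising the sum of squared residuals, and they have the familiar closed form (standard results, not specific to this paper):

$$\hat{\beta}_1 = \frac{\sum_t (x_t - \bar{x})(y_t - \bar{y})}{\sum_t (x_t - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$$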

Using the observed residuals, the normal assumption can be assessed using a normal quantile plot, the linearity and constant variance assumptions can be checked by a scatterplot of the residuals, and serial independence checked using the Durbin-Watson statistic or graphically using the autocorrelation function.
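For readers unfamiliar with it, the Durbin-Watson statistic is computed from successive differences of the residuals:

$$d = \frac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}$$

Values of $d$ near 2 are consistent with no first-order autocorrelation, while values near 0 or 4 suggest positive or negative serial correlation, respectively.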

There is nothing in the residual plots that suggests the model used by Povey et al. is not valid.

Below is the code to construct these plots, and to perform the Durbin-Watson test.

# head/limb injury ratio and helmet wearing rate (%), 1990-1996
ratio <- c(1.40, 1.09, 1.07, 0.94, 0.86, 0.83, 0.77)
helmet <- c(30, 36, 41, 43, 92, 93, 87)

reg <- lm(log(ratio) ~ helmet)

par(mfrow = c(1, 3))
qqnorm(reg$res); qqline(reg$res)  # normality
plot(reg$res ~ helmet, ylab = 'Residuals', main = 'Residual Plot')  # linearity, constant variance
acf(reg$res, main = 'Autocorrelation Function')  # serial independence

library(lmtest)
dwtest(reg)  # Durbin-Watson test

Another concept that seems lost in the criticism is that Povey and colleagues were testing an *a priori* hypothesis. As such, their model was hypothesis-driven and pre-determined, without letting the data influence modelling decisions. This is an important consideration if a researcher is to avoid spurious correlations.

It is a shame what has happened to the Wikipedia pages on bicycle helmets. Many of the contributors have clear conflicts of interest, like Robinson, Keatinge and Perry, who routinely offer a biased view of the available research. I do plan on discussing the negative influence this group and website have had on our understanding of cycling safety at some point.

The comments Robinson, Keatinge and Perry have made about my paper with Joanna Wang are a prime example of their negative influence. Although their discussion began on Wikipedia, it would appear these critics are unaware that Wikipedia has an actual page discussing the validity of a regression model. The first sentence states:

In statistics, regression model validation is the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained from regression analysis, are in fact acceptable as descriptions of the data.

As discussed, model validity has nothing to do with any of their criticisms and, importantly, the assumptions made by Povey et al (1999) in their analysis appear to be valid.

A/Prof Olivier

Thanks for confirming that Dorre/Robinson has (again) gotten herself confused re basic statistical modelling concepts (residual analysis, on this occasion).

Dorre claims that “in the normal sense of the word, Povey’s analysis would be considered invalid”.

Could you please comment as to whether, when you said in your paper, “the results and conclusions in Povey et al.’s analysis are valid”, you meant that the results and conclusions are valid “in the normal sense of the word”.

Do you have any comments re Dorre’s claims that

– “Wang’s results appear to confirm the validity of Robinson’s finding that after accounting for time trends, there is no estimated benefit of helmets”

– “Wang et al. also shoot themselves in the foot” by saying that “the assumptions for model 4 are satisfied and hence the results and conclusions in Povey et al.’s analysis are valid”, and her suggestion that this is “a careless typo”, and that you meant to say model 3 (the model Povey actually fitted).

– “It doesn’t say very much for the paper (or the peer-review process) that such a key issue that affects the meaning of the entire paper wasn’t corrected”

“The results and conclusions of Povey et al.’s analysis are valid” refers to the fact that the assumptions for the statistical model used for the data (i.e., linear regression) are satisfied. To say that the results are valid, i.e., that there is a significant reduction in head injuries associated with an increase in the helmet wearing rate, is definitely not establishing a causal relationship between helmet wearing rate and hospitalised head injuries.

Fitting a time trend improves the fit of the model, but there is a problem with including both time and helmet wearing rate in the model, as pointed out in Wang et al. Fitting just a time trend does not answer the research hypothesis relating to the effect of helmet wearing on head injuries. Again, to say that the model with a time trend fits the data better is not to say the other model (i.e., Povey’s model with helmet wearing rate) is invalid.

This is a careless typo; we meant to say that the assumptions for model 3 (Povey’s model) are satisfied and hence the results are valid.

Linda,

I really don’t understand Dorothy Robinson’s comment about a valid model in the “normal sense of the word.” As I point out, there are fairly clear guidelines for assessing model validity which do not coincide with Robinson’s claim of “normal” usage.

Further, our paper does not confirm the validity of Robinson’s model. Povey et al. fit a model with a certain hypothesis in mind *a priori*. Model diagnostics indicate Povey’s model fit the data well.

Robinson just chose a model that fit “better” than Povey’s. That is usually not hard to do, and it would not be hard to find a model, based on some criterion, that fits better than Robinson’s. However, that model would be unlikely to be very meaningful for this discussion. In this type of situation, it is important to distinguish between approaches that test hypotheses using the scientific method (like Povey’s) and non-scientific fishing expeditions (like Robinson’s).

I’d also like to note that Robinson’s paper also incorrectly describes simulated data and model bias. Data simulation involves randomly drawing values from known probability distributions, and a model is unbiased if the expectation of the error process is zero. Robinson demonstrated neither of these in her paper.
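To make those definitions concrete in the notation of the model above (my notation, not Robinson’s): data are simulated by choosing values for $\beta_0$, $\beta_1$ and $\sigma^2$, drawing errors from a known distribution, and generating responses as

$$\varepsilon_t \sim N(0, \sigma^2), \qquad y_t = \beta_0 + \beta_1 x_t + \varepsilon_t$$

and the model is unbiased when $E(\varepsilon_t) = 0$, so that $E(y_t) = \beta_0 + \beta_1 x_t$.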

Jake

Dr Wang and A/Prof Olivier

Thanks for all that additional info.

This is how http://en.wikipedia.org/w/index.php?title=Bicycle_helmets_in_New_Zealand#Research currently describes the Povey and Robinson analyses . . .

‘A 2001 study by Robinson re-evaluated that data, finding that the reduction in head injuries per limb injury, for crashes not involving motor vehicles, was part of a larger downward time trend and bore no direct correlation to the dramatic increase in helmet-wearing following the introduction of the helmet law. Robinson concluded: “Because the large increases in wearing with helmet laws have not resulted in any obvious change over and above existing trends, helmet laws and major helmet promotion campaigns are likely to prove less beneficial and less cost effective than proven road-safety measures.” See Figure 1.’

Would the following be an accurate description of your re-analysis?

“Wang et al. reviewed the Povey and Robinson analyses, and found that Povey’s model fits the data well, and that Povey’s results and conclusions are valid. They also pointed out that Robinson incorrectly described her hypothetical data as ‘simulated’; that it was inappropriate to use such data in a statistical model; and that Robinson failed to perform any diagnostics to check whether her model was valid [http://www.tars.unsw.edu.au/news/2013/Various_files/28_Wang_PR.pdf]. Olivier, one of the co-authors on the Wang paper, has subsequently pointed out that Robinson’s description of model bias is incorrect [https://injurystats.wordpress.com/2014/06/04/new-zealand-helmet-law-and-validity-of-a-regression-model]”

I find Robinson’s claim of existing trends to be quite dubious. It is unclear which trends she means. Both helmet wearing and cycling injuries were changing during the period 1990-1996. Helmet wearing in NZ did not change as abruptly as it did in NSW following helmet legislation, where it jumped from around 25% to 80%.

Helmet use surveys began in 1986 in NZ with a 5% helmet wearing rate, which increased to 46% before the helmet law went into effect. By that time, all Australian states had helmet legislation, and this could have influenced the increase in helmet wearing (there were also undoubtedly discussions that helmet legislation would eventually come to NZ). It is difficult to say this increase was purely “voluntary”.

The helmet wearing rate stabilized at just over 90% after 1996. Whether or not cycling head injuries also stabilized at this time (absent any intervening factors) would actually establish whether the change in head injury was due to an existing trend or to the change in helmet wearing. Note that Robinson’s model cannot show this, since her model will eventually estimate a negative head injury/limb fracture ratio (the use of a log transformation or log link has the desirable property that y = 0 is a horizontal asymptote).
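To spell out the asymptote point (an illustration in the notation used above, not a reproduction of either paper’s fit): a linear trend on the raw ratio with a negative slope must eventually predict negative ratios, while on the log scale the fitted ratio decays towards zero but never crosses it:

$$\hat{r}_t = \hat{\beta}_0 + \hat{\beta}_1 t \to -\infty \ \text{ as } t \to \infty \ (\hat{\beta}_1 < 0), \qquad \hat{r}_t = \exp(\hat{\beta}_0 + \hat{\beta}_1 t) \to 0$$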

Ultimately, neither Robinson nor Povey (nor any other author assessing the NZ law) explored the long-term trends of cycling head injury and helmet wearing estimates.

A/Prof Olivier

Thanks for that, but unfortunately I neglected to include the Wikipedia description of Povey’s study: “In a study by the Ministry of Transport published in 1999, researchers estimated that from 1990 to 1996 the increase in helmet-wearing after passage of the law ‘reduced head injuries by between 24 and 32% in non-motor vehicle crashes, and by 20% in motor vehicle crashes.’”

In your ACRS/NZ paper you said that “the results and conclusions in Povey et al.’s analysis are valid”.

Does that mean that Povey’s results/conclusions, as described above/on Wikipedia, are valid (“in the normal sense of the word”)?

For starters, I find it troubling that the authors are listed as the “Ministry of Transport”. Povey et al. (1999) published a peer-reviewed paper in an international journal. Instead, it is presented as if it were a potentially biased internal report by a government agency used to support the current NZ law.

In the normal sense of the word, the results from the Povey et al. (1999) analysis are valid. There is indeed an association between changes in helmet wearing and the ratio of head injuries to limb fractures.

Richard Keatinge is also editing Wikipedia articles to push his POV. https://en.wikipedia.org/w/index.php?title=Bicycle_helmet&oldid=prev&diff=868308728

Yep. I don’t trust any of the Wikipedia pages on bike helmets. Many of the authors have clear conflicts of interest. For example, see

https://en.wikipedia.org/wiki/User_talk:Dorre

“I think you and I are going to have some clashes about the helmet stuff. I’m also pretty sure that you have a conflict of interest and really, really should try confining yourself to the talk page. – Sitush (talk) 11:38, 4 June 2013 (UTC)”

Someone on wikipedia has challenged Keatinge.

https://en.wikipedia.org/wiki/User_talk:Richard_Keatinge