Did Australian helmet law “kill” UK cycling to work?

There’s a recently published article in The Conversation about common misinterpretations of research. I strongly disagree with their take on helmet legislation, and I have even stronger concerns about their citing the Bicycle Helmet Research Foundation as a reliable source of information. I have communicated my concerns to the article’s authors privately.

There were lots of comments about helmet legislation — both critical and supportive. Here is one from Dorothy Robinson that I found very strange.

Adding in Scotland (which used to be included in the census data but now appears to have gone its own way), UK census data on cycling to work are:
1981: 3.76%
1991: 2.97%
2001: 2.89%
2011: 2.96%

Note that no citation was given for this data and I don’t know where it exists on the web. Some UK census data for cycling to work exists here.

For many years now, Robinson and the BHRF have used census data and counts from helmet use surveys to argue that helmet legislation in Australia has significantly deterred people from cycling. In the UK, cycling to work decreased 21% (from 3.76% to 2.97%) between the census before any helmet legislation (1981) and the one after most Australians were subject to such laws (1991). Note that during those same years, the Australian census data reported 1.11% and 1.13% travelled to work by bicycle in capital cities.

This certainly does not mean helmet legislation in Australia had anything to do with cycling rates in the UK (this post’s title is meant to be tongue-in-cheek). Cycling in Denmark has decreased 17% since 1990 (year of the first helmet law in Victoria) and no one believes this had anything to do with Australian helmet laws. However, I think such thought experiments highlight the problems in drawing strong conclusions from such analyses.

Census data are collected on a single day, and successive observations are five years apart (in the UK, apparently ten years apart). Treating these data as a time series ignores the day-to-day variability in the proportions of travel modes. Many factors influence whether someone cycles on a given day, even among regular cyclists. Two observations taken five or ten years apart don’t remotely account for that.

Yearly estimates of cycling participation/amount, together with broad categories of cycling frequency, would be an improvement. An honest assessment of the quality of the available data and its limitations is sorely needed in this area. It seems there are some who are quite content with data as long as it supports their conclusions.

Transcribing Problems with cycle-helmets.com Analysis

I recently discussed problems replicating the results of an assessment of mandatory helmet legislation in Australia published in Accident Analysis and Prevention (Robinson, 1996). That problem was brought to my attention by Linda Ward, who has since pointed to a related issue.

The anti-helmet website http://www.cycle-helmets.com has a page titled “Spinning” helmet law statistics. Under the heading Measuring changes in cycle use, the webpage states

Similarly, in South Australia a telephone survey found no significant decline in the amount people said they cycled but there was a large, significant drop in how much they had actually cycled in the past week.[24] In 1990 (pre-law), 17.5% of males aged at least 15 years reported cycling in the past week (210 out of 1201), compared to 13.2% (165 out of 1236) post-law in 1993. For females, 8.1% (102 out of 1357) had cycled in the past week in 1990 compared to 5.9% (98 out of 1768) in 1993.[24]

These reductions (24% for males, 26% for females aged at least 15 years) are statistically significant (P < 0.005 for males, P = 0.025 for females).
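Taken at face value, the quoted counts do reproduce the claimed significance levels. A minimal check in R, using the webpage’s counts rather than the report’s:

# Cyclists in the past week vs. total respondents, as quoted on the webpage
prop.test(c(210, 165), c(1201, 1236), correct=F)  # males: p approx. 0.005
prop.test(c(102, 98), c(1357, 1768), correct=F)   # females: p approx. 0.025

So the reported percentages and p-values follow from these counts; the question is whether the counts themselves are right.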

The citation given is a technical report that evaluated the introduction of helmet legislation in South Australia.[1] Table 1 of the SA report gives frequencies of bicycle riding from two surveys, one in 1990 and the other in 1993, for those aged 15 years or older, separated by gender. In this survey, the amount of cycling was split into four categories: “At Least Once A Week”, “At Least Once A Month”, “At Least Once Every 3 Months” and “Less Often or Never”. The SA helmet law went into effect on 1 July 1991.

The main problem here is the numbers in the above quote don’t match up to the data in the original report. Here is a screenshot of the table.

[Screenshot: Table 1 of Marshall and White (1994), frequency of bicycle riding by gender, 1990 and 1993]

When these numbers are corrected and a comparison is made for those cycling at least once a week versus everyone else, the p-values are 0.279 and 0.450 for males and females respectively. Additionally, the relative risks are 0.90 (95% CI: 0.76,1.08) and 0.91 (95% CI: 0.71, 1.17) for males and females respectively. The point estimates for changes in the proportion cycling in the past week are much less than those reported on the webpage.
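Here is a minimal R sketch of that corrected comparison, collapsing Table 1 to those cycling in the past week versus everyone else (the row totals follow from the frequencies used in the full-table analysis below):

# Corrected 2x2 tables: cycled in past week vs. all others (rows = 1990, 1993)
males_2x2=matrix(c(204, 1199-204, 190, 1236-190), nrow=2, byrow=T)
females_2x2=matrix(c(104, 1356-104, 123, 1768-123), nrow=2, byrow=T)

chisq.test(males_2x2, correct=F)    # compare with the reported p = 0.279
chisq.test(females_2x2, correct=F)  # compare with the reported p = 0.450

# Relative risk (1993 vs 1990) of cycling in the past week
rr=function(m) (m[2,1]/sum(m[2,]))/(m[1,1]/sum(m[1,]))
rr(males_2x2); rr(females_2x2)      # approximately 0.90 and 0.91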

In addition to using the wrong data, I don’t agree with the analysis. The four cycling categories have been collapsed into two (those who cycle at least once a week and those who don’t), needlessly discarding a lot of information. Instead, a chi-square test for independence could have been performed on the full table, with individual changes assessed through an investigation of the residuals.

The Pearson residuals for an individual cell from a chi-square test are

r=\dfrac{O-E}{\sqrt{E}}

where O is the observed frequency and E is the expected frequency under an assumption of independence, i.e., no relationship between helmet legislation and the amount of cycling. These residuals are asymptotically normal, so residuals with absolute value greater than 1.96 may be considered “statistically significant”. The sign indicates whether more was observed than expected (positive) or less than expected (negative).

When analyses are performed on the full tables, the chi-square tests give p-values of 0.20 and 0.85 for males and females respectively. None of the residuals have absolute value anywhere near 1.96. The largest residual pair is for males cycling “at least once every 3 months”. The signs of the residuals indicate there is less cycling than expected in 1990 (r=-1.04) and more cycling than expected in 1993 (r=1.02) if there is no relationship between helmet legislation and amount of cycling. Here is some R code to do those analyses.

# Table 1 frequencies (rows = 1990, 1993; columns = at least once a week,
# at least once a month, at least once every 3 months, less often or never)
males=matrix(c(204,190,66,83,58,77,871,886),nrow=2)
males

females=matrix(c(104,123,59,74,52,64,1141,1507),nrow=2)
females

# Chi-square tests of independence on the full 2x4 tables
chisq.test(males,correct=F)
chisq.test(females,correct=F)

# Pearson residuals for each cell
chisq.test(males,correct=F)$residuals
chisq.test(females,correct=F)$residuals

The analyses above are stratified by gender and we could perform a unified analysis using Poisson regression. This model is essentially

log(\mu)=\beta_0+\beta_1YEAR+\beta_2FREQ+\beta_3GENDER+\beta_4YEAR*FREQ+\beta_5YEAR*GENDER+\beta_6FREQ*GENDER+\beta_7YEAR*FREQ*GENDER

I’ve simplified things a bit here because the variable FREQ has four categories and is therefore represented by three dummy variables.

The important comparison here is the interaction between YEAR and FREQ. If significant, this would indicate helmet legislation and amount of cycling are associated. Using the given South Australian data, the three-way interaction was non-significant, so it was removed from the model. The interaction between YEAR and FREQ was also not statistically significant (p=0.41).
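A minimal sketch of this model in R, using the corrected frequencies above (the data-frame layout and variable names are my own):

# Stack the male and female tables into one data frame
dat=expand.grid(year=factor(c(1990,1993)),
                freq=factor(c('week','month','3mths','less')),
                gender=factor(c('M','F')))
dat$count=c(204,190,66,83,58,77,871,886,      # males
            104,123,59,74,52,64,1141,1507)    # females

# Full model with the three-way interaction, and the model without it
full=glm(count~year*freq*gender,family=poisson,data=dat)
two_way=glm(count~(year+freq+gender)^2,family=poisson,data=dat)
anova(two_way,full,test='Chisq')   # test of the three-way interaction

# Likelihood ratio test for the YEAR:FREQ interaction
no_yf=glm(count~year*gender+freq*gender,family=poisson,data=dat)
anova(no_yf,two_way,test='Chisq')  # compare with the reported p = 0.41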

No analysis I’ve performed indicates a significant relationship between helmet legislation and amount of cycling in South Australia among those 15 years or older when using the correct data.

Note: The anti-helmet website http://www.cycle-helmets.com is maintained by Chris Gillham. I previously discussed problems with this website here. If you download the PDF version of this report, the author is listed as “Dorre” who I believe is Dorothy Robinson. Both Gillham and Robinson are editorial board members of the anti-helmet organisation Bicycle Helmet Research Foundation.

  1. Marshall, J. & White, M. (1994) Evaluation of the compulsory helmet wearing legislation for bicyclists in South Australia. Report 8/94, South Australian Department of Transport, Walkerville, South Australia.

Something Amiss in Robinson (1996)

A 1996 article titled “Head Injuries and Bicycle Helmet Laws” published in Accident Analysis and Prevention is one of the most highly cited papers assessing the effect of helmet legislation (148 citations, Google Scholar, 4 Sept 2014).[1] Additionally, this seems to be the first article purporting to demonstrate a negative impact of such laws. The conclusions of the paper state

Consequently, a helmet law, whose most notable effect was to reduce cycling, may have generated a net loss of health benefits to the nation.

In this paper, secondary analyses were performed on data contained in other reports. I’ve pointed out in a previous paper[2] that NSW adult cycling counts exist in sources cited by this paper, although they are not presented. This is curious because the counts of adult cyclists from NSW helmet use surveys increased from pre- to post-legislation, which contradicts the conclusions of the paper. Adult cycling also increased by 44% in Victoria following helmet legislation.[3]

Linda Ward has pointed to another issue with this paper regarding a comparison of the proportion of head injury hospitalizations to cyclists before and after legislation in Victoria. Some of the relevant data is given in Table 6.[1] In this table, the proportion of head injuries are 31.4% for 1989/90 and 27.3% for 1990/91 for hospital admissions in Victoria. During this period, there are a total of n=2300 cycling hospitalizations. The author notes a comparison of these proportions is non-significant by a chi-square test.

The 2×2 table for this data can be reproduced using the source material.[4] Figure 25 of this report gives “All Other Injuries” of about 900 for year 1989/90. This allows us to fill in the rest of the table given below.

Year     | Other Injury | Head Injury
1989/90  | 900          | 412
1990/91  | 718          | 270

The frequencies of the other cells seem to correspond to the other values in Figure 25. The chi-square test for this table results in \chi^2=4.49, p=0.03 and OR=0.82. This result could be influenced by the need to estimate the number of cases from a plot. We can assess the influence of this estimate by repeating the analysis for other values near 900. Choosing values from 890 to 910 results in the plot of p-values below.

[Plot: chi-square p-values for 1989/90 “All Other Injuries” counts of 890 to 910; all fall below 0.05]

As you can see, in each instance there is a statistically significant decline in the proportion of head injuries among cyclists hospitalized in Victoria from before to after helmet legislation. R code to reproduce these results is given below.

# Totals and head injury proportions from Table 6 of Robinson (1996)
n=2300
p1=0.314   # proportion of head injuries, 1989/90
p2=0.273   # proportion of head injuries, 1990/91

# 'All Other Injuries' for 1989/90, estimated from Figure 25 of the source report
a=900
n1=round(a/(1-p1))  # total hospitalizations, 1989/90
b=n1-a              # head injuries, 1989/90
n2=n-n1             # total hospitalizations, 1990/91
d=round(n2*p2)      # head injuries, 1990/91
c=n2-d              # other injuries, 1990/91

tab=matrix(c(a,b,c,d),nrow=2,byrow=T)
rownames(tab)=c('1989/90','1990/91')
colnames(tab)=c('Other','Head Injury')
tab

chisq.test(tab,correct=F)
fisher.test(tab)
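The sensitivity analysis behind the plot above can be sketched by repeating the same construction over a range of estimates (this reuses n, p1 and p2 from the block above):

# Repeat the test for 'Other Injury' counts of 890 to 910
pvals=sapply(890:910, function(a) {
  n1=round(a/(1-p1)); b=n1-a
  n2=n-n1; d=round(n2*p2)
  chisq.test(matrix(c(a,b,n2-d,d),nrow=2,byrow=T),correct=F)$p.value
})
plot(890:910, pvals, type='b', xlab="'Other Injury' count, 1989/90", ylab='p-value')
abline(h=0.05, lty=2)  # every p-value falls below 0.05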

This re-analysis has important ramifications. First, the author’s conclusions are not fully justified: cycling head injuries fell at a rate greater than other cycling injuries following legislation. It is possible there was less cycling following legislation, but head injuries fell at a significantly greater rate; we also found this to be true in NSW in a 2011 paper. Second, organizations that have used this paper to justify their opposition to helmet legislation should reconsider their stance. This includes the Transport and Health Study Group (THSG), which is affiliated with the Journal of Transport and Health (JTH). Finally, the editors of Accident Analysis and Prevention and the journal’s publisher Elsevier should seriously investigate the reproducibility of the analyses in this paper, with a keen eye for the information found in the source material that was not included in the paper.

Note: Dorothy Robinson is a patron and editorial board member of the anti-helmet organization Bicycle Helmet Research Foundation.

  1. Robinson, DL (1996) Head injuries and bicycle helmet laws. Accident Analysis and Prevention, 28, 463-475.
  2. Olivier J, Grzebieta R, Wang JJJ & Walter, S. (2013) Statistical Errors in Anti-Helmet Arguments. Proceedings of the Australasian College of Road Safety Conference. http://acrs.org.au/wp-content/uploads/26_Olivier_PR.pdf
  3. Cameron, M., Vulcan, AP, Finch, CF & Newstead, SV. (1994) Mandatory bicycle helmet use following a decade of helmet promotion in Victoria, Australia — An evaluation. Accident Analysis and Prevention, 26, 325-337.
  4. Cameron, M, Heiman, L & Neiger, D. (1992) Evaluation of the bicycle helmet wearing law in Victoria during its first 12 months. Report No. 32, Monash University Accident Research Centre.

New Zealand Helmet Law and Validity of a Regression Model

After my recent post regarding cycling fatalities in New Zealand, someone pointed me to a Wikipedia discussion of a peer-reviewed paper I co-authored on the methodological issues of papers assessing the NZ helmet law that became effective 1 January 1994.

There are criticisms of our paper from Dorothy Robinson, Richard Keatinge and Nigel Perry (all editorial board members of the anti-helmet organization Bicycle Helmet Research Foundation) regarding our criticisms of a paper by Robinson (2001), which in turn was a criticism of a paper by Povey et al. (1999). In both papers, the ratio of head injuries to limb fractures was modelled over the period 1990-1996. Povey et al. found that helmet wearing was negatively associated with the log of the head/limb injury ratio for three age groups in non-motor vehicle accidents and for all ages in motor vehicle accidents.

Robinson criticized Povey and colleagues for a “failure to fit time trends in their model” and claimed the observed benefit was an “artefact”. Her analysis focused solely on adults in non-motor vehicle accidents and ignored the data for children and for motor vehicle accidents (which are often the most severe). This is curious considering the NZ helmet law applies to on-road cycling; cyclist interactions with motor vehicles are therefore the more relevant here.

In our paper, we noted that although Povey et al did not appear to check the assumptions of their model, inspection of the residuals suggests their model was valid. On Wikipedia, Robinson (under the pseudonym Dorre) reiterates her earlier criticism, stating “Povey did not take time trends into account” and suggesting this as the reason for finding a helmet benefit. She then states “most people would expect a claim that the model is ‘valid’ to imply there is evidence of causation!” It is unclear to me why Robinson, who claims to be a statistician, would make such a statement (and others like it in her paper and on Wikipedia).

Let me explain. The actual model fit by Povey et al. (1999) is

log(HEAD_{i}/LIMB_{i})=\alpha + \delta(HELMET_{i}) + \epsilon_{i}

where \epsilon_{i} \hbox{ for } i=1,\dots,n are assumed to be independent, normally distributed random variables with mean 0 and constant variance \sigma^2. This is usually stated as

\epsilon_i \overset{iid}{\sim} N(0,\sigma^2)

A linear regression model is valid if the above assumptions imposed on the \epsilon_i's are reasonable. There is no assumption about fitting time trends, as Robinson suggests, in any linear or generalized linear model. The errors are assumed to be serially independent, but that is not equivalent to fitting time trends. Additionally, a valid linear model does not imply a causal relationship between the independent and dependent variables; this would also hold for Robinson’s contention, which is essentially that time caused the decline.

The assumptions of a linear model can be checked using the residuals. The residuals are the differences between the observed and fitted values, written mathematically as

e_i=log(HEAD_{i}/LIMB_{i})-\left(\hat{\alpha}+\hat{\delta}(HELMET_{i})\right)

where \hat{\alpha} and \hat{\delta} are intercept and slope estimates using the method of least squares.

Using the observed residuals, the normal assumption can be assessed using a normal quantile plot, the linearity and constant variance assumptions can be checked by a scatterplot of the residuals, and serial independence checked using the Durbin-Watson statistic or graphically using the autocorrelation function.

There is nothing in the residual plots that suggests the model used by Povey et al is not valid.

[Figure: normal quantile plot, residual plot and autocorrelation function for the Povey et al model]

Below is the R code to construct these plots, and to perform the Durbin-Watson test.

# Head/limb injury ratios and helmet wearing rates from Povey et al, 1990-1996
ratio<-c(1.40,1.09,1.07,0.94,0.86,0.83,0.77)
helmet<-c(30,36,41,43,92,93,87)
reg<-lm(log(ratio)~helmet)

# Diagnostics: normality, linearity/constant variance, serial correlation
par(mfrow=c(1,3))
qqnorm(reg$res); qqline(reg$res)
plot(reg$res~helmet,ylab='Residuals',main='Residual Plot')
acf(reg$res,main='Autocorrelation Function')

# Durbin-Watson test for serial independence
library(lmtest); dwtest(reg)

Another concept that seems lost in the criticism is that Povey and colleagues were testing an a priori hypothesis. As such, their model was hypothesis driven and pre-determined without letting the data influence modelling decisions. This is an important consideration if the researcher is to avoid spurious correlations.

It is a shame what has happened to the Wikipedia pages on bicycle helmets. Many of the contributors have clear conflicts of interest, like Robinson, Keatinge and Perry, and routinely offer a biased view of the available research. I do plan on discussing the negative influence this group and website have had on our understanding of cycling safety at some point.

The comments Robinson, Keatinge and Perry have made about my paper with Joanna Wang are a prime example of this negative influence. Although their discussion began on Wikipedia, it would appear these critics are unaware that Wikipedia has an actual page discussing the validity of a regression model. The first sentence states:

In statistics, regression model validation is the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained from regression analysis, are in fact acceptable as descriptions of the data.

As discussed, model validity has nothing to do with any of their criticisms and, importantly, the assumptions made by Povey et al (1999) in their analysis appear to be valid.

Bicycle Helmet Research Foundation: A Reliable Resource?

The Bicycle Helmet Research Foundation (BHRF), through its website cyclehelmets.org, serves as a hub for material regarding the efficacy of bicycle helmets and helmet legislation. The BHRF has an editorial board that is “responsible for the content” of the website, which is “subjected to multi-disciplinary peer review”.

The BHRF is not affiliated with any university or academic society, yet seems to be influential in the discussion of the efficacy of bicycle helmets and whether jurisdictions should mandate their use. For example, the recent Queensland inquiry into cycling safety contained six footnotes linking to the BHRF website. In a submission to the Victorian Parliament, Colin Clarke cites four BHRF webpages as evidence regarding the effectiveness of helmet legislation in that state. Jennifer Mindell, editor-in-chief of the Journal of Transport and Health, has used BHRF graphs to argue “cycle helmet use does not yield a population level effect”. This presentation was a collaboration with BHRF editorial board members Malcolm Wardlaw and John Franklin.

On many occasions, I have come across comment boards discussing virtually anything related to bicycle helmets where someone will write something like “everything you need to know about helmets is on this website cyclehelmets.org.” This has led me to wonder about the reliability of the BHRF as a resource for bicycle helmet-related research. Can they be trusted to present a fair assessment of the available scientific evidence regarding the efficacy of bicycle helmets and helmet legislation?

According to its policy statement, the BHRF exists “to undertake, encourage, and spread the scientific study of the use of bicycle helmets.” This seems quite straightforward and aligns nicely with the name of the organization. However, the policy statement then devolves into a rant against helmets and helmet legislation. They state

  • “closer investigation has revealed serious flaws in the evidence most frequently cited in favour of helmet effectiveness”,
  • “helmet laws have led almost universally to large declines in the number of people who cycle”, and
  • “the promotion of cycle helmets has been to brand cycling as an inherently hazardous activity.”

In my opinion, this comes across as anti-bicycle helmet advocacy and does not remotely resemble a research organization.

It is not uncommon for advocacy groups to be an integral part of research. For example, the declared purpose of the National Heart Foundation of Australia is to “reduce premature death and suffering from heart, stroke and blood vessel disease in Australia.” This is accomplished, in part, by funding cardiovascular research. The Amy Gillett Foundation is another advocacy organization directed at “reducing the incidence of death and injury of bike riders”. These two groups, and many more like them, are important sources for the spread of research to a broad, non-scientific audience. The big difference between these organizations and the BHRF is they serve to facilitate research and not as the final arbiter of an issue.

I suppose my initial impression could be wrong and they aren’t an anti-helmet advocacy group. So, what can be discerned from the material on their website beyond the policy statement? Are there well-reasoned discussions of both sides of the argument, leading to a clear conclusion? Are commentaries provided by experts in the field who actively publish in scientific journals? How does the BHRF editorial board stack up against those of established research journals? These are important concerns given the ease with which ideas proliferate over the internet and the difficulty of discerning the reliability of resources. As HIV researcher Dr Seth Kalichman puts it, “The Internet has made pseudoscience as accessible, or perhaps even more accessible, than quality medical science.”

In a series of posts, I will discuss the BHRF editorial board, the supportive/skeptical articles they list, and the articles that are missing from their website. I will also respond to their criticism of one of my papers (because how else is an academic supposed to address unfounded criticisms posted on someone’s website?).