About Jake Olivier

Associate Professor in Statistics, University of New South Wales

Bicycle Network Mandatory Helmet Review

The Bicycle Network is reviewing their bicycle helmet policy. I’ve been asked by BN CEO Craig Richards to comment on the questions:

Do you believe it should be mandatory to wear a helmet when riding a bicycle? What are your reasons for your answer?

In the spirit of transparency, what follows is my response to Mr Richards.


Before I respond to this question, I would like to give my impressions regarding the motivations and methods used in this review.

  1. It is quite reasonable for an organization such as the Bicycle Network (BN) to review its policies and/or advocacy positions. However, why advertise the review to the media or anyone external to BN? I believe it would be quite rare for any organization to issue a media release, and give interviews to print and television media, announcing that it is reviewing its policies.
  2. What is the population frame for the survey? Whatever it is, it should be informed by what the surveyors are trying to accomplish. If the purpose is to canvass the opinions of BN members, then the survey should have been restricted to BN members. If BN is instead trying to canvass the opinions of Australian cyclists, the open survey fails there as well. As far as I am aware, there was nothing in place to ensure respondents were BN members, Australian residents, cyclists or even human (i.e., self-report cannot be validated).
  3. It is well known and accepted that online polls lack scientific rigor. This cannot be changed by statistical analysis or restricting the sample of responders. Given these issues and perhaps the wording of the questions, it is unclear what is actually being measured and whether any of the results will be generalizable to any population. The Statistical Society of Australia has raised similar concerns regarding the Australian Marriage Law Postal Survey.
  4. These points invalidate the survey results irrespective of whether they are favorable or unfavorable towards bike helmet legislation. In my view, it is an exercise in motivated reasoning and, at a minimum, propaganda to drum up support for a predetermined change in BN policy.
  5. What is an expert? That is truly a difficult question to answer, but it certainly is not someone who wins a popularity contest. It is also not someone who has an opinion on a topic, including those with doctoral degrees but no formal training or established research record on this topic. It is someone who can evaluate the quality of evidence and not just selectively cite data supportive of their opinions.
  6. It is well known from multiple surveys here and abroad that lack of cycling infrastructure and concerns for safety are the reasons people do not cycle. So, why have a huge, media-grabbing survey about helmet policy? Why are more important issues not being addressed?
  7. If BN change their policy, are they prepared to properly evaluate its impact? What if there is no increase in cycling, yet cycling head injuries/fatalities increase? Will the BN accept the potential consequences of that decision?

I support bicycle helmet legislation because the peer-reviewed research evidence indicates:

  1. Bicycle helmets are highly effective at mitigating head, serious head (roughly skull fractures and/or intracranial hemorrhage), fatal head, and facial injury,
  2. Helmet wearing rates are low without helmet legislation and high with it,
  3. Bicycle helmet legislation is associated with declines in cycling head injury and cycling fatality, and
  4. There is no strong evidence supporting the common hypotheses presented by anti-helmet and anti-helmet-legislation advocates. These include claims of risk compensation, of helmets increasing angular acceleration, that helmet legislation deters cycling, that helmet legislation is part of the causal pathway for the rise in obesity, and that removing existing helmet legislation will increase cycling or improve population health.

Bicycle helmet legislation is supported by professional societies including the Australasian College of Road Safety, the Australian Injury Prevention Network and the Royal Australasian College of Surgeons.

As an aside, arguments calling for mandatory helmets in cars or for pedestrians are Straw Man arguments. This discussion is about interventions to make cycling safer, not about other road users. There are interventions, such as lower posted speed limits, that benefit all road users, but that is not what is being argued. Additionally, this view implies no measures should be taken to improve cycling safety unless injuries are eliminated for other road users. I find this view to be unethical.

I would also like to add that it is not uncommon for public health interventions to be opposed and debated. This is not unique to bicycle helmets or helmet legislation. However, this discussion should revolve around the available evidence and how “better” evidence may be collected if there are knowledge gaps. Instead, I find the bike helmet debate to be less about evidence and more about choosing sides, standing one’s ground and ignoring quality evidence that disagrees with a predetermined position.

There is a misconception that Australia is alone in bicycle helmet legislation. My colleagues and I count at least 271 country, state, territory, province or city bicycle helmet laws. That is, there were at least 271 instances across the world where legislators proposed legislation, debated its merit, and decided to move forward with legislation.

Countries with bicycle helmet legislation include Argentina, Australia, Austria, parts of Canada, Chile, Croatia, Czech Republic, Estonia, Finland, France, Iceland, Israel, Japan, Jersey, Latvia, Lithuania, Malta, New Zealand, Slovakia, Slovenia, South Africa, South Korea, Spain, Sweden, United Arab Emirates and parts of the United States. These laws differ in terms of enforcement, and many apply only to children below a certain age. Ten countries have bicycle helmet laws that apply to all ages (Argentina, Australia, some parts of Canada, Chile, Finland, Malta, New Zealand, South Africa, United Arab Emirates, and parts of the United States). Only two jurisdictions have repealed helmet legislation (Mexico City and Bosnia & Herzegovina), and the impacts of these repeals have never been evaluated. Setting aside the validity of the claim that Australia is alone, the reader should be aware that appeals to popularity have no logical foundation.

Below is a list of peer-reviewed articles I have published with colleagues that have informed my position. Many of these papers are available publicly and I am happy to share copies privately. There is also research by others that I would happily point to for any interested reader.

  1. Grzebieta RH, Olivier J & Boufous S. (2017) Reducing serious injury from road traffic crashes. Medical Journal of Australia, 207(6): 242-243.
  2. Olivier J & Radun I. (2017) Bicycle helmet effectiveness is not overstated. Traffic Injury Prevention, 18: 755-760.
  3. Schepers P, Stipdonk H, Methorst R & Olivier J. (2017) Bicycle fatalities: Trends in crashes with and without motor vehicles in The Netherlands. Transportation Research Part F: Traffic Psychology and Behaviour, 46: 491-499.
  4. Olivier J & Creighton P. (2017) Bicycle injuries and helmet use: A systematic review and meta-analysis. International Journal of Epidemiology, 46(1): 278-292.
  5. Olivier J, Boufous S & Grzebieta R. (2016) No strong evidence bicycle helmet legislation deters cycling. Medical Journal of Australia, 205(2): 54-55.
  6. Boufous S & Olivier J. (2016) Recent trends in cyclist fatalities in Australia. Injury Prevention, 22(4): 284-287.
  7. Olivier J, Creighton P & Mason CT. (2016) Evidence Bicycle Helmets Mitigate Intracranial Injury is Not Controversial. European Journal of Trauma and Emergency Surgery, 42:333-336.
  8. Olivier J & Walter SR. (2015) Too much statistical power can lead to false conclusions: A response to Kary (2014). Injury Prevention, 21: 289.
  9. Olivier J, Wang JJJ, Walter S & Grzebieta R. (2014) Anti-Helmet Arguments: Lies, damned lies and flawed statistics. Journal of the Australasian College of Road Safety, 25(4): 10-23.
  10. Olivier J, Wang JJJ & Grzebieta R. (2014) A systematic review of methods used to assess mandatory bicycle helmet legislation in New Zealand. Journal of the Australasian College of Road Safety, 25(4): 24-31.
  11. Olivier J. (2014) The apparent ineffectiveness of bicycle helmets: A case of selective citation (letter). Gaceta Sanitaria, 28: 254-255.
  12. Wang J, Olivier J & Grzebieta R. (2014) Response to ‘Evaluation of New Zealand’s bicycle helmet law’ (letter). New Zealand Medical Journal, 127(1389): 106-108.
  13. Olivier J & Walter SR. (2013) Bicycle helmet wearing is not associated with close overtaking: A reanalysis of Walker (2007). PLOS ONE, 8(9): e75424. Erratum in: PLOS ONE, 9(1).
  14. Bambach MR, Mitchell RJ, Grzebieta RH & Olivier J. (2013) The effectiveness of helmets in bicycle collisions with motor vehicles: A case-control study. Accident Analysis and Prevention, 53: 78-88.
  15. Walter SR, Olivier J, Churches T & Grzebieta R. (2013) The impact of compulsory cycle helmet legislation on cyclist head injuries in New South Wales, Australia: A response. Accident Analysis and Prevention, 52: 204-209.
  16. Olivier J, Walter SR & Grzebieta RH. (2013) Long term bicycle related head injury trends for New South Wales, Australia following mandatory helmet legislation. Accident Analysis and Prevention, 50: 1128-1134.
  17. Olivier J, Churches T, Walter S, McIntosh A & Grzebieta R. (2012) Response to Rissel and Wen, The possible effect on frequency of cycling if mandatory bicycle helmet legislation was repealed in Sydney, Australia: a cross sectional survey (letter). Health Promotion Journal of Australia, 23(1): 76.
  18. Poulos RG, Chong SSS, Olivier J & Jalaludin B. (2012) Geospatial analyses to prioritize public health interventions: a case study of pedestrian and pedal cycle injuries in New South Wales, Australia. International Journal of Public Health, 57(3): 467-475.
  19. Walter SR, Olivier J, Churches T & Grzebieta R. (2011) The impact of compulsory cycle helmet legislation on cyclist head injuries in New South Wales, Australia. Accident Analysis and Prevention, 43(6): 2064-2071.
  20. Esmaeilikia M, Grzebieta R & Olivier J. (2017) A systematic review on the effects of bicycle helmet legislation on cycling. Proceedings of the 6th Annual International Cycling Safety Conference (extended abstract).
  21. Olivier J & Terlich F. (2016) The use of propensity score stratification and synthetic data to address allocation bias when assessing bicycle helmet effectiveness. 2016 IRCOBI Conference Proceedings – International Research Council on the Biomechanics of Injury, IRC-16-29: 185-193.
  22. Olivier J, Grzebieta R, Wang JJJ & Walter, S. (2013) Statistical Errors in Anti-Helmet Arguments. Proceedings of the Australasian College of Road Safety Conference.
  23. Wang JJJ, Grzebieta R, Walter S & Olivier J. (2013) An evaluation of the methods used to assess the effectiveness of mandatory bicycle helmet legislation in New Zealand. Proceedings of the Australasian College of Road Safety Conference.
  24. Olivier J, Wang JJJ, Walter S & Grzebieta R. (2013) On the use of empirical Bayes for comparative interrupted time series with an application to mandatory helmet legislation. Proceedings of the Australasian Road Safety, Research, Policing and Education Conference.
  25. Wang JJJ, Walter S, Grzebieta R & Olivier J. (2013) A Comparison of Statistical Methods in Interrupted Time Series Analysis to Estimate an Intervention Effect. Proceedings of the Australasian Road Safety, Research, Policing and Education Conference.

No One Thinks Helmets are a Panacea for Cycling Safety

Last week I published a post about an ABC Radio National piece about bicycle helmets. A colleague of mine contacted ABC’s Media Watch, which ran a story about it. In the aftermath, I’ve been told the debate is not about whether bicycle helmets are effective or whether helmet legislation has resulted in fewer bicycle-related head injuries; rather, the myth being debunked on the show Assumptions was that helmet legislation should be the sole strategy for cycling safety.

The problem is that no one believes bicycle helmets are a panacea for cycling safety, mandatory or not. Helmets are designed to protect the head, not other body parts. Head injuries may be very important, but it is possible for a cyclist to sustain serious or fatal injuries that do not involve the head. Clearly, other strategies are needed for a comprehensive cycling safety strategy.

On several occasions now, I’ve called for more research into strategies besides helmet promotion or legislation, such as more segregated cycling infrastructure and lower speed limits on roads commonly used by cyclists. Here are a few direct quotes from those papers.

Walter et al [1]

While helmet legislation appears to play an important role in the reduction of cyclist head injuries, further improvements in cyclist safety in general may be gained from a broader focus.  Cyclist safety is a complex issue driven by a range of factors. Cycling in Australia has changed with a considerable increase in recreational road cycling and mountain biking in recent years. Additional research into the diverse and changing risk profiles among these cycling subgroups would facilitate further safety improvements.

Olivier et al [2]

Our study shows that the beneficial effect of MHL in NSW has been maintained since enactment of the law over 20 years ago. This signals a need to focus on other aspects of bicycle safety in order to further reduce cycling-related injuries. Collisions involving motor vehicles clearly have a high risk of head injury indicating that the interaction between cyclists and MVs is a key area for intervention through further changes to transport infrastructure and modification of cyclist and motorist behaviour. 

Walter et al [3]

Safety concerns tend to dominate media coverage of cycling in Australia (Rissel et al., 2010). This public discourse is often framed, or re-framed, around removal or relaxation of mandatory helmet laws (Piper et al., 2011), despite the wearing of helmets by cyclists forming only one part of the safety picture. In contrast, the high levels of cycling participation and the excellent cycling safety environment in northern European countries such as the Netherlands, Denmark and Germany appear to have been predicated on substantial and sustained investment in well-connected networks of thoughtfully designed bike lanes, bike paths and other cycling infrastructure. Other important factors are low speed limits on urban streets, the fostering of consideration and understanding between road users, and adequate legal protection for vulnerable road users, both cyclists and pedestrians alike. These are the essential elements for providing a cycling environment that encourages participation, with all its health, economic and environmental benefits, while maximising safety.

Olivier and Walter [4]

Risk compensation theory for helmet wearing while cycling has generated increased interest in the peer-reviewed literature, although there is little to no evidence to support the theory. Walker’s [2] argument that helmet wearing affects the behaviour of motor vehicle drivers does not support risk compensation theory upon reanalysis. Helmet wearing is associated with a small difference in passing distance and is not associated with close passing. The evidence from this study does not justify recommendations around helmet wearing, but rather highlights the more important factors of kerb distance, road characteristics and traffic type which may inform more effective cycling safety improvements.

Olivier [5]

Helmets, however, should not be viewed as a panacea and instead are an important part of any cycling safety strategy along with segregated cycling facilities and lower speed limits for motorised traffic. The benefits of each intervention are situational – helmets will help a cyclist in an accident and segregated cycling infrastructure will help avoid accidents. I therefore believe the decision to mandate helmet use should be in conjunction with a comprehensive strategy and not in isolation.

The current view of road safety is the safe system approach, where many strategies are used in concert to minimize the risk and severity of injury. Framing helmet promotion or legislation as an all-or-nothing strategy is very far from the truth. I stated just that in an article published last year.

Olivier et al [6]

Quite often arguments against helmet legislation are framed as an all-or-nothing safety intervention strategy that is in direct competition with creating segregated cycling infrastructure. In other words, it is believed a government will support one but not both. To wit, Ian Walker in a recent New York Times article states “Any solution to bicyclist safety should focus on preventing collisions from taking place, not seeking to minimize the damage after a collision has occurred” [35]. This strategy runs counter to the safe system approach supported by government and safety advocacy groups, where personal protection is seen as a critical component of the whole system to reducing vulnerable road user (cyclist and motorcyclist) injuries. There is also little support for focussing on injury avoidance alone in the injury record. In NSW from 1991 to 2010, only 12% and 23% of bicycle related head injury hospitalisations for children and adults respectively involve a motor vehicle. The goal of the safe system approach, on the other hand, would be to minimise the risk of a crash (crash avoidance) and to minimise the risk of injury when a crash occurs (personal protection), i.e., a holistic approach is used to reduce road trauma.

No one believes bicycle helmets are a panacea as a cycling safety strategy, and that includes me and my colleagues. There’s no myth to be busted.

  1. Walter, S.R., Olivier, J., Churches, T., & Grzebieta, R. (2011). The impact of compulsory cycle helmet legislation on cyclist head injuries in New South Wales, Australia. Accident Analysis and Prevention, 43, 2064–2071.
  2. Olivier, J., Walter, S.R., & Grzebieta, R.H. (2013). Long term bicycle related head injury trends for New South Wales, Australia following mandatory helmet legislation. Accident Analysis and Prevention, 50, 1128–1134.
  3. Walter, S.R., Olivier, J., Churches, T. & Grzebieta, R. (2013). The impact of compulsory helmet legislation on cyclist head injuries in New South Wales, Australia: A response. Accident Analysis and Prevention, 52, 204-209.
  4. Olivier, J. & Walter, S.R. (2013). Bicycle helmet wearing is not associated with close motor vehicle passing: A re-analysis of Walker, 2007. PLOS ONE, 8(9): e75424.
  5. Olivier, J. (2014). The apparent ineffectiveness of bicycle helmets: A case of selective citation. Gaceta Sanitaria, 28, 254-255.
  6. Olivier, J., Wang, J.J.J., Walter, S. & Grzebieta, R. (2014). Anti-helmet arguments: lies, damned lies and flawed statistics. Journal of the Australasian College of Road Safety, 25, 10-23.

Media Watch Segment “Helmet claims under fire”

Alan Todd of Freestyle Cyclists was recently interviewed for an upcoming piece on ABC Radio National. Much of what Todd said was the same misleading information he gives in all his interviews (see here and here, for examples). I sent an email to host Susan Carland detailing some of the misinformation given by Todd and received no response.

A colleague forwarded my email to ABC’s Media Watch. They ran a segment on last night’s show. You can read the transcripts or watch the video here. Note the website includes links to responses from Alan Todd, ABC Radio National Manager Deborah Leavitt, Professor Raphael Grzebieta and Dr Andrew McIntosh.

I found Todd’s conclusion, which seems to be a common argument among anti-helmet advocates, quite curious.

In conclusion, it is now twenty five years since Victoria become the first place in the world to mandate helmets for cyclists. Since then, only three countries have introduced nationally enforced all ages helmet laws for cyclists. If the evidence for the benefits of such laws was indeed overwhelming, why have the other one hundred and ninety three countries not followed suit?

In my own view, this is a self-fulfilling prophecy. There are several very vocal anti-helmet advocacy groups like Freestyle Cyclists, the Bicycle Helmet Research Foundation and the Cyclists Rights Action Group. They may seem like legitimate research organizations to the uninformed, but they are not comprised of experts from relevant fields like road safety, injury epidemiology, trauma surgery, biostatistics, etc. The ones who do have research backgrounds are from other disciplines, like Dorothy Robinson (beef industry research), Nigel Perry (computer science) and John Adams (geography).

I don’t understand how it happened, but anti-helmet advocates have gained a foothold among some influential organizations, including policy makers. This has likely stopped other jurisdictions from moving ahead with either helmet legislation or promotion.

To be clear, I’m not an advocate for helmet legislation in other jurisdictions. That is a decision the people of each country/state/city must make for themselves from an informed position. The problem is that the spread of misinformation about helmets and helmet laws makes that unlikely to happen.

Note: The comments are full of the usual names like Colin Clarke and Chris Gillham. Neither is ever shy about promoting their anti-helmet views.

Update: An interesting Tweet from Wendy Carlisle, who initially reported on the 2010 Voukelatos and Rissel paper that was later retracted.

Colin Clarke’s Assessment of Australian Helmet Laws

Colin Clarke, a former engineering student, left a comment on my recent post about my experiences with Elsevier. Although that post was about an email I sent to Jennifer Mindell (editor-in-chief of the Journal of Transport and Health) and Chris Pringle (Elsevier Executive Publisher), Clarke took it upon himself to promote his views on bicycle helmets.

Since his comment was not really relevant to my post, I thought for quite a while about rejecting it. However, since I assume he would just criticize me somewhere else for censorship, I allowed his comment to be published.

I’ve had numerous online discussions with Clarke about bike helmets (see here, for example). Years ago, it seems Clarke wrote a standard criticism about helmets and, if you’ve ever read one, you’ve essentially read them all. I’ve pointed out fallacies in his arguments many times, and he continually ignores those criticisms.

Clarke’s primary argument is what I call the ‘helmet law as cycling deterrent’ hypothesis. There is no solid evidence to support this hypothesis, yet that has not stopped those like Clarke from proclaiming its truth to anyone who cares to listen.

In his comment, Clarke ‘estimates’ (and I’m being generous here) that 413 cycling head injuries were expected in NSW in 1992 (this assumes a 30% drop in cycling from 1991 to 1992 and the 590 head injuries reported in 1991, i.e., 590*0.7=413). Instead, there were 648 NSW head injury hospitalizations in 1992 (hospital data can be found here).

There are two problems with this approach. First, NSW adults were subject to helmet legislation for the entirety of 1991 and children for half of it. Clarke would have needed 1990/91 data to make that point (Jan-Dec for adults, Jul-Jun for children). This also assumes there are no existing trends in cycling rates (no data exists to even attempt to estimate those secular trends, except for 1.5 years of pre-law hospital data that does not support Clarke’s hypothesis). Second, if you apply the same logic to arm injuries, you get an expected 660*0.7=462 arm injuries in 1992 (there were 760). In fact, not adjusting for population, head injuries increased by 9.8% while arm injuries increased by 15% between those years. None of this suggests Clarke’s hypothesis is true.
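
For readers who want to check the arithmetic, below is a minimal sketch in Python using only the counts quoted above; the variable names are mine, not from any published analysis. The point is that arm injuries, which helmets cannot affect, rose even faster than head injuries.

# Clarke's logic: a 30% drop in cycling should imply a ~30% drop in injuries.
# Counts are the NSW hospitalization figures quoted above.
counts_1991 = {"head": 590, "arm": 660}
counts_1992 = {"head": 648, "arm": 760}

for body_part, n_1991 in counts_1991.items():
    expected = 0.7 * n_1991                 # Clarke's assumed 30% decline
    observed = counts_1992[body_part]
    change = 100 * (observed / n_1991 - 1)  # actual year-on-year change
    print(f"{body_part}: expected {expected:.0f}, observed {observed}, change {change:+.1f}%")

# Output:
# head: expected 413, observed 648, change +9.8%
# arm: expected 462, observed 760, change +15.2%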

Importantly, the injury data comes from a census of medically diagnosed hospitalizations at that time and is therefore of high quality. To justify the drop in cyclists, Clarke relies on repurposed cycling counts from helmet use surveys. Standing on a street corner observing helmet use among cyclists is prone to bias when used as a measure of cycling exposure (this makes the Victoria and NSW helmet use surveys problematic if used for that purpose). Data collected in this way does not readily capture recreational cyclists (who form the backbone of Australian cycling) and cannot account for changes in cycling routes or patterns (observation locations are fixed from year to year). It is known that many kids stopped cycling to school in Australia because either their parents or their schools started discouraging it or outright forbade it. This was not due to helmet legislation, although the two issues may share a common causal factor (i.e., parents/society concerned about cycling injury). Nor did it mean kids stopped cycling after school or on weekends.

Data collected through (stratified) random sampling does not suffer from those problems. The only such surveys in Australia that I’m aware of from around the time of helmet legislation are from South Australia[1] and Western Australia[2]. Below is a summary of their results: the proportion of responses in each of four categories of cycling frequency. (Note these reports do not seem to exist online, so I’ve decided to host them on my webpage, as I feel very few have ever read them or any other source material.)

South Australia (% of respondents)
                          1990   1993
At least weekly           21.8   21.0
At least monthly           5.2    6.0
At least every 3 months    3.9    4.4
Less often or never       69.1   68.6

Western Australia (% of respondents)
                          1989   1993
At least weekly           26.6   27.7
At least every 3 months   11.1   11.6
At least once per year    10.3   11.5
Never                     52.0   49.2

The results indicate no change in cycling in either SA or WA. We also found that to be true for non-head cycling injury hospitalisations in NSW[3]. Neither arm nor leg injuries declined significantly following helmet legislation, and certainly not at the 20% rate cited by Clarke or the 30-40% figure quoted by others. If cycling in NSW dropped substantially after helmet legislation as suggested, I have always wondered why there is no evidence of it in the census of injury hospitalizations.

Clarke cites two of his own papers to justify his position. The first is a non-peer-reviewed conference paper titled Evaluation of Australia’s bicycle helmet laws. I received numerous spam emails about this conference for months. Unfortunately, I get 10+ emails a day from such conferences or predatory publishers asking me to submit ‘my latest research’. Legitimate conferences and publishers do not need to advertise so aggressively.

As with many of his comments about other people’s work, Clarke’s paper is loaded with cherry-picked data and not a shred of statistical/analytic rigor. I’ll detail a few examples.

Figure 3 on page 6 shows counts of cyclists from helmet use surveys in Victoria in 1990 and 1991. Back in June 2013, Tim Churches pointed out this figure omits the 1992 data and does not aggregate by age, which is important considering cycling numbers differ substantially across age groups. Below is a plot created using Tim’s R code.

[Figure: counts of cyclists by age group, Victoria, 1990-1992]

The counts of cyclists for two of the age groups (5-11 and 18+) are virtually identical from 1990 to 1992. The only decline was for 12-17 year olds. If you assume this data is an accurate estimate of cycling exposure (keep in mind I don’t agree with this stance), the cycling decline is age dependent and not true overall.

Clarke also cites a WA report from Heathcote. I could not find a copy of this 1993 report anywhere; however, I did find a 1994 report by the same author[2]. According to the 1994 report, the 1993 report concerned helmet use and “recommended the need for further research into cycling trends, and attitudes towards the helmet wearing law”. This was the motivation behind the 1994 report. Therefore, the later report is the more relevant one here.

The numbers of recreational cyclists reported by Clarke are not found in the 1994 report; however, Table 8 summarizes Usual Destinations with 21.3% in 1989 and 17.4% in 1993 reporting “To a venue for sport/recreation”. This is nowhere near the 57% reduction in recreational cycling reported by Clarke and, taken in context, it ignores that there was no real change in WA cycling frequency reported in the same surveys (see above table).

In his Table 13, Clarke reports proportions cycling to work at five-year intervals (I discussed problems with how this data is interpreted in a previous post). In addition to the usual problems with this data, Clarke creates further problems by “estimating” the proportion in 1990. He gives absolutely no justification for this figure except for links to anti-helmet advocacy websites. I suppose he could have extrapolated that figure, but such an estimate would be fraught with problems, coming from data taken on three days five years apart. Estimating trends from such data completely ignores day-to-day variability.

Clarke also cites his 2012 NZMJ paper. Note I have been critical of this paper on at least four occasions [4-7]. The last citation was a systematic review of the methods used to assess the New Zealand bicycle helmet law. Against quality criteria for assessing a population-based intervention, Clarke’s study scored the lowest of five included studies.

Clarke is quite active on the internet sharing his views on helmet legislation. This includes routine editing of the Wikipedia pages on bicycle helmets under the pseudonym “Colin at cycling”, where his contributions have been to promote his NZMJ paper (see the edits page). Although I’m generally supportive of Wikipedia (even contributing money), I find it disturbing that anti-helmet advocates like Clarke have taken control of the bicycle helmet content. Like Clarke, they all have conflicts of interest which are never reported (see Dorothy Robinson’s Wikipedia page for a flavor of what’s happened; she has a clear conflict of interest since she essentially promotes her own work and strongly opposes the inclusion of other studies that criticize her work).

If anything, Clarke is steadfast in his belief that helmets are ‘bad’ and he’s not afraid to share his views. However, I’d be more willing to accept his points if he’d stop cherry-picking data and present a balanced view of the topic. My past experiences with him indicate that is highly unlikely.

  1. Marshall, J. & White, M. (1994). Evaluation of the compulsory helmet wearing legislation for bicyclists in South Australia. South Australian Department of Transport, Walkerville, SA.
  2. Heathcote, B. & Maisey, G. (1994) Bicyclist use and attitudes to the helmet wearing law. Traffic Board of Western Australia, Perth.
  3. Walter, S.R., Olivier, J., Churches, T., & Grzebeita, R. (2011). The impact of compulsory cycle helmet legislation on cyclist head injuries in New South Wales, Australia. Accident Analysis and Prevention, 43, 2064–2071.
  4. Olivier, J. (2012) Don’t blame mandatory helmets for cyclist deaths in New Zealand. The Conversation.
  5. Wang, J., Olivier, J. & Grzebieta, R. (2014). Response to ‘Evaluation of New Zealand’s bicycle helmet law’ article. The New Zealand Medical Journal, 127, 106-108.
  6. Olivier, J., Wang, J.J.J., Walter, S. & Grzebieta, R. (2014). Anti-helmet arguments: lies, damned lies and flawed statistics. Journal of the Australasian College of Road Safety, 25, 10-23.
  7. Olivier, J., Wang, J.J.J. & Grzebieta, R. (2014). A systematic review of methods used to assess mandatory bicycle helmet legislation in New Zealand. Journal of the Australasian College of Road Safety, 25, 24-31.

Reflections on MJA and Elsevier

Virtually all members of the Medical Journal of Australia (MJA) editorial advisory committee have resigned in light of a decision to partner with publishing giant Elsevier to produce future issues of the MJA. According to reports, 19 of the 20 members resigned after editor-in-chief Stephen Leeder was sacked. Leeder is quoted as saying dealing with Elsevier was “beyond the reach of my ethical tolerance,” while other members note it was an Elsevier journal that published the now infamous Andrew Wakefield MMR vaccine/autism study and that Elsevier in the past created “fake” journals sponsored by drug companies.

This incident reminded me of an email exchange and online discussion I had with Jennifer Mindell, editor-in-chief of Elsevier’s Journal of Transport and Health (JTH), and Chris Pringle, an Executive Publisher at Elsevier for Geography & Transport. In my original post, I expressed doubt regarding the legitimacy of JTH as an unbiased source of research, as the journal is affiliated with the anti-helmet advocacy group the Transport and Health Study Group (THSG).

In her response regarding the debate around bike helmets, Mindell states

“I feel strongly that these issues need to be discussed in a respectful way in a scientific forum, rather than in a series of confidential emails full of allegations or in separate blogs, with difficulty in bringing all the evidence together in one place.”

In a long email, I detailed why I don’t trust Elsevier or Mindell to handle things in an unbiased manner. Specifically, a journal with ties to an anti-helmet organization should not be the host for the debate.

There are also those who have concerns about Elsevier’s business practices.

To perhaps contribute to the discussion around Elsevier and to promote transparency, I’ve inserted my full response to Mindell and Pringle below. It explains why I declined an invitation to contribute a bike helmet article to JTH.

Email to Jennifer Mindell and Chris Pringle (16 April 2014)

Jennifer,

Perhaps some more background would help explain our decision not to contribute to the JTH cycle helmets issue.

I became involved in bicycle helmet research around 2010 because a student of mine at the time, Scott Walter, had an interest. Scott is a commuter cyclist (as was I until I needed to take our small child to daycare). Our interest was piqued by a paper by Alex Voukelatos and Chris Rissel in the Journal of the Australasian College of Road Safety (JACRS) and the published criticisms of that paper by Tim Churches. It was clear to us the Voukelatos/Rissel paper was weak because

(1) no statistical analysis was performed (they merely “eyeballed” their incorrectly drawn graphs to reach their conclusions, and unfortunately the human eye is not a well-defined nor reliable statistical procedure),

(2) the numbers literally didn’t add up (as pointed out by Tim, their results were numerically incorrect, as was the time-scale on the graph which formed the entire basis of their analysis, thus they based their conclusions on a graph that showed incorrect numbers at the wrong position on a date axis),

(3) there was a huge discrepancy between head injuries identified by ICD9 and ICD10, indicating incorrect usage of ICD codes (we accept they would likely not be identical, but Voukelatos/Rissel identified over twice as many head injuries using ICD10 than using ICD9 for 1998/99 (511 vs 1170); we were also never able to reproduce the injury counts reported in their paper, so there was a clear problem there),

(4) the use of 1988/89 data was problematic as only one diagnosis code existed in the database at that time (so cyclists with both head and arm injuries would only be listed with one – most likely head instead of arm, thereby inflating the 1988/89 head/arm ratio), and

(5) there was no comparison to other body parts and no clear justification that arm injuries were a good comparator.

As I’m sure you are all aware, the Voukelatos/Rissel paper was later retracted by the JACRS. Although there were clear problems with their study, Scott and I felt that the comparison of head and arm injuries over time was an interesting one and could help deal with the lack of exposure data. We were aware of the criticisms of past research on this topic and we felt this strategy had the best chance of addressing those issues in the most rigorous way possible (if the claims by some authors that helmet legislation led to a 30-40% drop in cycling are true, wouldn’t we see a corresponding decline in non-head injury?). This is certainly not a new strategy as psychologists would call the arm injury comparison a dependent, non-equivalent, no treatment control.

Our goal was to correct those issues and present the results as we found them – no matter whether they were supportive of helmet legislation or not. In the end, we reported a 29% decline in cycling head injuries (after adjusting for changes in arm and leg injury). Although we found significant declines in cycling head injury at exactly the time of helmet law introduction, we found no significant declines in cycling arm or leg injury at the time of the NSW helmet law commencement. Note that this change corresponded to approximately a 55% increase in helmet wearing. Thus, per “new” helmet wearer, the decline was actually about 0.29/0.55 ≈ 53%. That figure is not unlike what is usually reported in case-control studies of cycling helmet effectiveness.

Note that our conclusion was not that every jurisdiction in the world should have helmet legislation, as some reactionary critics have claimed our paper recommends, but merely that calls to repeal helmet legislation in NSW, Australia are not justified by our analysis. How exactly does that make us pro-helmet?

Once our paper was published, our research was attacked by anti-helmet groups like Helmet Freedom (www.helmetfreedom.com) and the Bicycle Helmet Research Foundation (www.cyclehelmets.org – your colleagues Malcolm Wardlaw and John Franklin are both editorial board members), and elsewhere on the blogosphere. It doesn’t seem to matter to these organisations whether their information is correct or not and, in my opinion, they resemble anti-vaccine and climate change denier websites. One site claims we used less than a year’s worth of data before the law (we had 18 months of data before the law and would’ve gone further back if usable data existed), while another suggested our paper was a government conspiracy to counter de Jong’s cycle helmets cost-benefit paper and Rissel and Wen’s cycle helmet attitudes survey. The latter claim is quite ludicrous considering our paper was published first of the three, we actually cited a working draft of de Jong’s paper (as it was the only version available to us at the time), and we published a direct response to the Rissel and Wen paper (http://www.publish.csiro.au/?act=view_file&file_id=HE12076.pdf). After pointing out to Rissel and Wen that they had misinterpreted their own logistic regression results and ignored their own results that run contrary to their conclusions, they state “Olivier et al. are unashamedly pro-helmet advocates,7,8”. The citations are our 2011 paper and Rissel’s rejoinder to our paper.

Rissel’s rejoinder to our 2011 Accident Analysis and Prevention (AAP) paper was published in 2012 in AAP (another Elsevier journal). The journal’s editors, Karl Kim and Rune Elvik, did not offer us the right of reply or even inform us the rejoinder existed. We found out about it after it was published. In his rejoinder, Rissel cites as primary evidence his own already retracted paper discussed above, which is, quite frankly, rather bizarre academic behaviour and in violation of COPE guidelines. The data and analysis used by Voukelatos and Rissel are clearly wrong, and I find it troubling that Rissel was allowed to proclaim his conclusions were unaffected by incorrect data (he never corrected the data or the graph used in his analysis, so he had no evidence to even support his claim). Rissel also misquoted figures/analyses from other studies — a 2011 Rune Elvik paper (Elvik was still editor-in-chief of AAP at the time; this is also the meta-analysis that has had at least two corrigenda), an assessment of the Western Australia helmet law (Hendrie et al), and an assessment of UK cycling fatalities (Hynd et al).

One of Rissel’s criticisms was we did not share our data. Since Rissel has an appointment with the NSW Ministry of Health, he is keenly aware of the privacy laws governing the sharing of hospitalisation data. Although we had access to unit record data, we were only approved to release the information contained in our paper, nothing more. I also found his request for our data strange considering he had access to the very same NSW hospitalisation data (even if he used it incorrectly in his paper). Our paper clearly listed the ICD9 codes we used, so he could have easily replicated the aggregated data identified from the unit records.

Additionally, Raphael Grzebieta and I had an email exchange with Chris Rissel two weeks before he submitted his rejoinder. Most of the criticisms found in his rejoinder were in his email, and I responded in detail explaining why they were unfounded. Rissel even alludes to this email exchange in his rejoinder, where he states

“A somewhat arbitrary 18 months before and after the official start of the mandatory helmet legislation was used as the basis of regression modelling, limited by poor quality data prior to the legislation (Olivier, personal communication).”

This was actually in reference to there being only 18 months of usable NSW hospitalisation data before the helmet law, not to the hospitalisation data being of poor quality. Note that since our analysis used the same source data as Voukelatos and Rissel, their retracted paper and Rissel’s post-retraction contention that the helmet law had no effect suffer the same limitation (as would any assessment of the NSW helmet law). Note also that Rissel never received my permission to publish the contents of our personal email exchange, and we find his failure to take any heed of our detailed response to the criticisms contained in his AAP rejoinder, which he received two weeks prior to submission of the rejoinder article, to be disingenuous in the extreme. Given these types of behaviour, you will understand our reluctance to submit to a journal with him on the editorial board.

Once we were aware of Rissel’s rejoinder and his bad faith in failing to modify it in light of our prior correspondence, we raised these and other issues with Karl Kim, Rune Elvik and, eventually, Chris Pringle. Kim and Elvik’s initial reaction was that Rissel’s rejoinder should be retracted. However, Elsevier in the end decided not to retract it and instead offered us the option of replying to his criticisms, with our reply to go out for peer review. Much of the response had already been written because we had already responded to Rissel through email.

Note that the retracted Voukelatos/Rissel paper is presently hosted by anti-helmet websites (for example, http://www.cycle-helmets.com/rissel.pdf and http://www.cyclehelmets.org/1146.html) without any indication of the retraction. These organisations, including the Bicycle Helmet Research Foundation (again, your colleagues Wardlaw and Franklin are editorial board members), would be in violation of COPE guidelines if they were, in fact, research organisations.

Tim has also made you aware of Elvik’s papers in AAP and Elsevier’s handling of that situation. I find it incredibly strange that the editor-in-chief of a journal can have the key results in a paper corrected in a major way not once, but twice, without having the paper retracted, nor having the paper re-submitted for peer-review, nor even modifying his conclusions or discussion in light of such major calculation errors. The whole episode reflects extremely poorly on Elsevier’s methods and ethical standards as a scientific publisher of repute.

There were also unfounded criticisms made by you, Malcolm Wardlaw and John Franklin in a THSG presentation (see attached [link: http://www.transportandhealth.org.uk/wp-content/uploads/2011/10/THSG-seminar_Cycling-safety.pptx]). You state you want the discussion of cycle helmets to be “less adversarial” as it “does not allow for defining common ground”, yet your actions indicate otherwise. Much of your presentation on helmets is a criticism of other people’s work. When I came across your presentation, it was clear right away that you had manipulated the axes and scales of a graph from our 2011 AAP paper to make your point (these are both massive no-no’s that are usually taught in introductory statistics courses). However, how was I supposed to react to your unfounded criticisms? You didn’t publish them in a journal or conference proceedings – I downloaded the presentation from the THSG website (was it peer-reviewed?). As a researcher, how am I supposed to address such criticisms? You seem to be displeased I expressed my views about the THSG and JTH openly on my blog, yet isn’t that exactly what you did to our paper? If you don’t agree with my blog entry, you can at least leave a comment. We were not offered that courtesy by you.

In your presentation, you include the plot below. I’ve seen this plot elsewhere on anti-helmet websites. I find it interesting, not for what it presents but for what it doesn’t. A trend is presented, but each point represents observations taken on a single day, five years apart. The plot conveys nothing about how weather affected the census in some years, and there’s no information about how modes of transportation are categorised. In earlier censuses, multiple-mode trips including a bicycle leg would be categorised as “cycling”; in later years, the categorisation was by primary mode. So, someone taking the Manly ferry coupled with cycling trips to and from it may be counted as cycling in some censuses and not in others. It also ignores the fact that 1.14% of trips in Australian cities (where most people live and cycle) were by bicycle in the 1986 survey and this figure changed little (1.13%) by the next one in 1991; these are the two censuses on either side of when most Australians became subject to helmet legislation. It also ignores declines in all other modes of transportation except cars (see attached figure created by Tim Churches). Was helmet legislation also the causal factor for declines in other modes of transport?

[Figure: BHRF plot of census journey-to-work cycling trends]

I also note that your THSG co-chair and JTH editorial board member Stephen Watkins has stated “Mass helmet use has not reduced serious head injuries to a noticeable degree relative to general improvements in road safety seen for other road users (i.e. secular trends)” (http://www.cyclist.ie/wp-content/uploads/2010/11/Transport-Health-Study-Group-Cycle-Helmets.pdf). Another quote of his is “The most important is that it has been shown in a number of different jurisdictions that compulsory (and enforced) helmet-wearing reduces cycle use.” These statements are interesting when juxtaposed. I wholly agree that separating out the effect of helmet legislation (or any other intervention) from secular trends is important. We did just that in our 2011 analysis. However, this rigour never seems to be applied to surveys used to draw conclusions about cycling numbers (whether designed for that purpose or not). There were 4 NSW cycling surveys taken in Oct 1990, Apr 1991, Apr 1992 and Apr 1993. The helmet law came into effect on 1 Jan 1991 (adults) and 1 July 1991 (kids). The adult data, whose overall counts change little over the 4 surveys, is not found in any of Robinson’s work (it effectively doesn’t exist), and she uses only the 1991-1993 surveys for kids. That means only one data point is used pre-helmet law. How can anyone estimate secular trends in cycling from one data point? Even two data points would not be realistic, as they can only determine a line.

In fact, trend data does exist for Australian metro areas dating back to 1900 (see attached). Cycling mode share increased up to WWII, peaking at about 8-9%, and steadily declined thereafter. The big change in cycling rates came from the shift to car travel, not helmet legislation.

[Figure: Australian metropolitan travel mode share, 1900 onwards]

I am confused when you say the THSG is “less likely” to revise their policy as a result of our declining your request to contribute to the JTH helmet issue. Shouldn’t revision of policy happen naturally as more evidence comes to light? Surely the evidence does not have to appear in the JTH in order to be considered by the THSG. For example, in reference to your comment that “many people in public health” believed helmets only protected against “minor issues such as grazes”, a recent article in Accident Analysis and Prevention (http://www.sciencedirect.com/science/article/pii/S000145751400061X) states

“Considering a realistic bicycle accident scenario documented in the literature (Fahlstedt et al., 2012) where a cyclist was thrown at 20 km/h (i.e. 5.6 m/s which corresponds to a drop height of approximately 1.5 m), our analysis indicates that a helmeted cyclist in this situation would have a 9% chance of sustaining the severe brain and skull injuries noted above whereas an unhelmeted cyclist would have sustained these injuries with 99.9% certainty. In other words, a helmet would have reduced the probability of skull fracture or life threatening brain injury from very likely to highly unlikely.”

We also published a paper earlier this year where we found helmets reduced the odds of severe head injury by up to 74% (these were NSW cyclists hospitalised after a motor vehicle crash and reported to the police from 2001-2009). Severe injuries included “Open wound of head with intracranial injury” (S01.83), “Multiple fractures involving skull and facial bones” (S02.7), “Fracture of skull and facial bones, part unspecified” (S02.9), “Loss of consciousness [30 mins-24hrs]” (S06.03), “Loss of consciousness prolonged without return of consciousness” (S06.05), “Traumatic cerebral oedema” (S06.1), “Diffuse brain injury” (S06.2), “Other diffuse cerebral & cerebellar injury” (S06.28), “Traumatic subdural haemorrhage” (S06.5), “Traumatic subarachnoid haemorrhage” (S06.6), “Other intracranial injuries” (S06.8), and “Intracranial injury, unspecified” (S06.9). None of these are minor injuries.

One of the main tenets of scientific discourse is to re-evaluate one’s position in light of new evidence. Shouldn’t you reflect on the THSG’s policy in light of this and other evidence, whether or not we submit an article to the JTH?

It takes an awful lot of effort on the part of several researchers, while coordinating everyone’s busy schedules, to produce a high-quality research paper. There has been nothing in my experience with you, the THSG, the JTH or Elsevier to indicate that submitting a “pro-helmet” paper to JTH would be worth the effort. However, I suppose that could change in light of new evidence.

Regards,

Jake

Psych Journal Bans Hypothesis Testing and Confidence Intervals

In a recent editorial, the scientific journal Basic and Applied Social Psychology has banned submissions that utilize null hypothesis significance testing (NHST) or confidence intervals (CI). Firstly, I can certainly appreciate what motivated the journal’s policy, as I routinely read/review manuscripts that base decision making solely on p-values less than 5%, i.e., the (incorrect) belief that research findings are important if p<0.05.

For example, I recently read a case-control study of helmet use and bicycle-related trauma by Heng et al.[1] One of their reported measures was alcohol use, and they found “[a]lcohol consumption did not correlate with…helmet wearing”, reporting a p-value of “NS” (as in, not significant). A summary of that data is given below.

                       Helmet use
Alcohol involvement    Yes     No
Yes                      0     18
No                      17    125

The p-value from this table is certainly greater than 5% by Fisher’s exact test. For a measure of effect size, it isn’t possible to compute an odds ratio in the usual way because of the zero cell. Instead, a continuity-corrected version can be computed by adding one half to each cell.

OR_{cc}=\dfrac{(a+0.5)(d+0.5)}{(b+0.5)(c+0.5)}

The above 2\times2 table gives a continuity-corrected odds ratio of OR_{cc}=0.19. In other words, there is an associated 81% reduction in the odds of alcohol involvement among helmet wearers versus those not wearing a helmet in this data set. That is not a trivial result and, perhaps, this is the type of situation the journal is trying to avoid. That is something I fully support.
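
For anyone who wants to reproduce these figures, here is a minimal sketch in Python; it assumes scipy is available, and the 2x2 table is the Heng et al. data above.

from scipy.stats import fisher_exact

# Cells of the 2x2 table: a = alcohol & helmet, b = alcohol & no helmet,
# c = no alcohol & helmet, d = no alcohol & no helmet
a, b, c, d = 0, 18, 17, 125

_, p_value = fisher_exact([[a, b], [c, d]])
print(f"Fisher's exact p-value: {p_value:.3f}")  # > 0.05, consistent with 'NS'

# Continuity-corrected odds ratio (the usual OR is undefined because a = 0)
or_cc = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
print(f"Continuity-corrected OR: {or_cc:.2f}")   # 0.19, an 81% reduction in odds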

On the other hand, I believe the journal has over-reached by banning NHST and CI. Granted, there are problems with these methods of statistical inference, but I strongly disagree that they are invalid. Perhaps the strengths and limitations of these methods are poorly understood, but they are certainly not invalid.

The journal does recommend an increased focus on descriptive statistics and correctly notes they “become increasingly stable” as sample size increases. Of course, this relies on descriptive statistics being unbiased (at least asymptotically); otherwise you’ll get stable estimates that do not reliably estimate the quantity you want estimated.

I also think there is a disconnect between the push for descriptive statistics and the ban on confidence intervals. So, for example, I may be encouraged to report the sample mean and its standard error

\bar{x}   and   \dfrac{s}{\sqrt{n}}

but I’m banned from combining these results as

\bar{x}\pm{1.96}\times\dfrac{s}{\sqrt{n}}.
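
To make the disconnect concrete, here is a minimal sketch of exactly this banned combination, using made-up numbers purely for illustration:

import math

data = [2.3, 1.9, 2.8, 2.4, 2.1, 2.6, 2.2, 2.5]  # hypothetical sample
n = len(data)
xbar = sum(data) / n                              # sample mean: allowed
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))
se = s / math.sqrt(n)                             # standard error: allowed
lo, hi = xbar - 1.96 * se, xbar + 1.96 * se       # 95% CI: banned
print(f"mean = {xbar:.2f}, SE = {se:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")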

The American Statistical Association is aware of this situation and has formed a committee to comment on the journal’s decision. I look forward to reading what they come up with.

  1. Heng KWJ, Lee AHP, Zhu S, Tham KY, Seow E. (2006) Helmet use and bicycle-related trauma in patients presenting to an acute hospital in Singapore. Singapore Med J 47(5): 367-372.

Celebrity Blogger Gets Helmet Research Wrong

I received a phone call last week from Nick Rushworth from Brain Injury Australia. He was in a rush and wanted to know if some comments made earlier in the day on 2UE had any validity. I was on my way to a meeting, so I flicked him links to websites and some journal articles to read. When I got some free time, I looked into what all the fuss was about. Celebrity blogger Sarah Wilson was telling 2UE listeners to get rid of the Australian bicycle helmet laws. She has also blogged about bike helmets on her website and contributed an article to the Daily Telegraph. After listening to her interview and reading her blog, I recommend Sarah Wilson do some fact checking before doling out recommendations.

Let me explain. To support her argument, Wilson states

…biggest stumbling block to getting people to riding is not the sweat, is not the exertion, it’s the bike helmets.

In 2011 and 2013, the Cycling Promotion Fund published survey results in which participants were asked about barriers to cycling. In the first report, helmets were the 10th and 13th most common response for current cyclists and non-cyclists respectively, and helmet wearing comprised 4% of all responses. When asked about encouraging women to cycle more in the later survey, 4.1% of respondents gave repeal of the helmet law as their main response. The most common barriers in both surveys were lack of cycling infrastructure and concerns around safety. Neither sweat nor exertion made either list as a cycling barrier.

Wilson also seems to claim Australia and New Zealand are the only countries with helmet legislation and that other countries have rescinded their laws. There are helmet laws in various parts of the US and Canada, as well as in Dubai, Finland, Spain, Iceland, the Czech Republic, Jersey and Japan. Helmet legislation has also been debated in other countries. More information can be found here. Note that most helmet laws outside of Australia and New Zealand are directed at children only. I do believe Israel and Mexico City repealed helmet legislation, but that doesn’t appear to be the norm as suggested.

During the interview, Wilson also indicated the science around helmets is not supportive of their use. She gives more details on her website, and this is where her argument really starts to fall apart. First up is the argument that helmet laws deter cycling. I’ve blogged about this before and discuss it more fully in a peer-reviewed article published last year. In short, this argument is only supported by cherry-picking through the available data. The data often used are cycling counts from helmet use surveys, which have a clear selection bias. To my knowledge, there are only two surveys (with data collected before and after legislation) that even attempt to collect a representative sample (one from South Australia and the other from Western Australia). Here is a summary of the proportions cycling in a given time frame from those surveys. Cycling amounts are nearly identical before and after helmet legislation in those Australian states.

South Australia (% of population cycling)
                          1990   1993
At least weekly           21.8   21.0
At least monthly           5.2    6.0
At least every 3 months    3.9    4.4
Less often or never       69.1   68.6

Western Australia (% of population cycling)
                          1989   1993
At least weekly           26.6   27.7
At least every 3 months   11.1   11.6
At least once per year    10.3   11.5
Never                     52.0   49.2

Wilson then argues for safety-in-numbers, or Smeed’s Law. There certainly is research that has found a negative association between cycling numbers and injuries per cyclist when comparing jurisdictions. However, there is no evidence of a dose response, i.e., that as cycling numbers increase, injuries per cyclist then decrease. I’ve blogged about this using NSW data here: as cycling participation increased in NSW, cycling injuries increased at a similar rate.

Wilson does correctly note there are no randomised controlled trials assessing the effect of bicycle helmets in mitigating head injury. There is a quite obvious reason for that: researchers are not ethically allowed to randomly assign cyclists to wear or not wear a helmet, instigate a bicycle crash, and then clinically diagnose injury and severity. Observational study designs, like case-control studies, are the only real option here, where participants self-select to a treatment (helmet) or control (no helmet).

Wilson then cites a Sydney Morning Herald article and later references an article in the Journal of the Australasian College of Road Safety by Chris Rissel (the two are related) as evidence helmet legislation doesn’t work in Australia. Wilson doesn’t appear to be aware the Journal later retracted Rissel’s paper because the article contained serious data errors, among other issues. Note the SMH article does link to a correction where Rissel admits to the errors. Part of the issue is evident from Table 2 of Rissel’s paper, as the age-specific totals do not add up to the all-ages total. It is also evident when plotting head injury counts by year (raw and ‘corrected’ by summing across age groups) against counts I published three years later in another journal (note Rissel’s data are aggregated by financial year and mine by calendar year).

[Figure: NSW cyclist head injury counts by year, comparing Rissel’s raw and ‘corrected’ counts with my published counts]

The head injury counts rise rapidly from 1997/98 to 1998/99. This is the year the NSW Department of Health changed their coding practices from ICD-9-CM to ICD-10-AM. There usually are discrepancies between ICD versions; however, these amounted to about 10 cases per year when both coding systems were used. My guess is that Rissel made two errors: head injuries were not coded properly for ICD-10-AM years, and those incorrect codes were then incorrectly mapped to ICD-9-CM. Simply put, the data in Rissel’s retracted article are wrong and therefore no valid conclusions can be drawn from them.

Next up, Wilson makes a very shocking claim. She writes

Helmets have been shown to prevent injury from “linear speeding”. But many accidents occur from “angular” accidents caused when the head is rotated. Helmets actually CAUSE head rotation.

There is absolutely no evidence helmets exacerbate rotational injuries. I’ve blogged about it here, and below is a summary from a paper published last year (the citation numbers correspond to that article).

 Curnow [26, 27] suggested helmets exacerbate rotational injuries; the more serious being diffuse axonal injury (DAI). Although Curnow only hypothesised the DAI/helmet link unsupported by any real world or experimental evidence, some have taken this as fact [11, 13, 42, 94, 82, 83, 14]. There is, however, no existing evidence to support the DAI hypothesis. McIntosh, Lai and Schilter [61] found, when testing oblique impacts on dummies to simulate head rotation, helmet wearing did not increase angular acceleration, a result unsupportive of Curnow’s hypothesis. In a study by Dinh et al. [34], using trauma registry data from seven Sydney area hospitals over one calendar year, 110 cyclists were identified and none were diagnosed with DAI regardless of helmet wearing. Walter et al. [110], using linked police and hospitalisation data in New South Wales (NSW) from 2001-2009, reported at most 12 possible DAI cases out of 6,745 cyclists in a motor vehicle collision. Seven of the twelve cyclists were unhelmeted. These results suggest the incidence of DAI among cyclists appears to be rare and unrelated to helmet wearing. Additionally, computer simulated studies of bicycle crashes found no evidence helmets increased the likelihood of neck injury among adults [63] nor was there evidence helmets increased the severity of brain or neck injury in children [62].

The arguments against helmets presented by Sarah Wilson are not supported by the available evidence. To close, she links to anti-helmet websites like Cyclist Rights Action Group and Helmet Freedom, which seem to have supplied her with information. She also regurgitates claims by anti-helmet advocate Paul Martin, whose only ‘research’ on this topic, as far as I know, is an anti-helmet commentary published by the MJA in 2011. He may be a physician, but he doesn’t appear to be a researcher.

As a bit of a side issue to the science around bike helmets, Wilson also makes a civil liberties argument. In my view, that is her only valid argument. Any time an individual’s rights are taken away in the name of the greater good, it should be vehemently challenged by the populace and only accepted with majority approval. A 2012 Australian survey estimates 96% approve of helmet legislation. On the other hand, I have colleagues who argue that avoidable head injuries are a steep cost to countries like Australia where medical costs are shared. I find both arguments quite compelling, and perhaps that’s where the debate around bike helmets should lie.

Note: The discussion and links to Chris Rissel’s retracted JACRS paper have been removed from Sarah Wilson’s blog. 

Confidence Interval Based Sample Size for Two Means

A colleague recently inquired about computing sample size when comparing two group means. My initial response was to compute sample size based on the two sample t-test. After a few probing questions, it became clear the colleague did not expect the means to differ; demonstrating they were similar was the aim of the study.

There are two primary methods for computing sample size — one is based on hypothesis testing and the other on confidence intervals. The goal of hypothesis testing is to demonstrate differences in means (or proportions, variances, etc.), which is the antithesis of what the colleague wanted. Sample size based on a confidence interval is a much better option here.

The large sample confidence interval for the difference in two population means \mu_1 and \mu_2 is

\bar{x}_1-\bar{x}_2{\pm}z_{1-\alpha/2}\sqrt{\dfrac{s^2_1}{n_1}+\dfrac{s^2_2}{n_2}}

where z_{1-\alpha/2} is the quantile of the standard normal distribution corresponding to confidence level (1-\alpha)100\%, and \bar{x}_i, s^2_i and n_i are the sample mean, sample variance and sample size, respectively, for group i=1,2.

The term to the right of the \pm sign is often called the margin of error E, i.e.,

E=z_{1-\alpha/2}\sqrt{\dfrac{s^2_1}{n_1}+\dfrac{s^2_2}{n_2}}

This formula can be simplified if it’s reasonable to assume a common variance, i.e., s^2=s^2_1=s^2_2, and equal sample sizes, i.e., n=n_1=n_2. The equation simplifies to

E=z_{1-\alpha/2}\sqrt{\dfrac{2s^2}{n}}

We can then solve for n to get

n=2\left(\dfrac{z_{1-\alpha/2}s}{E}\right)^2

As an example, the sample size (per group) for a 95% confidence interval with a margin of error of E=20 units and standard deviation s=100 is

n=2\left(\dfrac{1.96\times{100}}{20}\right)^2=192.08

In practice, this would be rounded up to n=193 per group. It is common to choose 95% confidence (i.e., z_{0.975}=1.96) whereas the margin of error and standard deviation are context specific. One strategy for the margin of error is to choose the smallest value that represents a meaningful difference, so that any smaller value would be considered inconsequential. The choice of standard deviation can be informed by previous research.

Another consideration is loss to follow-up (if, for example, the outcome is the difference between pre- and post-measurements). With, say, a 20% attrition rate, the sample size per group would be increased to

n=\dfrac{192.08}{1-0.2}\approx241

Of course, the computation gets far more complex, and possibly intractable, when the equal variance and sample size assumptions are not reasonable.
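For anyone wanting to reproduce these calculations, below is a minimal sketch in Python (my own illustration, not code from the original post; scipy is assumed) implementing the equal-variance, equal-sample-size formula together with the attrition adjustment:

import math
from scipy.stats import norm

def n_per_group(s, E, conf=0.95):
    # Sample size per group so the (conf)-level CI for a difference in
    # two means has margin of error E, assuming common SD s and equal n
    z = norm.ppf(1 - (1 - conf) / 2)  # 1.96 for 95% confidence
    return 2 * (z * s / E) ** 2

n = n_per_group(s=100, E=20)      # the worked example above
print(round(n, 2))                # 192.07 (192.08 when z is rounded to 1.96)
print(math.ceil(n))               # 193 per group
print(math.ceil(n / (1 - 0.2)))   # 241 per group allowing for 20% attrition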

Too much statistical power can lead to false conclusions

We recently published a letter in the journal Injury Prevention regarding a criticism of our re-analysis of Ian Walker’s bicycle overtaking study. To fit within the journal’s guidelines, we needed to shorten our original draft by a sizeable amount. A lot of important detail was omitted in the process, so I’ve posted the full version below.

Original Response to M Kary

Injury Prevention recently published a commentary critical of epidemiological approaches to cycling safety.[1] The author, M. Kary, suggests that our paper[2], which re-analysed Walker’s[3] study of motor vehicle overtaking distance for cyclists, made false claims about type I errors and confused statistical significance and clinical significance. Kary supports this critique, along with other points in the commentary, with a non-peer reviewed response posted by him to the journal’s website.[4]

In our paper, we note that increasing power when computing sample size leads to an increase in the probability of a type I error.[2] Kary[1] incorrectly repeats this contention as the probability of a type I error increasing with sample size, suggesting ours was a “false claim”. We will demonstrate that our original assertion regarding type I errors is correct and reinforce the points made in our reanalysis paper regarding statistical versus clinical significance. Both points are important – and often overlooked – issues in quantitative research.

Sample size when comparing two groups on a quantitative variable, such as comparing motor vehicle overtaking distance when wearing or not wearing a helmet, is a function of effect size (\delta), the type I error rate (\alpha), and power (1-\beta). For example, using a formula for a two sample t-test (see appendix), the sample size to detect a small effect size[5] \delta=0.2 with 80% power and \alpha=0.05 is n=786 (or 393 per group). Leaving the effect size and sample size fixed in this example, Figure 1 shows the type I error rate as a function of power.

Figure 1. Relationship between power and type I error rate for comparing two groups (n=786 and δ=0.2)

As power increases, so does the type I error rate. When power increases to 98%, as in the Walker study, the type I error rate is 0.45. It is true that the type I error rate decreases as sample size increases (leaving the effect size and power fixed), as suggested by Kary; however, that is not the claim made in our paper.
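For the curious, here is a minimal sketch in Python of this computation (my own illustration of the appendix formula; the published figure was not necessarily produced this way):

from scipy.stats import norm

def alpha_for_power(power, n_group, delta):
    # Invert the sample size formula (see appendix):
    # alpha = 2 * (1 - Phi(delta * sqrt(n/2) - z_{1-beta})), n per group
    z_beta = norm.ppf(power)
    return 2 * (1 - norm.cdf(delta * (n_group / 2) ** 0.5 - z_beta))

for power in (0.80, 0.90, 0.98):
    print(power, round(alpha_for_power(power, n_group=393, delta=0.2), 2))
# 0.8 -> 0.05, 0.9 -> 0.13, 0.98 -> 0.45, consistent with Figure 1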

It is possible to maintain a nominal type I error rate while increasing power; however, this comes at a large cost. In his study, Walker chose a small effect size,[5] \alpha=0.05 and 98% power. The calculated sample size for these inputs is n=1614. When a more conventional 80% power is used instead, the sample size is n=786. In other words, for a fixed type I error rate, the sample size roughly doubles to increase power from 80% to 98%.
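The same normal approximation (see appendix) reproduces this roughly twofold increase. A quick sketch, again my own illustration; exact totals can differ by a case or two from those quoted above depending on rounding conventions:

import math
from scipy.stats import norm

def total_n(delta, alpha=0.05, power=0.80):
    # Total sample size for a two sample t-test (normal approximation)
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * math.ceil(2 * ((z_a + z_b) / delta) ** 2)

print(total_n(0.2, power=0.80))  # 786
print(total_n(0.2, power=0.98))  # 1612, close to the n=1614 quoted above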

Clinical significance relates to the practical implications of the absolute size of an effect. For the Walker study data, Table 1 shows differences (in metres) between helmet wearing and no helmet wearing for various cut points of overtaking distance. For motor vehicles overtaking cyclists at less than 1.5 metres, the difference is at most 7 millimetres. The largest absolute effect size is for overtaking at more than 2 metres and amounts to 7.2cm. There is no established appropriate effect size for overtaking distance, and neither Walker[3] nor Kary[1] has suggested one. Being overtaken at 2 metres or more seems unlikely to represent a major safety issue, while being overtaken much closer, e.g., at 0.75 or 0.753 metres, would both be considered risky scenarios. In other words, the clinical significance of the helmet effect seems negligible.

Overtaking distance (m)   Difference, no helmet vs. helmet (m)   95% CI
(0, 0.75)                 -0.052                                 (-0.224, 0.121)
(0.75, 1.00)               0.003                                 (-0.061, 0.067)
(1.00, 1.50)               0.007                                 (-0.012, 0.027)
(1.50, 2.00)               0.017                                 (-0.003, 0.037)
(2.00, ∞)                  0.072                                 (0.034, 0.109)

Table 1. Absolute effect sizes comparing overtaking distance between helmeted and unhelmeted conditions (adapted from Table 8 of Olivier and Walter[2]).

Statistical significance is a function of sample size, so it is possible to achieve a statistically significant result that is clinically meaningless when the sample size is large.[6] Standardized indices, such as Cohen’s \delta,[7] provide a sample-size-independent quantification of effect size.

Walker found a statistically significant relationship between helmet wearing and overtaking distance (F_{1,2313}=8.71, p=0.003). This result comes from an ANOVA model with helmet wearing, overtaking distance and their interaction. When the F statistic is converted to effect size \delta, the estimated helmet wearing effect is \delta=0.12, which is trivial by Cohen’s definition. Additionally, Walker’s sample size was n=2355, which gives 99.8% power to detect a small effect size for overtaking distance by helmet wearing status (assuming \delta=0.2 and \alpha=0.05).
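Both numbers are easy to verify; here is a minimal sketch in Python (my own illustration) using the appendix approximation \delta\approx2\sqrt{F/df_d} and the normal approximation for power:

from scipy.stats import norm

F, df_d, n_total = 8.71, 2313, 2355

delta_hat = 2 * (F / df_d) ** 0.5  # effect size implied by Walker's F statistic
print(round(delta_hat, 2))         # 0.12, trivial by Cohen's definition

# Power to detect delta = 0.2 at alpha = 0.05 with n_total/2 per group
power = norm.cdf(0.2 * (n_total / 4) ** 0.5 - norm.ppf(0.975))
print(round(power, 3))             # 0.998, i.e. 99.8%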

As we aimed to demonstrate in our reanalysis of Walker’s data,[2] both the calculated effect size and the absolute effect size do not support helmet wearing as a major factor in overtaking distance between cyclists and motor vehicles. In a follow-up study, Walker and colleagues compared overtaking distances for seven types of cyclists with one type unhelmeted. Even when using a much larger sample size (n=5690), no statistically significant helmet effect was observed.[8]

Appendix

A formula for computing sample size per group for the two sample t-test is

n=\dfrac{2\sigma^2(z_{1-\alpha/2}+z_{1-\beta})^2}{(\mu_1-\mu_2)^2}=\dfrac{2(z_{1-\alpha/2}+z_{1-\beta})^2}{\delta^2}

where \delta=(\mu_1-\mu_2)/\sigma is the effect size and z_p is the p^{th} quantile of the standard normal distribution. Solving for the type I error rate, this equation becomes

\alpha=2\left(1-\Phi\left(\delta\sqrt{\dfrac{n}{2}}-z_{1-\beta}\right)\right)

where \Phi(x) is the cumulative standard normal distribution function. An F statistic for a binary variable can be converted to \delta using the following formula,[9] which simplifies further for groups of equal sample size

\delta=\sqrt{F\left(\dfrac{n_1+n_2}{n_1n_2}\right)\left(\dfrac{n_1+n_2}{n_1+n_2-2}\right)}\approx2\sqrt{\dfrac{F}{df_d}}

Acknowledgement

This post was co-authored with Scott Walter.

References

  1. Kary M. Unsuitability of the epidemiological approach to bicycle transportation injuries and traffic engineering problems. Inj Prev, in press.
  2. Olivier J, Walter S. Bicycle Helmet Wearing Is Not Associated with Close Motor Vehicle Passing: A Re-Analysis of Walker, 2007. PLoS ONE 2013;8:e75424.
  3. Walker I. Drivers overtaking bicyclists: Objective data on the effects of riding position, helmet use, vehicle type and apparent gender. Accident Analysis & Prevention 2007;39:417-425.
  4. Kary M. Fundamental misconceptions of safety and of statistics. PLoS ONE [eLetter], published 1 Dec 2013. http://www.plosone.org/annotation/listThread.action?root=75587
  5. Cohen J. A power primer. Psychological Bulletin 1992;112:155-159.
  6. Sullivan GM, Feinn R. Using Effect Size—or Why the P Value Is Not Enough. Journal of Graduate Medical Education 2012;4:279-282.
  7. Cohen J. Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
  8. Walker I, Garrard I, Jowitt F. The influence of a bicycle commuter’s appearance on drivers’ overtaking proximities: an on-road test of bicyclist stereotypes, high-visibility clothing and safety aids in the United Kingdom. Accident Analysis & Prevention 2014;64:69-77.
  9. Thalheimer W, Cook S. How to calculate effect sizes from published research articles: A simplified methodology. Available at: http://work-learning.com/