Celebrity Blogger Gets Helmet Research Wrong

I received a phone call last week from Nick Rushworth from Brain Injury Australia. He was in a rush and wanted to know if some comments made earlier in the day on 2UE had any validity. I was on my way to a meeting, so I flicked him links to websites and some journal articles to read. When I got some free time, I looked into what all the fuss was about. Celebrity blogger Sarah Wilson was telling 2UE listeners to get rid of the Australian bicycle helmet laws. She has also blogged about bike helmets on her website and contributed an article to the Daily Telegraph. After listening to her interview and reading her blog, I recommend Sarah Wilson do some fact checking before doling out recommendations.

Let me explain. To support her argument, Wilson states

…biggest stumbling block to getting people to riding is not the sweat, is not the exertion, it’s the bike helmets.

In 2011 and 2013, the Cycling Promotion Fund published survey results in which participants were asked about barriers to cycling. In the first report, helmets were the 10th and 13th most common response for current cyclists and non-cyclists respectively, and helmet wearing comprised 4% of all responses. When asked about encouraging women to cycle more in the later survey, 4.1% of respondents gave the repeal of the helmet law as their main response. The most common barriers from both surveys were lack of cycling infrastructure and concerns around safety. Neither sweat nor exertion made either list as a cycling barrier.

Wilson also seems to claim Australia and New Zealand are the only countries with helmet legislation and other countries have rescinded their laws. There are helmet laws in various parts of the US and Canada as well as Dubai, Finland, Spain, Iceland, Czech Republic, Jersey and Japan. It has also been debated in other countries. More information can be found here. Note that most helmet laws outside of Australia and New Zealand are directed at children only. I do believe Israel and Mexico City repealed helmet legislation, but that doesn’t appear to be the norm as suggested.

During the interview, Wilson also indicated the science around helmets is not supportive of their use. She gives more details on her website and this is where her argument really starts to fall apart. First up is the argument that helmet laws deter cycling. I’ve blogged about this before and discuss it more fully in a peer-reviewed article published last year. In short, this argument is only supported when cherry-picking through the available data. The data often used are cycling counts from helmet use surveys, which have a clear selection bias. To my knowledge, there are only two surveys (with data collected before and after legislation) that even attempt to collect a representative sample (one from South Australia and the other from Western Australia). Here is a summary of the proportions cycling in a given time frame from those surveys. Cycling amounts are nearly identical before and after helmet legislation in those Australian states.

South Australia (%)          1990   1993
At least weekly              21.8   21.0
At least monthly              5.2    6.0
At least every 3 months       3.9    4.4
Less often or never          69.1   68.6

Western Australia (%)        1989   1993
At least weekly              26.6   27.7
At least every 3 months      11.1   11.6
At least once per year       10.3   11.5
Never                        52.0   49.2

Wilson then argues for safety-in-numbers or Smeed’s Law. There certainly is research that has found a negative association between cycling numbers and injuries per cyclist when comparing jurisdictions. However, there is no evidence of a dose response, i.e., that injuries per cyclist decrease as cycling numbers increase. I’ve blogged about this using NSW data here — as cycling participation increased in NSW, cycling injuries increased at a similar rate.

Wilson does correctly note there are no randomised controlled trials assessing the effect of bicycle helmets to mitigate head injury. There is a quite obvious reason for that: researchers are not ethically allowed to randomly assign cyclists to wear or not wear a helmet, instigate a bicycle crash and then clinically diagnose injury and severity. Observational study designs, like case-control studies, are the only real option here, where participants self-select to a treatment (helmet) or control (no helmet).

Wilson then cites a Sydney Morning Herald article and later references an article in the Journal of the Australasian College of Road Safety by Chris Rissel (these are related) as evidence helmet legislation doesn’t work in Australia. Wilson doesn’t appear to be aware the Journal later retracted Rissel’s paper because the article contained serious data errors among other issues. Note the SMH article does link to a correction where Rissel admits to the errors. Part of the issue is evident from Table 2 of Rissel’s paper as the age-specific totals do not add up to the all-age total. It is also evident when plotting head injury counts by year (raw and ‘corrected’ by summing across age groups) compared to counts I published three years later in another journal (note Rissel’s data is aggregated by financial year and mine is by calendar year).

[Figure: NSW cyclist head injury counts by year, from Rissel’s retracted paper and from my later published counts]

The head injury counts rise rapidly from 1997/98 to 1998/99. This is the year the NSW Department of Health changed their coding practices from ICD-9-CM to ICD-10-AM. There usually are discrepancies between ICD versions; however, this amounted to about 10 cases per year when both codes were used. My guess is that Rissel made two errors — head injuries were not coded properly for ICD-10-AM years and those incorrect codes were incorrectly mapped to ICD-9-CM. Simply put, the data in Rissel’s retracted article are wrong and therefore no valid conclusions can be made from them.

Next up, Wilson makes a very shocking claim. She writes

Helmets have been shown to prevent injury from “linear speeding”. But many accidents occur from “angular” accidents caused when the head is rotated. Helmets actually CAUSE head rotation.

There is absolutely no evidence helmets exacerbate rotational injuries. I’ve blogged about it here and below is a summary for a paper published last year (the citation numbers correspond to the article).

 Curnow [26, 27] suggested helmets exacerbate rotational injuries; the more serious being diffuse axonal injury (DAI). Although Curnow only hypothesised the DAI/helmet link unsupported by any real world or experimental evidence, some have taken this as fact [11, 13, 42, 94, 82, 83, 14]. There is, however, no existing evidence to support the DAI hypothesis. McIntosh, Lai and Schilter [61] found, when testing oblique impacts on dummies to simulate head rotation, helmet wearing did not increase angular acceleration, a result unsupportive of Curnow’s hypothesis. In a study by Dinh et al. [34], using trauma registry data from seven Sydney area hospitals over one calendar year, 110 cyclists were identified and none were diagnosed with DAI regardless of helmet wearing. Walter et al. [110], using linked police and hospitalisation data in New South Wales (NSW) from 2001-2009, reported at most 12 possible DAI cases out of 6,745 cyclists in a motor vehicle collision. Seven of the twelve cyclists were unhelmeted. These results suggest the incidence of DAI among cyclists appears to be rare and unrelated to helmet wearing. Additionally, computer simulated studies of bicycle crashes found no evidence helmets increased the likelihood of neck injury among adults [63] nor was there evidence helmets increased the severity of brain or neck injury in children [62].

The arguments against helmets presented by Sarah Wilson are not supported by the available evidence. To close, she links to anti-helmet websites like Cyclist Rights Action Group and Helmet Freedom, which seem to have supplied her with information. She also regurgitates information from anti-helmet advocate Paul Martin whose only ‘research’ on this topic, as far as I know, is an anti-helmet commentary published by the MJA in 2011. He may be a physician, but he doesn’t appear to be a researcher.

As a bit of a side issue to the science around bike helmets, Wilson also makes a civil liberties argument. In my view, that is her only valid argument. Any time an individual’s rights are taken away in the name of the greater good, it should be vehemently challenged by the populace and only accepted with a majority approval. A 2012 Australian survey estimates 96% approve of helmet legislation. On the other hand, I have colleagues who make the argument those with avoidable head injuries are a steep cost to countries like Australia where medical costs are shared. I find both arguments quite compelling and perhaps that’s where the debate around bike helmets should lie.

Note: The discussion and links to Chris Rissel’s retracted JACRS paper have been removed from Sarah Wilson’s blog. 


Why does anecdote trump evidence? The cost of helmets in Melbourne

While working on another research paper, I came across a discussion regarding bicycle helmet legislation that I read around the middle of last year. It originated with an article arguing against bicycle helmet legislation[1] and was followed by two responses[2,3]. This was then followed by a reply from the original authors[4].

I will not regurgitate the arguments here, but I believe the Biegler and Johnson response is outstanding and I highly recommend it be read by anyone with an interest in this topic. One issue I found curious was the advertised cost of a helmet. Biegler and Johnson state

Helmets retail for as low as A$5 while treatment for brain injury can run into millions.

A citation was given which directed the reader to a page on the Melbourne Bike Share website. Information on this page states

Free helmets are now available with our blue bikes.  Just leave the helmet with the bike upon completion of ride.  Easy! Go to our Gallery to view the video as to how to secure the free helmet onto the blue bike. Alternatively, helmets are available for just $5 at many retail outlets or vending machines at Southern Cross Station and Melbourne University. A limit of 2 helmets per customer applies.

A list of stores nearest each Melbourne Bike Share Station follows. If it’s not completely clear bicycle helmets meeting the Australian standard can be purchased for $5, here is a picture.

[Photo: bicycle helmets meeting the Australian standard on sale for $5]

In a reply to Biegler and Johnson, Hooper and Spicer state

Biegler and Johnson also rely on the claim that the cost of purchasing cycle helmets is fairly marginal. However, quite aside from the fact that most helmets cost far more than the $5 quoted by these authors, it is important to realise that many people are unlikely to skimp when they buy helmets.

No citation or evidence is used to support this statement. I suppose it is possible cyclists could be turned off by a helmet being too inexpensive, but where is the evidence this is actually happening with the Melbourne Bike Share helmets? There’s also no evidence to support the authors’ claim the “extra cost may well be prohibitive.”

I have a hard time believing a free or $5 helmet is prohibitive to anyone. Also, shouldn’t the lack of supportive evidence presented by Hooper and Spicer have been filtered out during the peer-review process? To paraphrase a famous quote, the presentation of anecdotal arguments against helmet legislation does not constitute a valid argument.

  1. Hooper C, Spicer J. Liberty or death; don’t tread on me. J Med Ethics 2012;38 (6):338–41.
  2. Biegler P, Johnson M. In defence of mandatory bicycle helmet legislation: response to Hooper and Spicer. J Med Ethics.
  3. Trégouët P. Helmets or not? Use science correctly. J Med Ethics.
  4. Hooper C, Spicer J. Bike helmets: a reply to replies. J Med Ethics.

Freestyle Cyclists and More Misinformation about Helmets

When someone comments or cites any of my research, I take it on faith the person has actually read and understands my work. I am often reminded how naive I am as evidenced by a recent interview on FIVEaa radio in Adelaide.

Due to the ongoing Velo-City conference in Adelaide, there is keen interest in cycling issues and attention has turned to bicycle helmet legislation (as it seems every conversation about cycling devolves to this topic). In conjunction with the conference events, the Freestyle Cyclists group has organized an “unhelmeted cycling protest” ride from the Adelaide CBD to the beach. In the interview, spokesman Alan Todd gave his views on helmet legislation – much of it tired criticism without much of an evidence base. When asked to comment about a 2011 article[1] I co-authored with former student Scott Walter, Todd said

If Tim (Churches) had stretched out his study period another six months either side he would’ve found a story. What you find in Australia was is the head injury rate went down sharply when helmets were mandated but then the level of cycling went down even more.

I found that troubling because we actually extended the time after the NSW helmet law as part of a sensitivity analysis (it is impossible to go the other direction because usable data does not exist). We discuss this in our paper where we state (emphasis added)

Both models using arm injury rates as the comparison showed  approximately parallel trends in the post-law period while the models using leg injury rates as a comparison exhibited contrasting trends. With the inclusion of three or five years of post-law data these trends tended to approach stability. With 18 months of post-law data, trends ranged from −7.5% to 21.2% per year, whereas with five years of data the range of trends was −0.6 to 9.2. For all four models, a test of the Pearson’s chi-square statistic was nonsignificant at the 0.05 level indicating a reasonable fit.

A few years ago, I became concerned that people like Todd and anti-helmet advocates were distorting the data and analyses discussed in our paper (and the work of others in the broader scientific community who find evidence in favour of helmets). In particular, as it relates to my work, it’s the belief cycling head injuries increased after the NSW helmet law. As I note above, we actually directly addressed that issue in our paper, but that analysis seems (back then and now) to be routinely ignored.

Besides this being a clear case of cherry-picking data to support a cause, it troubles me when someone dismisses research because the researcher didn’t do the analysis that person wanted (whether coming from a biased position or not). This is not sufficient grounds to discredit someone’s research (keep in mind we actually performed the analysis Todd criticized us for). It just means we don’t know the outcome until that analysis is performed using relevant data (this does assume the criticism is legitimate).

Even though we knew cycling head injuries were relatively flat five years after the helmet law, following an initial, abrupt drop with the law, we petitioned the NSW government for all cycling injury hospitalizations from the helmet law in 1991 up to the most recently available data (which was 2010 at the time). This allowed us to estimate the trends in cycling head injuries after the law and to test whether the benefits of the law were maintained in the long term.

The results were staggering. Not only did head injuries remain low over the next 20 years, but they diverged from limb injuries during that time. We then compared that with estimates of cycling participants (available from 2001-2010), and found the increase in limb injuries coincided with increases in cycling, while head injuries steadily declined. This evidence is completely contrary to Todd’s comment.

[Figure: trends in NSW cyclist head and limb injuries after the 1991 helmet law (Walter-Olivier plot)]

The same oversight was committed by Prof Chris Rissel of The University of Sydney in his rejoinder to our paper[2] – a paper in which he was allowed to cite his own retracted paper as evidence against our own – where he states a longer post-law period would “significantly reduce any impact of helmet legislation in the regression analysis.” Note that Rissel is the NSW spokesperson for Freestyle Cyclists.

We even pointed this out in our response to Rissel’s rejoinder published last year[3], yet people like Todd seem to ignore research that doesn’t align with their advocacy position. I think this is troubling as someone like Todd is actively trying to shape public health policy.

Todd also commented about other research stating

If you have a prang, a helmet may offer you some protection. it probably won’t be as much as you think. The latest findings from international journal Accident Analysis and Prevention says that it’s between 5 and 15% improvement. It’s not very big.

I believe this is in reference to the re-re-re-analysis of a meta-analysis originally published in 2011. By my count, this paper was corrected twice and a third version was published as a corrigendum last year. The history of this paper is quite interesting and would make a good article just by itself. But, with regards to Todd’s comment, the paper estimates odds ratios from random effects models adjusting for possible publication bias as 0.50 (95% CI: 0.39-0.65) for head injury and 0.67 (95% CI: 0.56-0.82) when head, face and neck injuries are combined. So, the paper estimates statistically significant reductions in these injuries by 50% or 33% depending on the model. I find the latter analysis confusing as helmets are designed to protect the head and not the face or neck. None of these values are anywhere near the values quoted by Todd and he seems to have ignored the paper’s discussion which states:
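To make Todd’s “5 to 15%” figure comparable with the paper’s estimates, an odds ratio below 1 can be converted to an approximate percent reduction in the odds of injury. A minimal sketch (the odds ratios are the point estimates quoted above):

```python
# Convert an odds ratio (OR < 1) into an approximate percent reduction in odds.
# The ORs are the point estimates quoted above from the corrected meta-analysis.
def percent_reduction(odds_ratio):
    """Percent reduction in the odds of injury implied by an odds ratio."""
    return (1 - odds_ratio) * 100

or_head = 0.50             # head injury, adjusted for publication bias
or_head_face_neck = 0.67   # head, face and neck injuries combined

print(round(percent_reduction(or_head)))            # 50, nowhere near 5-15%
print(round(percent_reduction(or_head_face_neck)))  # 33
```

Either way you slice it, the reductions are several times larger than the figure Todd quoted on air.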

With respect to head injury, the answer is clearly yes, and the re-analysis of the meta-analysis reported by Attewell et al. (2001) in this paper has not changed this answer.

Rissel has also misquoted this paper on at least two occasions (see here and here). He also claims helmets cause diffuse axonal injury which is not backed by any available evidence.

Bicycle helmet or helmet law effectiveness is certainly a controversial topic for some and, perhaps, this is a reason to have an active debate. However, the evidence presented should be factual and be the result of rigorous analysis using relevant data. Alan Todd and the Freestyle Cyclists have presented neither. If you’re interested, they seem to have a new website that’s big on flash and small on substance.

Later during the interview, neurosurgeon John Close, when asked to comment about the issues Todd raises, called him an “idiot”. Although I believe name-calling is counterproductive in polite discussion, I agree with the sentiment.

  1. Walter, S.R., Olivier, J., Churches, T., & Grzebieta, R. (2011). The impact of compulsory cycle helmet legislation on cyclist head injuries in New South Wales, Australia. Accident Analysis and Prevention, 43, 2064–2071.
  2. Rissel, C. (2012). The impact of compulsory cycle helmet legislation on cyclist head injuries in New South Wales, Australia: A rejoinder. Accident Analysis and Prevention, 45, 107-109.
  3. Walter, S.R., Olivier, J., Churches, T. & Grzebieta, R. (2013). The impact of compulsory helmet legislation on cyclist head injuries in New South Wales, Australia: A response. Accident Analysis and Prevention, 52, 204-209.

New Zealand Cycling Fatalities and Bicycle Helmets

A colleague sent me an assessment of cycling fatalities in New Zealand. The report’s author is Dr Glen Koorey of the University of Canterbury. He’ll be one of the keynote speakers at the upcoming Velo-City Conference in Adelaide. In particular, I was tasked to comment about his section regarding bicycle helmets as they, in part, now form the basis of the Wikipedia page on Bicycle Helmets in New Zealand.

In the report, Koorey states

Only nine victims were noted as not wearing a helmet, similar to current national helmet-wearing rates (92%). This highlights the fact that helmets are generally no protection to the serious forces involved in a major vehicle crash; they are only designed for falls. In fact, in only one case did the Police speculate that a helmet may have saved the victim’s life. There is a suspicion that some people (children in particular) have been “oversold” on the safety of their helmet and have been less cautious in their riding style as a result.

On the surface, he has a point based on independence for probabilities. In mathematical terms, Koorey is stating

P(helmet | fatality) \approx P(helmet)

which is, by definition, independence (if they are equal). So, if the helmet wearing proportion among fatalities is equal to that in population, then helmet wearing is independent of fatality.

As I see it, the problem is in the interpretation as it is not a pure measure of helmet effectiveness. Helmets are a directed safety intervention, so they won’t protect body parts other than the head and you can certainly die from other injuries. It could very well be that helmet wearing is independent of fatalities, but the sheer force of the collision makes other serious (and possibly fatal) injuries more likely, negating any benefit to helmet wearing.

I searched through the publicly available data (found here) and asked around about what’s available in the complete data. In the end, there’s not enough information to identify location or severity of injuries. If we had all the data, a more appropriate probability to investigate would be

P(helmet | \hbox{fatality due to head injury}) = P(helmet)

When looking at the reported data, however, Koorey’s claim the proportion of fatalities wearing a helmet is “similar to current national helmet‐wearing rates (92%)” doesn’t appear justified.

First, he states there were 84 cycling fatalities between 2006-2012 in New Zealand. Of these, about 10% did not have information about helmet wearing. So, there is information on 76 fatalities and 9 of those were not wearing helmets. This gives us the proportion of non-helmet wearers among fatalities of 11.84% (9/76). This is not an estimate since this figure comes from all cycling fatalities in New Zealand.

Koorey wants to compare this to estimates of helmet wearing in New Zealand. Over this time frame, I compute a yearly average helmet wearing rate of 92.57%. So, the proportion of cyclists not wearing helmets is 7.43% during that time. This data could then be summarized by a 2 \times 2 table as

              Helmet
              Yes   No
Death   Yes    a     b
        No     c     d

From the data available, we do know a=67, b=9, \frac{c}{c+d}=0.9257 and \frac{d}{c+d}=0.0743. We would like to compute the risk of death for those wearing helmets versus those that do not; however, this is not possible using this summary data as we don’t really know how many cyclists there are.

Instead, we can compute the odds ratio (OR) which is a good estimate of relative risk for rare events (cycling deaths are certainly rare). The odds ratio is

OR=\dfrac{ad}{bc}=\dfrac{a\frac{d}{c+d}}{b\frac{c}{c+d}}=0.598

If helmet wearing were identical among fatalities and the general population, as Koorey has suggested, the odds ratio would be 1. Instead of being similar, the risk of death is 40% less among helmeted NZ cyclists versus those without a helmet. This figure is consistent with the latest re-re-analysis of a meta-analysis from case-control studies, although this is likely a conservative figure since head (or any other) injuries were not identified.
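The odds ratio can be reproduced directly from the summary figures. A minimal sketch in Python, using the counts and survey proportions given above (the unknown total c + d cancels, so the exact number of cyclists is not needed):

```python
# Odds ratio for death, helmeted vs unhelmeted NZ cyclists, from summary data.
a, b = 67, 9                            # fatalities: helmeted, unhelmeted
p_helmet, p_no_helmet = 0.9257, 0.0743  # survey proportions c/(c+d), d/(c+d)

# OR = ad/(bc); dividing numerator and denominator by (c + d)
# lets the survey proportions stand in for the unknown counts.
odds_ratio = (a * p_no_helmet) / (b * p_helmet)
print(round(odds_ratio, 3))  # 0.598
```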

Statistical significance would be hard to come by here considering we don’t have the exact counts of cyclists from those surveys (or from the general population). However, the asymptotic variance of the log(OR) is

\widehat{var}(log(OR)) \approx 1/a + 1/b + 1/c + 1/d

The last available helmet use survey observed over 4600 cyclists (roughly 7 × 4600 observations across the seven yearly surveys in the study period). Since this is such a sizable number, the last two terms of the variance formula do not contribute much.

Using only the fatalities in the variance formula gives us an asymptotic confidence interval for the odds ratio of

OR\times e^{\pm 1.96 \times s.e.} = (0.298, 1.198)

where the s.e. = \sqrt{1/a + 1/b} (this assumes both 1/c and 1/d are small). Note this result is not statistically significant; however, this is due to having relatively few cycling fatalities (which is good, and having fewer would be better).
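The interval can be checked with a few lines of Python (a sketch dropping the negligible 1/c and 1/d terms, as described above):

```python
import math

a, b = 67, 9                              # fatalities: helmeted, unhelmeted
odds_ratio = (a * 0.0743) / (b * 0.9257)  # 0.598, as computed above

# Asymptotic standard error of log(OR), ignoring the small 1/c and 1/d terms
se = math.sqrt(1 / a + 1 / b)
lower = odds_ratio * math.exp(-1.96 * se)
upper = odds_ratio * math.exp(+1.96 * se)
print(round(lower, 3), round(upper, 3))  # 0.298 1.198
```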

There’s also the issue regarding the effect of missing data. One method is to recompute the odds ratio assuming all missings did not wear helmets and repeat assuming all missings did wear helmets giving a range of possible values. The odds ratios are 0.316 and 0.669 respectively. So, at worst, there is an estimated 33% decrease in the risk of death when wearing a helmet versus not.
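These bounds can be reproduced the same way, assigning the 8 fatalities with unknown helmet status (84 reported minus 76 with known status) entirely to one group or the other. A minimal sketch:

```python
# Sensitivity analysis: bound the odds ratio under extreme assumptions
# about the fatalities with unknown helmet status.
p_helmet, p_no_helmet = 0.9257, 0.0743  # survey proportions
n_missing = 84 - 76                     # fatalities with unknown helmet status

def odds_ratio(helmeted_deaths, unhelmeted_deaths):
    """OR for death, using survey proportions for the cyclist population."""
    return (helmeted_deaths * p_no_helmet) / (unhelmeted_deaths * p_helmet)

or_all_unhelmeted = odds_ratio(67, 9 + n_missing)  # all missings unhelmeted
or_all_helmeted = odds_ratio(67 + n_missing, 9)    # all missings helmeted
print(round(or_all_unhelmeted, 3), round(or_all_helmeted, 3))  # 0.316 0.669
```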

Koorey’s claims are therefore not justified as the risk of death was much lower among helmeted cyclists. This is even without specific information about cause of death, which would be needed to properly assess the effectiveness of helmets at lowering the risk of a fatality.

I also take issue with Koorey’s statement “This highlights the fact that helmets are generally no protection to the serious forces involved in a major vehicle crash; they are only designed for falls.” A recently published article in Accident Analysis and Prevention states

Considering a realistic bicycle accident scenario documented in the literature (Fahlstedt et al., 2012) where a cyclist was thrown at 20 km/h (i.e. 5.6 m/s which corresponds to a drop height of approximately 1.5 m), our analysis indicates that a helmeted cyclist in this situation would have a 9% chance of sustaining the severe brain and skull injuries noted above whereas an unhelmeted cyclist would have sustained these injuries with 99.9% certainty. In other words, a helmet would have reduced the probability of skull fracture or life threatening brain injury from very likely to highly unlikely.

I also published a paper last year where we found helmets reduced the odds of severe head injury by up to 74% (these were NSW cyclists hospitalised after a motor vehicle crash and reported to the police from 2001-2009). Severe injuries included “Open wound of head with intracranial injury” (S01.83), “Multiple fractures involving skull and facial bones” (S02.7), “Fracture of skull and facial bones, part unspecified” (S02.9), “Loss of consciousness [30 mins-24hrs]” (S06.03), “Loss of consciousness prolonged without return of consciousness ” (S06.05), “Traumatic cerebral oedema” (S06.1), “Diffuse brain injury” (S06.2), “Other diffuse cerebral & cerebellar injury” (S06.28), “Traumatic subdural haemorrhage” (S06.5), “Traumatic subarachnoid haemorrhage” (S06.6), “Other intracranial injuries” (S06.8), and “Intracranial injury, unspecified” (S06.9). None of these are minor injuries.

Using available data, the evidence does suggest helmet wearing mitigates cycling fatalities and serious injury. It does not appear as though the public have been oversold on the benefits of bicycle helmets.

Update: The original version focused on the relative risk of helmet wearing among fatalities and helmet wearing surveys in New Zealand. This made the wording quite strange and difficult to interpret. However, the odds ratio isn’t as problematic and is a good estimate of relative risk of death in this instance.

Bicycle Helmet Research Foundation: A Reliable Resource?

The Bicycle Helmet Research Foundation (BHRF), and their website cyclehelmets.org, is an organization that serves as a hub for material regarding the efficacy of bicycle helmets and helmet legislation. The BHRF has an editorial board that is “responsible for the content” of the website, which is “subjected to multi-disciplinary peer review”.

The BHRF is not affiliated with any university or academic society, yet seems to be influential in the discussion of the efficacy of bicycle helmets and whether jurisdictions should mandate their use. For example, the recent Queensland inquiry into cycling safety contained six footnotes linking to the BHRF website. In a submission to the Victorian Parliament, Colin Clarke cites four BHRF webpages which he uses as evidence regarding the effectiveness of helmet legislation in that state. Jennifer Mindell, editor-in-chief of the Journal of Transport and Health, has used BHRF graphs to argue “cycle helmet use does not yield a population level effect”. This presentation was a collaboration with BHRF editorial board members Malcolm Wardlaw and John Franklin.

On many occasions, I have come across comment boards discussing virtually anything related to bicycle helmets where someone will write something like “everything you need to know about helmets is on this website cyclehelmets.org.” This has led me to wonder about the reliability of the BHRF as a resource for bicycle helmet-related research. Can they be trusted to present a fair assessment of the available scientific evidence regarding the efficacy of bicycle helmets and helmet legislation?

According to their policy statement, the BHRF exists “to undertake, encourage, and spread the scientific study of the use of bicycle helmets.” This seems quite straightforward and aligns nicely with the name of their organization. However, the policy statement then seems to devolve into a rant against helmets or helmet legislation. They state

  • “closer investigation has revealed serious flaws in the evidence most frequently cited in favour of helmet effectiveness”,
  • “helmet laws have led almost universally to large declines in the number of people who cycle”, and
  • “the promotion of cycle helmets has been to brand cycling as an inherently hazardous activity.”

In my opinion, this comes across as anti-bicycle helmet advocacy and does not remotely resemble a research organization.

It is not uncommon for advocacy groups to be an integral part of research. For example, the declared purpose of the National Heart Foundation of Australia is to “reduce premature death and suffering from heart, stroke and blood vessel disease in Australia.” This is accomplished, in part, by funding cardiovascular research. The Amy Gillett Foundation is another advocacy organization directed at “reducing the incidence of death and injury of bike riders”. These two groups, and many more like them, are important sources for the spread of research to a broad, non-scientific audience. The big difference between these organizations and the BHRF is they serve to facilitate research and not as the final arbiter of an issue.

I suppose my initial impression could be wrong and they aren’t an anti-helmet advocacy group. So, what can be discerned from the material on their website beyond their policy statement? Are there well-reasoned arguments discussing both sides of the issue leading them to a clear conclusion? Are commentaries provided by experts in the field who actively publish in scientific journals? How does the BHRF editorial board stack up against established research journals? These are important concerns due to the ease with which ideas proliferate over the internet and the difficulty in discerning the reliability of resources. As HIV researcher Dr Seth Kalichman puts it, “The Internet has made pseudoscience as accessible, or perhaps even more accessible, than quality medical science.”

In a series of posts, I will discuss the BHRF editorial board, the supportive/skeptical articles they list, and articles that are missing from their website. I will also respond to their criticism of one of my papers (because how else is an academic supposed to address unfounded criticisms posted on someone’s website).

Journal of Transport and Health and Publication Bias?

Scientific journals exist as an outlet for the promotion and discussion of scientific ideas and the presentation of evidence that can support or not support such ideas. This is a crucial aspect of scientific discourse as it allows for the dissemination of scientific research to a wide audience (even if that research is not understood by the populace). Otherwise, for example, Gosset’s (aka Student’s) t-test might only have been known to those taking a tour of the Guinness brewery at St James’s Gate (however, I can attest that the best milieu for discussing statistics is over a few beers).

The peer-review process is used to decide what gets published and what does not as judged by one’s scientific peers. Importantly, those contributing to the peer-review process should check their personal biases at the door and judge the quality of the evidence presented. This is, generally speaking, the mantra of the highly respected Public Library of Science (PLoS) journals.

This brings me to the Journal of Transport and Health (JTH), a new Elsevier journal that published its first issue this month. The journal is officially affiliated with the Transport and Health Study Group (THSG), which lists among its policy objectives “To promote a more balanced approach to cycle safety and oppose cycle helmet legislation.” The THSG website hosts an article that questions even promoting helmet use by way of the common straw man argument that motor vehicle drivers and passengers should also be compelled to wear helmets. This argument ignores the many safety features of modern motor vehicles that aren’t even possible on a bicycle. Think about it this way: if I turn the argument around to ask “Why mandate airbags (or any other safety feature) for cars since we don’t make them mandatory for cyclists?”, is that a valid argument?

As you would expect, the editorial board of the JTH is littered with members of the THSG (it’s their journal, so why not?). Below is a list of JTH editorial board members and their roles in the THSG (thanks to Tim Churches for supplying the information in this table). This is as far as can be discerned from the THSG website, and it’s entirely possible others are simply members of the THSG.

| Name | Institution | JTH role | THSG role |
| --- | --- | --- | --- |
| J. Mindell | University College London (UCL), London, England, UK | Editor-in-Chief | Co-Chair |
| S. Alvanides | Northumbria University, Newcastle, England, UK | Associate Editor | Executive Committee |
| A. Davis | University of the West of England, Bristol, UK | Associate Editor | Executive Committee |
| S. Gray | University of the West of England, Stapleton, Bristol, England, UK | Associate Editor | Executive Committee |
| S. Handy | University of California at Davis, Davis, CA, USA | Associate Editor | |
| R.A. Kearns | University of Auckland, Auckland, New Zealand | Associate Editor | |
| T. Sugiyama | Baker IDI Heart and Diabetes Institute, Melbourne, VIC, Australia | Associate Editor | |
| L.B. Andersen | University of Southern Denmark, Odense, Denmark | Editorial Board | |
| J. Dill | Portland State University, Portland, OR, USA | Editorial Board | |
| R. Ewing | The University of Utah, Salt Lake City, UT, USA | Editorial Board | |
| L. Frank | University of British Columbia, Vancouver, BC, Canada | Editorial Board | |
| J. Hine | University of Ulster, Newtownabbey, UK | Editorial Board | |
| S. Inoue | Tokyo Medical University, Tokyo, Japan | Editorial Board | |
| L.R. Int Panis | Flemish Institute for Technological Research (VITO), Mol, Belgium | Editorial Board | |
| R. Mackett | University College London (UCL), London, UK | Editorial Board | Executive Committee |
| C. Perez | Agencia de Salut Publica de Barcelona, Barcelona, Spain | Editorial Board | European Committee |
| J. Pucher | University of North Carolina at Chapel Hill, Chapel Hill, NC, USA | Editorial Board | |
| C. Rissel | The University of Sydney, NSW, Australia | Editorial Board | |
| L. Rizzi | Pontificia Universidad Católica de Chile, Santiago, Chile | Editorial Board | |
| D. Rodriguez | University of North Carolina at Chapel Hill, Chapel Hill, NC, USA | Editorial Board | |
| H. Rutter | London Sch. of Hygiene & Tropical Medicine, London, England, UK | Editorial Board | |
| J.F. Sallis | University of California at San Diego (UCSD), San Diego, CA, USA | Editorial Board | |
| Y. Shiftan | Technion – Israel Institute of Technology, Haifa, Israel | Editorial Board | |
| N. Silverstein | University of Massachusetts at Boston, Boston, MA, USA | Editorial Board | |
| M. Wardlaw | Edinburgh, Scotland | Editorial Board | Executive Committee |
| S. Watkins | Stockport Metropolitan Borough Council, Stockport, England, UK | Editorial Board | Co-Chair |

What is surprising here is the lack of balance on the editorial board: there are no members who could provide a counterpoint to the anti-helmet views of the THSG. Even among non-THSG editorial board members, Chris Rissel is an outspoken advocate against bicycle helmet legislation (see here, for example). Another JTH editorial board member, Malcolm Wardlaw, sits on the editorial board of the anti-helmet website Bicycle Helmet Research Foundation (by the way, I’m not convinced just anyone with a gripe can legitimately establish an editorial board). I searched for Wardlaw on the academic search engine Scopus (also run by Elsevier) and found 12 documents: 10 appeared to be commentaries against helmets, one was a magazine article, and one appeared to be an original research article.

What really concerns me is this: will a paper that says anything positive about helmets or helmet legislation (and is backed by evidence) be peer-reviewed without the biases of the THSG influencing the decision to publish? Will authors shy away from citing peer-reviewed articles that present evidence supportive of helmets or helmet laws? Will reviewers insist authors cite anti-helmet arguments with no evidence base, like the DAI hypothesis? This matters because selective citation appears to be common practice when publishing on this topic.[1]

The very first issue contains a commentary from Editor-in-Chief Jennifer Mindell (also co-chair of the THSG) and another from Stephen Watkins (the other co-chair of the THSG). On page 3, Watkins states[2]

“We still find ourselves having to fight defensive campaigns, such as resisting proposals for compulsory cycle helmets which will do little for cycle safety but will have a serious adverse effect on cycling levels, not least because they feed misconceptions about cycle safety.”

This comment is not surprising, as Watkins has also been openly critical of the Spanish government for concluding (original in Spanish, translated with the help of Google) that “the scientific evidence of the decrease of morbidity and mortality due to the helmet use by cyclists is absolutely conclusive.” It makes me wonder whether an article submitted to the JTH would be rejected outright for such a statement, even if it were supported by evidence.

As far as I can tell, the only justification given for the THSG’s position on bicycle helmets comes from this presentation, in which Mindell, Wardlaw and Franklin discuss a paper I co-authored.[3] Regarding trends in cycling injuries around the NSW helmet law, the authors present this graph

[Figure: “MindellHead”, the presentation’s graph of cyclist injury trends against comparison groups in New Zealand and NSW]

and state

“It has repeatedly been claimed that such improvements have occurred. But to date, not a single such claim has stood the test of close inspection. In New Zealand, promotion and legislation increased helmet wearing to nearly universal use, yet there is no noticeable improvement in the %HI trend for cyclists (red) relative to the control (black). In NSW, ratios of head to arm and head to leg injuries were compared before and after the helmet law. The helmet law rapidly increased the helmet wearing rate from about 20% to 80%. But again, it is difficult to discern any particular reduction in head injuries to cyclists (red) compared with pedestrians (blue), although the data are rather “noisy”.”

A similar argument is also presented by the BHRF, an anti-helmet advocacy group. A major problem with this plot is that it doesn’t actually correspond to the plots or the data we used in our paper, as discussed elsewhere.[4] Instead, both time series have been rescaled and shifted so they overlap and are therefore not comparable (this is evidenced by the lack of units on the y-axis). The plot is also problematic in its own right, as the trend in either head or arm injuries is unclear. Given this plot and the comments made by the authors, I wonder if they understand what “noisy” means in this context.

Time series data, when finely aggregated, often follow an up-and-down pattern. This is not necessarily random variability (or “noise”) but can reflect cyclical or seasonal patterns in the data. For example, there may be more cycling injuries during the summer months and fewer during the winter (due to more or less cycling). Such patterns can be modeled or removed prior to analysis (as we did in our original paper). The amount of scatter around the fitted model then gives a measure of “noise”. In our analysis, we found very little “noise” relative to the effect of the helmet law.
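To make the distinction between seasonality and noise concrete, here is a minimal sketch of the general idea (not our actual model or the NSW data; the numbers are synthetic and chosen purely for illustration). A monthly series with a seasonal cycle and a step change at an intervention is fit with seasonal terms plus an intervention indicator; the residual scatter around that fit is the “noise”, and it can be compared against the estimated step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly injury rates over 36 months (hypothetical numbers):
# a 12-month seasonal cycle, a small amount of random noise, and a
# step drop at month 18 standing in for a helmet-law intervention.
t = np.arange(36)
law = (t >= 18).astype(float)               # 0 before the law, 1 after
season = np.sin(2 * np.pi * t / 12)         # annual seasonal cycle
y = 10.0 + season - 3.0 * law + rng.normal(0, 0.3, 36)

# Model the seasonality instead of mistaking it for noise: regress on an
# intercept, sine/cosine seasonal terms, and the law indicator.
X = np.column_stack([np.ones(36),
                     np.sin(2 * np.pi * t / 12),
                     np.cos(2 * np.pi * t / 12),
                     law])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

law_effect = beta[3]          # estimated step change at the law
noise_sd = resid.std(ddof=4)  # "noise" = scatter around the fitted model

print(f"estimated law effect: {law_effect:.2f}")
print(f"residual (noise) SD:  {noise_sd:.2f}")
```

Once the seasonal up-and-down movement is absorbed by the model, the residual standard deviation here is small relative to the estimated step, which is the sense in which an effect can dwarf the “noise”.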

The appearance of the up-and-down pattern can be lessened by aggregating the data at a higher level (e.g., quarterly or yearly). This is usually not a good idea for analysis due to the reduction in efficiency (in this case, the number of time points would decrease from 36 to 12 for quarterly aggregated data). Below is a plot of the head and arm injury data aggregated by month, quarter and semester (the data can be found here; the coarser aggregations were computed by averaging monthly injury rates over 3 and 6 month periods).

[Figure: “NSW-Head-Arm”, NSW head and arm injury rates aggregated by month, quarter and semester]

It is clear that as the aggregation gets coarser, the apparent noisiness disappears. It is also clear there was a profound change in head injury from the third to the fourth semester, which correspond to either side of the helmet law. As I stated before, it would be unwise to analyze the semesterly data (there are only six time points and the full model estimates eight parameters), but it may be useful to present data in this manner visually to non-statisticians to avoid confusing systematic with random variability. My guess is our paper would have been rejected by the JTH editors due to an apparent lack of statistical understanding, whether biased against helmets or not.
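The averaging step described above can be sketched in a few lines (again with synthetic monthly numbers standing in for the real data, which I am not reproducing here):

```python
import numpy as np

def aggregate(monthly, k):
    """Average consecutive blocks of k monthly values
    (k=3 gives quarterly means, k=6 semesterly means)."""
    monthly = np.asarray(monthly, dtype=float)
    assert len(monthly) % k == 0, "series length must divide evenly into blocks"
    return monthly.reshape(-1, k).mean(axis=1)

# Hypothetical 36-month series: seasonal up-and-down movement plus a
# step drop at month 18 (standing in for the helmet law).
t = np.arange(36)
monthly = 10 + np.sin(2 * np.pi * t / 12) - 3 * (t >= 18)

quarterly = aggregate(monthly, 3)  # 36 -> 12 time points
semester = aggregate(monthly, 6)   # 36 -> 6 time points

print("quarterly points:", len(quarterly))
print("semesterly means:", np.round(semester, 2))
```

In the semesterly means, the seasonal wiggle largely averages out and the drop between the third and fourth values (either side of the step) stands out plainly, which is exactly why the coarser plot looks less “noisy” even though no information about the intervention has been added.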

With those issues in mind, I wonder whether an anti-helmet advocacy group should be in charge of a scientific journal. I have written several original research articles and commentaries on the evidence around bicycle helmet legislation. Since much of my work has demonstrated positive effects, I’ve been labeled “pro-helmet” by some. I find this categorization disturbing, as I never set out to demonstrate that helmets or helmet laws are beneficial. It always comes down to drawing scientific conclusions from rigorous analysis of the best available data. If I find strong evidence to the contrary, I will adjust my views accordingly, as would any scientist. However, I’m not convinced any of my research would be given a fair shake at the JTH.

This is my opinion and I could certainly be mistaken. What is your take on this journal?

  1. Olivier, J. (in press). The apparent ineffectiveness of bicycle helmets: A case of selective citation. Gaceta Sanitaria.
  2. Watkins, S.J. (2014). The Transport and Health Study Group. Journal of Transport & Health, 1, 3-4.
  3. Walter, S.R., Olivier, J., Churches, T. & Grzebieta, R. (2011). The impact of compulsory cycle helmet legislation on cyclist head injuries in New South Wales, Australia. Accident Analysis and Prevention, 43, 2064-2071.
  4. Olivier, J., Grzebieta, R., Wang, J.J.J. & Walter, S. (2013). Statistical errors in anti-helmet arguments. 2013 Australasian College of Road Safety Conference, “A Safe System: The Road Safety Discussion”, Adelaide.