Psych Journal Bans Hypothesis Testing and Confidence Intervals

In a recent editorial, the scientific journal Basic and Applied Social Psychology has banned submissions that use null hypothesis significance testing (NHST) or confidence intervals (CI). I can certainly appreciate what motivated the journal’s policy, as I routinely read and review manuscripts that base their conclusions solely on whether a p-value is less than 5%, i.e., the (incorrect) belief that research findings are important if p<0.05.

For example, I recently read a case-control study of helmet use and bicycle-related trauma by Heng et al.[1] One of their reported measures was alcohol use, and they found “[a]lcohol consumption did not correlate with…helmet wearing”, reporting a p-value of “NS” (as in, not significant). A summary of that data is given below.

                      Helmet Use
Alcohol Involvement   Yes    No
Yes                     0    18
No                     17   125

The p-value from this table is certainly greater than 5% by Fisher’s exact test. As a measure of effect size, the odds ratio cannot be computed in the usual way because one cell contains a zero count. Instead, a continuity corrected version can be computed by adding one half to each cell.

OR_{cc}=\dfrac{(a+0.5)(d+0.5)}{(b+0.5)(c+0.5)}

The above 2\times2 table gives a continuity corrected odds ratio of OR_{cc}=0.19. In other words, there is an associated 81% reduction in the odds of alcohol involvement among helmet wearers versus those not wearing a helmet in this data set. That is not a trivial result and, perhaps, this is the type of situation the journal is trying to avoid. That is something I fully support.
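
As a quick check, both calculations can be reproduced with a few lines of Python (SciPy’s fisher_exact for the p-value; the continuity correction is computed directly). This is just an illustrative sketch of the arithmetic above.

```python
from scipy.stats import fisher_exact

# 2x2 table from Heng et al.: rows are alcohol involvement (yes, no),
# columns are helmet use (yes, no)
a, b = 0, 18
c, d = 17, 125

_, p_value = fisher_exact([[a, b], [c, d]])
print(f"Fisher's exact p-value: {p_value:.3f}")   # comfortably above 0.05

# The usual odds ratio is undefined because of the zero cell,
# so add one half to every cell first
or_cc = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
print(f"Continuity corrected OR: {or_cc:.2f}")    # approximately 0.19
```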

On the other hand, I believe the journal has over-reached by banning NHST and CIs outright. Granted, there are problems with these methods of statistical inference, but I strongly disagree that they are invalid. Perhaps their strengths and limitations are poorly understood, but they are certainly not invalid.

The journal does recommend an increased focus on descriptive statistics and correctly notes that they “become increasingly stable” as sample size increases. Of course, this relies on the descriptive statistics being unbiased (at least asymptotically); otherwise you get stable estimates that do not reliably estimate what you actually want estimated.

I also think there is a disconnect between the push for descriptive statistics and the ban on confidence intervals. For example, I may be encouraged to report the sample mean and its standard error

\bar{x}   and   \dfrac{s}{\sqrt{n}}

but I’m banned from combining these results as

\bar{x}\pm{1.96}\times\dfrac{s}{\sqrt{n}}.
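
To make the disconnect concrete, here is a minimal sketch in Python (the data are made up purely for illustration): the first two quantities are encouraged under the new policy, while the final line, which merely combines them, is banned.

```python
import numpy as np

# Hypothetical sample; any numeric data would do
x = np.array([4.1, 5.3, 3.8, 6.2, 4.9, 5.5, 4.4, 5.0])
n = len(x)

mean = x.mean()                      # sample mean: allowed
se = x.std(ddof=1) / np.sqrt(n)      # standard error s / sqrt(n): allowed

# ...but combining them into a 95% confidence interval is banned
ci = (mean - 1.96 * se, mean + 1.96 * se)
print(mean, se, ci)
```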

The American Statistical Association is aware of this situation and has formed a committee to comment on the journal’s decision. I look forward to reading what they come up with.

  1. Heng KWJ, Lee AHP, Zhu S, Tham KY, Seow E. (2006) Helmet use and bicycle-related trauma in patients presenting to an acute hospital in Singapore. Singapore Med J 47(5): 367–372.

Effect Sizes, Overtaking Distance for Cyclists and Mandatory Helmet Legislation

I came across an interesting comment on an article by Alan Davies on The Urbanist. Davies discusses Ian Walker’s overtaking distance study, which found a statistically significant association between helmet wearing and motor vehicle overtaking distance. Echoing a study I published last year, Davies did not find the results convincing with regard to helmets, but found other factors, such as vehicle size and the cyclist’s position in the lane, more important.

As expected, several commenters with anti-helmet views defended Walker’s conclusions. One comment, left by RidesToWork (who I believe is Bicycle Helmet Research Foundation editorial board member Dorothy Robinson), states:

Many readers of this blog appear keen to dismiss the arguments than the difference in passing room might make a difference. Yet they don’t apply the same critical thinking to claims about helmet laws – such as Figure 2 of http://www.cyclehelmets.org/1228.html The effect, if there is one, is nothing like as clear as the effect of helmet wearing on passing distance.

I found this comment dubious, in part because it links to an unfounded and misleading criticism of one of my articles, but also because it is quite a bold statement comparing the results of two very different studies. Walker’s analysis was a 2×5 ANOVA of motor vehicle overtaking distance while cycling in the UK, with two levels of helmet wearing and five levels of distance to the kerb. Our study used Poisson regression to model the rate of head and limb injury hospitalizations in NSW in the three-year period surrounding the 1991 helmet law.

Note the Figure 2 mentioned in the comment has been dubiously manipulated by rescaling and shifting two time series so they overlap, producing a seemingly null effect. This seems to be a common tactic among anti-helmet advocates, as I’ve discussed before here and here.

Regarding Robinson’s claim, the question is: how can someone claim the results of one study are more “clear” than those of another, disparate study?

One method is to compare their effect sizes (ES) on some common scale. It is well known that large sample sizes can make unimportant differences statistically significant (conversely, with small sample sizes, important differences may fail to reach statistical significance). Take the one sample z-test for the population mean \mu, for example. The right-tailed p-value converges to 0 as n tends to infinity for any fixed \bar{x}>\mu_0 and \sigma, i.e.,

\lim\limits_{n\rightarrow\infty}\mathcal{P}\left(Z>\sqrt{n}\dfrac{\bar{x}-\mu_0}{\sigma}\right)=0

As a result, a significant p-value is possible even when \bar{x} is infinitesimally close to \mu_0. For that reason, it is important not to overpower a study by choosing an excessively large sample size (Walker did just that, computing his sample size based on 98% power instead of the usual 80% or 90%). Effect sizes are essentially the observed test statistic with the influence of sample size removed (conversely, they can be used to compute sample size a priori).
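
A short simulation illustrates the point. Holding the difference \bar{x}-\mu_0 and \sigma fixed (the values below are arbitrary), the right-tailed p-value of the one sample z-test shrinks towards zero as n grows:

```python
import numpy as np
from scipy.stats import norm

# Arbitrary fixed values: a tiny difference from the null mean
xbar, mu0, sigma = 10.01, 10.0, 1.0

for n in [10**2, 10**4, 10**6, 10**8]:
    z = np.sqrt(n) * (xbar - mu0) / sigma
    p = norm.sf(z)                     # right-tailed p-value P(Z > z)
    print(f"n = {n:>9}: z = {z:8.2f}, p = {p:.3g}")
```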

Jacob Cohen defined the difference in sample means divided by the sample standard deviation as an effect size for the two sample t-test, i.e.,

d=\dfrac{\bar{x}_1-\bar{x}_2}{s}.

Cohen further proposed the operationally defined values d\in\{0.2,0.5,0.8\} as small, medium and large effect sizes respectively[1]. The reasoning given by Cohen for these values is

My intent was that medium ES represent an effect likely to be visible to the naked eye of a careful observer… I set small ES to be noticeably smaller than medium but not so small as to be trivial, and I set large ES to be the same distance above medium as small was below it.

The results from both Walker’s data and our study on mandatory helmet legislation can be converted to Cohen’s d. An F statistic with 1 numerator degree of freedom can be converted to Cohen’s d by

d=2\sqrt{\dfrac{F}{df_d}}

For helmet wearing, Walker reported F_{1,2313}=8.71, which corresponds to d=0.12. This categorizes helmet wearing as a trivial, and therefore unimportant, effect size by Cohen’s definition.
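
As a quick check, the conversion for Walker’s reported F statistic is essentially a one-liner:

```python
import math

# Walker reported F(1, 2313) = 8.71 for helmet wearing
F, df_d = 8.71, 2313
d = 2 * math.sqrt(F / df_d)
print(round(d, 2))   # 0.12
```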

It is also possible to convert to Cohen’s d for a linear model with a binary predictor x (taking values 0 or 1) and corresponding coefficient \beta as

d=\dfrac{|\beta|}{se(\beta)}\sqrt{\dfrac{1}{n_1}+\dfrac{1}{n_2}}

where n_1 and n_2 are the group sample sizes. In our re-analysis of Walker’s data, the adjusted effect of helmet wearing was -0.058. There were 1206 and 1149 overtaking events when not wearing or wearing a helmet respectively, which translates to d=0.16 (a more respectable value, although I doubt Walker or Robinson would ever agree with our results). Note that in my analysis, I did not use a square root transformation or remove 35 observations as Walker did, as these steps are unnecessary given the large sample size.
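
A sketch of that conversion in Python is below; the coefficient’s standard error is not quoted above, so it is left as a placeholder rather than invented.

```python
import math

def cohens_d_from_coef(beta, se, n1, n2):
    """Cohen's d from a regression coefficient on a binary (0/1) predictor."""
    return abs(beta) / se * math.sqrt(1.0 / n1 + 1.0 / n2)

# beta = -0.058 and the group sizes (1206, 1149) are quoted in the text;
# the standard error comes from the fitted model and is not repeated here.
# cohens_d_from_coef(-0.058, se, 1206, 1149)  # gives d = 0.16 in our re-analysis
```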

In our re-analysis of Walker’s paper, we also performed logistic regression to assess whether close overtaking was associated with helmet wearing as this is more relevant to cycling safety. Using the one meter rule as a cut point, we estimated a non-significant adjusted odds ratio of 1.13 (p=0.54). Odds ratios can also be transformed to Cohen’s d by

d=\dfrac{\sqrt{3}\,\ln(OR)}{\pi}

Using the adjusted odds ratio, we get d=0.07 which is again trivial by Cohen’s definition.
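
The same transformation in code, applied to the adjusted odds ratio from the logistic regression:

```python
import math

def cohens_d_from_or(odds_ratio):
    """Cohen's d from an odds ratio via the logistic approximation."""
    return math.sqrt(3) * math.log(odds_ratio) / math.pi

print(round(cohens_d_from_or(1.13), 2))   # 0.07
```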

In our paper regarding mandatory helmet legislation, we reported a decline in bicycle-related head injury relative to limb injury at the time of the NSW helmet law of 27.5% or 31%, depending on whether arm or leg injuries were used as the comparator. These results can also be written as rate ratios of 0.725 or 0.69 respectively. If we assume the rate ratios here are equivalent to odds ratios, Cohen’s d is 0.18 and 0.20 respectively.

Our analysis made the implicit assumption that no cyclists wore helmets pre-law and all cyclists wore helmets post-law. If, instead, the comparison is made relative to the approximate 55% increase in helmet wearing in NSW, we get rate ratios of 0.56 and 0.51 and transformed Cohen’s d values of 0.32 and 0.37.
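
The rate-ratio conversions can be reproduced the same way. The rescaling to a 55% increase in helmet wearing is shown as I understand it, i.e., dividing the log rate ratio by 0.55; treat that step as my reading of the adjustment rather than a quotation from the paper.

```python
import math

def cohens_d_from_or(odds_ratio):
    return math.sqrt(3) * math.log(odds_ratio) / math.pi

for rr in (0.725, 0.69):
    d_full = abs(cohens_d_from_or(rr))        # rate ratio treated as an odds ratio
    rr_adj = math.exp(math.log(rr) / 0.55)    # rescaled to a 55% change in helmet wearing
    d_adj = abs(cohens_d_from_or(rr_adj))
    print(f"RR = {rr}: d = {d_full:.2f}; adjusted RR = {rr_adj:.2f}, d = {d_adj:.2f}")
```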

I also published a paper last year that proposed operationally defined effect sizes for odds ratios that did not require transformation to Cohen’s d or make distributional assumptions regarding effect sizes. These were 1.22, 1.86 and 3.00 for small, medium and large odds ratios (or, equivalently, 0.82, 0.54 and 0.33).

These results suggest the helmet wearing effect from Walker’s data is trivial, while the effect of helmet legislation is around a medium effect size. Of course, these results come from one cyclist and one Australian state, so over-generalizations should not be made without more data. However, note that Walker did repeat aspects of his first study comparing seven types of cyclists. The CASUAL type was the only one without a helmet, with a mean overtaking distance of 117.61 cm. This was approximately dead center of the range of means (114.01 cm – 122.12 cm), suggesting the trivial helmet effect size from the original study was no coincidence.

  1. Cohen J (1992) A power primer. Psychological Bulletin 112: 155–159.