C. p-value < significance level - Explanation
The significance level of a test is defined as the probability of rejecting the null hypothesis when the
null hypothesis is actually true (a Type I error). It is often represented by the Greek symbol alpha.
A result is only statistically significant if the p-value falls below the significance level set before the
study is started.
Popular levels of significance are 5% (0.05), 1% (0.01) and 0.1% (0.001).
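As a concrete illustration of the decision rule in the question stem (p-value < significance level), the sketch below uses Python with scipy; the outcome data and the use of an unpaired t-test are made-up assumptions for illustration only:

```python
# A minimal sketch of the decision rule, assuming a two-group
# comparison analysed with an unpaired t-test (illustrative data only).
from scipy import stats

group_a = [5.1, 4.9, 5.6, 5.2, 4.8, 5.4]  # hypothetical outcome scores
group_b = [4.4, 4.6, 4.2, 4.9, 4.3, 4.5]

t_stat, p_value = stats.ttest_ind(group_a, group_b)

for alpha in (0.05, 0.01, 0.001):         # the popular significance levels
    decision = "reject H0" if p_value < alpha else "fail to reject H0"
    print(f"p = {p_value:.4f}, alpha = {alpha}: {decision}")
```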
Significance tests
A null hypothesis (H0) states that two treatments are equally effective (and is hence negatively
phrased). A significance test uses the sample data to assess how compatible the observed results are
with the null hypothesis.
For example:
- ‘there is no difference in the prevalence of colorectal cancer in patients taking low-dose aspirin
compared to those who are not’
The alternative hypothesis (H1) is the opposite of the null hypothesis, i.e. there is a difference
between the two treatments.
The p-value is the probability of obtaining, by chance, a result at least as extreme as the one that was
actually observed, assuming that the null hypothesis is true. When the null hypothesis is rejected
whenever the p-value falls below the preset significance level, the chance of making a type I error
equals that significance level (see below).
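This link between the significance level and the type I error rate can be checked by simulation: when the null hypothesis is true, rejecting whenever p < alpha produces a false positive in roughly a fraction alpha of repeated studies. A minimal sketch, assuming normally distributed outcomes and an unpaired t-test (numpy and scipy):

```python
# Simulate many studies in which H0 is true (both groups drawn from
# the same distribution) and count how often p < alpha by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_studies, n_per_group = 0.05, 10_000, 30

false_positives = 0
for _ in range(n_studies):
    a = rng.normal(0, 1, n_per_group)  # same population...
    b = rng.normal(0, 1, n_per_group)  # ...so H0 is true
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(false_positives / n_studies)      # close to 0.05
```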
Two types of error may occur when testing the null hypothesis:
- type I: the null hypothesis is rejected when it is true, i.e. showing a difference between two groups
when it doesn’t exist, a false positive. This is determined against a preset significance level (termed
alpha). As the significance level is determined in advance, the chance of making a type I error is not
affected by sample size. It is, however, increased if the number of end-points is increased. For
example, if a study has 20 end-points it is likely that one of these will show a significant difference
just by chance (see the sketch after this list)
- type II: the null hypothesis is accepted when it is false, i.e. failing to spot a difference when one
really exists, a false negative. The probability of making a type II error is termed beta. It is
determined by the sample size, the size of the true effect, and alpha
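The 20 end-point example above can be made concrete: with 20 independent end-points each tested at alpha = 0.05, the chance of at least one false positive is 1 – 0.95^20 ≈ 0.64. A minimal sketch of that calculation, assuming the end-points are independent:

```python
# Chance of at least one false positive across k independent
# end-points, each tested at significance level alpha.
alpha, k = 0.05, 20
p_at_least_one = 1 - (1 - alpha) ** k
print(f"{p_at_least_one:.2f}")  # about 0.64
```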
|            | Study accepts H0    | Study rejects H0     |
| Reality H0 | Correct decision    | Type 1 error (alpha) |
| Reality H1 | Type 2 error (beta) | Power (1 – beta)     |
The power of a study is the probability of (correctly) rejecting the null hypothesis when it is false, i.e.
the probability of detecting a statistically significant difference when one truly exists
- power = 1 – beta (i.e. 1 minus the probability of a type II error)
- power can be increased by increasing the sample size
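The effect of sample size on power can be illustrated by simulation. The sketch below is a minimal illustration, assuming (hypothetically) a true between-group difference of 0.5 standard deviations, normally distributed outcomes, an unpaired t-test and alpha = 0.05 (numpy and scipy):

```python
# Estimate power empirically: simulate studies in which H1 is true
# (a real difference of 0.5 SD) and count how often H0 is rejected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, effect, n_studies = 0.05, 0.5, 5_000

for n in (20, 50, 100):                 # patients per group
    rejections = sum(
        stats.ttest_ind(rng.normal(0, 1, n),
                        rng.normal(effect, 1, n)).pvalue < alpha
        for _ in range(n_studies)
    )
    print(f"n = {n:3d} per group: power ~ {rejections / n_studies:.2f}")
```

As expected, the estimated power rises as the number of patients per group grows, which is why underpowered studies risk type II errors.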