In hypothesis testing, there are two important values you should be familiar with: alpha (α) and beta (β). These values are used to determine how meaningful the results of the test are. So, let’s talk about them!
Alpha is also known as the level of significance. This represents the probability of obtaining your results due to chance. The smaller this value is, the more “unusual” the results — indicating, for example, that the sample comes from a different population than the one it’s being compared to. Commonly, this value is set to .05 (or 5%), but it can take on any value chosen by the researcher, typically .05 or smaller.
Alpha also represents your chance of making a Type I Error. What’s that? The chance that you reject the null hypothesis when in reality, you should fail to reject the null hypothesis. In other words, your sample data indicates that there is a difference when in reality, there is not. Like a false positive.
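To make the alpha comparison concrete, here is a minimal sketch of a one-sample z-test in Python (the sample values and population parameters are made up for illustration; it assumes a known population standard deviation, which is what distinguishes a z-test from a t-test):

```python
import math

def z_test_p_value(sample_mean, pop_mean, pop_sd, n):
    """Two-sided p-value for a one-sample z-test (assumes known population SD)."""
    z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
    # Standard normal tail probability via the error function
    p_one_tail = 0.5 * (1 - math.erf(abs(z) / math.sqrt(2)))
    return 2 * p_one_tail

alpha = 0.05
p = z_test_p_value(sample_mean=103, pop_mean=100, pop_sd=15, n=100)
print(round(p, 4))   # p-value for this (hypothetical) sample
print(p < alpha)     # True -> reject the null, accepting a 5% Type I error risk
```

If the p-value falls below alpha, we reject the null — knowing that, across many such tests, about 5% of true nulls would be rejected by chance alone.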
Multiple Hypothesis Testing
When a study includes more than one hypothesis test, the alpha for the study will not match the alpha for each test. There is a cumulative effect of alpha when multiple tests are conducted, such that three tests using alpha = .05 each would have a cumulative alpha of roughly .15 for the study. This exceeds what is acceptable for quantitative research. Therefore, researchers should consider making an adjustment, such as a Bonferroni Correction. Using this method, the researcher takes the alpha of the study and divides it by the number of tests being conducted: .05/3 ≈ .017. The result is the level of significance that will be used for each test to determine significance.
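The arithmetic above can be sketched in a few lines. Note that the exact familywise error rate for independent tests is 1 − (1 − α)^k, which is where the “roughly .15” for three tests comes from (the function names here are just for illustration):

```python
def familywise_alpha(per_test_alpha, n_tests):
    """Probability of at least one Type I error across independent tests."""
    return 1 - (1 - per_test_alpha) ** n_tests

def bonferroni_alpha(study_alpha, n_tests):
    """Per-test threshold keeping the familywise rate at or below study_alpha."""
    return study_alpha / n_tests

# Three tests at alpha = .05 each inflate the chance of a false positive...
print(round(familywise_alpha(0.05, 3), 3))   # ~0.143, roughly .15
# ...so the Bonferroni Correction divides the study alpha by the test count:
print(round(bonferroni_alpha(0.05, 3), 3))   # ~0.017 per test
```

Bonferroni is deliberately conservative: it guarantees the familywise rate stays at or below the study alpha, at the cost of making each individual test harder to pass.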
The other key value relates to the power of your study. Power refers to your study’s ability to find a difference if there is one. It logically follows that the greater the power, the more meaningful your results are. Beta = 1 − Power. Values of beta should be kept small, but they do not have to be as small as alpha values; values between .05 and .20 are generally considered acceptable.
Beta also represents the chance of making a Type II Error. As you may have guessed, this means that you came to the wrong conclusion in your study, but it’s the opposite of a Type I Error. With a Type II Error, you incorrectly fail to reject the null. In simpler terms, the data indicates that there is not a significant difference when in reality there is. Your study failed to capture a significant finding. Like a false negative.
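Both error rates can be seen directly by simulation. The sketch below repeatedly runs a one-sample z-test on simulated data (all parameters here are arbitrary choices for illustration): when the true difference is zero, the rejection rate estimates alpha (false positives); when there is a real difference, the non-rejection rate estimates beta (false negatives).

```python
import math
import random

def rejection_rate(true_diff, n=30, sd=1.0, trials=4000, seed=1):
    """Fraction of simulated samples where a two-sided z-test rejects the null.
    true_diff == 0: this estimates alpha (the Type I error rate).
    true_diff != 0: one minus this estimates beta (the Type II error rate)."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided cutoff for alpha = .05
    rejections = 0
    for _ in range(trials):
        xs = [rng.gauss(true_diff, sd) for _ in range(n)]
        z = (sum(xs) / n) / (sd / math.sqrt(n))
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

type1 = rejection_rate(true_diff=0.0)     # hovers near .05, as alpha promises
beta = 1 - rejection_rate(true_diff=0.5)  # Type II error rate for this effect
print(round(type1, 3), round(beta, 3))
```

Increasing the sample size or the true effect size drives beta down (and power up), which is exactly the trade-off power analysis is built around.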
Type I Error: Testing positive for antibodies when, in fact, no antibodies are present.
Type II Error: Testing negative for antibodies when, in fact, antibodies are present.