Research Question Error Types

If you review almost any research methods textbook, you will find an explanation of Type I and Type II errors and why they must be avoided when performing quantitative research. To refresh:

  • A Type I error occurs when the null hypothesis is rejected in error and the alternative hypothesis is accepted. This is a big deal: you are claiming that some difference or relationship exists between two variables when it does not.
    • A frequent cause of Type I errors in research performed by novices is performing multiple tests with the same dependent variable. Using the widely-used p < .05 standard, a researcher is saying that they are willing to accept a 1 in 20 chance of error. If a dependent variable is examined more than once, that 1 in 20 chance must be adjusted via a reduction in the accepted p-value, or the novice must accept that they are willing to make a Type I error. For example, if a dependent variable was tested 5 times, the chance of making at least one Type I error rises to about 23% (1 - .95^5 ≈ .23), nearly 1 in 4 (see the sketch after this list).
    • I’ve seen too many students and faculty fail to understand this concept, and when it is pointed out during a manuscript review or a defense, it can be embarrassing for both the student and the faculty. Bonferroni correction, anybody?
  • A Type II error occurs when the null hypothesis is erroneously retained when it should be rejected. This is an error, but it’s not as bad as a Type I error. This situation causes one’s work to be referenced in the future as a call for future research (the best case scenario) or as a study performed in error (the worst case scenario).
    • Common causes of Type II errors are misinterpretation of results and low statistical power.
    • A novice researcher (and their faculty) should have a full understanding of how to perform a power analysis. The team should review prior research in the area and compute a weighted average of prior effect size measures (e.g., Pearson’s r, Cohen’s d) or, at a minimum, hypothesize an estimated effect size BEFORE determining the required sample size (a power-analysis sketch follows this list). A study without a sample size sufficient to detect the hypothesized effect is underpowered, and a waste of time.
    • Conversely, using the wrong sampling method, such as one designed for proportional sampling, can result in a sample size well in excess of what is necessary to detect the hypothesized effect size. An overpowered study is a waste of resources and, in some domains, unethical.
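
To make the multiple-testing arithmetic concrete, here is a minimal sketch in Python (any of the statistical packages mentioned below would do the same); treating the five tests as independent is a simplifying assumption:

    # Familywise error rate when the same dependent variable is tested k times,
    # each at alpha = .05, assuming independent tests (a simplification).
    alpha = 0.05
    k = 5

    fwer = 1 - (1 - alpha) ** k   # probability of at least one Type I error
    bonferroni_alpha = alpha / k  # per-test alpha keeping the familywise rate near .05

    print(f"Familywise error rate for {k} tests: {fwer:.3f}")              # ~0.226
    print(f"Bonferroni-corrected per-test alpha: {bonferroni_alpha:.3f}")  # 0.010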
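
And here is a minimal a priori power-analysis sketch using Python’s statsmodels package. The hypothesized effect size (Cohen’s d = 0.5) is a placeholder for illustration; in practice it should come from prior research, as described above:

    # A priori power analysis for an independent-samples t-test.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5,  # hypothesized Cohen's d (placeholder; derive from prior studies)
        alpha=0.05,       # accepted Type I error rate
        power=0.80,       # 1 - accepted Type II error rate
    )
    print(f"Required sample size per group: {n_per_group:.0f}")  # ~64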

How could Type I and II errors occur with computer software (e.g., R, SPSS, SAS, G*Power) readily available? Who knows? But I want to explore two other types of errors that novice researchers make.

Type III Error

A Type III error is closely related to a Type I error. However, instead of rejecting the null hypothesis in error, the null hypothesis is rejected for the wrong reason. This type of error is not as severe as a Type I error since one still arrives at the correct conclusion. Contributing factors to a Type III error are an incorrect definition or operationalization of variables, or poor theory. As Schwartz and Carpenter (1999) put it, a Type III error is obtaining the right answer to the wrong question.

Type IV Error

A Type IV error is related to a Type III error; in fact, some scholars consider it a subset of the Type III error. Regardless, a Type IV error involves correctly rejecting the null hypothesis but misinterpreting the data. Common causes are running the wrong test for the structure of the data, collinearity in a regression model, or interpreting variables incorrectly (e.g., treating a three-level ordinal variable as interval).

To learn more about Type III and Type IV errors, see Gelman and Carlin (2014) for their discussion of Type S and Type M errors, Tate (2015) on Type III errors relating to mediation, MacKinnon and Pirlott (2014) for their discussion of Type IV errors relating to confounding in mediators, and Umesh et al. (1996) for Type IV errors in marketing research.

References:

Gelman, A., & Carlin, J. (2014). Beyond power calculations: Assessing Type S (sign) and Type M (magnitude) errors. Perspectives on Psychological Science, 9(6), 641-651. https://doi.org/10.1177/1745691614551642

MacKinnon, D. P., & Pirlott, A. G. (2014). Statistical approaches for enhancing causal interpretation of the M to Y relation in mediation analysis. Personality and Social Psychology Review, 19(1), 30-43. https://doi.org/10.1177/1088868314542878

Schwartz, S., & Carpenter, K. M. (1999). The right answer for the wrong question: Consequences of Type III error for public health research. American Journal of Public Health, 89(8), 1175-1180. https://doi.org/10.2105/ajph.89.8.1175

Tate, C. U. (2015). On the overuse and misuse of mediation analysis: It may be a matter of timing. Basic and Applied Social Psychology, 37(4), 235-246. https://doi.org/10.1080/01973533.2015.1062380

Umesh, U. N., Peterson, R. A., McCann-Nelson, M., & Vaidyanathan, R. (1996). Type IV error in marketing research: The investigation of ANOVA interactions. Journal of the Academy of Marketing Science, 24(1), 17-26. https://doi.org/10.1007/bf02893934
