At the time of writing this post, I’m looking at a recently introduced colleague’s doctoral dissertation. The faculty member went to the same school as I did, but several years later. It’s always interesting to look at people’s dissertations because they can show a person’s early academic trajectory. This faculty member will be mentoring emerging scholars, so I always hope more knowledge has been acquired since. Why do I say that? Well…

This faculty member, as a doctoral student, explored the influence of what are described as critical business variables on the success of solo criminal law practitioners. In the study, the faculty member utilized an instrument that purports to have been validated many times in many countries. I don’t feel like exploring that claim, so I skipped to the data analysis plan and the results.

First, when determining a sample size, one estimates the desired effect size. An effect size is commonly based on prior research but can reflect other factors (e.g., practicality). In this study, the *then* emerging scholar reported the size of the population (*N* = 530) but stated that an *a priori*, effect size-based sample size calculation wasn’t needed since a census was planned. What? Surveying a population is fine; however, one should still have some idea of the expected effect size so that, if fewer than the required number of responses are returned, follow-up efforts can be initiated to reach the desired level of statistical power. This issue will rear its ugly head later.
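To show how little work an *a priori* calculation takes, here is a quick sketch (my own illustration, not anything from the dissertation) using the Fisher *z* approximation for a correlation-type effect, assuming a moderate effect of *r* = .30, a two-tailed alpha of .05, and 80% power:

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect correlation r (Fisher z method)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-tailed critical value
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

# Moderate effect (r = .30) at the conventional alpha:
print(n_for_correlation(0.30))  # 85
```

Knowing a figure like that up front tells a researcher how hard to chase non-respondents when a census comes back thin.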

Second, a total of 16 research hypotheses were explored. Of the 16 IVs, 8 were ordinal, 4 were interval, and 4 were nominal. The DV was titled “Degree of Success” and was treated as an ordinal variable. In the study, the then emerging scholar decided to perform the analyses using the Kendall rank-order correlation coefficient (tau-b). For a quick review of this technique, see link. This is fine and appropriate when two variables are at least ordinal and not normally distributed. However, what about the 4 IVs that are nominal? Kendall is not the right solution. Those relationships should have been explored with independent-samples group comparisons; parametric or non-parametric. Wrong test…
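To make the distinction concrete, here’s a toy sketch (my own illustrative data, not the dissertation’s) contrasting Kendall’s tau-b for an ordinal IV with a group-comparison test for a nominal IV, using SciPy:

```python
from scipy.stats import kendalltau, mannwhitneyu

# Ordinal IV vs. ordinal DV: Kendall's tau-b is appropriate.
experience_rank   = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]  # illustrative ordinal IV
degree_of_success = [1, 1, 2, 2, 3, 3, 4, 3, 4, 5]  # illustrative ordinal DV
tau, p = kendalltau(experience_rank, degree_of_success)
print(f"tau-b = {tau:.3f}, p = {p:.3f}")

# Nominal IV (say, two practice settings) vs. ordinal DV:
# compare the groups instead of correlating.
success_setting_a = [1, 2, 2, 3, 3]
success_setting_b = [3, 3, 4, 4, 5]
u, p = mannwhitneyu(success_setting_a, success_setting_b,
                    alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```

The point is simply that a nominal grouping variable has no rank order, so a rank-order correlation against it is meaningless.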

Third, 16 IVs and 1 DV? Bonferroni, anyone? With 16 tests, a per-test alpha of .003125 (.05/16) would be required to keep the family-wise Type I error rate from inflating across the cumulative hypothesis tests.
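The arithmetic, as a sketch:

```python
# Bonferroni: split the family-wise alpha across the 16 tests.
alpha_family = 0.05
m = 16
alpha_per_test = alpha_family / m
print(alpha_per_test)  # 0.003125

# Without correction, the chance of at least one false positive
# across 16 independent tests run at alpha = .05:
fwer = 1 - (1 - alpha_family) ** m
print(round(fwer, 2))  # 0.56
```

Better than even odds of a spurious “finding” when no correction is applied.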

So, what happened? Well, let’s count the issues –

- Only 31 responses were received. Was this expected? I bet not. Normally, a social science researcher shoots for 80% statistical power (*SP* = 1 − 4α, per the convention that β = 4α). This study was severely under-powered from the start (*SP* ≈ .09). Perhaps if the then emerging scholar had calculated a sample size *a priori*… How big a sample was needed? Well, to identify a moderate effect size with a per-test alpha of .003125, about 155 observations were needed. I wonder how much time it would have taken to get the *N* = 31 closer to *N* = 155?
- An alpha of .05 was used, not .003125. Thus, any statistically significant results reported had a high probability of being a Type I error.
- Guess what? One statistically significant result was reported (tau-b = .322, *p* = .037). I’m not going to name it since *p* = .037 does not survive the corrected alpha of .003125 and should be ignored.
- There may be something to report in this study, but since descriptive statistics for the sample were NOT reported, it’s hard to tell.
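For the curious, the two numbers above (~155 needed, *SP* ≈ .09 achieved) can be reproduced with the Fisher *z* approximation; a sketch assuming a moderate effect (*r* = .30) and the Bonferroni-corrected two-tailed alpha:

```python
from math import atanh, ceil, sqrt
from statistics import NormalDist

z = NormalDist()
r, alpha = 0.30, 0.05 / 16            # moderate effect, corrected alpha
z_alpha = z.inv_cdf(1 - alpha / 2)

# Required N for 80% power:
z_beta = z.inv_cdf(0.80)
n_needed = ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)
print(n_needed)  # ~154, i.e., the roughly-155 figure

# Achieved power with the N = 31 actually collected:
n = 31
power = z.cdf(sqrt(n - 3) * atanh(r) - z_alpha)
print(round(power, 2))  # about 0.09
```

Five times the achieved sample, in other words, before a moderate effect stood a reasonable chance of detection at the corrected alpha.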

I’m thinking about asking for this faculty member’s data. Perhaps there is something there… who knows?