When to quit reading…

Long (2021) studied “the relationship between internal control weaknesses and lower profitability” (p. ii). Sounds straightforward to me. Internal control weaknesses could be measured at the interval level as a count of weaknesses, and profitability could be taken from firm reporting. Count data normally follows a Poisson distribution, but perhaps it could be transformed. Profit data can be normalized through a log transformation. A correlation test or regression could then be performed.
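To make that concrete, here is a minimal sketch of such an analysis; the data, variable names, and distributions are all invented for illustration, not taken from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical data: a count of internal control weaknesses per firm
# (Poisson-like) and reported profit (positive and right-skewed, so a
# log transform is a reasonable normalizing step).
rng = np.random.default_rng(0)
icw_count = rng.poisson(lam=2.0, size=100)
profit = rng.lognormal(mean=10.0, sigma=1.0, size=100)

# Log-transform the skewed profit figures, then test the association
# with a simple correlation and a simple regression.
log_profit = np.log(profit)
r, p = stats.pearsonr(icw_count, log_profit)
slope, intercept, r_value, p_value, se = stats.linregress(icw_count, log_profit)
print(f"r = {r:.3f}, p = {p:.3f}, slope = {slope:.3f}")
```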


The emerging researcher explains later (pp. 2-5) that the focus is on Internal Control Weakness factors (whatever those are) that reduce Return on Net Operating Assets (RNOA) post-merger and acquisition. It appears this type of research was recommended by two accounting academics. Good thing there is an IT professional to perform this study!

To perform the analysis, Mergers and Acquisitions (M&A) and Internal Control Weakness (ICW) were dichotomized (0 = No; 1 = Yes). Companies were divided into four groups: Group 1 (M&A = No; ICW = No); Group 2 (M&A = No; ICW = Yes); Group 3 (M&A = Yes; ICW = No); and Group 4 (M&A = Yes; ICW = Yes). Then, the emerging scholar explains that three types of tests will be performed –

  • Paired Sample t-test (RNOA as DV)
  • Correlation
  • Multiple Regression
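The four-group coding is simple to reproduce; here is a quick sketch in pandas, with invented firm names and column names:

```python
import pandas as pd

# Hypothetical firm-level data; the column names are mine, for illustration.
df = pd.DataFrame({
    "firm": ["A", "B", "C", "D"],
    "ma":  [0, 0, 1, 1],   # M&A: 0 = No, 1 = Yes
    "icw": [0, 1, 0, 1],   # Internal Control Weakness: 0 = No, 1 = Yes
})

# Group 1 (M&A=No, ICW=No) through Group 4 (M&A=Yes, ICW=Yes)
df["group"] = 1 + 2 * df["ma"] + df["icw"]
print(df)
```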

First, a paired-sample t-test is used to evaluate the same variable at different points in time (or in matched pairs). What the student should have performed is a two-sample t-test, where between-group differences are evaluated. Who reviewed this study? It doesn’t matter; no statistical differences were found. Could that be caused by the wrong test? Maybe. Could it be caused by sample size (an N = 119 was determined [p. 63], but only 38 companies were listed on pp. 83-85), or by significant differences in sample sizes between groups? Also maybe. The reason I answer maybe is that the emerging scholar failed to report descriptive statistics for the study. No group n. Just the group M. Regardless, was it the wrong test? Absolutely! But I’m still scratching my head about why this test was run when it wasn’t the focus of the study. I speculate the emerging scholar “mimicked” another study without understanding what was going on, or was advised by faculty to do this.
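For readers unfamiliar with the distinction, here is a quick illustration of the two tests in scipy; the RNOA values below are simulated, not the study’s data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical RNOA values for two independent groups of firms
rnoa_group1 = rng.normal(loc=0.10, scale=0.05, size=30)  # e.g., non-M&A firms
rnoa_group3 = rng.normal(loc=0.08, scale=0.05, size=30)  # e.g., M&A firms

# A paired t-test assumes the SAME units measured twice (e.g., pre/post):
t_paired, p_paired = stats.ttest_rel(rnoa_group1, rnoa_group3)

# For two independent groups, the two-sample t-test is the appropriate one:
t_ind, p_ind = stats.ttest_ind(rnoa_group1, rnoa_group3, equal_var=False)
print(f"paired: p = {p_paired:.3f}; independent: p = {p_ind:.3f}")
```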

Second, the emerging scholar performed regression analysis using Cash Flow and Board Size as IVs and RNOA as the DV. Nothing was significant. Finally, the emerging scholar performed two regression analyses using an unknown value related to Groups 1 and 3 and an unknown value related to Groups 2 and 4 as IVs, and (a) RNOA and (b) Cash Flow as DVs. Again, nothing was significant; however, the emerging scholar did identify that the “Control Groups” (Groups 1 and 3) coefficient was significant (p = 0.011) in one model. Unfortunately, I don’t know how to interpret a B = -0.999 when the actual values are not described or reported.

What’s funny is that a third regression analysis, using the same IVs, was performed, this time with Board Size as the DV. So, these two questionable IVs can predict board size? What does that have to do with the study? Plus, don’t get me started about the performance of tests of normality on categorical variables (see p. 77).

What happened here? I have no idea. I should have stopped reading at the paired samples t-test…


Long, L. G. (2021). The effects of internal control weaknesses that undermine acquisitions (Doctoral dissertation). ProQuest Dissertations & Theses Global: The Humanities and Social Sciences Collection. (28315391)


Why not give small- and medium-sized enterprises a copy of Kotler and Keller’s textbook?

Guarduno (2021) framed her study of the causes of poor marketing strategies among small- and medium-sized enterprises in Texas by looking to research from the following countries –

  • China
  • Nigeria
  • Northern & Southern Europe
  • South Africa
  • Germany

As someone who has lived in Texas for 6 of the last 40 years, I would argue the State of Texas is ahead of most of Northern and Southern Europe, and Nigeria, in development! But I digress…

The emerging scholar followed the traditional, yet questionable, semi-structured “Q&A” format found at online universities. Interviews were conducted with 18 participants who identified themselves as owners or managers of a business and who chose their respective business’s marketing strategy. While a range of business start dates (1988-2020) was provided, no information was reported about the marketing education or experience of the participants. Perhaps the people interviewed, who started/managed the company and led its marketing efforts, don’t know what they’re doing?

Guarduno summarized all of her findings as aligning with prior research –

  • Marketing skills and knowledge, along with financial resources, are essential to implement an effective marketing strategy.
  • External factors, such as the economy, firm location, competition, and the supply chain, also influence success.

For the non-marketers following this blog: follow the four P’s (Product, Price, Place, Promotion), perform a SWOT analysis, and buy a copy of Phil Kotler and Kevin Keller’s Marketing Management textbook.

I question the need for the study.


Guarduno, C. (2021). Determinants of small and medium-size enterprises selection of marketing strategy (Doctoral dissertation). ProQuest Dissertations & Theses Global: The Humanities and Social Sciences Collection. (28545470)

What could be found with 16 IVs and an N = 31?

At the time of writing this post, I’m looking at a recently introduced colleague’s doctoral dissertation. The faculty member went to the same school as me, but several years later. It’s always interesting looking at people’s dissertations because they can show a person’s beginning academic success. This faculty member will be mentoring emerging scholars, so I always hope more knowledge has been acquired. Why do I say that? Well…

This faculty member, as a doctoral student, explored the influence of what are described as critical business variables on solo criminal law practitioner success. In the study, an instrument was utilized that purports to have been validated many times in many countries. I don’t feel like exploring that claim, so I skipped to the data analysis plan and results.

First, when determining a sample size, one estimates the desired effect size. An effect size is commonly based on prior research, but can include other factors (e.g., practicality). In this study, the then emerging scholar reported the size of the population (N = 530) but stated that an a priori, effect-size-based sample size calculation wasn’t needed since a census was planned. What? Surveying a population is fine; however, one should have some idea of the expected effect size so that, if fewer than the required number of responses are returned, follow-up efforts can be initiated to reach the desired level of practicality. This issue will rear its ugly head later.

Second, a total of 16 research hypotheses were explored. Of the 16 IVs, 8 were ordinal, 4 were interval, and 4 were nominal. The DV was titled “Degree of Success” and was treated as an ordinal variable. The then emerging scholar decided to perform the analyses using the Kendall Rank-Order Correlation Coefficient (tau-b). For a quick review of this technique, see link. This is fine and appropriate when two variables are ordinal or not normally distributed. However, what about the nominal IVs? Kendall is not the right solution. Those relationships should have been explored with between-group comparisons (e.g., two-sample t-tests or their non-parametric equivalents). Wrong test…
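For reference, tau-b is what scipy computes by default, since it corrects for ties; the ordinal values below are made up purely for illustration:

```python
from scipy import stats

# Hypothetical ordinal data: one ordinal IV and the ordinal DV
# "Degree of Success" (values invented for this sketch)
iv = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
success = [1, 1, 2, 2, 3, 3, 4, 4, 4, 5]

# scipy's kendalltau computes the tau-b variant by default (handles ties)
tau, p = stats.kendalltau(iv, success)
print(f"tau-b = {tau:.3f}, p = {p:.4f}")
```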

Third, 16 IVs and 1 DV? Bonferroni, anyone? With 16 tests, a p-value threshold of .003125 (.05/16) would be required to control the cumulative, family-wise Type I error rate.
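The Bonferroni arithmetic is trivial to verify:

```python
# Bonferroni correction: divide the family-wise alpha by the number of tests
alpha = 0.05
n_tests = 16
adjusted_alpha = alpha / n_tests
print(adjusted_alpha)  # 0.003125
```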

So, what happened? Well, let’s count the issues –

  • Only 31 responses were received. Was this expected? I bet not. Normally, a social science researcher shoots for 80% statistical power (1 − β, with β conventionally set at 4 × α). This study was severely under-powered from the start (power ≈ .09). Perhaps if the then emerging scholar had calculated a sample size a priori… How big a sample was needed? Well, to identify a moderate effect size with a p-value of .003125, about 155 observations were needed. I wonder how much time it would have taken to get the N = 31 closer to N = 155?
  • A p-value of .05 was used, not a p-value of .003125. Thus, any statistically significant results reported had a high probability of being a Type I error.
  • Guess what? One statistical result was reported (tau-b = .322, p = .037). I’m not going to list it since it should be ignored.
  • There may be something to report in this study, but since descriptive statistics were NOT reported relating to the sample, it’s hard to tell.
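To show where a figure in that neighborhood comes from, here is an a priori sample-size sketch for a correlation using Fisher’s z transform; the moderate effect size (r = .30) and the 80% power target are my assumptions for illustration, not values from the dissertation:

```python
import math
from scipy.stats import norm

# A priori sample size for detecting a correlation via Fisher's z transform.
# Assumed inputs: moderate effect r = 0.30, two-tailed alpha = 0.003125
# (the Bonferroni-adjusted level), power = 0.80.
r, alpha, power = 0.30, 0.003125, 0.80
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
c = 0.5 * math.log((1 + r) / (1 - r))      # Fisher z of the effect size
n = ((z_alpha + z_beta) / c) ** 2 + 3
print(math.ceil(n))  # about 154 observations
```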

I’m thinking about asking for this faculty member’s data. Perhaps there is something there…who knows?

Alignment of themes to research question…

I started writing this blog post about priming interviewees in qualitative research. However, once I got into writing, I realized I had simply found another poorly performed qualitative study. Still, I did want to discuss aligning research-deduced themes with research questions. Here’s the study –

Job Satisfaction and Job-Related Stress among NCAA Division II Athletic Directors in
Historically Black Colleges and Universities

Name withheld (but you can search for the study)

I’ve been involved with many students who are exploring job satisfaction and job-related stress in a variety of industries, but I’ve never heard of a study on this topic involving university athletic directors (ADs). What surprised me was that the study wasn’t quantitative; it was qualitative.

The emerging scholar’s overarching research question was –

What strategies do ADs at HBCUs implement to manage departments with limited resources?

p. 14

What does the phrase ‘limited resources’ mean? It would seem that some form of quantitative measure would need to be used to separate athletic departments into categories based on resources. However, I found this sentence –

…there was an assumption that HBCU athletic directors would experience job dissatisfaction and
job-related stress due to decreased funding, inadequate facility management, and
inconsistent roster management

p. 19

Wow! This statement makes it easy for a researcher…I’ll just assume something is happening whether true or not.

Now, a quick note about priming. The interview guide can be found in Appendix C of the dissertation. Honestly, it’s not really an interview guide. The student employed the ‘oral survey’ Q&A approach often suggested by faculty who have a limited understanding of qualitative data collection methodologies. Rather than critique the self-described “interview questions,” I will point out one issue –

Q3 – What strategies have you implemented to motivate your staff and thereby increase
job satisfaction?

p. 133

This question requires the interviewee to –

  • Understand the word strategy or, at a minimum, understand the researcher’s definition of the term
  • Differentiate a strategy from a tactic
  • Reflect on how a strategy has been specifically applied to or influenced staff motivation
  • Reflect on staff responses to the strategy and subjectively estimate its influence on their own level of job satisfaction

In other words, the emerging scholar placed the responsibility for the study’s results on the interviewee responses, not on the interpretation of the responses. Ugh!

What would have happened if the emerging scholar simply started with –

  • How do you motivate your employees?
  • How do your employees respond to the techniques you employ to motivate?
  • When do you decide to change methods?

The aforementioned approach allows the interviewees to describe the methods they use to motivate employees, which the emerging scholar would then analyze as a strategy or tactic. Each motivational technique could be explored in depth through follow-up questions and, subsequently, tied back to the literature. Next, the emerging scholar could explore, in depth with the interviewee, the responses by employees. Did the description provided by the interviewee align with the expectations found in the literature? Finally, discussing a change in methods, and its impetus, could result in alignment with the research question.

When I finally got to the themes, I chuckled:

  • Shared responsibility – “participants believed the workplace demands they face daily do not allow them to have the ability to make all decisions for the department. Having shared responsibilities among other leaders within the department was essential for each athletic director” (p. 97). Every job has some level of work demand. Some demands are based on a lack of resources (e.g., human capital); some are not (e.g., heavy lifting). In the academic literature, sharing responsibility within an organizational unit is a tenet of work-based teams. It would seem the study participants are simply employing widely referenced management techniques. However, since the emerging scholar assumed all HBCU ADs face limited resources, this had to be a theme.
  • Empowering staff – The emerging scholar didn’t describe the meaning of this phrase; rather, paraphrased material from external sources was listed (two sources cited weren’t listed in the References). However, similar to shared responsibility, employee empowerment is an oft-studied topic in the literature.
  • Limited resources to grow facilities – The term ‘resources’ in this context relates to financial resources. ADs are often held accountable for promotion of their programs; however, how much of that job is part of their normal duties? Based on how the emerging scholar phrased the research question, this theme is not aligned with the research question.
  • Limited female participation – The emerging researcher delved into gender equity, the recruitment of females to play sports, and the balance between males and females in sports. This topic relates to recruitment and is probably more about society than management…again, unrelated to the research question.

In the emerging scholar’s biography, she stated that she works for an HBCU athletic department, so I acknowledge the interest. She also stated that she would like to pursue an athletic department job. That’s great! If you, too, are an emerging researcher and you look at this study for references, that’s fine…just be wary about citing these results. Redo the research.

Face Validity…

Yesterday, I briefly discussed face validity in the context of a student creating an instrument to measure a latent variable (e.g., usefulness, intention). Someone read my post and sent me an email asking, “How would I measure face validity?” Well, face validity can’t be measured. Face validity answers the question, “Does this test, on its face, measure what it says it measures?” In other words, face validity is the perception that the test is appropriate or valid.

Why is face validity important? Rogers (1995) posited that if test takers believe a test does not have face validity, they will take it less seriously and hurry through; conversely, if they believe the test to have face validity, they will make a conscientious effort to answer honestly.

I advise students (and faculty) to impanel a few experts in the domain under study and get their thoughts on whether the pool of items in the test appears to measure what is under study. If they agree, the first hurdle is passed. The next hurdle is to perform an exploratory factor analysis.

I emphasized the word pool in the prior paragraph for a reason. Developing a valid survey instrument takes time. One of the most time-consuming tasks is creating a pool of items that appear to form a measurable dimension. The reason one has to create a pool is that until the survey instrument is distributed, feedback is received, and exploratory factor analysis is performed, there is no way to confirm which items strongly form a construct. For example, to arrive at the 36-item Job Satisfaction Survey, Spector (1985) reported he started with 74 items.
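To illustrate how an exploratory factor analysis separates strong items from weak ones in a pool, here is a sketch on synthetic data; everything below is invented, and real instrument development would use actual survey responses (often with a dedicated factor-analysis package rather than scikit-learn):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic responses: 200 respondents, 8 candidate items driven by a single
# underlying construct plus noise. The true loadings are chosen so that
# items 5, 6, and 8 barely tap the construct.
rng = np.random.default_rng(2)
construct = rng.normal(size=(200, 1))
true_loadings = np.array([[0.9, 0.8, 0.85, 0.7, 0.1, 0.05, 0.75, 0.1]])
items = construct @ true_loadings + rng.normal(scale=0.5, size=(200, 8))

# Fit a one-factor exploratory model; items with weak estimated loadings
# are candidates for dropping from the pool.
fa = FactorAnalysis(n_components=1).fit(items)
est = np.abs(fa.components_.ravel())  # absolute loadings (sign may flip)
print(np.round(est, 2))
```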


Rogers, T. B. (1995). The psychological testing enterprise: An introduction. Brooks-Cole.

Spector, P. E. (1985). Measurement of human service staff satisfaction: Development of the Job Satisfaction Survey. American Journal of Community Psychology, 13(6), 693-713. https://doi.org/10.1007/bf00929796