Book Review: Evaluating Research in Academic Journals: A Practical Guide to Realistic Evaluation (Pyrczak & Tcherni-Buzzeo, 2019)

I’m back! I’ve taken off a few months to read, recharge, and frame some study proposals with colleagues. I have a lot of things in the hopper…

During my break, I had a chance to help a PhD student graduate. It took some heavy lifting (another blog post by itself), but he made it! During my discussion with the emerging scholar, he asked me a question –

Which books do you own that could help me?

Hmmm…I have many, but let me start a list.

First up: Evaluating Research in Academic Journals: A Practical Guide to Realistic Evaluation

The most recent 7th edition is written by Maria Tcherni-Buzzeo of the University of New Haven and pays homage to the originator of the book, the late Fred Pyrczak (1945-2014). The authors focus on how to read everything from Abstracts, Introductions, and Literature Reviews through the Analysis and Results sections, ending with the Discussion section. In a checklist/rubric format, the book provides items (with example narrative in most places) such as –

  • Are primary variables mentioned in the title? (p. 17)
  • Does the introduction move from topic to topic instead of from citation to citation? (p. 43)
  • If the response rate was low, did the researcher make multiple attempts to contact potential participants? (p. 67)
  • If any differences are statistically significant but substantively small, have the researchers noted they are small? (p. 123)

There are also specific sections on quantitative (QUAN), qualitative (QUAL), and mixed-methods (MM) research, which I have found invaluable.

This book is great for emerging scholars, as they can apply it to learn how to critique academic research. It’s also great for chairpersons and people like me who critique research all day. It’s a must-read (and must-buy!).


Doctoral Study Page Counts…

Ever hear this?

What is the minimum page count for a doctoral study?

Doctoral student on any given day of the week

I suspect the average student who asks this question has realized that completing a doctoral study will take an investment of time and effort, and they are mentally working through a Gantt chart to show percentage completion. However, any learned committee member understands that it will take as long as it takes to explore a topic and complete the study.

Here are some observations I have made –

  • Introduction – This section should be about the same length regardless of study type (QUAN, QUAL, or MM).
  • Literature Review – A QUAN study will have a longer literature review than a QUAL study because the researcher has to explore and substantiate the inclusion of each variable of interest. Conversely, a QUAL study will have a shorter literature review because its purpose is to obtain a better understanding of a phenomenon than what has already been explored. If the topic of inquiry has already been explored in depth, a QUAL study may not be justified. An MM study will be longer than a QUAN study since it includes both QUAN and QUAL components.
  • Methodology – This section should be about the same length for QUAN and QUAL studies, but an MM study will be longer since it includes both QUAN and QUAL aspects.
  • Results – A QUAN study will have a shorter Results section than a QUAL study because the focus is on the statistical tests. The section will be especially short if tables and figures are placed in an Appendix rather than embedded in the text. A QUAL study’s Results section will be longer because it includes support for the thematic development; to provide that support, a researcher includes anecdotal quotes from interviews and, possibly, documentation obtained during the data collection phase. Connecting themes to prior research in the area and, if none is found, performing a mini literature review will add more length. The section can be far longer still if transcripts are included in the text rather than an Appendix. An MM study, obviously, will be much longer since it includes both QUAL and QUAN components.
  • Recommendations – This section should be the same in size regardless of methodology.

Since I’m looking at doctoral studies published by ProQuest in 2019, I thought I would examine page counts. Based on M = 158.6, SD = 55.18, and Mdn = 147.5, a 100-200 page estimate appears about right (Figure 1).

Figure 1. Boxplot of Doctoral Study Page Counts for DBA degrees awarded in 2019 (as reported by ProQuest)

Note the 400- and 600-page studies… ugh!

Next, I wanted to focus on the top five schools awarding Doctor of Business Administration (DBA) degrees to see if they differed (Figure 2).

Figure 2. Boxplot of Doctoral Study Page Counts for the Top Five Universities that awarded DBA degrees in 2019 (as reported by ProQuest)

The 100-200 page guidance appears reasonable…

Note: Boxplots were created using R and the ggplot2 package.
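
For anyone who wants to reproduce this kind of figure, here’s a minimal ggplot2 sketch. The data frame studies and its columns school and page_count are hypothetical stand-ins for the cleaned ProQuest export; my actual script and variable names differ.

```r
# Minimal sketch of Figures 1 and 2, assuming a hypothetical data frame
# 'studies' with one row per doctoral study and columns 'school' and
# 'page_count' built from the ProQuest exports.
library(ggplot2)

# Figure 1: all 2019 DBA studies pooled
ggplot(studies, aes(y = page_count)) +
  geom_boxplot() +
  labs(y = "Page count",
       title = "Doctoral study page counts, DBA degrees awarded in 2019")

# Figure 2: one box per school, limited to the five largest producers
top5 <- names(sort(table(studies$school), decreasing = TRUE))[1:5]
ggplot(subset(studies, school %in% top5),
       aes(x = school, y = page_count)) +
  geom_boxplot() +
  labs(x = "University", y = "Page count")
```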

DBAs awarded by school in 2019

As part of a larger research project relating to the accuracy and quality of student research and committee supervision over DBA degrees, I needed to determine how many degrees were issued. Without an easy way to ascertain that information, I chose the ProQuest Dissertations and Theses database as my source. Through a process of multiple downloads and data manipulation in R, I found that ProQuest reported 738 degrees in 2019. Of those degrees, three schools produced over 64% of the graduates –

  • Walden University – 237 (32%)
  • Northcentral University – 168 (23%)
  • Capella University – 69 (9%)

Forty-three schools accounted for the remaining 36% of the degrees stored in ProQuest.
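
In case you’re curious about the mechanics, the tally itself is short once the download batches are stacked. Here’s a minimal sketch, assuming the combined export was saved as proquest_dba_2019.csv with a school column (both names are hypothetical):

```r
# Count 2019 DBA degrees per school from the combined ProQuest export.
studies <- read.csv("proquest_dba_2019.csv", stringsAsFactors = FALSE)

counts <- sort(table(studies$school), decreasing = TRUE)
total  <- sum(counts)             # 738 degrees reported for 2019

# Top producers with their share of all degrees
head(data.frame(degrees = as.vector(counts),
                share   = round(100 * as.vector(counts) / total, 1),
                row.names = names(counts)),
     n = 5)
```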

What’s unique about the three schools? All three were classified as for-profit universities for part of 2018, and each has been part of a merger or acquisition since then. Capella University merged with Strayer University in August 2018. Northcentral was purchased by the National University System in January 2019. Finally, Laureate Education, owner of Walden University, began the process of transferring ownership to Adtalem Global Education last month.

I wonder whether the number of graduates these three schools produce will change in the next few years based on these corporate changes.

“Bridging the gap between current knowledge and ignorance…”

During my reading, I came across a passage in a medical research article –

To find new and useful answers to important problems that have not already been resolved, you need to know a lot about the problem and precisely where the boundary between current knowledge and ignorance lies…without knowing the current state of knowledge, it is difficult to know whether one is heading in the right ‘next step’ direction

Haynes, 2006, p. 882

Haynes’ thoughts apply to any novice researcher, and they resonate with me. In mentoring doctoral students, I always start the first conversation by asking two questions –

  • In which business domain do you have the most experience or expertise?
  • Within that domain, what is your research interest?

The first question prompts them to focus on a domain. It’s sad to say, but sometimes I have to explain to students that business is not a domain per se but a term that encompasses domains such as Accounting, Finance, Management, and Marketing. Once that’s clarified, the student often gets it and we move on to Q2. Other times, though, I have to delve further into their academic background (e.g., undergraduate, master’s) and their work experience. If they’re academic generalists and they’ve supervised employees, I propose the management discipline because they’ve hopefully been exposed to the general literature on management; plus, it would take them too long to become a domain expert in the other areas. If they have specific domain experience or expertise, there should be a logical connection between Q1 and Q2, right? Unfortunately, the answer is No.

What I’ve found is that many doctoral students don’t have sufficient knowledge in their discipline, or haven’t retained the knowledge from graduate school. Many can come up with a tentative, overarching research question, which is fine to start; however, the time between step 1 and step 2, developing a problem statement, seems to take an eternity. Rather than being well read in the current literature of their discipline, they look for research that aligns with their worldview or, worse, mischaracterize research to fit their need. This leads to student frustration, as they often feel the University is taking their money but not giving them anything for it (anything = approving what is submitted). This can cause tension between the student and the chairperson (and University), and can lead to a student requesting a change in chairperson. I call this “academic opinion-shopping,” where a student seeks to find a faculty member who will “understand” their situation and approve their research topic. Sometimes, a University will put pressure on faculty to approve research to “push students forward to the next phase,” increase graduation rates, and reduce student complaints. What happens then? A newly minted Doctor of Something who doesn’t understand how the research process operates.

References:

Haynes, R. B. (2006). Forming research questions. Journal of Clinical Epidemiology, 59(9), 881-886. https://doi.org/10.1016/j.jclinepi.2006.06.006

Research Question Error Types

If you review almost any research methods textbook, you will find an explanation of Type I and Type II errors and why they must be avoided when performing quantitative research. To refresh –

  • A Type I error occurs when the null hypothesis is rejected in error, and the alternative hypothesis is accepted. This is a big deal since what you are saying is that there is some difference or relationship between two variables when there is not.
    • A frequent cause of Type I errors in research performed by novices has to do with performing multiple tests with the same dependent variable. Using the widely used p < .05 as the standard, what a researcher is saying is that they are willing to accept a 1 in 20 chance of error. If a dependent variable is examined more than once, then the 1 in 20 chance needs to be adjusted via a reduction in the accepted p-value, or the novice must accept that they are willing to make a Type I error. For example, if a dependent variable was tested 5 times, the familywise chance of at least one false positive rises to 1 - (1 - .05)^5 ≈ .23, or about 23% (see the sketches after this list).
    • I’ve seen too many students and faculty not understand this concept, and when it is pointed out during a manuscript review or a defense, it can be embarrassing for both the student and faculty. Bonferroni correction, anybody?
  • A Type II error occurs when the null hypothesis is erroneously retained when it should be rejected. This is an error, but it’s not as bad as a Type I error. This situation causes one’s work to be referenced in the future as a need for future research (best case scenario) or as a study performed in error (worst case scenario).
    • A common cause of a Type II error is misinterpretation. Another culprit is low statistical power.
    • A novice researcher (and their faculty) should have a full understanding of how to perform a power analysis. The team should be aware of prior research in the area and compute a weighted average of prior effect size measures (e.g., Pearson’s r, Cohen’s d) or, at a minimum, hypothesize an estimated effect size BEFORE determining the required sample size (a sketch of such an analysis appears after this list). A study that doesn’t have a sufficient sample size to identify a hypothesized effect is called underpowered, and it is a waste of time.
    • Conversely, using the wrong sampling method, such as a method for proportional sampling, might result in a sample size in excess of what is necessary to identify a hypothesized effect size. An overpowered study is a waste of resources and, in some domains, unethical.
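
To make the multiple-testing and power issues above concrete, here are two minimal R sketches; the alpha, p-values, and effect size are made-up values for illustration.

```r
# Familywise error: testing one dependent variable k times at alpha = .05.
alpha <- 0.05
k <- 5
1 - (1 - alpha)^k                 # ~0.226 -- roughly a 23% chance of at
                                  # least one false positive across 5 tests

# Bonferroni correction: either test each comparison at alpha / k ...
alpha / k                         # 0.01 per test

# ... or adjust the observed p-values directly with base R
p_values <- c(0.004, 0.012, 0.030, 0.041, 0.200)   # made-up example values
p.adjust(p_values, method = "bonferroni")
```

And the a priori power analysis, using the pwr package with a hypothesized Cohen’s d of 0.5 (again, a made-up value standing in for a weighted average of prior effect sizes):

```r
# A priori power analysis with the 'pwr' package: given a hypothesized
# effect size, alpha, and desired power, solve for the required sample size.
library(pwr)

pwr.t.test(d = 0.5,               # hypothesized effect size (Cohen's d)
           sig.level = 0.05,      # Type I error rate
           power = 0.80,          # desired power (1 - Type II error rate)
           type = "two.sample",
           alternative = "two.sided")
# Returns n per group (~64 here). Recruiting far fewer yields an
# underpowered study; recruiting far more yields an overpowered one.
```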

How could Type I and II errors occur with computer software (e.g., R, SPSS, SAS, G*Power) readily available? Who knows? But I want to explore two other types of errors that novice researchers make.

Type III Error

A Type III error is closely related to a Type I error. However, instead of rejecting the null hypothesis in error, the null hypothesis is rejected for the wrong reason. This type of error is not as severe as a Type I error, since one arrives at the correct conclusion. Contributing factors to a Type III error include the incorrect definition or operationalization of variables and poor theory. As stated by Schwartz and Carpenter (1999), a Type III error is a situation of obtaining the right answer to the wrong question.

Type IV Error

A Type IV error is related to a Type III error; in fact, some scholars say it is a subset of the Type III error. Regardless, a Type IV error involves correctly rejecting the null hypothesis but misinterpreting the data. Common reasons are running the wrong test for the data structure, collinearity in a regression model, or interpreting variables incorrectly (e.g., a three-level ordinal variable treated as interval, as illustrated below).
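
Here’s a small, made-up illustration of that last trap: a three-level ordinal predictor forced into a regression as the numbers 1-3, which silently assumes the step from low to medium equals the step from medium to high.

```r
# Made-up data: a three-level ordinal predictor (1 = low, 2 = medium,
# 3 = high) where the real jump in the outcome is between medium and high.
set.seed(42)
x <- sample(1:3, 100, replace = TRUE)
y <- c(0, 0.2, 2)[x] + rnorm(100)

coef(lm(y ~ x))          # treated as interval: one slope, equal steps assumed
coef(lm(y ~ factor(x)))  # treated as categorical: separate contrasts reveal
                         # that most of the effect sits between levels 2 and 3
```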

To learn more about Type III and Type IV errors, see Gelman and Carlin (2014) for their discussion of Type S and Type M errors, Tate (2015) on Type III errors relating to mediation, MacKinnon and Pirlott (2014) for their discussion of Type IV errors relating to confounding in mediators, and Umesh et al. (1996) for Type IV errors in marketing research.

References:

Gelman, A., & Carlin, J. (2014). Beyond power calculations: Assessing Type S (sign) and Type M (magnitude) errors. Perspectives on Psychological Science, 9(6), 641-651. https://doi.org/10.1177/1745691614551642

MacKinnon, D. P., & Pirlott, A. G. (2014). Statistical approaches for enhancing causal interpretation of the M to Y relation in mediation analysis. Personality and Social Psychology Review, 19(1), 30-43. https://doi.org/10.1177/1088868314542878

Schwartz, S., & Carpenter, K. M. (1999). The right answer for the wrong question: Consequences of Type III error for public health research. American Journal of Public Health, 89(8), 1175-1180. https://doi.org/10.2105/ajph.89.8.1175

Tate, C. U. (2015). On the overuse and misuse of mediation analysis: It may be a matter of timing. Basic and Applied Social Psychology, 37(4), 235-246. https://doi.org/10.1080/01973533.2015.1062380

Umesh, U. N., Peterson, R. A., McCann-Nelson, M., & Vaidyanathan, R. (1996). Type IV error in marketing research: The investigation of ANOVA interactions. Journal of the Academy of Marketing Science, 24(1), 17-26. https://doi.org/10.1007/bf02893934