Alignment of themes to research question…

I started writing this blog post about priming interviewees in qualitative research. Once I got into the writing, however, I realized I had simply found another poorly performed qualitative study. Still, I did want to discuss aligning research-derived themes with research questions. Here’s the study –

Job Satisfaction and Job-Related Stress among NCAA Division II Athletic Directors in
Historically Black Colleges and Universities

Name withheld (but you can search for the study)

I’ve been involved with many students who are exploring job satisfaction and job-related stress in a variety of industries, but I’ve never heard of a study on this topic in university athletic directors (ADs). What surprised me was that the study wasn’t quantitative; it was qualitative.

The emerging scholar’s overarching research question was –

What strategies do ADs at HBCUs implement to manage departments with limited resources?

p. 14

What does the phrase ‘limited resources’ mean? It would seem that some form of quantitative measure would need to be used to separate athletic departments into categories based on resources. However, I found this sentence –

…there was an assumption that HBCU athletic directors would experience job dissatisfaction and
job-related stress due to decreased funding, inadequate facility management, and
inconsistent roster management

p. 19

Wow! This statement makes it easy for a researcher…I’ll just assume something is happening, whether it’s true or not.

Now, a quick note about priming. The interview guide can be found in Appendix C of the dissertation. Honestly, it’s not really an interview guide. The student employed the ‘oral survey’ Q&A approach often suggested by faculty who have a limited understanding of qualitative data collection methodologies. Rather than critique the self-described “interview questions,” I will point out one issue –

Q3 – What strategies have you implemented to motivate your staff and thereby increase
job satisfaction?

p. 133

This question requires the interviewee to –

  • Understand the word strategy or, at a minimum, understand the researcher’s definition of the term
  • Differentiate a strategy from a tactic
  • Reflect on how a strategy has been specifically applied to or influenced staff motivation
  • Reflect on staff responses to the strategy and subjectively estimate its influence on their own level of job satisfaction

In other words, the emerging scholar placed the responsibility for the study’s results on the interviewee responses, not on the interpretation of the responses. Ugh!

What would have happened if the emerging scholar had simply started with –

  • How do you motivate your employees?
  • How do your employees respond to the techniques you employ to motivate them?
  • When do you decide to change methods?

This approach allows the interviewees to describe the methods they use to motivate employees, which the emerging scholar could then analyze as strategies or tactics. Each motivational technique could be explored in depth through follow-up questions and, subsequently, tied back to the literature. Next, the emerging scholar could explore in depth with the interviewee how employees responded. Did the description provided by the interviewee align with the expectations found in the literature? Finally, discussing a change in methods, and the impetus for it, could result in alignment with the research question.

When I finally got to the themes, I chuckled:

  • Shared responsibility – “participants believed the workplace demands they face daily do not allow them to have the ability to make all decisions for the department. Having shared responsibilities among other leaders within the department was essential for each athletic director” (p. 97). Every job has some level of work demand. Some demands are based on a lack of resources (e.g., human capital); some are not (e.g., heavy lifting). In the academic literature, sharing responsibility within an organizational unit is the tenet of work-based teams. It would seem the study participants are simply employing widely referenced management techniques. However, since the emerging scholar assumed all HBCU ADs face limited resources, this had to be a theme.
  • Empowering staff – The emerging scholar didn’t describe the meaning of this phrase; rather, paraphrased material from external sources was listed (two of the sources cited weren’t listed in the References). However, similar to shared responsibility, employee empowerment is an oft-studied topic in the literature.
  • Limited resources to grow facilities – The term ‘resources’ in this context relates to financial resources. ADs are often held accountable for promotion of their programs; however, how much of that job is part of their normal duties? Based on how the emerging scholar phrased the research question, this theme is not aligned with the research question.
  • Limited female participation – The emerging scholar delved into gender equity, the recruitment of females to play sports, and the balance between males and females in sports. This topic relates to recruitment, and is probably more about society than management…again, unrelated to the research question.

In the emerging scholar’s biography, she stated that she works for an HBCU athletic department, so I acknowledge her interest. She also stated that she would like to pursue an athletic department job. That’s great! If you, too, are an emerging researcher and you look at this study for references, that’s fine…just be wary about citing these results. Redo the research.

Converting a qualitative interview guide to a survey?

Recently, I reviewed a doctoral proposal where a student cited results from a peer-reviewed article. In the article Startup Success Trends in Small Business Beyond Five-Years: A Qualitative Research Study (Perry, Rahim, & Davis, 2018), the authors described how they interviewed 20 hair salon owners in New Jersey to explore how their businesses survived beyond five years. What caught my eye in this study was a reference to using “six questions” and SurveyMonkey. This piqued my interest, so I read deeper. After completing my reading, I began to think that this article read like a mini-dissertation. I found the lead author’s dissertation published in ProQuest (Perry, 2012), and it mirrored the three-author article. The second-listed author was the student’s chairperson. I don’t know the role of the third author of the study.

I began my first read of the Perry dissertation. He represented that he was extending the research of another doctoral student – Schorr (2008). Rather than bias my view of Perry’s dissertation too much, I shifted to reading Schorr. Schorr performed a phenomenological inquiry into the “essence” of a successful entrepreneur by interviewing 10 entrepreneurs. Schorr included 100 pages (!!!) of contextual description and interview notes in his Appendix H. In my opinion, he performed a qualitative study properly in that he obtained deep, rich narrative descriptions from his participants. He used those descriptions as a basis for his thematic development. He integrated quotes from the descriptions in his study so others can assess the quality of the themes. Having read Schorr, I moved back to Perry.

Well…what a difference! Let me list and comment on some problems I have with this study –

Perry makes reference to obtaining Schorr’s approval to use his interview “questions.” It always amazes me that students try to use the same requests for information (I hate the term ‘questions’) and expect the same results in a qualitative inquiry. Can a qualitative researcher extend another qualitative researcher’s work by merely mirroring the same starting point? Wouldn’t follow-up inquiries unique to each participant’s narrative, individual researcher interpretation of each interview, and researcher observational notes render an extension impossible? It might be acceptable in some disciplines to use the same starting point as another researcher in a qualitative study; however, based on the response from the participant, this is where each interview (and study) can and, most likely, will diverge. Student Note #1: Become a near-expert in your selected methodology before you begin data collection.

It appears Perry didn’t go beyond the initial interview ‘starter’ questions. Qualitative research is about making sense of deep, rich narratives provided by participants via interviews; interviews that could take hours to complete. So as not to drain the energy of the participant or the researcher, interviews often occur over a series of days or weeks. See Student Note #1.

The researcher used SurveyMonkey to distribute an open-ended ‘survey’ to participants. What?!?!? How can a researcher obtain deep, rich narrative descriptions in a Q&A format? Perry, citing McCoyd and Kerson (2006) as the source for this type of approach, wrote “an electronic open-ended survey allowed for more convenience for participants than face-to-face interviews and is just as reliable and accurate as face-to-face interviews” (pp. 51-52; emphasis added). I read the article cited and, in my opinion, Perry misrepresented the substance of the article. McCoyd and Kerson were exploring computer-mediated communication in qualitative research and were comparing email interviews with face-to-face interviews in social work. Surveys were not part of their research.

I reached out to Dr. Judith McCoyd, the co-author of the referenced study and Associate Professor at the Rutgers University School of Social Work, to get her opinion on the author’s characterization. She responded to my email with the following comments –

In the 2018 article by Perry, Rahim and Davis, it is asserted that my article with Toba Kerson “noted that an electronic open-ended survey allows for more convenience for participants, but is just as reliable and accurate as face-to-face interviews.” That is not accurate at all.

Our article compared the experience of long, multi-occasion, prolonged interview engagement by email with bereaved women to single face-to-face or telephone interviews and found that the data were much richer and more nuanced, as well as lengthier, when collected over an extended period of time and in multiple interactions tailored to explore the respondent’s earlier answers. Under NO circumstances would a one-shot Survey Monkey (or any survey method) be able to do the same thing. 

I also find the assertion of phenomenological understandings (from a survey!) unbelievable on their face. Phenomenology requires sustained and engaged interaction to gain a deep understanding of the phenomena being explored.  Qualitative research methodologists are clear about this. Again, there is no way to do that with a survey. Although the authors may have gotten some degree of detail in some of the responses to their survey, that is NOT the same thing as an interview process that is iterative and allows clarification and deepening of responses.  Narrative data is qualitative, but it can never be fully developed without some degree of interaction or iterative ability.

Small sample sizes are common in qualitative research using ethnographic methods or intensive interviews, perhaps.  However, a Survey Monkey instrument is not the same as an intensive interview protocol, regardless of the authors’ astonishingly uninformed claims. Interviews require interaction and probing to get to the heart of how the phenomena under study unfold. These involve fully exploring a respondent’s initial response in order to understand any complexity, ambivalence, or nuance more fully.  

I am distressed that my findings were misrepresented and concerned that peer reviewers did not catch this error.  Further, there are such obvious methodological problems that I am surprised that this was published.  

I feel saddened for a scholar who was not mentored well enough to know that all cited literature should be correctly portrayed. Additionally, this is a study that needed to be framed in much more humble ways. A survey of 20 may suggest common features that allowed those hair salons to thrive where others failed, but it is certainly not phenomenological, conclusive, nor generalizable.

personal correspondence with J. L. M. McCoyd, September 8, 2020

Student Note #2: Read carefully so as not to mischaracterize another’s research.

What does this all mean? The results of Perry et al. (2018), which are simply the results of Perry (2012) repackaged, should be ignored due to a lack of internal validity caused by a mischaracterization of research which led to a poor research design; specifically, failing to perform in-depth interviews.

References:

McCoyd, J. L. M., & Kerson, T. S. (2006). Conducting intensive interviews using email: A serendipitous comparative opportunity. Qualitative Social Work, 5(3), 389-406. https://doi.org/10.1177/1473325006067367

Perry, A. S. (2012). Determining the keys to entrepreneurial sustainability beyond the first 5 years (Doctoral dissertation). ProQuest Dissertations and Theses database. (UMI No. 3552459)

Perry, A., Rahim, E., & Davis, B. (2018). Startup success trends in small business beyond five-years: A qualitative research study. International Journal of Sustainable Entrepreneurship and Corporate Social Responsibility, 3(1), Article 1. https://doi.org/10.4018/ijsecsr.2018010101

Schorr, F. (2008). Becoming a successful entrepreneur: A phenomenological study (Doctoral dissertation). ProQuest Dissertations and Theses database. (UMI No. 3326847)