Recently, I reviewed a doctoral proposal where a student cited results from a peer-reviewed article. In the article Startup Success Trends in Small Business Beyond Five-Years: A Qualitative Research Study (Perry, Rahim, & Davis, 2018), the authors described how they interviewed 20 hair salon owners in New Jersey to explore how their businesses survived beyond five years. What caught my eye in this study was a reference to using “six questions” and SurveyMonkey. This piqued my interest, so I read deeper. After completing my reading, I began to think that this article read like a mini-dissertation. I found the lead author’s dissertation published in ProQuest (Perry, 2012), and it mirrored the three-author article. The second-listed author was the student’s chairperson. I don’t know the role of the third author of the study.
I began my first read of the Perry dissertation. He represented that he was extending the research of another doctoral student, Schorr (2008). Rather than bias my view of Perry’s dissertation too much, I shifted to reading Schorr. Schorr performed a phenomenological inquiry into the “essence” of a successful entrepreneur by interviewing 10 entrepreneurs. Schorr included 100 pages (!!!) of contextual description and interview notes in his Appendix H. In my opinion, he performed a qualitative study properly in that he obtained deep, rich narrative descriptions from his participants. He used those descriptions as a basis for his thematic development. He integrated quotes from the descriptions in his study so others can assess the quality of the themes. Having read Schorr, I moved back to Perry.
Well…what a difference! Let me list and comment on some problems I have with this study –
Perry makes reference to obtaining Schorr’s approval to use his interview “questions.” It always amazes me that students try to use the same requests for information (I hate the term ‘questions’) and expect the same results in a qualitative inquiry. Can a qualitative researcher extend another qualitative researcher’s work by merely mirroring the same starting point? Wouldn’t follow-up inquiries unique to each participant’s narrative, individual researcher interpretation of each interview, and researcher observational notes render an extension impossible? It might be acceptable in some disciplines to use the same starting point as another researcher in a qualitative study; however, each participant’s responses are where the interviews (and studies) can, and most likely will, diverge. Student Note #1: Become a near-expert in your selected methodology before you begin data collection.
It appears Perry didn’t go beyond the initial interview ‘starter’ questions. Qualitative research is about making sense of deep, rich narratives provided by participants via interviews; interviews that could take hours to complete. So as not to drain the energy of the participant or the researcher, interviews often occur over a series of days or weeks. See Student Note #1.
The researcher used SurveyMonkey to distribute an open-ended ‘survey’ to participants. What?!?!? How can a researcher obtain deep, rich narrative descriptions in a Q&A format? Perry, citing McCoyd and Kerson (2006) as the source for this type of approach, wrote “an electronic open-ended survey allowed for more convenience for participants than face-to-face interviews and is just as reliable and accurate as face-to-face interviews” (pp. 51-52; emphasis added). I read the article cited and, in my opinion, Perry misrepresented the substance of the article. McCoyd and Kerson were exploring computer-mediated communication in qualitative research and were comparing email interviews with face-to-face interviews in social work. Surveys were not part of their research.
I reached out to Dr. Judith McCoyd, the co-author of the referenced study and Associate Professor at the Rutgers University School of Social Work, to get her opinion on the author’s characterization. She responded to my email with the following comments –
In the 2018 article by Perry, Rahim and Davis, it is asserted that my article with Toba Kerson “noted that an electronic open-ended survey allows for more convenience for participants, but is just as reliable and accurate as face-to-face interviews.” That is not accurate at all.
Our article compared the experience of long, multi-occasion, prolonged interview engagement by email with bereaved women to single face-to-face or telephone interviews and found that the data were much richer and more nuanced, as well as lengthier, when collected over an extended period of time and in multiple interactions tailored to explore the respondent’s earlier answers. Under NO circumstances would a one-shot Survey Monkey (or any survey method) be able to do the same thing.
I also find the assertion of phenomenological understandings (from a survey!) unbelievable on their face. Phenomenology requires sustained and engaged interaction to gain a deep understanding of the phenomena being explored. Qualitative research methodologists are clear about this. Again, there is no way to do that with a survey. Although the authors may have gotten some degree of detail in some of the responses to their survey, that is NOT the same thing as an interview process that is iterative and allows clarification and deepening of responses. Narrative data is qualitative, but it can never be fully developed without some degree of interaction or iterative ability.
Small sample sizes are common in qualitative research using ethnographic methods or intensive interviews, perhaps. However, a Survey Monkey instrument is not the same as an intensive interview protocol, regardless of the authors’ astonishingly uninformed claims. Interviews require interaction and probing to get to the heart of how the phenomena under study unfold. These involve fully exploring a respondent’s initial response in order to understand any complexity, ambivalence, or nuance more fully.
I am distressed that my findings were misrepresented and concerned that peer reviewers did not catch this error. Further, there are such obvious methodological problems that I am surprised that this was published.
I feel saddened for a scholar who was not mentored well enough to know that all cited literature should be correctly portrayed. Additionally, this is a study that needed to be framed in much more humble ways. A survey of 20 may suggest common features that allowed those hair salons to thrive where others failed, but it is certainly not phenomenological, conclusive, nor generalizable. (personal correspondence with J. L. M. McCoyd, September 8, 2020)
Student Note #2: Read carefully so as not to mischaracterize another’s research.
What does this all mean? The results of Perry et al. (2018), which are simply the results of Perry (2012) repackaged, should be ignored due to a lack of internal validity caused by a mischaracterization of prior research, which led to a poor research design; specifically, failing to perform in-depth interviews.
McCoyd, J. L. M., & Kerson, T. S. (2006). Conducting intensive interviews using email: A serendipitous comparative opportunity. Qualitative Social Work, 5(3), 389-406. https://doi.org/10.1177/1473325006067367
Perry, A. S. (2012). Determining the keys to entrepreneurial sustainability beyond the first 5 years (Doctoral dissertation). ProQuest Dissertations and Theses database. (UMI No. 3552459)
Perry, A., Rahim, E., & Davis, B. (2018). Startup success trends in small business beyond five-years: A qualitative research study. International Journal of Sustainable Entrepreneurship and Corporate Social Responsibility, 3(1), Article 1. https://doi.org/10.4018/ijsecsr.2018010101
Schorr, F. (2008). Becoming a successful entrepreneur: A phenomenological study (Doctoral dissertation). ProQuest Dissertations and Theses database. (UMI No. 3326847)