Pilot Studies…

I recently had the opportunity to consult about the use of pilot studies. A colleague brought to my attention a student-developed survey instrument that (allegedly) measured three dimensions –

  • Perceived Usefulness
  • Perceived Ease of Use
  • Intention to Use

These three dimensions are normally associated with the Technology Acceptance Model (TAM), developed by Davis (1989) and modified since then.

The items measuring these dimensions are often modified to focus on a specific technology. However, the student did something I had never seen before: He measured six different types of technology under the banner of the “technologies of the next industrial revolution.” Rather than use a validated instrument as a base and modify the subject of each item, he wrote new items and allocated one item to each of the six types of technology.

Rather than go deeper into psychometrics and why the student’s approach was flawed, let’s get back on topic.

The purpose of a pilot study is to examine the feasibility of an approach before a large-scale study is performed. Pilot studies are often performed in both qualitative and quantitative research. Since the student created a new instrument, a pilot study was warranted. The student should have surveyed between 50 and 100 people and tested whether the a priori dimensional structure (the face validity claim) held up under Exploratory Factor Analysis (EFA).
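
For the curious, here is a minimal sketch of what that pilot check could look like in Python using the factor_analyzer package. The file name and item columns are hypothetical; this illustrates the workflow, not the student’s actual analysis.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

    # Hypothetical pilot data: ~50-100 respondents, one column per item
    pilot_df = pd.read_csv("pilot_responses.csv")

    # Check the data are factorable before extracting anything
    chi_sq, p_value = calculate_bartlett_sphericity(pilot_df)
    kmo_per_item, kmo_overall = calculate_kmo(pilot_df)
    print(f"Bartlett p = {p_value:.4f}, overall KMO = {kmo_overall:.2f}")

    # Extract the three hypothesized TAM dimensions with an oblique rotation
    efa = FactorAnalyzer(n_factors=3, rotation="oblimin")
    efa.fit(pilot_df)

    # Items loading < .50 on their intended factor are candidates for revision
    loadings = pd.DataFrame(efa.loadings_, index=pilot_df.columns)
    print(loadings.round(2))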

What did the student do? He surveyed 13 people to ask whether the items were valid based on the wording?! What type of validity was he seeking? Face validity? Language validity?

Cut to the end…the survey instrument failed confirmatory factor analysis (Tucker-Lewis Index < .90), an exploratory factor analysis resulted in the elimination of three items due to loadings < .50, and the resultant 15-item survey produced three dimensions that made no sense (i.e., the a priori face validity was incorrect). What was left was an attempt to salvage the student’s research…an endeavor I dislike but understand why it’s done.
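
For readers who have not run one, a confirmatory factor analysis fit check might look like the sketch below, using the semopy package. The measurement model and item names are hypothetical; the point is that indices such as the Tucker-Lewis Index fall out of the fitted model.

    import pandas as pd
    import semopy

    # Hypothetical survey data with three TAM-style factors
    data = pd.read_csv("survey_responses.csv")

    # lavaan-style measurement model; item names are placeholders
    desc = """
    PU   =~ pu1 + pu2 + pu3
    PEOU =~ peou1 + peou2 + peou3
    IU   =~ iu1 + iu2 + iu3
    """
    model = semopy.Model(desc)
    model.fit(data)

    # calc_stats reports TLI, CFI, RMSEA, etc.; TLI < .90 signals poor fit
    print(semopy.calc_stats(model).T)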

Where were the red flags? I see three –

  • Attempting to measure perceived usefulness, perceived ease of use, and intention to use across six different types of technology with one item each. (Buy a book! I prefer Furr, but also have Miller and Lovler)
  • Failing to impanel a group of experts to evaluate the face validity of each item
  • Failing to understand the purpose of a pilot study

Who’s to blame? The three issues mentioned above fall directly in the lap of the faculty and university. Not everybody has the skills and knowledge to advise doctoral students. Some need to focus on their strength: teaching in their domain.

Measuring perception…

A week ago, a doctoral business student was referred to me for a discussion about a specific research design. While the primary research question was somewhat convoluted, once edited, the focus was on measuring the perception of a behavior in one group through the lens of another (the sample), then relating that perception of the group to the sample’s perception of project performance. If one were to ask a survey participant a single question, it would be “What do you think the relationship is between A and B?” This type of question falls (loosely) into the category of (public) opinion research, not business research.

Perceptions (and attitudes) are often studied using a qualitative research methodology. The perception of something or someone is generally explored via interviews, where the researcher collapses groups of thoughts into themes that answer the research question. This type of approach is covered in some depth in many research methods and design textbooks.

When it comes to quantitative research, though, measuring perception is focused on the self-assessment of a sample. For example, the Perceived Stress Scale measures the perception of stress, and the Buss-Durkee Hostility Inventory measures aspects of hostility and guilt; both instruments were developed by psychologists.

Using a subject’s perception of another person is problematic due to cognitive bias, which involves systematic errors in one’s perception of others. Within cognitive bias, there are three groups –

  • Fundamental Attribution Error, which involves labeling people without sufficient information, knowledge, or time
  • Confirmation Bias, widely referred to as a common judgmental bias. Research has shown that people trick their minds into focusing on a small piece of information that confirms an already-developed belief
  • Self-serving Bias, which involves perceiving a situation in a manner that places the one perceiving in a more positive light

How would you measure validity in the proposed study? Have the sample assess behaviors in people, measure the behaviors of those people, compare the two assessments for accuracy, and factor that accuracy into the study? That seems like a long way to go, and all you are really doing is measuring the assessment ability of the sample.
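
If someone insisted on that route, the core of it reduces to an agreement check like the sketch below. The data file and column names are hypothetical; note that the correlation only quantifies the raters’ accuracy, which is exactly the objection.

    import pandas as pd
    from scipy.stats import pearsonr

    # Hypothetical data: one row per (rater, target) pair
    df = pd.read_csv("ratings.csv")

    # Agreement between what raters perceived and what was observed
    r, p = pearsonr(df["perceived_behavior"], df["observed_behavior"])
    print(f"Perception-behavior agreement: r = {r:.2f} (p = {p:.4f})")
    # A low r would mean the sample's perceptions cannot stand in for
    # the group's actual behavior -- the fundamental issue noted above.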

I don’t know who’s at fault here for not identifying this type of fundamental issue before the student’s research proposal was developed. It may have been identified by faculty along the way and ignored by the student. Or perhaps faculty didn’t really understand what the student was proposing because of how the research question was formed.

Examining influence in a qualitative study?

It’s very important to use the correct terminology associated with a research method and design. For example, the word influence is widely associated with quantitative research. Influence can be measured by examining the change in the Y variable when the X variable is manipulated, or by involving a third variable (Z). Quantitative research is more scientific, less subjective, repeatable, and generalizable. Qualitative research rests on the knowledge and skill of the researcher. There are times when an experienced researcher will explore influence in a qualitative study, but those instances are few and far between, generally related to specific disciplines (e.g., medicine, social work), and supported with significant academic research (see here, here, and here). I don’t recommend emerging scholars perform qualitative research. Besides skill, the time needed to complete a qualitative study is much longer than the time needed to complete a quantitative study.
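
To make the quantitative meaning concrete, influence is typically operationalized as a regression coefficient, as in the minimal sketch below. Variable names and the data file are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data with an outcome Y, a predictor X, and a third variable Z
    df = pd.read_csv("study_data.csv")

    # The coefficient on X estimates the change in Y per unit change in X
    simple = smf.ols("Y ~ X", data=df).fit()
    print(simple.params["X"])

    # Involving a third variable: Z as a covariate, X:Z as an interaction
    third = smf.ols("Y ~ X + Z + X:Z", data=df).fit()
    print(third.summary())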

Grant (2019) is an example of why emerging scholars shouldn’t do qualitative research. This emerging scholar explored the influence of leadership behaviors on two dimensions: employee engagement and collaboration (the organization is not germane to this discussion). To perform this study, the emerging scholar created a 7-item open-ended survey and distributed it anonymously to 10 people in an organization exceeding 3,800 people. The emerging scholar would then interpret the responses and categorize them to answer the following two research questions –

  • What leadership styles and behaviors are being utilized at [organization]?
  • What is the influence of existing leadership styles and behaviors on employee engagement and collaboration?

Yin (2018) describes five situations where a single case study would be appropriate: critical, unusual, common, revelatory, or longitudinal (pp. 48-50). In addition, Yin describes two types of single case studies: holistic and embedded (pp. 51-53). Reviewing the dissertation, the researcher appears to be attempting to build a common, holistic single-case study. Common, because leadership is an everyday situation; holistic, because the organization appears to have a single purpose. However, a case study focuses on “how” or “why” a situation occurred (perhaps leadership style evolution), not on “what” style is prevalent or which specific styles influence two outcomes. And with an anonymous survey, there is no way to follow up with a participant to clarify their responses. To quote a colleague –

Who’s the researcher? Carnac the Magnificent?

Name withheld

As a result, the research method (QUAL) and design (case study) don’t appear to align with the research questions. The results of the study should be ignored. However, I wanted to discuss the themes identified by Grant –

  • A collaborative, or transformational, leadership style is present
  • Organizational leaders are engaging
  • Unfair hiring practices have become standard

First, are collaborative and transformational the same? They’re close, but I believe some scholars would say they’re different. Second, what do the organization’s hiring practices have to do with leadership in the organization? Plus, how can one generalize to an organization of 3,800+ from a sample of 10? Do the math: with n = 10, the worst-case 95% margin of error is nearly 31 percentage points! Even if 90% of the sample described the organization’s leaders as collaborative, as interpreted by the researcher, the 95% CI would run from roughly 60% to well past 100%. What are the other 40%? Non-collaborative?
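
The arithmetic is below: the normal-approximation margin of error for a proportion, z * sqrt(p(1-p)/n), at its worst case (p = .5) with n = 10. A quick sketch:

    import math

    def margin_of_error(p, n, z=1.96):
        """95% margin of error for a sample proportion (normal approximation)."""
        return z * math.sqrt(p * (1 - p) / n)

    # Worst case (p = .5) with n = 10: ~31 percentage points
    moe = margin_of_error(0.5, 10)
    print(f"{moe:.3f}")  # 0.310

    # Applied to a 90% result, the interval spills past 100%
    print(f"[{0.9 - moe:.0%}, {0.9 + moe:.0%}]")  # [59%, 121%]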

References:

Grant, R. M. (2019). Investigating the influence of leadership behaviors on employee engagement and collaboration in a Federal organization (Doctoral dissertation). ProQuest Dissertations & Theses Global: The Humanities and Social Sciences Collection. (22615969)

Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed.). SAGE Publications.

Ethnicity: M = 1.26, SD = .529?

I understand that one has to “numericize” categories for quantitative analysis, but any student and faculty member should understand that the resulting numbers mean nothing when compared (see Figure 1).

Figure 1: Barplot of Ethnicity (Race) with Distribution overlay (Deonarinesingh, 2019, p. 59).

A chairperson has to perform a lot of reading when reviewing a student’s dissertation. A committee member can hopefully help. But this type of error appeared on every chart in this study’s Chapter 4, regardless of the type of variable (e.g., categorical, interval). Did the faculty not know, or did they simply not read the study?

Student Note: Understand your variables and how best to display them. Don’t rely on your committee; they might not know or remember.
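
As a concrete illustration of the note above, here is a minimal sketch of matching the summary to the variable type; the data file and column names are hypothetical.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical Chapter 4 data
    df = pd.read_csv("chapter4_data.csv")

    # Nominal variable: report counts/percentages; a mean or SD is meaningless
    print(df["ethnicity"].value_counts(normalize=True).round(2))
    df["ethnicity"].value_counts().plot(kind="bar")
    plt.show()

    # Interval variable: a mean and SD are meaningful here
    print(df["age"].agg(["mean", "std"]).round(2))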

This study will return in a later post…stay tuned.

Reference:

Deonarinesingh, S. (2019). The effect of cultural intelligence upon organizational citizenship behavior, mediated by openness to experience (Doctoral dissertation). ProQuest Dissertations & Theses Global: The Humanities and Social Sciences Collection. (13880805)