Assignment 1: Quantitative Research Manuscript Critique
In this module we have expanded our knowledge of quantitative methodology. By the due date assigned, complete and post the following research manuscript critique for the quantitative research article you selected in Module 1. Provide feedback to at least two of your peers through the end of the module.
Type of Study: Quantitative
- Research Topic:
- Purpose of the Study:
- Overarching Research Question or Theory:
- Specific Research Questions/Hypotheses:
- Quantitative research method design: (survey, experimental, quasi-experimental, pretest/posttest control group, cross-sectional, longitudinal, etc.)
- Procedure: (How was the data collected?)
- Variables: (Identify the Dependent and Independent Variables. For each variable identify the measurement scale: nominal, ordinal, continuous)
- Instrument(s) analysis: (Discuss reliability and validity of the measures included in the study.)
- Data analysis: (Identify the statistical software, if any, used in analysis of the data and the type of analyses included.)
- Consent: What type of consent, if any, was obtained from the participants?
Module 4 Overview (1 of 2)
Research Methods: Quantitative Approach
In Module 4 we focus on the essential components in designing the method and procedures for a quantitative study. Chapter 8 of your textbook, Research design: Qualitative, quantitative, and mixed methods approaches, provides an excellent review of quantitative methods, including instrumentation, experimental and survey designs, and issues related to threats to validity. Creswell (2009) also provides an example of an experimental method section on pages 167–169. You will learn more about data analysis strategies in the next research course (R7031 Methods and Analyses of Quantitative Research) in the research curriculum.
We covered issues related to selecting participants and sampling in Module 3, so in this module we will focus more on issues related to instrumentation. We have asked you to identify some potential research questions you are interested in investigating in this course. At the end of this module, you should be able to operationalize your constructs by selecting the specific variables and the specific instruments that best assess those variables.
While students sometimes create their own instruments to assess variables in dissertation studies (in some disciplines more than others), it is best to use psychometrically adequate instruments that have been validated, so that the variables of interest are assessed meaningfully and accurately. Typically, 11 essential elements should be included in the description of each instrument in the method section (Heppner & Heppner, 2004), including:
- Instrument name
- Key reference(s)
- A brief description of the construct the instrument assesses
- Number of items
- Type of items (e.g., Likert scale)
- Factors or subscales and their definitions
- Indication of the direction of scoring and what a high score means
- Reliability estimates
- Validity estimates
Among these 11 elements, reliability and validity are probably the most critical. We will review them further on the next page.
Heppner, P. P., & Heppner, M. M. (2004). Writing and publishing your thesis, dissertation, and research: A guide for students in the helping professions. Thousand Oaks, CA: Thomson/Brooks-Cole.
Module 4 Overview (2 of 2)
Reliability refers to the consistency of measurements, including measurements taken at different time intervals. The two primary ways of reporting reliability are alpha coefficients (also known as Cronbach's alpha, α, for internal consistency) and test-retest correlations (for stability). Internal consistency tells us how well the items in a scale hold together when they are designed to measure the same construct. The rule of thumb in the social sciences is that alpha coefficients should be above .70. Test-retest correlations reflect the stability of a person's scores on the same inventory over time.
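To make these two reliability estimates concrete, here is a minimal sketch (not part of the assigned readings) that computes Cronbach's alpha and a test-retest correlation using only the Python standard library. The item responses and scores are made-up illustration data, not from any study discussed in this module.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    items: one inner list of scores per item, aligned across the same respondents."""
    k = len(items)                    # number of items in the scale
    n = len(items[0])                 # number of respondents

    def variance(xs):                 # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

def pearson_r(x, y):
    """Test-retest reliability: Pearson correlation of time-1 and time-2 scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical 3-item Likert-scale responses from 5 respondents
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 5, 2, 4, 3]]
print(round(cronbach_alpha(items), 3))   # above the .70 rule of thumb

# Hypothetical total scores for the same 5 people at two time points
time1 = [10, 14, 8, 12, 9]
time2 = [11, 13, 9, 12, 8]
print(round(pearson_r(time1, time2), 3))
```

In practice, researchers rarely compute these by hand; statistical packages such as SPSS or R report them directly. The sketch only shows what the reported coefficients summarize.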
Validity refers to how accurately an inventory assesses the construct it intends to measure. Construct validity (the degree to which the scores reflect the construct you are trying to measure) is particularly important: if the instrument is not really measuring what we intend to measure, then the results and interpretations of the data cannot be meaningful. It is often recommended that you use multiple dependent variables (Cook & Campbell, 1979). For example, in outcome research we can use behavioral observations, self-reports, reports of others, and raters' ratings as multiple ways to operationalize the construct of outcome.
This brings us back to the importance of using validated (published) instruments of which reliability and validity have been examined and evaluated. Finding out how your constructs of interest have been operationalized and assessed in the past is one of the critical goals of conducting a literature review.
In addition to the assigned reading, please also read the following two articles. The first, by Yegan Pillay (2005), gives an excellent example of a method section for a quantitative study. The second, by Patrick et al. (2008), by contrast does not provide adequate detail in its method section. Both articles can be located through EBSCO/PsycINFO in the Argosy University online library resources.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
Pillay, Y. (2005). Racial identity as predictors of the psychological health of African American students at a predominantly White university. Journal of Black Psychology, 31, 46–66.
Patrick, M. E., Rhoades, B. L., Small, M., & Coatsworth, J. D. (2008). Faith-placed parenting intervention. Journal of Community Psychology, 36, 74–80.