EDU7002 Research Methods


 * **EDU7002-8 Educational Research Methodology**
 * **Assignment 6: Compare and Contrast Qualitative and Quantitative Methods**
 * Instructor feedback: Stephen – you are a breath of fresh air! Wonderful work!
 * Another excellent example of how to do it right. I could not find anything but great information and writing. Have a wonderful holiday!

=Qualitative vs. Quantitative Methods: Compare and Contrast=
My emerging prospective research topic is to determine the impact of technological tools on learner satisfaction, participation, and perceived learning in an online, instructor-led professional development environment. Because a research method has not yet been determined, the purpose of this paper is to propose several research approaches aligned with the prospective research topic, discuss how they align, and then compare and contrast them to identify the advantages and disadvantages of each.

In the environment in which I propose to conduct my research, it is not possible to randomly assign learners to control and test groups, since learners purchase the class appropriate to their professional development needs and other motivations personal to each student. Similarly, whether a student takes a technology course in a traditional, face-to-face environment or in a digital, live virtual classroom may not be entirely at the learner's discretion, and it is in no way determined by the company that provides the class or by the instructor who teaches the material. For this reason, a true experimental design to determine cause and effect is not possible.

The courses taught by this US-based technology company are all technical in nature and provide the learner professional development in a needed or desired technology. Each course has a student and activity guide specific to that course, and the instructor is required to cover the guide fully whether the course is held in a traditional or a digital classroom. Each course shares the same teaching staff, who use the same slides to present the subject. Each course offers the same labs, giving learners hands-on experience with the technology they are learning, run on the same machines and supported by the same technical staff.
Whether the course is delivered traditionally or digitally, the lab machines are always accessed remotely. Each course concludes with the same evaluation, which uses a 5-point Likert scale to capture the student's perceptions of the instructor's skills; the relevance, accuracy, and ease of use of the course materials; the technical and physical environment; and the student's overall satisfaction with the course.

The primary difference is the medium through which the material is delivered. In the traditional classroom, instructors use slides to present the subject face-to-face; no asynchronous communication tools are used. In the digital classroom, instructors present the same slides over the internet while using a conference call for the audio portion of the presentation. Learners can ask questions or make comments over the phone or through a chat window. Individuals in the digital classroom, both instructor and learners, can present video feeds of themselves through a standard webcam. The instructor in this environment also has access to polls, sets of questions presented to the learners that they can answer individually to assess understanding or gather feedback.

=The Approaches=
For the proposed study, the independent variable would be the technology emphasized during a particular module, and the dependent variables would be learner satisfaction, learner participation, and perceived learning during that module.

Case Study Approach
In a case study the research is centered on “gaining an in-depth understanding of particular phenomena in real-world settings” (Blichfeldt & Andersen, 2006, “Similarities,” para. 2). This case study will examine an online, instructor-led professional development environment in two phases. In the first phase, available historical data will be categorized by course and delivery method to determine whether there are differences in satisfaction and perceived learning based on delivery method. The responses to two open-ended questions on the evaluation, “Would you recommend training to others? Why or why not?” and “Suggest how we could improve your satisfaction with the course,” will be analyzed and categorized to cluster factors of satisfaction and dissatisfaction. In the second phase, learners will be surveyed for a period of time to validate the factors that increase learner satisfaction, in-class participation, and perceived learning.
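The phase-one grouping step can be sketched in a few lines of Python. The course names and Likert scores below are purely illustrative assumptions, not real evaluation data; the point is only to show how historical records could be grouped by delivery method so mean satisfaction can be compared across media.

```python
from statistics import mean

# Hypothetical historical evaluation records:
# (course, delivery method, overall satisfaction on the 5-point Likert scale).
records = [
    ("Networking 101", "traditional", 4), ("Networking 101", "digital", 5),
    ("Networking 101", "traditional", 5), ("Networking 101", "digital", 4),
    ("Security 200", "traditional", 3), ("Security 200", "digital", 4),
]

def mean_by_delivery(records):
    """Group Likert scores by delivery method and return mean satisfaction per method."""
    groups = {}
    for course, delivery, score in records:
        groups.setdefault(delivery, []).append(score)
    return {delivery: mean(scores) for delivery, scores in groups.items()}

print(mean_by_delivery(records))
```

The same grouping, keyed on course instead of delivery method, would support the per-course categorization the phase also calls for.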

Quasi-Experimental Approach
The population of this study does not allow for randomizing learners into control and experimental groups, nor is it possible to block certain technologies for some members of a class and not others, so a true experimental design is not possible. For a set of multiple-day digital classes, a validated survey will be presented to learners at the end of each day. Selected classes will be chosen to receive the experimental treatment, in this case an expanded use of either the video component or polling on a specific day. A nonrandomized control group pretest-posttest design will be conducted, with the findings of the control group's surveys compared to the experimental group's surveys before treatment to determine whether there are any significant differences between the groups. Using the pretest scores as a covariate in an analysis of covariance (ANCOVA) minimizes error variance and reduces systematic bias while identifying differences between the experimental and non-experimental surveys, to determine which tool, if any, fosters greater satisfaction, participation, and perceived learning.
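The ANCOVA idea here can be illustrated as a regression of posttest scores on a group indicator plus the pretest covariate, so the group coefficient estimates the treatment effect adjusted for pretest differences. This is a minimal stdlib-only sketch; all scores are made-up illustrative Likert means, and a real analysis would use a statistics package.

```python
def solve(a, b):
    """Solve a small linear system a.x = b by Gaussian elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ancova_effect(groups, pretests, posttests):
    """Fit posttest = b0 + b1*group + b2*pretest via the normal equations."""
    X = [[1.0, g, p] for g, p in zip(groups, pretests)]
    xtx = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
    xty = [sum(X[i][r] * posttests[i] for i in range(len(X))) for r in range(3)]
    return solve(xtx, xty)

groups    = [0, 0, 0, 1, 1, 1]                 # 0 = control class, 1 = treatment class
pretests  = [3.0, 4.0, 5.0, 3.0, 4.0, 5.0]     # pre-treatment satisfaction means (illustrative)
posttests = [2.9, 3.7, 4.5, 3.9, 4.7, 5.5]     # post-treatment satisfaction means (illustrative)

b0, b1, b2 = ancova_effect(groups, pretests, posttests)
print(round(b1, 3))  # b1 is the pretest-adjusted treatment effect
```

Because the group coefficient is estimated with pretest held constant, classes that happened to start more satisfied do not inflate the apparent effect of the expanded video or polling use.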

Paired Comparison with Separate-Sample Pretest-Posttest Control Group Approach
The population of this study does not permit randomizing learners, but internal validity can be strengthened by using two quasi-experimental designs in conjunction with each other (de Anda, 2007). In the first design, learners serve as their own controls: the difference between each individual's pretest and posttest satisfaction scores is calculated, and those differences are used to determine whether there are significant differences between pretest and posttest for each class. The second design mitigates the threat of history by emphasizing different technologies at different times in a sequentially staggered approach, whereby the posttest of one group is collected at the same time as the pretest for the next group. Each pretest will be compared with the previous pretest to determine the equivalency of the two groups. This design allows the treatment to be given to all learners in a way that increases the generalizability of the findings while also adding to the validity of the research.

=Compare and Contrast Approaches=
This section discusses the advantages and similarities of the approaches, internal validity, external validity, and the amount of work involved in each.

Advantages and Similarities
The primary advantages of each approach differ. The case study can draw on large amounts of historical data to determine specific factors of learner satisfaction and dissatisfaction. The quasi-experimental approach has good internal and reasonable external validity and can focus on whether specific technologies foster greater satisfaction, in-class participation, and perceived learning. The paired comparison with separate-sample pretest-posttest control group approach has the advantage of larger experimental and control groups while still being able to determine whether specific technologies foster satisfaction, participation, and perceived learning.

The approaches are similar in that each requires a survey to be identified, or developed and validated, before data gathering can occur; the case study, however, has a large amount of existing data that can be used to develop a survey for that approach. The instructors in the affected classes will need to be motivated, and trained in what is required, in order to obtain valid and reliable results.

Internal Validity
One difference between the approaches concerns internal validity, or the ability to determine cause and effect from the data. The case study allows a determination of whether past students of a single US-based technology company were more satisfied with a traditional or a virtual medium of delivery, and internal validity will be strengthened if, in phase two, current students confirm the historical findings and provide additional information about what factors may contribute to them. The best this approach can do is infer that certain factors cause learner satisfaction or perceived learning through logical reasoning and discussion of the theoretical framework; the statistics will not be able to ‘prove’ relationships in the data (Blichfeldt & Andersen, 2006; Edgington, 1966). The quasi-experimental approach includes a control group, so equivalency between the two groups can be assessed, and any differences between control and experimental groups lend greater internal validity to the findings. The control group also allows for statistical adjustment so that “a close approximation” (Edgington, 1966, p. 487; see also Wright, 2006) of randomized results can be obtained.
Finally, de Anda (2007) proposed combining alternating-treatments and multiple-baseline designs, in which internal validity is improved by using each participant as his or her own control, calculating the individual differences between pretest and posttest, and using a paired t-test to identify significance. By also ensuring that groups are sequentially staggered, with one group taking the posttest while another takes the pretest, the threat of history is mitigated, and confounding variables can be identified by an analysis of covariance (ANCOVA).

External Validity
None of the approaches presented involves randomized subjects, because of the environment in which this study will be conducted; this eliminates the possibility of determining causal relationships between the expanded use of a specific technology and an increase or decrease in learner satisfaction, participation, and perceived learning. External validity can be questioned in nonrandomized groups because the lack of control does not provide the rigor to state unequivocally that treatment A caused effect B, which lessens the ability to generalize the findings to a larger population. True experiments in the behavioral sciences are rare because random samples of a population are often “so specific as to be of little interest” (Edgington, 1966). Even though a lack of randomization may prevent authors from determining that A causes B, it does not prevent them from presenting logical inferences to persuade their audience that the findings of their research should be generalizable to a given population.

Work to Be Done
In all three approaches there is much work to be done. In the case study, the historical data must be collected, grouped, and analyzed. Three years of open-ended responses need to be collected, categorized, and analyzed for positive or negative affect. Factors will be determined from those categories, and a survey instrument will be created, and validated, to measure the presence of satisfiers and the absence of dissatisfiers. Instructors will need to be informed of the procedure for accessing the survey online and instructed to collect the data. The data can then be analyzed to verify the earlier findings, to determine which factors are significant for satisfaction, and to determine whether there is a significant difference between delivery methods among the factors.

For the quasi-experimental approach, best practices in the use of the experimental technologies will be researched and made into a presentation for instructors. A survey regarding students' satisfaction, activity, and perceived learning will be created and validated to determine the differences in those factors during the experiment. Each instructor will have to 'buy in' to presenting classes in the normal way while using the experimental technologies, following the best practices, on specific days to provide the experimental treatment, and to ensure that the survey is accessed and completed after both non-experimental and experimental modules each day. Many of the steps in the quasi-experimental approach will be the same for the paired comparison with separate-sample pretest-posttest control group approach. In this approach, however, the instructors will have to administer the surveys after each module, and a schedule will be presented to each instructor specifying which experimental technology should be emphasized for a given module.
=Conclusion=
Each of the three approaches presented aligns with the prospective research topic. Each allows for the collection of interesting data that is internally consistent and may be generalizable to other institutions that teach adults over the internet. Each requires a good amount of work and the cooperation of many others to reach fruition.


=References=
 * Blichfeldt, B. S., & Andersen, J. R. (2006). Creating a wider audience for action research: Learning from case-study research. //Journal of Research Practice, 2(1)//, Article D2. Retrieved from http://jrp.icaap.org/index.php/jrp/article/view/23/43
 * de Anda, D. (2007). Intervention research and program evaluation in the school setting: Issues and alternative research designs. //Children & Schools, 29(2)//, 87-94. Retrieved from ERIC database. (EJ762838)
 * Edgington, E. S. (1966). Statistical inference and nonrandom samples. //Psychological Bulletin, 66(6)//, 485-487. Retrieved from http://homepage.mac.com/psychresearch/Sites/site2/psy779readings/Edgington1966.pdf
 * Leedy, P. D., & Ormrod, J. E. (2010). //Practical research: Planning and design//. Upper Saddle River, NJ: Merrill.
 * Wacker, D., McMahon, C., Steege, M., Berg, W., Sasso, G., & Melloy, K. (1990). Applications of a sequential alternating treatment design. //Journal of Applied Behavior Analysis, 23(3)//, 333-339. doi:10.1901/jaba.1990.23-333
 * Wright, D. B. (2006). Comparing groups in a before-after design: When //t// test and ANCOVA produce different results. //British Journal of Educational Psychology, 76//, 663-675. doi:10.1348/000709905X52210