#1
Reference (APA):
Ross, S. M., Morrison, G. R., & Lowther, D. L. (2010). Educational technology research past and present: Balancing rigor and relevance to impact school learning. Contemporary Educational Technology, 1(1), 17–35.
Annotation
Summary:
Ross, Morrison, and Lowther (2010) provide an overview of the history and evolution of educational technology research, highlighting the tension between maintaining rigor in research design and ensuring relevance to real-world educational practice. They argue that impactful educational technology research must balance methodological strength with practical significance for educators, policymakers, and learners.
Evaluation:
The article is valuable for framing how educational technology research has historically struggled with the dual demands of rigor and relevance. Its strength lies in connecting the research community’s goals with the needs of practitioners in schools. However, the article focuses primarily on broad themes rather than specific case studies, which limits its immediate applicability to practice.
Reflection:
This reading reminds me of my own teaching and professional work, where I often have to balance rigor (measuring student outcomes through platforms like SimNet) with relevance (ensuring the tools actually help students succeed in Canvas or Engageli). The article reinforces the need to design research that is both methodologically sound and directly beneficial for classrooms.
#2
Reference (APA):
Hoepfl, M. C. (1997). Choosing qualitative research: A primer for technology education researchers. Journal of Technology Education, 9(1), 47–63.
Annotation
Summary:
Hoepfl (1997) outlines the philosophical underpinnings, strengths, and limitations of qualitative research, specifically as applied to technology education. The article serves as a primer for researchers less familiar with qualitative traditions, highlighting when and why qualitative approaches may be most appropriate.
Evaluation:
The strength of the article is its accessibility: it clearly explains qualitative methods for readers who may be new to them. It also situates qualitative research as an important counterbalance to the dominance of quantitative methods in technology education. A limitation is that it does not engage deeply with the more advanced nuances of qualitative research design, though this is consistent with its purpose as a primer.
Reflection:
As someone who values both data-driven insights and human experience, I find Hoepfl’s argument convincing. In my teaching with online technologies, I often see how qualitative feedback from students (interviews, reflections) provides insights that numbers alone cannot. This article validates the importance of collecting both voices and metrics in educational research.
#3
Reference (APA):
Cobb, P., Confrey, J., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9–13.
Annotation
Summary:
Cobb et al. (2003) introduce design experiments as a research methodology in education, aimed at both testing and refining theories in real-world classroom settings. Design experiments are iterative, involving cycles of design, implementation, analysis, and revision, with the goal of generating practical knowledge that informs both theory and practice.
Evaluation:
This article is highly influential because it bridges the gap between controlled experiments and naturalistic classroom studies. Its strength is in emphasizing iterative, theory-driven designs that are responsive to context. However, the methodology can be resource-intensive and difficult to generalize beyond specific classroom environments.
Reflection:
I connect strongly with the concept of design experiments because in my teaching with Canvas and Engageli, I often test small adjustments (like changing discussion structures or using breakout groups) and reflect on their impact. This iterative cycle aligns with design-based research principles, and I can see myself applying this approach in my own doctoral work.
#4
Reference (APA):
Randolph, J. J. (2007). Multidisciplinary methods in educational technology research and development (Chapters 1 & 2). HAMK Press.
Annotation
Summary:
In the opening chapters of his book, Randolph (2007) introduces the importance of multidisciplinary methods in educational technology research. He argues that educational technology, by its very nature, draws on multiple disciplines such as psychology, sociology, instructional design, and computer science, making methodological diversity essential.
Evaluation:
Randolph’s strength lies in clearly articulating why educational technology research cannot be confined to a single disciplinary lens. The chapters provide a useful foundation for understanding how varied methods can be integrated. However, because the material is introductory, it does not yet provide the depth that more advanced readers may seek.
Reflection:
Randolph’s points are clearly relevant to my professional experience, where integrating perspectives from IT, pedagogy, and administration is essential. For example, when migrating institutions to new technologies like M365 or implementing SimNet labs, I’ve had to weigh both technical requirements and educational impact. This reinforces the need for multidisciplinary approaches.
#5
Reference (APA):
Drost, E. A. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38(1), 105–123.
Annotation
Summary:
Drost (2011) explains the key concepts of validity and reliability in social science research, outlining the different types of validity (construct, internal, external) and reliability (test-retest, inter-rater, internal consistency). The article provides definitions, examples, and guidance on how to ensure both validity and reliability in research design.
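To make one of these concepts concrete, here is a worked example of an internal-consistency coefficient (my own illustration, not a derivation taken from the article): Cronbach’s alpha for a k-item instrument is

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_i}{\sigma^2_X}\right)
\]

where \(k\) is the number of items, \(\sigma^2_i\) is the variance of item \(i\), and \(\sigma^2_X\) is the variance of the total score. Values closer to 1 indicate that the items are measuring the same underlying construct more consistently; a commonly cited rule of thumb treats \(\alpha \geq .70\) as acceptable, though appropriate thresholds depend on the stakes of the measurement.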
Evaluation:
The article is a useful resource for researchers seeking clarity on foundational methodological concepts. Its strength lies in its systematic explanations and practical orientation. A limitation is that it serves as a reference guide rather than engaging with debates about the complexities of applying validity and reliability in real-world educational contexts.
Reflection:
This article is particularly helpful for me as I begin doctoral-level research. Ensuring validity and reliability is critical when evaluating educational technologies. For example, if I’m measuring the impact of Engageli on student engagement, my instruments must be valid and reliable for the results to be trustworthy.