#1
Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156–167. https://doi.org/10.1016/j.compedu.2013.02.019
Summary
In this article, Erhel and Jamet (2013) examine how goal framing (learning vs. entertainment) and feedback influence both the cognitive and motivational outcomes of digital game-based learning (DGBL). They use a multimedia learning game called ASTRA and conduct two experiments with university students to better understand how instructions shape attention and how feedback supports performance. In Experiment 1, they find that students instructed to “learn” perform significantly better on comprehension tasks than those asked to “play,” despite both groups reporting similar motivation levels. Experiment 2 introduces Knowledge-of-Correct-Response feedback, which improves performance across conditions and reduces the gap created by the initial instructional framing. Throughout the study, the authors draw on Cognitive Load Theory (CLT), the Cognitive Theory of Multimedia Learning (CTML), and Flow Theory to explain how instructional cues and feedback direct attention, support information processing, and influence perceived engagement. Their overall conclusion is that digital games are most effective when instructional scaffolds—clear goals and structured feedback—are intentionally designed into the learning experience.
Evaluation
As I reviewed this study, I found its theoretical grounding to be one of its strongest features. The authors intentionally align the design of ASTRA with CTML and CLT principles, integrating text, visuals, and guided prompts to reduce extraneous load and support comprehension. Their literature review also situates the study well within broader DGBL research by highlighting the inconsistent findings across the field and arguing that these inconsistencies stem from focusing too heavily on whether games “work” rather than examining which design elements contribute to learning.
At the same time, I identified several limitations—many of which I explored in detail in my Critical Review of Research paper. Although the authors reference Flow Theory and briefly acknowledge Self-Determination Theory, they do not meaningfully incorporate motivational constructs such as autonomy, competence, or intrinsic interest. Their motivational measurement relies exclusively on self-report questionnaires based on the 2×2 Achievement Goal Framework, and they do not report reliability statistics such as Cronbach’s alpha, making it difficult to determine how consistently the scale functioned. In addition, their operationalization of motivation is narrow and lacks behavioral indicators that would have strengthened their claims.
Methodologically, the two-experiment structure is appropriate for isolating specific variables, but the short, laboratory-based sessions limit ecological validity. As I noted in my CRR, the homogeneity of the sample—French university students of similar age and background—also restricts generalizability to more diverse populations such as adult learners, K–12 students, or corporate trainees. Although the statistical analyses are appropriate, the absence of effect-size reporting weakens the practical interpretability of the findings. Despite these issues, I still view the study as an important and well-designed contribution to understanding how instructional scaffolds support learning within DGBL environments.
Reflection
This article is directly relevant to my doctoral research interests in motivation and engagement in online learning environments. As I worked through the study, I found myself consistently applying its insights to both my instructional design decisions and my classroom practices. In my CRR paper, I emphasized that instructional framing and feedback are not peripheral elements—they fundamentally shape how learners allocate attention, regulate their understanding, and experience engagement during technology-mediated tasks. The authors’ findings reinforce the value of being intentional with the cues I provide in Canvas modules, Engageli sessions, assignment prompts, and weekly learning pathways. When learners clearly understand that the goal is mastery rather than entertainment, they tend to engage more deeply without experiencing a decline in motivation—something that aligns with my own teaching observations.
The feedback findings in Experiment 2 also resonated strongly with me. In my teaching, I have seen how structured, timely feedback—whether through weekly discussions, Excel labs, or project-based work—becomes metacognitive support rather than simple correction. The fact that feedback reduced the instructional gap between the “learn” and “play” groups reinforces my belief that feedback is a central component of both engagement and persistence.
From a research standpoint, this article reinforces the importance of examining cognitive and motivational dimensions of engagement simultaneously. I was particularly struck by the motivational gaps in the study, which I highlighted in my CRR paper. These gaps help clarify the direction I intend to take in my dissertation work: integrating motivational constructs more deeply—especially those from Self-Determination Theory—to understand how specific design elements within online learning environments influence learner autonomy, competence, and persistence. Ultimately, this study strengthens my view that digital tools—whether games, Engageli interactions, or Canvas modules—are only effective when paired with purposeful instructional design that provides clear goals, structured guidance, and meaningful feedback.
#2
Drost, E. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38(1), 105–123.
Summary
Drost (2011) provides a comprehensive overview of validity and reliability as foundational concepts in social science research, explaining how they ensure the rigor, credibility, and trustworthiness of empirical findings. The article differentiates among major types of validity—content, construct, criterion, internal, and external—and outlines how each contributes to the accuracy of measurement and the soundness of causal claims. Drost also reviews key forms of reliability, including test–retest, inter-rater, parallel-forms, and internal consistency reliability, explaining how each index reflects the stability and consistency of a measurement tool. Throughout the paper, the author emphasizes the interdependence of validity and reliability while illustrating common threats such as measurement error, poorly defined constructs, and inconsistent administration procedures. By situating these concepts within practical research contexts, the article helps clarify why methodological transparency and rigorous instrument design are essential in social science inquiry.
Evaluation
As I reviewed this article, I found its clarity and organization to be two of its strongest features. Drost synthesizes complex measurement concepts in a way that is accessible without oversimplifying the underlying theory. The distinctions she draws among different types of validity and reliability are particularly useful for students and researchers who need to operationalize constructs in empirical studies. I also appreciated how the article highlights frequent misunderstandings—such as the assumption that reliability alone guarantees validity—and uses concrete examples to correct them.
At the same time, I noticed that while the conceptual explanations are thorough, the article offers limited guidance on selecting validity or reliability strategies for specific research designs. For instance, although Drost describes internal consistency measures such as Cronbach’s alpha, she does not discuss the assumptions behind these statistics or how researchers should interpret variations across subscales. Likewise, although she differentiates between internal and external validity, the article provides fewer examples of how researchers can proactively strengthen these areas in real-world studies. Despite these limitations, the paper delivers a strong foundational overview that is highly valuable for understanding the mechanics of high-quality research design.
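To make this limitation concrete for myself, I find it helpful to recall the textbook formula for Cronbach's alpha (a standard formulation, not reproduced in Drost's article):

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{i}}{\sigma^2_{X}}\right)
\]

where k is the number of items, σ²_i is the variance of item i, and σ²_X is the variance of the total score. Because alpha rises as the number of items increases and equals true reliability only when the items are essentially tau-equivalent, a high value on one subscale and a lower value on another cannot be compared at face value. That interpretive guidance is exactly what the article leaves out.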
Reflection
This article directly supports my doctoral research and my growth as a scholar-practitioner by deepening my understanding of how validity and reliability underpin the integrity of educational research. As I design CRR papers, annotated bibliographies, and eventually my dissertation study, I need to be precise about how constructs such as motivation, engagement, autonomy, and persistence are defined and measured. Drost’s explanation of construct validity helps reinforce why I must ensure that the instruments I select genuinely capture the psychological constructs I aim to study—especially in online and blended learning contexts where motivation can manifest differently across learners.
I also found myself reflecting on the role of validity and reliability in my professional practice as an educator. When I evaluate student engagement patterns in Canvas, analyze participation data from Engageli sessions, or interpret assessment results, I need to be mindful of how consistently and accurately these tools measure what I believe they are measuring. Drost’s discussion of measurement error reminds me that learning analytics often appear objective but can be influenced by missing data, system limitations, or inconsistent student behavior.
Overall, this article reinforces the importance of aligning my future dissertation research with rigorous methodological standards. It encourages me to be intentional about instrument selection, transparent about measurement limitations, and thoughtful about how validity and reliability shape the credibility of my findings—especially as I explore how online instructional design influences learner motivation and engagement.
#3
Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., … Mong, C. (2007). Using peer feedback to enhance the quality of students' online discourse. Journal of Computer-Mediated Communication, 12(2), 412–433.
Summary
This article investigates how structured peer feedback influences the quality of students’ online discussion postings in a graduate-level online course. Grounded in socio-cognitive learning theory and supported by literature on formative assessment, the authors argue that high-quality feedback is essential for fostering higher-order thinking in online environments. They designed an exploratory case study examining three research questions: whether peer feedback improves the cognitive level of discussion posts, how students perceive receiving peer feedback, and how they perceive giving it. Using Bloom’s taxonomy to evaluate posting quality, combined with surveys and interviews to assess perceptions, the authors found modest improvements in cognitive depth for some students and strong positive perceptions of peer feedback overall. Students reported that giving feedback promoted reflection and helped them internalize analytical criteria, although the actual cognitive improvement across postings remained limited. The authors conclude that peer feedback shows promise for enhancing online discourse but requires careful implementation, including training and well-designed prompts.
Evaluation
This study offers a well-designed exploratory examination of peer feedback, but it has clear methodological limits. A major strength is the use of multiple data sources (rubric-based scoring, surveys, and interviews) to triangulate findings. The authors also acknowledge the challenge of objectively scoring cognitive depth and attempt to mitigate it through inter-rater reliability procedures. However, the study relies on a convenience sample of 15 graduate students from a single course, which significantly restricts generalizability. The Bloom's taxonomy rubric, while theoretically appropriate, is applied in a way that restricts score variability: most postings clustered at the lower levels of the taxonomy, making shifts in cognitive depth difficult to detect. Delays of up to two weeks in delivering peer feedback further compromise internal validity and weaken claims about the effect of feedback on posting quality. Additionally, the conceptual framework, though grounded in socio-cognitive theory, is not fully integrated; relationships among constructs such as scaffolding, higher-order thinking, and peer interaction are implied rather than explicitly modeled. Nonetheless, the study remains valuable as an early empirical exploration of peer feedback in online environments.
Reflection
This article is highly relevant for understanding how structured peer feedback may support deeper cognitive engagement in online learning environments, especially those relying on discussion forums. Its insights reinforce the importance of scaffolding, rubric-based evaluation, and clear expectations for students—elements directly applicable to online courses I teach and to ongoing research in educational technology. The study also highlights practical challenges that resonate with current instructional contexts, such as inconsistent quality of peer comments, student discomfort with evaluative roles, and the difficulty of reliably assessing cognitive depth. These findings connect well with broader themes in the literature on formative assessment, self-regulated learning, and cognitive presence, offering a foundation for future inquiry into how digital tools and instructional design can better support higher-order thinking. This article deepens my understanding of both the potential and the limitations of peer feedback as an instructional strategy, informing future work in designing effective online discussion frameworks.