Article: Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156–167. https://doi.org/10.1016/j.compedu.2013.02.019

Reviewer: Anthony Kaczoroski, Central Michigan University

Course: EDU800 – Research in Educational Technology

Date: November 2, 2025

A. Problem

1. Clarity of the Research Problem: The authors clearly identify the central issue they aim to study—whether digital game-based learning (DGBL) improves learning and motivation (Erhel & Jamet, 2013). They observe that although educational games have become increasingly popular, research findings on their effectiveness remain inconsistent. This inconsistency creates uncertainty about when and how games truly enhance learning outcomes.

To address this problem, the authors examine how different types of instructional framing—specifically telling participants to learn versus to have fun—and the presence or absence of feedback affect learning results. The problem is clearly stated, logically structured, and well aligned with current discussions in educational research on optimizing the design and implementation of digital game-based learning.

2. Need and Educational Significance: This topic is important because many teachers and trainers are trying to use digital games in education, yet we still do not fully understand what makes them successful. By focusing on specific factors like instruction type and feedback, the authors are moving beyond just asking “Do games work?” to “What parts of game design actually make a difference?” That kind of question has practical value for educators who design lessons or training programs.

3. Researchability of the Problem: The research question is clear and testable. The authors measure learning outcomes such as recall and comprehension, and they use motivation scales to compare how participants respond to different types of instructions. Because the study involves measurable variables that can be manipulated and tested experimentally, the problem is clearly researchable.

B. Theoretical Perspective and Literature Review

4. Critique of the Conceptual Framework: The study is grounded in several well-established theories that explain how people learn through multimedia environments, including Mayer’s Cognitive Theory of Multimedia Learning (CTML), Sweller’s Cognitive Load Theory (CLT), and Csikszentmihalyi’s Flow Theory. These frameworks collectively emphasize how learners process information, manage mental effort, and experience engagement during learning activities. The authors effectively apply these perspectives to the design of their educational game, ASTRA, which integrates text, images, and interactive elements to facilitate comprehension while minimizing cognitive overload.

This theoretical alignment represents one of the study’s key strengths because it demonstrates careful attention to cognitive design principles that promote effective learning. The combination of CTML and CLT supports the study’s emphasis on how instructional elements influence comprehension, while Flow Theory connects these cognitive processes to learners’ enjoyment and focus. Together, these perspectives provide a coherent rationale for the study’s focus on instructional framing and feedback.

However, the framework is heavily weighted toward cognitive processing, with far less attention given to the motivational dimension of learning. Although the authors briefly reference Self-Determination Theory (Deci & Ryan, 2000), they stop short of examining how feedback and goal orientation might enhance learners’ intrinsic motivation, sense of competence, or autonomy. By not fully integrating motivational theory alongside cognitive theory, the framework provides only a partial view of what drives engagement and learning in digital game-based environments.

In sum, the conceptual foundation of the study is theoretically sound and clearly articulated, but it could be strengthened through a deeper synthesis of cognitive and motivational perspectives. Doing so would more fully capture the complex interplay among mental effort, enjoyment, and personal investment that defines effective game-based learning.

5. Connection to Prior Theory and Research: The literature review is detailed and demonstrates that the authors conducted a comprehensive and well-informed examination of prior research. They refer to earlier studies and meta-analyses that show mixed results regarding whether games improve learning. For example, some studies (such as Randel et al., 1992, and Vogel et al., 2006) found that games enhance motivation, while others did not. Erhel and Jamet argue that these inconsistent results occur because many studies compare games to traditional lessons rather than examining how specific features, such as instructions or feedback, influence learning and motivation.

The authors also connect their work to research in reading and goal setting, showing that learners’ goals—such as learning versus entertainment—can influence how deeply they process information. This theoretical linkage makes their experimental focus logical and well-grounded, demonstrating a clear bridge between existing literature and their own hypotheses.

One area that could be improved is how the authors address motivation. While they mention concepts such as “flow” and “self-determination,” they do not explore how these theories connect to their own variables. For instance, they could have examined how feedback might enhance a learner’s sense of competence, a central element of Self-Determination Theory. Overall, their review effectively establishes the cognitive foundation for the study but leaves the emotional and motivational dimensions underdeveloped.

6. Summary and Implications of the Literature Review: The literature review presented by Erhel and Jamet (2013) provides a concise synthesis of existing research on digital game-based learning (DGBL). The authors highlight that prior studies consistently acknowledge the motivational potential of games but reveal inconsistent evidence regarding their effectiveness in promoting deeper learning outcomes. By reviewing this body of literature, Erhel and Jamet identify a key gap in previous research: much of the work compares games to conventional instructional methods rather than examining which specific game design elements—such as instructional framing or feedback—have a measurable impact on learning and motivation.

In this way, the literature review functions as more than a summary of past findings; it establishes a rationale for the present study. The authors use prior research to justify the need for a focused investigation of instructional guidance and feedback mechanisms within DGBL environments. Their synthesis directly links the problem of inconsistent results in prior studies to their experimental design, which isolates these variables to test their specific effects.

The implications of this literature review are twofold. First, it emphasizes that effective game-based learning depends on intentional instructional design rather than the mere inclusion of games. Second, it positions the study within a broader scholarly effort to refine how motivation and feedback are conceptualized in digital learning contexts. Including a visual model illustrating the relationships among instruction type, feedback, motivation, and learning outcomes could have further clarified these conceptual links for readers.

7. Research Questions and Hypotheses: The research questions in Erhel and Jamet’s (2013) study are clearly articulated and directly aligned with the stated problem. The first experiment investigates whether providing learners with explicit “learning” instructions leads to higher comprehension compared to telling them to “have fun.” The second experiment builds upon the first by introducing feedback as an additional variable to determine whether it enhances both learning performance and motivation. Together, these experiments address the central question of how instructional framing and feedback interact to influence cognitive and motivational outcomes in digital game-based learning.

The hypotheses supporting these questions are logical and grounded in prior theory, but they could have been presented more formally. Stating them explicitly as H₁ and H₂ would have clarified the progression from the first to the second experiment and underscored their conceptual connection.

Although the study measures motivation through self-report questionnaires, integrating behavioral indicators—such as time on task or frequency of interaction—would have provided a more comprehensive understanding of how the hypothesized relationships manifest in actual learner behavior. This additional data could have strengthened the link between the study’s cognitive and motivational dimensions, offering a more holistic view of the learning process.

C. Research Design and Analysis

8. Appropriateness and Adequacy of the Design: The study used a two-part experimental design. In Experiment 1, the independent variable was the instruction type (learning vs. entertainment). In Experiment 2, feedback was added as another variable. This was a sound methodological choice because it allowed the authors to test one factor at a time and observe how each influenced learning outcomes.
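
To make the factorial logic concrete, here is a minimal sketch of how a 2 (instruction type) × 2 (feedback) between-subjects design of this kind is typically analyzed, assuming Python with pandas and statsmodels. The data and cell means are invented for illustration only, and the article itself introduces feedback in a second experiment rather than in a single fully crossed design, so this layout is an idealization rather than a reproduction of the authors' method.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)

rows = []
for instruction in ("learning", "entertainment"):
    for feedback in ("present", "absent"):
        # Invented cell means, chosen only to illustrate the analysis;
        # they are not the article's reported results.
        mean = (14 if instruction == "learning" else 12) + (2 if feedback == "present" else 0)
        for score in rng.normal(loc=mean, scale=3.0, size=20):
            rows.append({"instruction": instruction, "feedback": feedback, "score": score})

df = pd.DataFrame(rows)

# Two-way between-subjects ANOVA: main effects of instruction and feedback,
# plus their interaction (does feedback reduce the instruction-type gap?).
model = smf.ols("score ~ C(instruction) * C(feedback)", data=df).fit()
print(anova_lm(model, typ=2))
```

The interaction term captures the pattern reported for Experiment 2: if feedback compensates for entertainment framing, the instruction effect should shrink when feedback is present.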

The design was well controlled; however, it took place in a laboratory setting with short sessions, which differs significantly from real classroom environments. As a result, the findings may not fully generalize to everyday learning situations. Nevertheless, as an early investigation of this topic, the design was appropriate for testing specific theoretical ideas and establishing a foundation for future research.

9. Sampling Methods and Generalizability: The participants in this study were French university students of similar age and educational background. While this homogeneity supported internal consistency and control over extraneous variables, it also limited the extent to which the findings can be generalized to broader populations, such as younger learners, adult professionals, or students from different cultural and educational contexts. Because all participants were recruited from a single university and volunteered to participate, there is also potential for self-selection bias—those who already enjoy or feel confident with digital games may have been more inclined to take part, potentially inflating motivation levels or learning outcomes.

Generalizability is critical in educational research because it determines whether the findings can meaningfully inform teaching practices or learning design beyond the immediate study sample. In this case, the limited participant diversity makes it difficult to know whether the same instructional and feedback strategies would be effective in more varied real-world environments, such as K–12 classrooms, corporate training, or cross-cultural settings.

By expanding future studies to include participants of different ages, learning contexts, and levels of gaming experience, researchers could provide stronger evidence that the observed relationships between instructional framing, feedback, and motivation are not confined to a specific demographic but represent broader principles of digital game-based learning.

10. Adequacy of Procedures and Materials: The experiment utilized a multimedia learning game called ASTRA, which focused on teaching concepts related to aging. The game incorporated multiple modes of presentation—videos, text, quizzes, and a guide character—providing an opportunity to engage learners through both verbal and visual channels. From a cognitive perspective, this design aligns with Mayer’s Cognitive Theory of Multimedia Learning (CTML), which posits that combining words and images can enhance comprehension by activating dual processing pathways. The inclusion of a guide character also served an instructional purpose: it provided a sense of structure and continuity throughout the learning sequence, functioning much like a tutor who directs attention and reinforces key concepts. These elements contributed to the clarity and consistency of the instructional delivery.

However, the design also presented limitations that may have affected learner engagement and motivation. The game’s structure was largely linear, with few opportunities for open-ended exploration or social interaction. According to Flow Theory and Self-Determination Theory, such interactivity and autonomy are critical for sustaining intrinsic motivation. Without these features, learners may have processed the material cognitively but lacked the deeper engagement that comes from meaningful choice or collaboration. Additionally, the article provides little detail about ethical considerations or how missing data were managed, both of which are important for ensuring transparency and replicability in experimental research.

Overall, while the materials and procedures were effective for controlling instructional variables and ensuring consistency, their limited interactivity and incomplete methodological transparency constrain the broader applicability and motivational richness of the findings.

11. Appropriateness and Quality of Measures: The study employed a sound combination of instruments to assess both learning and motivation. Cognitive outcomes were measured using recall and inference tests, which captured different levels of understanding—from simple factual retention to higher-order reasoning about the material. For motivation, the authors used a standardized scale based on Elliot and McGregor’s (2001) 2×2 Achievement Goal Framework, a well-established model that distinguishes mastery-approach, mastery-avoidance, performance-approach, and performance-avoidance goal orientations. These tools demonstrate that the authors drew upon recognized theoretical and empirical foundations in educational psychology.

However, one key limitation is the lack of reported reliability statistics, such as Cronbach’s alpha. Cronbach’s alpha is a measure of internal consistency that indicates how reliably a set of survey or test items measures a single construct—in this case, motivation. Values typically range from 0 to 1, with coefficients above .70 generally considered acceptable in educational research. Without this statistic, it is difficult to determine whether the items on the motivation scale produced consistent results across participants, and in its absence the validity of the motivational findings remains uncertain, even if the instrument itself is theoretically grounded.
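
To illustrate the kind of reliability evidence that could have been reported, the sketch below computes Cronbach’s alpha from a participants-by-items score matrix. The responses are fabricated for demonstration; only the formula is standard.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    k = items.shape[1]                         # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)      # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of each participant's total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Fabricated responses: 8 participants answering a 4-item Likert-type motivation scale.
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 2, 3, 3],
    [5, 4, 4, 5],
    [2, 2, 3, 2],
])

# Coefficients of roughly .70 or above are conventionally treated as acceptable.
print(f"alpha = {cronbach_alpha(responses):.2f}")
```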

Beyond reliability, the study could have benefited from incorporating objective behavioral measures of motivation, such as tracking the amount of time participants spent interacting with the game, the number of attempts they made to complete tasks, or their persistence in optional activities. These data would provide a valuable complement to self-reported motivation scores, offering observable evidence of engagement.

Finally, while the authors appropriately applied ANOVA and Mann–Whitney tests to analyze differences between groups, reporting effect sizes would have clarified the magnitude of those differences. Effect sizes help readers understand not only whether effects were statistically significant but also how meaningful they were in practical terms. Overall, the study’s measurement approach was conceptually sound but would have been strengthened by reliability statistics, behavioral indicators, and effect sizes, all of which would enhance the transparency and interpretive power of the results.
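
To show what effect size reporting adds beyond a significance test, the following sketch pairs a one-way ANOVA with eta squared and Cohen’s d on fabricated scores for two instruction groups; the formulas are standard, but none of the numbers come from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Fabricated comprehension scores for two instruction conditions.
learn = rng.normal(loc=14, scale=3, size=25)
play = rng.normal(loc=12, scale=3, size=25)

# Significance: is the group difference unlikely under the null hypothesis?
f_stat, p_value = stats.f_oneway(learn, play)

# Eta squared: proportion of total variance explained by group membership.
grand = np.concatenate([learn, play])
ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in (learn, play))
ss_total = ((grand - grand.mean()) ** 2).sum()
eta_squared = ss_between / ss_total

# Cohen's d: standardized mean difference (pooled SD, equal group sizes).
pooled_sd = np.sqrt((learn.var(ddof=1) + play.var(ddof=1)) / 2)
cohens_d = (learn.mean() - play.mean()) / pooled_sd

print(f"F = {f_stat:.2f}, p = {p_value:.3f}, eta^2 = {eta_squared:.2f}, d = {cohens_d:.2f}")
```

Reporting eta squared or d alongside p values tells readers whether a statistically significant difference is also large enough to matter in practice.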

D. Interpretation and Implications of Results

12. Findings and Data Interpretation: In Experiment 1, students who were instructed to “learn” demonstrated higher comprehension scores than those who were told to “play.” However, both groups reported similar levels of motivation. This suggests that emphasizing a learning goal improved cognitive performance but did not necessarily enhance or diminish enjoyment. In other words, students were able to learn more effectively when explicitly guided to focus on understanding, but this focus did not change their perceived level of engagement or satisfaction.

In Experiment 2, the introduction of feedback produced notable changes. When corrective feedback was added, both the “learn” and “play” groups achieved higher learning outcomes, and the performance gap between them largely disappeared. The authors interpreted this to mean that feedback acted as a stabilizing instructional feature—it compensated for differences in goal framing and allowed even those pursuing the “fun” condition to engage more effectively with the material. The inclusion of feedback appeared to make the game experience more purposeful by giving learners information about their progress, thereby linking enjoyment with comprehension.

The authors discussed these findings in the context of cognitive and motivational theories, emphasizing that well-structured guidance and feedback can enhance both learning and engagement. They proposed that instructional framing influences how learners allocate attention and process information, while feedback supports metacognitive regulation—helping learners monitor their understanding in real time. However, while their discussion effectively connects results to theory, it provides limited detail about how they defined and measured “motivation.” Their interpretation relies heavily on questionnaire data without much explanation of how motivation was operationalized or whether it referred to intrinsic interest, persistence, or perceived competence. This lack of precision makes it somewhat difficult to fully interpret the strength and scope of their motivational findings.

Overall, the authors’ discussion adequately explains the cognitive implications of their results and partially addresses the motivational component. A clearer definition of motivation, along with more elaboration on how feedback influenced students’ affective experiences rather than just their test scores, would have made the interpretation more robust and comprehensive.

13. Relation of Findings to the Theoretical Framework: The results align closely with the learning theories applied in the study. Both CTML and CLT help explain why the learning-focused instructions were effective, as they guided students to concentrate their attention on relevant information while minimizing cognitive overload. Flow Theory helps explain why feedback increased engagement. However, the study does not go far enough in explaining why motivation remained largely unchanged; frameworks such as Self-Determination Theory could address this question more directly in future research.

14. Implications for Practice and Research: In discussing the implications of their findings, Erhel and Jamet (2013) emphasize that the effectiveness of digital game-based learning depends less on the presence of a game itself and more on the intentional design of instructional elements within it. They relate this conclusion to their theoretical foundation by highlighting how guided instructions and structured feedback support the cognitive processes described in Mayer’s Cognitive Theory of Multimedia Learning (CTML) and Sweller’s Cognitive Load Theory (CLT). According to the authors, clear instructional framing directs learners’ attention to essential information, thereby reducing unnecessary cognitive load, while timely feedback reinforces understanding and sustains engagement throughout the learning experience. In this way, their discussion successfully bridges theory and practice by demonstrating how design principles derived from cognitive learning theories can improve learning outcomes in multimedia environments. 

However, the authors’ consideration of practical implications remains primarily cognitive and could have benefited from a stronger integration of motivational theory. While they acknowledge that feedback enhances engagement, they do not fully explore how it might influence learners’ intrinsic motivation, self-efficacy, or sense of autonomy—key components emphasized in Self-Determination Theory. Additionally, their discussion does not address how these design principles might vary across different educational contexts, learner populations, or subject areas. As a result, their practical recommendations, while relevant, are somewhat narrow in scope. 

For future research, Erhel and Jamet suggest extending the study to include longer experimental durations, more diverse participant samples, and variations in feedback type. These suggestions reflect their awareness of the study’s limitations and point toward the need for continued exploration of how cognitive and motivational factors interact in digital game-based learning. Expanding on this integration in future work would allow for a more comprehensive understanding of how instructional design, feedback, and motivation collectively shape both learning outcomes and learner experience.

15. Limitations and Future Directions: The study presents several noteworthy limitations that influence how its findings should be interpreted and applied. First, the participant sample was relatively small and drawn from a single French university, which limits the diversity of perspectives represented in the data. This homogeneity reduces the ability to generalize the results to other populations, such as younger students, adult learners, or individuals from different cultural or educational backgrounds. A broader and more varied sample could help clarify whether the observed effects of instructional framing and feedback extend to a wider range of learners.

Second, the tests administered measured only short-term learning outcomes. While these immediate assessments demonstrate that the learning-focused instructions and feedback were effective in promoting comprehension, they do not reveal whether the gains were retained over time. Longitudinal studies would be valuable to determine if the benefits of digital game-based learning persist beyond the initial session or if they diminish without reinforcement.

A third limitation involves the use of only one feedback type—Knowledge-of-Correct-Response feedback, which simply tells learners whether their answer was right or wrong. Although this type of feedback can reinforce factual knowledge, it does not promote deeper reflection or self-regulated learning. Including more elaborate feedback, such as explanations or hints, could provide insight into how different forms of guidance affect both understanding and motivation.
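
A minimal sketch of that distinction follows, using a hypothetical quiz item whose wording is invented rather than taken from ASTRA: Knowledge-of-Correct-Response feedback stops at right or wrong, while elaborated feedback appends an explanation intended to support deeper processing.

```python
from dataclasses import dataclass

@dataclass
class QuizItem:
    prompt: str
    correct: str
    explanation: str

# Hypothetical item on the game's aging theme; the content is illustrative only.
item = QuizItem(
    prompt="Which memory system is most affected by normal aging?",
    correct="episodic memory",
    explanation="Episodic memory typically declines earlier in normal aging than semantic or procedural memory.",
)

def kcr_feedback(item: QuizItem, answer: str) -> str:
    # Knowledge-of-Correct-Response: verdict plus the correct answer, nothing more.
    if answer.strip().lower() == item.correct:
        return "Correct."
    return f"Incorrect. The correct answer is: {item.correct}."

def elaborated_feedback(item: QuizItem, answer: str) -> str:
    # Elaborated feedback: the same verdict, extended with an explanation.
    return f"{kcr_feedback(item, answer)} {item.explanation}"

print(kcr_feedback(item, "semantic memory"))
print(elaborated_feedback(item, "semantic memory"))
```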

Fourth, the measurement of motivation was somewhat superficial. The study relied exclusively on self-report questionnaires, which, while convenient, may not fully capture the complexity of learners’ motivational states. Combining self-reports with behavioral data, such as persistence, time on task, or voluntary engagement, would yield a richer picture of how motivation influences learning in game-based environments.

Finally, the controlled laboratory setting, while useful for isolating variables, lacks the ecological validity of real classroom conditions. Students’ behavior in a lab may differ substantially from how they engage in authentic learning contexts, where social dynamics, time constraints, and instructional demands are more complex. Conducting similar studies in classroom or blended learning environments would make the findings more applicable to educational practice.

Despite these limitations, the study contributes meaningfully to the growing body of research on digital game-based learning. By illustrating that instructional framing and feedback design can influence both learning and motivation, Erhel and Jamet provide a foundation for future work that integrates cognitive and motivational theories in more diverse, authentic, and longitudinal contexts.

References

Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268.

Elliot, A. J., & McGregor, H. A. (2001). A 2×2 achievement goal framework. Journal of Personality and Social Psychology, 80(3), 501–519.

Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156–167. https://doi.org/10.1016/j.compedu.2013.02.019

Mayer, R. E. (2005). The Cambridge handbook of multimedia learning. Cambridge University Press.

Sweller, J. (1999). Instructional design in technical areas. ACER Press.