EDU800 – Week 12 Annotated Bibliographies

#1

Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 38–51.

Summary
In this article, Nemorin et al. (2023) conduct a horizon scan of the ways artificial intelligence in education (AIED) is represented across academic, commercial, and policy discourses. The authors analyze texts from multiple sectors to map how AI is framed in terms of potential, risk, equity, automation, and educational transformation. Their findings reveal that hype-driven narratives often exaggerate AI’s capabilities while obscuring structural and ethical issues such as surveillance, bias, and inequitable access. The article argues that many educational stakeholders adopt AI rhetoric without critically assessing assumptions about learning, teacher roles, or sociotechnical context. The authors conclude that more critical, interdisciplinary scrutiny is needed to temper inflated expectations and more realistically guide AI adoption in educational settings.

Evaluation
As I reviewed this study, I was particularly struck by its balanced approach. Many current discussions of AI—especially in mainstream media or vendor marketing—tend to be polarized, either overly optimistic or overly alarmist. The authors successfully position their analysis between these extremes by grounding their claims in discourse analysis and sociotechnical perspectives. I also appreciated their interrogation of how language shapes assumptions about what AI “should” do in education. This focus on discourse feels especially timely, as many instructors and institutions are still experimenting with ChatGPT, automated grading tools, and AI-supported tutoring systems. The main limitation is that horizon scanning provides breadth rather than depth; while the analysis is comprehensive, it does not include empirical classroom data that could illuminate how AI discourses translate into actual practice.

Reflection
This article directly informs my doctoral research interests, especially as I explore how technology, learning, and motivation intersect in online environments. In my own teaching at DeVry, I have already witnessed how students’ perceptions of AI shape their willingness to engage deeply with course content. Additionally, in my professional work as an IT project manager and Customer Success Specialist, I regularly confront hype cycles surrounding AI tools and must help clients evaluate them realistically. Nemorin et al.’s findings reinforce the importance of grounding technology adoption in pedagogical goals rather than vendor promises. As I continue my DET coursework, this article encourages me to incorporate critical perspectives that challenge assumptions about AI’s role in shaping learning, agency, and equity.

#2

Morandini, S., Fraboni, F., De Angelis, M., Puzzo, G., Giusino, D., & Pietrantoni, L. (2023). The impact of artificial intelligence on workers’ skills: Upskilling and reskilling in organisations. Informing Science: The International Journal of an Emerging Transdiscipline, 26, 39–68.

Summary
In this study, Morandini et al. (2023) investigate how the adoption of artificial intelligence is reshaping workplace skill requirements and organizational strategies for upskilling and reskilling employees. Using a mixed-methods approach that includes surveys, interviews, and organizational case analysis, the authors identify three key trends: (1) AI increases the demand for hybrid skillsets that blend technical and human-centered competencies; (2) organizations are experiencing widening skills gaps because automation advances more quickly than training systems can adapt; and (3) successful adaptation requires continuous learning cultures supported by leadership, structured development programs, and employee agency. The study highlights that emotional intelligence, problem solving, communication, and digital literacy remain essential complements to AI-augmented roles.

Evaluation
This article provides a well-structured and current analysis of how AI adoption impacts real-world organizational practices. I found the mixed-methods design especially effective because it captures both quantitative trends and qualitative insights from employees experiencing these changes. The discussion on hybrid skillsets is well aligned with ongoing workforce research and clearly articulated. The authors also acknowledge the complexity of implementing large-scale reskilling initiatives, especially in organizations without strong learning cultures. One limitation is that the study focuses heavily on European organizations, which may not generalize fully to U.S. corporate environments. However, the broader themes—skills gaps, digital readiness, and organizational learning—are widely applicable.

Reflection
This article resonates strongly with my work in IT project management and customer success, where I routinely support organizations undergoing digital transformation. Many of the challenges described—skills gaps, user readiness, and uneven adoption—mirror what I observe when rolling out new platforms, implementing MFA campaigns, or training distributed teams. From an educational perspective, the findings reinforce the importance of embedding digital literacy and problem-solving skills into my teaching at DeVry, particularly in courses like BIS155 and BIS310. As a doctoral student, this study strengthens my understanding of how AI intersects with workforce development and highlights opportunities for future research on how higher education can better prepare learners for AI-mediated professional environments.

#3

Touretzky, D., Gardner-McCune, C., Martin, F., & Seehorn, D. (2019). Envisioning AI for K-12: What should every child know about AI? Proceedings of the AAAI Conference on Artificial Intelligence, 33(1), 9795–9799.

Summary
Touretzky et al. (2019) present a foundational framework for K–12 artificial intelligence education through the AI4K12 initiative. The authors outline five “big ideas” that they argue all students should understand: perception, representation and reasoning, learning, natural interaction, and societal impact. The goal is to create a developmentally appropriate roadmap for integrating AI concepts across grade levels in a way that prepares students to navigate AI-mediated societies. The article emphasizes core competencies such as understanding how machines learn from data, recognizing bias, evaluating limitations of AI systems, and considering ethical implications. The authors argue that early AI education is essential for building an informed and critically aware citizenry.

Evaluation
I found the framework presented in this article to be practical, forward-thinking, and highly relevant to current educational technology discussions. The “big ideas” approach offers a structured way to introduce complex AI concepts without overwhelming younger learners, and it provides a common language for curriculum designers, teachers, and policymakers. Because the article is presented as a conference paper, it is necessarily concise and does not provide empirical classroom data or instructional evaluations. However, the conceptual clarity and actionable vision make it a strong foundation for K–12 AI curricula. One minor limitation is that the paper discusses implementation challenges only briefly, even though resource constraints and teacher training gaps pose significant barriers.

Reflection
This framework aligns closely with my interest in instructional design, student engagement, and digital literacies. While I teach adult learners at DeVry rather than K–12 students, the principles outlined here mirror many of the pedagogical decisions I make when introducing technologies in BIS155, BIS310, and SEC440. For example, helping students understand not only how AI tools function but also their limitations and ethical considerations is becoming increasingly central to my teaching practice. The article also informs my doctoral perspective on how early exposure to AI concepts could support long-term motivation, autonomy, and digital agency. In my IT project management role, I also see firsthand how a lack of foundational AI literacy contributes to misunderstandings, unrealistic expectations, and training gaps—reinforcing the importance of frameworks like AI4K12.