Si Zhang, Qian Yang, Honghui Li and Chaowang Shang
Si Zhang
Hubei Key Laboratory of Digital Education, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China // djzhangsi@mail.ccnu.edu.cn
Qian Yang
Hubei Key Laboratory of Digital Education, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China // yqian@mails.ccnu.edu.cn
Honghui Li
Hubei Key Laboratory of Digital Education, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China // edulihonghui@mails.ccnu.edu.cn
Chaowang Shang
Hubei Key Laboratory of Digital Education, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China // scw@mail.ccnu.edu.cn
ABSTRACT:
To address issues such as insufficient depth and low efficiency in online discussions within computer-supported collaborative learning (CSCL), this study sought to enhance regulated learning by integrating group awareness tools (GATs) with collaborative reflection scripts. A total of 33 student teachers participated in the experiment. Epistemic network analysis (ENA) was used to analyze group discourse data, investigating differences in regulated learning behaviors between student teachers with high and low self-regulation (SR) levels under the support of the GAT integrated with a collaborative reflection script, as well as changes in regulated learning patterns across stages. Results indicated that the GAT effectively supported regulated learning, primarily promoting “monitoring and evaluation,” social regulation, and “content monitoring” in both high- and low-SR groups. Specifically, the high-SR groups primarily exhibited a regulation focus pattern characterized by “content monitoring-organizing,” while the low-SR groups primarily exhibited a pattern centered on “content monitoring-process monitoring.” The GAT may have a more pronounced impact on facilitating regulatory behaviors among low-SR groups. Interview data explained the differences in GAT use between high- and low-SR groups, as well as the tool’s impact on regulated learning. This study provides evidence of how GATs integrated with collaborative scripts can promote regulated learning and offers insights into instructional practices for integrating collaborative scripts and GATs to support student teachers engaging in CSCL.
Keywords:
Cooperative/collaborative learning, Improving classroom teaching, Regulated learning, Reflection script
Leon Yufeng Wu, Jia-Wei Liu and Chih-Chang Yu
Leon Yufeng Wu
Graduate School of Education, Chung Yuan Christian University, Taiwan // leonwu@cycu.edu.tw
Jia-Wei Liu
Department of Information and Computer Engineering, Chung Yuan Christian University, Taiwan // 10827231@cycu.org.tw
Chih-Chang Yu
Department of Information and Computer Engineering, Chung Yuan Christian University, Taiwan // ccyu@cycu.edu.tw
ABSTRACT:
Novice programmers face significant challenges in comprehending abstract concepts, tracking program execution flow, and debugging. This study developed a code visualization system (COVIS) and introduced it to an introductory programming course. COVIS visualized variable states, call stack status, and pointer-referenced data while featuring user-defined input, selective display of code snippets, and seamless integration within online learning platforms. Using the Motivated Strategies for Learning Questionnaire (MSLQ) and COVIS usage questionnaires, this study identified multiple positive impacts of COVIS. Students who consistently used COVIS demonstrated increased intrinsic and extrinsic goals, alongside enhanced self-efficacy. These students also demonstrated superior effort regulation capabilities and greater persistence when confronting challenges. Regarding learning effectiveness, COVIS demonstrated delayed cumulative effects. While no significant differences were observed on the midterm exam, final exam results revealed that consistent COVIS users outperformed inactive users, particularly on implementation problems. Additionally, approximately half of the students used COVIS to interpret AI-generated code rather than accepting the AI’s output uncritically, and the proportion of students engaging in peer discussions rose to over 70%, indicating that students used COVIS in multiple ways to support their learning. These findings suggest that COVIS successfully enhanced novice programmers’ performance in terms of motivation, strategic learning approaches, and academic achievement, providing a practical instructional resource for programming education. A demonstration site can be found at https://cyculab618.github.io/COVIS-demo-site/.
Keywords:
Code visualization, Learning motivation and strategy, Introductory programming learning
Miguel Nussbaum, Zvi Bekerman and Carla Gallardo-Estrada
Miguel Nussbaum
Pontificia Universidad Católica de Chile, Chile // mn@uc.cl
Zvi Bekerman
The Hebrew University of Jerusalem, Israel // zvi.bekerman@mail.huji.ac.il
Carla Gallardo-Estrada
Pontificia Universidad Católica de Chile, Chile // cogallardo@uc.cl
ABSTRACT:
Artificial Intelligence (AI) is increasingly integrated into educational practice, assisting teachers in planning, content creation, assessment, and administrative tasks. Yet, most existing studies focus on specific contexts—individual teachers, subjects, or institutions—limiting the generalizability of their findings. This study addresses that gap by analyzing how AI is incorporated into teaching across primary, secondary, and university levels. Drawing on the experiences of 770 educators from Latin America and Spain, we identify patterns in teachers’ reported uses of AI and their perceptions of its pedagogical benefits. Results reveal a Use–Impact mismatch: educators frequently rely on AI for routine and logistical tasks such as lesson planning and content generation (“high use, low perceived impact”), while underutilizing areas where AI could yield greater pedagogical value—such as time management, student feedback, and material preparation (“low use, high perceived impact”). The analysis also uncovers gender-related differences in AI engagement and a progressive sophistication of use across educational levels, with university educators displaying more strategic and pedagogically aligned integration. Methodologically, the study introduces a grounded categorization framework that enables systematic comparison of AI use and impact across diverse educational contexts. These findings highlight the need for professional development initiatives that help teachers move from instrumental to pedagogically meaningful use of AI, particularly in primary and secondary education.
Keywords:
Artificial intelligence, Teachers, Reported use, Pedagogical benefit
K Kavitha and V P Joshith
K Kavitha
Department of Education, Central University of Kerala, India // kavithakrishnan229@gmail.com
V P Joshith
Department of Education, Central University of Kerala, India // getjoshith@gmail.com
ABSTRACT:
The integration of Artificial Intelligence technologies into educational settings has accelerated, necessitating a deeper understanding of the factors influencing their acceptance and use. Grounded in the Unified Theory of Acceptance and Use of Technology (UTAUT2), this meta-analysis examines 20 empirical studies (N = 9,630) published between 2020 and 2025 that investigate AI acceptance across K-12 and higher education. Following PRISMA guidelines, the study evaluates key UTAUT2 constructs including Performance Expectancy (PE), Effort Expectancy (EE), Social Influence (SI), Facilitating Conditions (FC), Hedonic Motivation (HM), Price Value (PV), and Habit (HT), in relation to Behavioral Intention (BI) and Use Behavior (UB). Reliability analysis indicated acceptable internal consistency across constructs (α = 0.826 [FC] to 0.890 [SI]) with minimal variance. Meta-analytic findings showed that PE (r = 0.313, 95% CI [0.231–0.391]), HT (r = 0.284, 95% CI [0.149–0.409]), HM (r = 0.189, 95% CI [0.132–0.245]), and FC (r = 0.242, 95% CI [0.172–0.309]) had the strongest predictive power. Exploratory moderator analysis revealed variability in effect sizes across regions, user types, educational levels, and SEM techniques; however, these findings should be interpreted cautiously given the limited number of studies (n = 20). The findings offer theoretical insights into technology adoption and provide practical implications for educators, developers, and policymakers to support the integration of AI in education.
Keywords:
Artificial intelligence, Education, Meta-analysis, Moderators, UTAUT2
Chih-Chung Lin
Language Center, National Chiayi University // chihchunglin09@gmail.com
Fu-Yun Yu
Institute of Education, National Cheng Kung University // fuyun.ncku@gmail.com
ABSTRACT:
The pivotal role of English grammar in second language acquisition has been well-documented. Among various instructional approaches, student-generated questions (SGQ) activities have received increasing empirical support for their potential to enhance learner engagement and language awareness. However, prior SGQ applications often lacked contextual grounding, which may lead to decontextualized and pragmatically inappropriate language production. To address this issue, the present study, grounded in contextual learning theory, proposed and assessed a contextualized student-generated questions (cSGQ) approach that integrates contextual learning principles into SGQ tasks. This quasi-experimental study involved 79 non-English-major university students, employing a pretest–posttest control group design to examine the effects of cSGQ on learners’ English grammar performance—specifically, form-based linguistic sentence structures (pragma-linguistic) and meaning-focused contextual understanding of linguistic forms (socio-pragmatic)—and cognitive load. MANCOVA and follow-up ANCOVA results indicated that although no significant group difference was observed in pragma-linguistic performance, the cSGQ group significantly outperformed the SGQ group in socio-pragmatic performance (partial η² = .06), indicating a medium effect size, without incurring higher cognitive load. These findings underscore the pedagogical value of embedding contextual cues in SGQ tasks and offer empirical guidance for context-based grammar instruction in EFL settings.
Keywords:
Cognitive load, Contextual learning, English grammar instruction and learning, Student-generated questions
Lanqin Zheng, Zhe Shi, Zehao Liu, Yusheng Gao and Jingjie Zheng
Lanqin Zheng
School of Educational Technology, Faculty of Education, Beijing Normal University, Beijing, China // bnuzhenglq@bnu.edu.cn
Zhe Shi
School of Educational Technology, Faculty of Education, Beijing Normal University, Beijing, China // 202321010189@mail.bnu.edu.cn
Zehao Liu
School of Educational Technology, Faculty of Education, Beijing Normal University, Beijing, China // 202421010145@mail.bnu.edu.cn
Yusheng Gao
School of Educational Technology, Faculty of Education, Beijing Normal University, Beijing, China // 202422010207@mail.bnu.edu.cn
Jingjie Zheng
School of Educational Technology, Faculty of Education, Beijing Normal University, Beijing, China // 202422010243@mail.bnu.edu.cn
ABSTRACT:
In recent years, pedagogical agents have attracted increasing attention. However, few studies have explored the impact of pedagogical agents on both learning achievements and learning perceptions. To address these research gaps, this study conducts a three-layer meta-analysis of studies published from 2000 to 2024 to examine the overall influence of pedagogical agents on learning achievements and learning perceptions. A total of 116 studies with 10,211 participants are included. The results indicate that pedagogical agents have positive impacts on both learning achievements and learning perceptions. Additionally, 19 moderating variables are examined in depth. The results reveal that educational level, study setting, study type, agent type, agent appearance, and the number of agents substantially moderate the effect size of pedagogical agents. This study also discusses the findings and their practical implications for researchers and practitioners.
Keywords:
Pedagogical agents, Meta-analysis, Learning achievements, Learning perceptions
Chien-Hung Lai and Cheng-Yueh Lin
Chien-Hung Lai
Chung Yuan Christian University, Taiwan // soulwind@cycu.org.tw
Cheng-Yueh Lin
Chung Yuan Christian University, Taiwan // linek0820@gmail.com
ABSTRACT:
This study investigates the effects of a generative-AI (GAI)–assisted programming system on first-year undergraduates in an introductory C course. Using a quasi-experimental pre–post design, students completed a single structured session with six programming tasks supported by three scaffolded modalities—Hint, Debug, and User-defined Question—while external AI tools were restricted. Learning achievement was assessed with parallel programming tests; learning motivation followed the ARCS framework; and cognitive load was surveyed immediately before and after the session. To examine heterogeneity, students were grouped by pre-test motivation and by pre-test cognitive load via a two-stage clustering procedure. Results show that overall learning achievement improved following the GAI-assisted session. By contrast, overall learning motivation exhibited no observable change, and cognitive load displayed a slight, non-meaningful increase. Cluster analyses revealed heterogeneous effects: learners with the lowest initial motivation and those with lighter initial cognitive load benefited the most, whereas peers with higher motivation or heavier cognitive load showed no discernible gains. These findings suggest that a short, focused dose of GAI assistance can enhance programming performance, but its effectiveness is moderated by learners’ motivational and cognitive profiles. Designing autonomy-supportive scaffolds and calibrating task difficulty and information volume to learner readiness may broaden benefits while mitigating potential overload in programming education.
Keywords:
Generative AI, Programming instruction, Learning motivation, Cognitive load, Learning achievement
Na Luo, Yifan Wang, Yile Zhou and Rongfu Zhao
Na Luo
School of Foreign Languages and Literatures, Lanzhou University, China // luona@lzu.edu.cn
Yifan Wang
School of Foreign Languages and Literatures, Lanzhou University, China // wangyifan2024@lzu.edu.cn
Yile Zhou
Freelancer, China // zhouyile6@gmail.com
Rongfu Zhao
School of Foreign Languages and Literatures, Lanzhou University, China // zhaorf2024@lzu.edu.cn
ABSTRACT:
Automated writing evaluation (AWE) tools have been used to detect language errors for writers, including L2 English learners. While such tools are quite capable of catching rule-governed errors, they are limited in flagging context-sensitive errors, which often undermine L2 writing due to their high frequency and gravity. Large language models (LLMs) like ChatGPT may offer a promising remedy. In this exploratory study, we examined the potential of ChatGPT-4 (web-based version) to address traditional AWE tools’ limitations in flagging context-sensitive errors in 30 argumentative essays written by undergraduates at a Chinese university. Three prompts of progressive sophistication were designed, namely a general prompt and two domain-specific prompts (zero-shot and one-shot). The performance of ChatGPT across the prompts was benchmarked against Grammarly in terms of accuracy, coverage, and their balance, measured respectively by precision, recall, and F1-score. Findings show that ChatGPT demonstrated consistently higher precision than Grammarly, with recall and F1-scores increasing progressively as prompts became more sophisticated. While it underperformed Grammarly with the general prompt, it outperformed Grammarly with the two domain-specific prompts; with the one-shot domain-specific prompt, it achieved significantly higher recall and F1-score. Meanwhile, ChatGPT’s growing capacity was most salient in flagging the most frequent and serious errors in this corpus. The results suggest that LLMs like ChatGPT, with appropriate prompts, can effectively mitigate the limitations of traditional AWE tools in detecting context-sensitive errors. Our findings have important pedagogical and technical implications.
Keywords:
Automated writing evaluation, ChatGPT, Grammarly, Accuracy, Coverage
Benazir Quadir, Kazi Mostafa and Jie-Chi Yang
Benazir Quadir
Learning Institute for Future Excellence, Xi’an Jiaotong-Liverpool University, China // benazir.quadir@xjtlu.edu.cn
Kazi Mostafa
School of Intelligent Manufacturing Ecosystem, Xi’an Jiaotong-Liverpool University, China // kazi.mostafa@xjtlu.edu.cn
Jie-Chi Yang
Graduate Institute of Network Learning Technology, National Central University, Taiwan (ROC) // yang@cl.ncu.edu.tw
ABSTRACT:
This study developed the iRead game by integrating the ARCS model with seven game dimensions to support reading comprehension in higher education. Using a mixed-methods design with 80 university students, the study examined pre–post changes in reading comprehension, relationships between iRead game dimensions and ARCS motivational factors, and the extent to which these factors predicted reading performance. Results showed a significant improvement in reading comprehension from pre-test to post-test. Correlation analysis indicated strong associations between the seven game dimensions and the four ARCS components, and regression analysis identified attention, confidence, and satisfaction as significant predictors of reading performance, whereas relevance was not. Semi-structured interviews further revealed four themes: engaging and interactive learning design, perceived educational value and future utility, guided progression and feedback-driven learning, and interactive aesthetics and progress-tracking rewards. Overall, the findings provide context-specific evidence on how ARCS-aligned gamified design relates to motivation and short-term reading outcomes, offering practical implications for gamified reading design in higher education.
Keywords:
ARCS model, Gamification, Reading comprehension, Instructional design, Gamified reading environment
Elif Meral, Zeynep Başcı Namlı and Türkan Karakuş Yılmaz
Elif Meral
Department of Social Studies Education, Atatürk University, Turkey // elif.meral@atauni.edu.tr
Zeynep Başcı Namlı
Department of Primary Education, Atatürk University, Turkey // zbasci@atauni.edu.tr
Türkan Karakuş Yılmaz
Department of Instructional Technologies, Atatürk University, Turkey // turkan.karakus@gmail.com
ABSTRACT:
This mixed-methods research examined how the frequency and intensity of gamified mobile-guided museum visits aligned with the current curriculum influenced students’ attitudes towards museum visits, spatial perception, and academic achievement, while also qualitatively exploring students’ opinions of the experience. The same gamified mobile guide was used in both groups to ensure consistency, while the experimental condition differed in the number and pacing of visits and tasks. A total of 85 fifth-grade students participated in the study. The experimental group visited the museum twice, completing approximately 30 tasks during the first visit and 35 during the second, whereas the comparison group visited the museum once and completed 60 tasks in a single session. Both groups explored the museum setting through a virtual tour and mobile app before the actual visits. Quantitative analyses revealed that participants in the experimental group showed significant improvements in academic achievement, attitudes toward museum visits, and spatial perception, while certain within- and between-group differences were also observed. Qualitative data indicated that students in both groups generally held positive views of the gamified mobile-guided experience. Overall, the findings suggest that the frequency and intensity of gamified museum visits, rather than the mere use of a mobile guide, play a critical role in enhancing engagement, enjoyment, and learning outcomes during school-led museum experiences.
Keywords:
School-led museum visits, Gamified mobile guiding, Museum learning experience, Museum visiting attitude, Spatial perception
Starting from Volume 17 Issue 4, all published articles of the journal Educational Technology & Society are available under the Creative Commons CC BY-NC-ND 3.0 license.