July 2025, Volume 28, Issue 3
Special Issue on "The application and research of generative AI in education"
Guest Editor(s): Jiun-Yu Wu, Morris Siu-Yung Jong and Oi-Man Kwok
Special Issue Articles
Guest editorial: The application and research of generative AI in education
Jiun-Yu Wu, Morris Siu-Yung Jong and Oi-Man Kwok
The role of ChatGPT in enhancing English teaching: A paradigm shift in lesson planning and instructional practices
Lucas Kohnke and Di Zou
Lucas Kohnke
Department of English Language Education, The Education University of Hong Kong, Hong Kong, China // lucaskohnke@gmail.com, lmakohnke@eduhk.hk
Di Zou
Department of English and Communication, The Hong Kong Polytechnic University, Hong Kong, China // dizoudaisy@gmail.com, dizou@ln.edu.hk
ABSTRACT:
This study explored the impact of ChatGPT on classroom teaching and lesson preparation through the experiences of 12 English language teachers at a Hong Kong university. The primary research method, qualitative interviews, provided critical insights into the intersection of artificial intelligence (AI) and pedagogy. Four themes emerged: increased reliance on ChatGPT for lesson planning due to its convenience; the transformation of teaching methodologies, with ChatGPT becoming an important tool; the challenges presented by ChatGPT (inappropriate content; the risk of becoming over-reliant on it); and the new opportunities that ChatGPT offers for differentiated instruction and customised assessment. While ChatGPT significantly reshaped teaching practice, it was clear that systematic training was needed to allow teachers to leverage its benefits fully. Addressing challenges such as adjusting content to student abilities and balancing AI with other resources is vital to the optimisation of AI in pedagogy. This study underscores the need for continuous professional development, balanced resource management and curriculum revisions to incorporate AI in the teaching of English; encourages the exploration of innovative AI-enhanced pedagogies; contributes to the understanding of how AI is reshaping pedagogical practice; and offers practical recommendations for educators navigating the integration of AI into their teaching processes.
Keywords:
ChatGPT, Generative Artificial Intelligence, English language education, Lesson planning
Cite as: Kohnke, L., & Zou, D. (2025). The role of ChatGPT in enhancing English teaching: A paradigm shift in lesson planning and instructional practices. Educational Technology & Society, 28(3), 4-20. https://doi.org/10.30191/ETS.202507_28(3).SP02
Published December 12, 2024
Constructing an ontology-based knowledge graph for K-12 computer science competency via human-AI collaboration
Yulei Ye, Hanglei Hu and Bo Jiang
Yulei Ye
Department of Educational Information Technology, East China Normal University // 51254108022@stu.edu.ecnu.cn
Hanglei Hu
Department of Educational Information Technology, East China Normal University // 51254108023@stu.edu.ecnu.cn
Bo Jiang
Lab of Artificial Intelligence for Education, School of Computer Science, East China Normal University // bjiang.zh@gmail.com
ABSTRACT:
Competency is a broad concept in education, often described in unstructured natural language within curriculum frameworks, which makes assessing it difficult. Constructing a competency ontology mitigates this drawback by delineating the concept’s boundary and internal structure. However, ontology construction is time-consuming work that requires substantial effort from several domain experts. Inspired by the powerful language understanding capability of large language models (LLMs) such as the GPT models, this work proposed a human-AI collaboration approach to accelerate the construction of competency ontologies. This paper demonstrates how to extract knowledge concepts, relations and entities from a Chinese national computer science curriculum framework to construct a competency ontology and an ontology-based knowledge graph through collaboration with a GPT model. We evaluated the GPT model’s extraction results by comparing them with manual results and those of other LLMs. The outcomes of the GPT-4o model demonstrate that it can cover at least three-quarters of the human effort in extraction, showcasing its strong suitability for constructing ontologies and ontology-based knowledge graphs.
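As a rough illustration of the ontology-to-knowledge-graph step described above, the sketch below (plain Python, with invented example triples; the paper’s actual schema and GPT-4o prompting pipeline are not reproduced here) indexes LLM-extracted (head, relation, tail) triples into a simple adjacency map:

```python
from collections import defaultdict

def build_knowledge_graph(triples):
    """Index (head, relation, tail) triples into an adjacency map."""
    graph = defaultdict(list)
    for head, relation, tail in triples:
        graph[head].append((relation, tail))
    return dict(graph)

# Hypothetical triples of the kind an LLM might extract from a curriculum text.
triples = [
    ("Computational thinking", "includes", "Abstraction"),
    ("Computational thinking", "includes", "Algorithmic thinking"),
    ("Abstraction", "assessed_by", "Modelling task"),
]
kg = build_knowledge_graph(triples)
print(kg["Computational thinking"])
```

In a full pipeline, the triples would come from expert-verified LLM output rather than being hard-coded, and the adjacency map would typically be loaded into a graph store.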
Keywords:
Knowledge graph, Ontology, Human-AI collaboration, ChatGPT, Competency
Cite as: Ye, Y., Hu, H., & Jiang, B. (2025). Constructing an ontology-based knowledge graph for K-12 computer science competency via human-AI collaboration. Educational Technology & Society, 28(3), 21-35. https://doi.org/10.30191/ETS.202507_28(3).SP03
Published December 13, 2024
Is generative AI ready to replace human raters in scoring EFL writing? Comparison of human and automated essay evaluation
Arif Cem Topuz, Mine Yıldız, Elif Taşlıbeyaz, Hamza Polat and Engin Kurşun
Arif Cem Topuz
Ardahan University, Faculty of Engineering, Department of Computer Engineering, Türkiye // arifcemtopuz@ardahan.edu.tr
Mine Yıldız
Atatürk University, Kazim Karabekir Education Faculty, Department of Foreign Language Education, Türkiye // mine.yazici@atauni.edu.tr
Elif Taşlıbeyaz
Erzincan Binali Yıldırım University, Faculty of Education, Department of Computer Education and Instructional Technologies, Türkiye // etaslibeyaz@erzincan.edu.tr
Hamza Polat
Atatürk University, Faculty of Applied Sciences, Department of Information Sciences and Technologies, Türkiye // hamzapolat@atauni.edu.tr
Engin Kurşun
Atatürk University, Kazim Karabekir Education Faculty, Department of Computer Education and Instructional Technologies, Türkiye // ekursun@atauni.edu.tr
ABSTRACT:
Language teachers often spend considerable time scoring students’ writing and may hesitate to provide reliable scores because essay scoring is time-consuming. In this regard, AI-based Automated Essay Scoring (AES) systems have been used, and Generative AI (GenAI) has recently emerged with potential for scoring essays. This study therefore focuses on the differences and relationships between human raters’ (HR) and GenAI scores for essays produced by English as a Foreign Language (EFL) learners. The data consisted of 210 essays produced by 35 undergraduate students. Two HRs and GenAI evaluated the essays using an analytical rubric divided into the following five factors: (1) ideas, (2) organization and coherence, (3) support, (4) style, and (5) mechanics. This study found significant differences between the scores given by the HRs and those generated by GenAI, as well as variations among the HRs themselves; nonetheless, GenAI’s scores were similar across dual evaluations. GenAI’s scores were also statistically significantly lower than those of the HRs. In addition, the HRs’ scores correlated weakly with each other, while GenAI’s scores correlated strongly. A significant correlation was observed between HR-1 and GenAI across all factors, whereas HR-2 showed significant correlations with GenAI in only three factors. This study can therefore guide EFL teachers on how to reduce their workload in writing assessments by giving GenAI more responsibility in scoring essays. The study also offers suggestions for future studies on AES based on its findings and limitations.
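The rater-agreement comparisons summarized above rest on score correlations. As a minimal, self-contained sketch (pure Python, with invented essay scores; the study’s actual rubric data are not reproduced), Pearson’s r between two raters can be computed as:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (invented) essay scores from a human rater and a GenAI rater.
human = [78, 85, 62, 90, 71]
genai = [70, 80, 60, 84, 66]
print(round(pearson_r(human, genai), 3))
```

Note that a high correlation is compatible with a systematic offset: in the study, GenAI correlated with HR-1 yet still scored significantly lower, which correlation alone cannot reveal.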
Keywords:
Generative artificial intelligence (GenAI), ChatGPT, Human raters (HR), Automated essay scoring (AES), Automated writing evaluation (AWE)
Cite as: Topuz, A. C., Yıldız, M., Taşlıbeyaz, E., Polat, H., & Kurşun, E. (2025). Is generative AI ready to replace human raters in scoring EFL writing? Comparison of human and automated essay evaluation. Educational Technology & Society, 28(3), 36-50. https://doi.org/10.30191/ETS.202507_28(3).SP04
Published December 14, 2024
A social media data analysis of general and educational use of ChatGPT: Understanding emotional educators through Twitter data
Georgios Lampropoulos, Richard Ferdig and Regina Kaplan-Rakowski
Georgios Lampropoulos
Department of Applied Informatics, University of Macedonia, Greece // Department of Education, University of Nicosia, Cyprus // lamprop.geo@gmail.com
Richard Ferdig
Research Center for Educational Technology, Kent State University, United States // rferdig@gmail.com
Regina Kaplan-Rakowski
Department of Learning Technologies, University of North Texas, United States // Regina.Kaplan-Rakowski@unt.edu
ABSTRACT:
Generative Artificial Intelligence (GAI) tools, such as ChatGPT, have been rapidly revolutionizing the world in many contexts, including education. This study reports on early adopters’ perspectives, attitudes, sentiments, and discourses regarding the general and educational use of ChatGPT using X (formerly Twitter) data. Text mining, sentiment analysis, and topic modeling techniques were used to analyze the data. The results revealed the vast applicability of GAI tools and the versatility of ChatGPT for use in different domains and by users of diverse backgrounds and expertise. The use of GAI, specifically ChatGPT, was highlighted in the areas of education, enterprises, cybersecurity, marketing and content creation, entertainment, virtual assistants, chatbots, and investment. Positive sentiments significantly outnumbered negative ones, and the public mostly expressed trust, anticipation, and joy when referring to both the general and educational use of ChatGPT. A statistically significant difference was observed, with the educational dataset containing considerably fewer neutral tweets. It can therefore be inferred that educators were more expressive and emotional when discussing the adoption and use of new technologies in their classrooms and in educational settings. The study findings have two key implications. First, social media, such as Twitter, may be useful for discussing and analyzing new innovations, such as ChatGPT. Second, this study reveals a disparity between the general public and the education sector in their perspectives on innovations, with educators exhibiting more extreme emotions. This finding raises concerns about the digital divide and the potential for educators to act impulsively based on emotions.
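A minimal sketch of the lexicon-based flavor of sentiment analysis used in studies like this one (with tiny invented lexicons and invented tweets; the authors’ actual pipeline and lexical resources are not specified here — real analyses use validated resources such as the NRC emotion lexicon):

```python
# Tiny illustrative lexicons; a real study would use a validated resource.
POSITIVE = {"great", "helpful", "trust", "joy", "love", "useful"}
NEGATIVE = {"worried", "fear", "bad", "cheating", "risk", "hate"}

def label_sentiment(tweet):
    """Label a tweet positive/negative/neutral by lexicon word counts."""
    words = tweet.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

tweets = [
    "ChatGPT is so helpful for lesson planning, love it!",
    "Worried about cheating with ChatGPT in exams.",
    "Tried ChatGPT in class today.",
]
print([label_sentiment(t) for t in tweets])
```

The study’s comparison of neutral-tweet proportions between the general and educational datasets would then reduce to counting these labels per dataset and testing the difference statistically.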
Keywords:
ChatGPT, Education, Artificial intelligence, Generative artificial intelligence, Chatbot
Cite as: Lampropoulos, G., Ferdig, R., & Kaplan-Rakowski, R. (2025). A social media data analysis of general and educational use of ChatGPT: Understanding emotional educators through Twitter data. Educational Technology & Society, 28(3), 51-65. https://doi.org/10.30191/ETS.202507_28(3).SP05
Published December 15, 2024
Effects of a GenAI-based debugging approach integrating the reflective strategy on senior high school students’ learning performance and computational thinking
Jian-Wen Fang, Jing Chen, Qiu-Lin Weng, Yun-Fang Tu, Gwo-Jen Hwang and Yi-Chen Xia
Jian-Wen Fang
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // beginnerfjw@163.com
Jing Chen
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // 2594552593@qq.com
Qiu-Lin Weng
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // 17630909926@163.com
Yun-Fang Tu
Empower Vocational Education Research Center, National Taiwan University of Science and Technology, Taipei City, Taiwan // sandy0692@gmail.com
Gwo-Jen Hwang
Graduate Institute of Educational Information and Measurement, National Taichung University of Education, Taiwan // Graduate Institute of Digital Learning and Education, National Taiwan University of Science and Technology, Taiwan // Yuan Ze University, Taoyuan, Taiwan // gjhwang.academic@gmail.com
Yi-Chen Xia
Winchester School of Art, University of Southampton, United Kingdom // friedhelm3321@gmail.com
ABSTRACT:
Debugging constitutes a pivotal component of learning to program, serving not only to enhance coding proficiency but also to cultivate thinking skills. However, debugging tools embedded in integrated development environments (IDEs) often provide limited error diagnosis, which may reduce students’ engagement with coding and inhibit their learning performance. This study therefore proposed a Reflective Generative Artificial Intelligence (GenAI) Debugging (RGD) approach, an innovative debugging approach based on reflective strategies and operationalized through a GenAI tool, an intelligent conversational agent. To assess the performance of the approach, we used a quasi-experimental design and recruited 80 high school students from two classes in a province in eastern China. One class of 40 students was selected as the experimental group using the RGD approach; the other class of 40 students was the control group using the conventional coding and debugging (CGD) approach. The results showed that the RGD approach enhanced students’ learning achievement more than the CGD approach did. A significant difference was also found in the two groups’ computational thinking skills. The findings can serve as a reference for instructors and researchers intending to use GenAI in programming classes.
Keywords:
Generative AI, Computer programming, Computational thinking, Reflective strategy
Cite as: Fang, J.-W., Chen, J., Weng, Q.-L., Tu, Y.-F., Hwang, G.-J., & Xia, Y.-C. (2025). Effects of a GenAI-based debugging approach integrating the reflective strategy on senior high school students’ learning performance and computational thinking. Educational Technology & Society, 28(3), 66-81. https://doi.org/10.30191/ETS.202507_28(3).SP06
Published December 15, 2024
Improving students’ learning performance and summarization ability through a Generative AI-enabled Chatbot
Anna Y. Q. Huang, Chien-Chang Lin, Sheng-Yi Su, Ruo-Xuan Yen and Stephen J. H. Yang
Anna Y. Q. Huang
Department of Computer Science and Information Engineering, National Central University, Taiwan // anna.yuqing@gmail.com
Chien-Chang Lin
Department of Computer Science and Information Engineering, National Central University, Taiwan // cclin3123@gmail.com
Sheng-Yi Su
Department of Computer Science and Information Engineering, National Central University, Taiwan // zxca497@gmail.com
Ruo-Xuan Yen
Department of Computer Science and Information Engineering, National Central University, Taiwan // apple310565@gmail.com
Stephen J. H. Yang
Department of Computer Science and Information Engineering, National Central University, Taiwan // stephen.yang.ac@gmail.com
ABSTRACT:
Previous studies have demonstrated that summarization activities effectively enhance students’ reading comprehension and learning performance. With the ongoing surge in the application of artificial intelligence in education, this study introduces a chatbot with generative artificial intelligence (GenAI) to assist students in reviewing Python programming concepts by refining their summaries after class. This study thus aims to assess the efficacy of the proposed GenAI-enabled chatbot in enhancing the summarization ability and learning performance of novice programmers in college. Experimental results reveal that the GenAI-enabled chatbot significantly improves not only students’ programming performance but also their summarization ability. Notably, students’ summarization proficiency was not only elevated from the outset but also progressed consistently until the end. This study also identified that understanding the Python concepts of input/output operations and lists predicts learning performance in programming concepts and coding, respectively. This emphasizes the significance of fundamental programming concepts and a nuanced understanding of advanced data types in learning to program.
Keywords:
Artificial intelligence in education, Natural language processing, Data science applications in education, Educational data mining
Cite as: Huang, A. Y. Q., Lin, C.-C., Su, S.-Y., Yen, R.-X., & Yang, S. J. H. (2025). Improving students’ learning performance and summarization ability through a Generative AI-enabled Chatbot. Educational Technology & Society, 28(3), 82-111. https://doi.org/10.30191/ETS.202507_28(3).SP07
Published December 15, 2024
Towards a large-language-model-based chatbot system to automatically monitor student goal setting and planning in online learning
Khe Foon Hew, Weijiao Huang, Sikai Wang, Xinyi Luo and Donn Emmanuel Gonda
Khe Foon Hew
The University of Hong Kong, Hong Kong SAR // kfhew@hku.hk
Weijiao Huang
The University of Hong Kong, Hong Kong SAR // wjhuang1@connect.hku.hk
Sikai Wang
The University of Hong Kong, Hong Kong SAR // sikaiw@connect.hku.hk
Xinyi Luo
The University of Hong Kong, Hong Kong SAR // xinyiluo@hku.hk
Donn Emmanuel Gonda
The University of Hong Kong, Hong Kong SAR // dgonda@hku.hk
ABSTRACT:
Despite the prevalence of online learning, the lack of student self-regulated learning (SRL) skills continues to be a persistent issue. To support students’ SRL, teachers can prompt them with SRL-related questions and provide timely, personalized feedback. Providing timely, personalized feedback to each student in large classes, however, can be labor-intensive for teachers. This two-stage study offers a novel contribution by developing a Large Language Model (LLM)-based chatbot system that can automatically monitor students’ goal setting and planning in online learning. Goal setting and planning are two important skills that can occur in all SRL phases. In stage 1, we developed the Goal-And-Plan-Mentor ChatGPT system (GoalPlanMentor) by creating an SRL knowledge base with goal and plan indicators, using Memory-Augmented-Prompts to automatically detect student goals and plans, and providing personalized feedback. In stage 2, we compared the accuracy of GoalPlanMentor’s detection (coding) of students’ goals and plans with that of human coders, examined the quality of GoalPlanMentor’s feedback, and investigated students’ perceptions of the usefulness of GoalPlanMentor. Results show substantial to near-perfect agreement between GoalPlanMentor’s and the human coders’ coding, and high quality of GoalPlanMentor’s feedback in terms of providing clear directions for improvement. Overall, students perceived GoalPlanMentor to be useful in setting their goals and plans, with average ratings significantly higher than the midpoint of the scale. Students who rated the system’s usefulness for goal setting highly exhibited significantly greater learning achievement than those who rated it low. Implications for future research are discussed.
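Agreement between an automated coder and human coders, as reported in this abstract, is commonly quantified with Cohen’s kappa, where values above roughly 0.6 are often read as substantial agreement. A minimal sketch with invented labels (not the study’s data or its actual coding scheme):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement from each coder's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented labels: how each student message was coded by a human vs. the system.
human = ["goal", "goal", "plan", "none", "goal", "plan", "none", "goal"]
model = ["goal", "goal", "plan", "none", "plan", "plan", "none", "goal"]
print(round(cohens_kappa(human, model), 3))
```

Unlike raw percent agreement (7/8 here), kappa discounts the agreement two coders would reach by chance given their label frequencies.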
Keywords:
Generative artificial intelligence, Chatbot, Self-regulated learning, Online learning, Large language models
Cite as: Hew, K. F., Huang, W., Wang, S., Luo, X., & Gonda, D. E. (2025). Towards a large-language-model-based chatbot system to automatically monitor student goal setting and planning in online learning. Educational Technology & Society, 28(3), 112-132. https://doi.org/10.30191/ETS.202507_28(3).SP08
Published December 17, 2024
ChatGPT-assisted collaborative argumentation: Effects of role-playing prompts on students’ argumentation outcomes, processes, and perceptions
Zhi-Qiang Ma, Xin Cui, Wen-ping Liu, Yun-Fang Tu and Gwo-Jen Hwang
Zhi-Qiang Ma
Jiangsu Research Center of “Internet Plus Education”, Jiangnan University, Wuxi City, China // mzq1213@jiangnan.edu.cn
Xin Cui
School of Design, Jiangnan University, Wuxi City, China // cuixin960220@163.com
Wen-ping Liu
Jiangsu Research Center of “Internet Plus Education”, Jiangnan University, Wuxi City, China // 771358712@qq.com
Yun-Fang Tu
Department of Educational Technology, Wenzhou University, Wenzhou, China // sandy0692@gmail.com
Gwo-Jen Hwang
Graduate Institute of Educational Information and Measurement, National Taichung University of Education, Taichung City, Taiwan // Graduate Institute of Digital Learning and Education, National Taiwan University of Science and Technology, Taipei City, Taiwan // Yuan Ze University, Taoyuan City, Taiwan // gjhwang.academic@gmail.com
ABSTRACT:
In traditional collaborative argumentation activities, students often struggle to present arguments from diverse perspectives. ChatGPT is capable of understanding user prompts and generating corresponding responses, and it can play different roles with diverse backgrounds to argue with students, creating the possibility of improving the quality of their argumentation. However, for ChatGPT’s responses to work well for argumentation, students need to give appropriate prompts. Therefore, this study proposed the role-playing prompt-based ChatGPT-assisted Collaborative Argumentation (CaCA) approach, and a quasi-experiment was conducted to examine its effects on students’ argumentation outcomes, processes, and perceptions. Sixty-six first-year graduate students engaged in this experiment: the experimental group adopted the role-playing prompt-based CaCA approach, while the control group adopted the conventional CaCA approach. Results showed that the role-playing prompt-based CaCA approach broadened students’ perspectives in their arguments and increased the connections between data and claims, forming chains of arguments centered on warrant and backing in their discourse. However, it did not significantly enhance their ability to edit ideas deeply or increase their willingness to give rebuttals. This research provides new insights into the application of ChatGPT in a micro-level collaborative argumentation context.
Keywords:
ChatGPT in education, Generative artificial intelligence, Educational prompt engineering, Chatbot, Epistemic network analysis
Cite as: Ma, Z.-Q., Cui, X., Liu, W.-P., Tu, Y.-F., & Hwang, G.-J. (2025). ChatGPT-assisted collaborative argumentation: Effects of role-playing prompts on students’ argumentation outcomes, processes, and perceptions. Educational Technology & Society, 28(3), 133-150. https://doi.org/10.30191/ETS.202507_28(3).SP09
Published December 17, 2024
University students’ perception of learning with generative artificial intelligence
Ting Tian, Ching Sing Chai, Mei-Hwa Chen and Jyh-Chong Liang
Ting Tian
Department of Curriculum and Instruction, Faculty of Education, The Chinese University of Hong Kong, Hong Kong // tianting@link.cuhk.edu.hk
Ching Sing Chai
Department of Curriculum and Instruction, Faculty of Education, The Chinese University of Hong Kong, Hong Kong // CSChai@cuhk.edu.hk
Mei-Hwa Chen
Department of Computer Science, College of Nanotechnology, Science, and Engineering, University at Albany, State University of New York // mchen@albany.edu
Jyh-Chong Liang
Program of Learning Sciences, School of Learning Informatics, National Taiwan Normal University, Taiwan // aljc@ntnu.edu.tw
ABSTRACT:
The rapid development of chatbots undergirded by large language models calls for teachers and students to explore using chatbots for educational purposes. Given this new phenomenon, education researchers need to explore students’ responses to chatbots to comprehensively understand their experiences. This study adopted purposive sampling to interview Taiwanese university students (N = 17) who had used ChatGPT for a semester in an open elective course. Transcripts were analyzed using the grounded theory approach. Findings indicate that while the challenges posed by ChatGPT perturbed the students, their actual interactions with the chatbot revealed that it lacks some vital epistemic capacities: ChatGPT cannot discern what is true and cannot experience the world as humans do. Hence, while it can assist humans in some epistemic endeavors, the students concluded that they need to assume epistemic agency in its use. The findings imply that educators may need to design academic tasks that further develop undergraduates’ epistemic agency for the critical and creative use of chatbot-generated artifacts.
Keywords:
Generative artificial intelligence, Epistemic agency, Grounded theory, University students
Cite as: Tian, T., Chai, C. S., Chen, M.-H., & Liang, J.-C. (2025). University students’ perception of learning with generative artificial intelligence. Educational Technology & Society, 28(3), 151-165. https://doi.org/10.30191/ETS.202507_28(3).SP10
Published December 23, 2024
Integrating image-generative AI into conceptual design in computer-aided design education: Exploring student perceptions, prompt behaviors, and artifact creativity
Wangda Zhu, Wanli Xing, Eddy Man Kim, Chenglu Li, Yuanzhi Wang, Yoon Yang and Zifeng Liu
Wangda Zhu
University of Florida, USA // wangdazhu@ufl.edu
Wanli Xing
University of Florida, USA // wanli.xing@coe.ufl.edu
Eddy Man Kim
Cornell University, USA // mk369@cornell.edu
Chenglu Li
University of Utah, USA // chenglu.li@utah.edu
Yuanzhi Wang
Cornell University, USA // yw529@cornell.edu
Yoon Yang
Cornell University, USA // yy825@cornell.edu
Zifeng Liu
University of Florida, USA // liuzifeng@ufl.edu
ABSTRACT:
Although image-generative AI (GAI) has sparked heated discussion among engineers and designers, its role in computer-aided design (CAD) education, particularly during the conceptual design phase, remains insufficiently explored. To address this, we examined the integration of GAI into the early stages of design in a CAD class. Specifically, we conducted an in-class workshop introducing Midjourney for conceptual design and released a home assignment on mood board design for hands-on GAI design practice. Twenty students from a CAD class at a research-intensive university completed the workshop and assignment. We collected and analyzed data from surveys, students’ prompts, and design artifacts to explore their perceptions of GAI, prompt behaviors, and design creativity, and conducted a correlation analysis between these variables. After the workshop, students rated GAI as significantly more useful and user-friendly in design and found it more supportive of design efficiency and aesthetics, while we did not find a significant difference in design creativity. By analyzing 365 prompts used to complete the home tasks, we identified three types of prompt behaviors (generation, modification, and selection) and classified three types of workflows (exploration, two-step, and multi-step). The correlation analysis showed that changes in design creativity were significantly and positively correlated with prompt behaviors: students who used more multi-step prompts produced more creative artifacts based on the instructors’ evaluation. This exploratory study offers valuable insights into the integration of GAI in CAD education and suggests potential directions for future GAI curricula and tools in design education.
Keywords:
Image-generative AI, Design education, Perceptions, Prompt behaviors, Creativity
Cite as: Zhu, W., Xing, W., Kim, E. M., Li, C., Wang, Y., Yang, Y., & Liu, Z. (2025). Integrating image-generative AI into conceptual design in computer-aided design education: Exploring student perceptions, prompt behaviors, and artifact creativity. Educational Technology & Society, 28(3), 166-183. https://doi.org/10.30191/ETS.202507_28(3).SP11
Published December 23, 2024
Theme-Based Articles
Generative artificial intelligence in education: Theories, technologies, and applications
Fostering pre-service teachers’ generative AI literacy and critical thinking: An RSCQA approach
Xiao-Pei Meng, Xiao-Ge Guo, Jian-Wen Fang, Jing Chen and Le-Fu Huang
Xiao-Pei Meng
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // 15610147930@163.com
Xiao-Ge Guo
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // 1342777584@qq.com
Jian-Wen Fang
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // beginnerfjw@163.com
Jing Chen
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // 2594552593@qq.com
Le-Fu Huang
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // huanglefu123@163.com
ABSTRACT:
With the wide application of Generative Artificial Intelligence (GenAI) technology in society, GenAI literacy and critical thinking are considered vital competencies for future success. As future teachers, pre-service teachers (PSTs) must possess these key competencies. Previous research has pointed out that although learning and applying GenAI knowledge and technology can improve PSTs’ GenAI literacy, the results have not been as expected. This study incorporated a KWL (Know, Want, Learned)-based reflection strategy into the SCQA (Situation, Complication, Question, and Answer) model and proposed an RSCQA (Reflection, Situation, Complication, Question, and Answer) approach, which aims to help PSTs develop GenAI literacy and critical thinking. A quasi-experiment was designed to verify the validity of the RSCQA approach, which was used by the experimental group, while the conventional SCQA (CSCQA) approach was used by the control group. Results demonstrated that the RSCQA approach improved PSTs’ GenAI literacy and critical thinking, and that they excelled in multimodal demonstration capability. This study also conducted an epistemic network analysis (ENA) of the content of PSTs’ reflections, which demonstrated that reflective strategies help PSTs engage in higher-order cognitive activities. Qualitative interviews further indicated that the reflective strategy enhanced PSTs’ learning effectiveness and strengthened their critical thinking. Overall, the RSCQA approach helps PSTs engage in reflective learning and perform better in GenAI literacy and critical thinking. The results provide practical experience and operational examples for PST cultivation.
Keywords:
Artificial Intelligence in Education, Critical thinking, Teacher training, Information literacy, Generative artificial intelligence
Cite as: Meng, X.-P., Guo, X.-G., Fang, J.-W., Chen, J., & Huang, L.-F. (2025). Fostering pre-service teachers’ generative AI literacy and critical thinking: An RSCQA approach. Educational Technology & Society, 28(3). https://doi.org/10.30191/ETS.202507_28(3).TP01
Published December 17, 2024
Evaluating the effects of Generative AI on student learning outcomes: Insights from a meta-analysis
De-Xin Hu, Dan-Dan Pang and Zhe Xing
De-Xin Hu
School of Education, Tianjin University, Tianjin, China // hudx@tju.edu.cn
Dan-Dan Pang
School of Education, Tianjin University, Tianjin, China // pdd_0718@tju.edu.cn
Zhe Xing
School of Education, Tianjin University, Tianjin, China // 2022212035@tju.edu.cn
ABSTRACT:
The emergence of Generative AI technologies, represented by ChatGPT, has triggered extensive discussion among scholars in the education sector. While relevant research continues to grow, a comprehensive, systematic measurement of the effects of Generative AI on student learning outcomes is still lacking. This study employs meta-analysis to integrate findings from previous experimental and quasi-experimental research to evaluate the impact of Generative AI on student learning outcomes. The analysis of 44 effect sizes from 21 independent studies indicates that Generative AI tools, compared to traditional AI tools or no intervention, moderately enhance student learning outcomes (g = 0.572). These tools significantly improve the cognitive (g = 0.604), behavioral (g = 0.698), and affective (g = 0.478) dimensions of learning outcomes. In addition, the study identifies and examines six potential moderating variables: educational level, sample size, subject area, teaching model, intervention duration, and assessment instrument. The moderation tests reveal that sample size and assessment instrument significantly influence the effectiveness of Generative AI. For sample size, the effect of Generative AI on small samples (g = 1.216) is greater than on medium (g = 0.476) and large samples (g = 0.547). For assessment instrument, the effect on self-developed tests (g = 0.984) is greater than on standardized tests (g = 0.557). The results indicate that the use of Generative AI should be supplemented with detailed guidance and flexible strategies. Specific recommendations for future research and practical implementation of Generative AI in education are discussed.
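The effect sizes reported in this abstract are Hedges’ g values: Cohen’s d with a small-sample bias correction. A minimal sketch of how g is computed from two groups’ summary statistics (the group numbers below are invented for illustration, not taken from the reviewed studies):

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Hedges' g: standardized mean difference with small-sample correction J."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    d = (mean_t - mean_c) / pooled_sd          # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)          # bias-correction factor
    return d * j

# Invented statistics for a hypothetical GenAI vs. control comparison.
g = hedges_g(mean_t=82.0, sd_t=8.0, n_t=30, mean_c=76.0, sd_c=9.0, n_c=30)
print(round(g, 3))
```

Because the correction factor J shrinks toward 1 as samples grow, g and d are nearly identical for large studies; the correction mainly matters for the small samples that, per this meta-analysis, show the largest effects.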
Keywords:
Education, Generative artificial intelligence, Generative AI, Learning outcomes, Meta-analysis
Cite as: Hu, D.-X., Pang, D.-D., & Xing, Z. (2025). Evaluating the effects of Generative AI on student learning outcomes: Insights from a meta-analysis. Educational Technology & Society, 28(3). https://doi.org/10.30191/ETS.202507_28(3).TP02
Published December 17, 2024
Consolidate knowledge or build scientific models? The role of online information-searching strategies in students’ prompt sequences with Generative Artificial Intelligence
Chiu-Lin Lai
Department of Education, National Taipei University of Education, Taiwan // jolen761002@gmail.com
ABSTRACT:
Conversing with generative artificial intelligence (GAI) has been recognized as a way of obtaining information. Researchers have noticed that students’ ability to search, select, and evaluate information (i.e., their online information-searching strategies) may affect the quality of their learning and discussion with GAI. In this study, we conducted an experiment in an online science course to explore the role of online information-searching strategies (OISS) in the process of students discussing with GAI. A total of 46 high school students participated in the study. In the course, students wrote question prompts to the GAI, accessed information provided by the GAI, and composed their science reports. We collected the students’ OISS tendency questionnaire, science reports, and the question prompts they wrote to the GAI. According to the results, students with higher OISS outperformed those with lower OISS in terms of content accuracy and logical descriptions in their science reports. The ordered network analysis (ONA) results showed a significant difference in the prompting sequences of the two groups of students. Students with higher OISS gradually developed their knowledge models of scientific concepts by organizing information and finding connections or inconsistencies among different types of knowledge. Students with lower OISS focused more on content extension and consolidation of independent expertise. We labeled the higher-OISS students as holism learners, while the lower-OISS students were labeled as atomism learners. Lastly, the study findings underscore the importance of guiding students to comprehensively structure and synthesize their knowledge within GAI-based learning environments.
Keywords:
Generative artificial intelligence, Online information-searching strategies, Question prompts, Science education, Holism/Atomism learning
Cite as: Lai, C.-L. (2025). Consolidate knowledge or build scientific models? The role of online information-searching strategies in students’ prompt sequences with Generative Artificial Intelligence. Educational Technology & Society, 28(3). https://doi.org/10.30191/ETS.202507_28(3).TP03
Published December 17, 2024
Seyfullah Gökoğlu and Fatih Erdoğdu
Seyfullah Gökoğlu
Bartın University, Türkiye // gokogluseyfullah@gmail.com
Fatih Erdoğdu
Zonguldak Bülent Ecevit University, Türkiye // fatiherdogdu67@gmail.com
ABSTRACT:
Artificial Intelligence (AI) has become increasingly prevalent in education, and Generative AI (GenAI) has emerged as a new concept in AI technology. Educators continue to debate the advantages and disadvantages of GenAI in education. To guide these discussions, further analysis is needed to determine the impact of GenAI on educational outcomes. This study aims to investigate the effect of GenAI on learning performance using the meta-analysis method. This meta-analysis synthesized the results of 31 articles involving 2,646 participants. The results show that GenAI has a moderately positive effect on learning performance (g = .689), and no publication bias was detected in assessing the validity of this effect. The analysis included the effect sizes of eight moderator variables: sample level, sample size, research design, learning domain, research setting, intervention duration, GenAI tool, and testing format. Only intervention duration, GenAI tool, and testing format significantly moderated the effectiveness of GenAI on learning performance.
Keywords:
GenAI, Learning performance, Meta-analysis, Effect size
Cite as: Gökoğlu, S., & Erdoğdu, F. (2025). The effects of GenAI on learning performance: A meta-analysis study. Educational Technology & Society, 28(3). https://doi.org/10.30191/ETS.202507_28(3).TP04
Published December 17, 2024
Leisi Pei, Morris Siu-Yung Jong, Biyun Huang, Wai-Chung Pang and Junjie Shang
Leisi Pei
Department of Curriculum and Instruction, Faculty of Education and Human Development, The Education University of Hong Kong, Hong Kong SAR // lpei@eduhk.hk
Morris Siu-Yung Jong
Centre for Learning Sciences and Technologies, and Department of Curriculum and Instruction, The Chinese University of Hong Kong, Hong Kong SAR // mjong@cuhk.edu.hk
Biyun Huang
School of Education, City University of Macau, Macau SAR // byhuang@cityu.edu.mo
Wai-Chung Pang
Division of English Education, HKFYG Lee Shau Kee College, Hong Kong SAR // wcpang@hlc.edu.hk
Junjie Shang
Lab of Learning Sciences, Graduate School of Education, Peking University, China // jjshang@pku.edu.cn
ABSTRACT:
ChatGPT, which is built on generative artificial intelligence (AI) technology, has garnered exponential attention worldwide since its release in 2022. While a growing number of studies have probed the potential benefits and challenges of ChatGPT in education, most confine their focus to informal learning and higher education. The formal adoption of ChatGPT in authentic classroom settings, especially in K-12 contexts, remains underexplored. In view of this gap, we conducted a quasi-experiment to study the effects of integrating ChatGPT into a compulsory Grade-10 English as a foreign language (EFL) writing course in a Hong Kong secondary school. The participants (99 Grade-10 students) were divided into a treatment group and a control group, which used ChatGPT and conventional media, respectively, as the tool for EFL writing instruction. The analysis of the participants’ EFL writing performance at the end of the experiment shows that the treatment group outperformed the control group. Further, there was a significant interaction effect between the participants’ assigned group and their baseline level of EFL proficiency: compared to the high-achieving participants, low-achieving ones tended to benefit more from ChatGPT-supported EFL writing instruction. These findings together suggest that ChatGPT has the potential to be applied in formal EFL writing instruction in K-12 classroom settings.
Keywords:
Artificial intelligence (AI), ChatGPT, English as a foreign language (EFL), EFL writing instruction, K-12 education, Task-based language teaching (TBLT)
Cite as: Pei, L., Jong, M. S.-Y., Huang, B., Pang, W.-C., & Shang, J. (2025). Formally integrating Generative AI into secondary education: Application of ChatGPT in EFL writing instruction. Educational Technology & Society, 28(3). https://doi.org/10.30191/ETS.202507_28(3).TP05
Published December 17, 2024
An empirical study of the effects of peer intelligent agent instructional strategy on EFL students' oral English learning performance, cognitive
Gang Yang and Qun-Fang Zeng
Full Length Articles
Starting from Volume 17 Issue 4, all published articles of the journal of Educational Technology & Society are available under Creative Commons CC-BY-ND-NC 3.0 license.