Special Issue on "The application and research of generative AI in education"
Guest Editor(s): Jiun-Yu Wu, Morris Siu-Yung Jong and Oi-Man Kwok
Guest editorial: The application and research of generative AI in education
Jiun-Yu Wu, Morris Siu-Yung Jong and Oi-Man Kwok
Lucas Kohnke
Department of English Language Education, The Education University of Hong Kong, Hong Kong, China // lucaskohnke@gmail.com, lmakohnke@eduhk.hk
Di Zou
Department of English and Communication, The Hong Kong Polytechnic University, Hong Kong, China // dizoudaisy@gmail.com, dizou@ln.edu.hk
ABSTRACT:
This study explored the impact of ChatGPT on classroom teaching and lesson preparation through the experiences of 12 English language teachers at a Hong Kong university. The primary research method, qualitative interviews, provided critical insights into the intersection of artificial intelligence (AI) and pedagogy. Four themes emerged: increased reliance on ChatGPT for lesson planning due to its convenience; the transformation of teaching methodologies, with ChatGPT becoming an important tool; the challenges presented by ChatGPT (inappropriate content; the risk of becoming over-reliant on it); and the new opportunities that ChatGPT offers for differentiated instruction and customised assessment. While ChatGPT significantly reshaped teaching practice, it was clear that systematic training was needed to allow teachers to leverage its benefits fully. Addressing challenges such as adjusting content to student abilities and balancing AI with other resources is vital to the optimisation of AI in pedagogy. This study underscores the need for continuous professional development, balanced resource management and curriculum revisions to incorporate AI in the teaching of English; encourages the exploration of innovative AI-enhanced pedagogies; contributes to the understanding of how AI is reshaping pedagogical practice; and offers practical recommendations for educators navigating the integration of AI into their teaching processes.
Keywords:
ChatGPT, Generative Artificial Intelligence, English language education, Lesson planning
Yulei Ye, Hanglei Hu and Bo Jiang
Yulei Ye
Department of Educational Information Technology, East China Normal University // 51254108022@stu.edu.ecnu.cn
Hanglei Hu
Department of Educational Information Technology, East China Normal University // 51254108023@stu.edu.ecnu.cn
Bo Jiang
Lab of Artificial Intelligence for Education, School of Computer Science, East China Normal University // bjiang.zh@gmail.com
ABSTRACT:
Competency is a broad concept in education, often described in unstructured natural language within curriculum frameworks, which makes it difficult to assess. Constructing a competency ontology mitigates this drawback by delineating the concept’s boundary and internal structure. However, ontology construction is time-consuming work that requires substantial effort from several domain experts. Inspired by the powerful language understanding capability of large language models (LLMs) such as the GPT models, this work proposed a human-AI collaboration approach to accelerate the construction of competency ontologies. This paper demonstrates how to extract knowledge concepts, relations and entities from a Chinese national computer science curriculum framework to construct a competency ontology and an ontology-based knowledge graph in collaboration with a GPT model. We evaluate the GPT model’s extraction results by comparing them with manual results and those of other LLMs. The outcomes of the GPT-4o model demonstrate that it can cover at least three-quarters of the human effort in extraction, showcasing its strong suitability for ontology and ontology-based knowledge graph construction.
Keywords:
Knowledge graph, Ontology, Human-AI collaboration, ChatGPT, Competency
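The workflow this abstract describes (prompting a GPT model to extract concepts, relations, and entities from curriculum text, then assembling them into a knowledge graph) can be sketched minimally. The JSON triple schema, the example triples, and the function names below are illustrative assumptions for this sketch, not the authors’ actual prompts or data format.

```python
import json

def parse_triples(llm_response: str):
    """Parse an LLM's JSON list of extracted triples.

    Assumes (hypothetically) that the model was prompted to answer with a
    JSON array of {"head", "relation", "tail"} objects.
    """
    triples = json.loads(llm_response)
    return [(t["head"], t["relation"], t["tail"]) for t in triples]

def build_graph(triples):
    """Index triples into an adjacency map: head -> [(relation, tail), ...]."""
    graph = {}
    for head, rel, tail in triples:
        graph.setdefault(head, []).append((rel, tail))
    return graph

# A response an extraction prompt might plausibly return for a CS curriculum.
response = json.dumps([
    {"head": "algorithm", "relation": "is_a", "tail": "problem-solving method"},
    {"head": "recursion", "relation": "part_of", "tail": "algorithm design"},
])
graph = build_graph(parse_triples(response))
```

In a real pipeline the parsed triples would then be loaded into an ontology editor or graph database for expert review, which is where the human side of the human-AI collaboration enters.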
Arif Cem Topuz, Mine Yıldız, Elif Taşlıbeyaz, Hamza Polat and Engin Kurşun
Arif Cem Topuz
Ardahan University, Faculty of Engineering, Department of Computer Engineering, Türkiye // arifcemtopuz@ardahan.edu.tr
Mine Yıldız
Atatürk University, Kazim Karabekir Education Faculty, Department of Foreign Language Education, Türkiye // mine.yazici@atauni.edu.tr
Elif Taşlıbeyaz
Erzincan Binali Yıldırım University, Faculty of Education, Department of Computer Education and Instructional Technologies, Türkiye // etaslibeyaz@erzincan.edu.tr
Hamza Polat
Atatürk University, Faculty of Applied Sciences, Department of Information Sciences and Technologies, Türkiye // hamzapolat@atauni.edu.tr
Engin Kurşun
Atatürk University, Kazim Karabekir Education Faculty, Department of Computer Education and Instructional Technologies, Türkiye // ekursun@atauni.edu.tr
ABSTRACT:
Language teachers spend considerable time scoring students’ writing and may sometimes hesitate to provide reliable scores because essay scoring is so time-consuming. In this regard, AI-based Automated Essay Scoring (AES) systems have been used, and Generative AI (GenAI) has recently emerged with the potential to score essays. This study therefore focuses on the differences and relationships between human raters’ (HR) and GenAI scores for essays produced by English as a Foreign Language (EFL) learners. The data consisted of 210 essays produced by 35 undergraduate students. Two HRs and GenAI evaluated the essays using an analytical rubric divided into five factors: (1) ideas, (2) organization and coherence, (3) support, (4) style, and (5) mechanics. The study found significant differences between the scores given by the HRs and those generated by GenAI, as well as variations among the HRs themselves; nonetheless, GenAI’s scores were similar across dual evaluations. It was also noted that GenAI’s scores were statistically significantly lower than those of the HRs. On the other hand, HR scores correlated weakly with each other, while GenAI scores correlated strongly. A significant correlation was observed between HR-1 and GenAI across all factors, whereas HR-2 showed significant correlations with GenAI in only three factors. This study can therefore guide EFL teachers on how to reduce their workload in writing assessments by giving GenAI more responsibility in scoring essays. It also offers suggestions for future studies on AES based on its findings and limitations.
Keywords:
Generative artificial intelligence (GenAI), ChatGPT, Human raters (HR), Automated essay scoring (AES), Automated writing evaluation (AWE)
Georgios Lampropoulos, Richard Ferdig and Regina Kaplan-Rakowski
Georgios Lampropoulos
Department of Applied Informatics, University of Macedonia, Greece // Department of Education, University of Nicosia, Cyprus // lamprop.geo@gmail.com
Richard Ferdig
Research Center for Educational Technology, Kent State University, United States // rferdig@gmail.com
Regina Kaplan-Rakowski
Department of Learning Technologies, University of North Texas, United States // Regina.Kaplan-Rakowski@unt.edu
ABSTRACT:
Generative Artificial Intelligence (GAI) tools, such as ChatGPT, have been rapidly revolutionizing the world in many contexts, including education. This study reports on early adopters’ perspectives, attitudes, sentiments, and discourses regarding the general and educational use of ChatGPT, using data from X (formerly Twitter). Text mining, sentiment analysis, and topic modeling techniques were used to analyze the data. The results revealed the broad applicability of GAI tools and the versatility of ChatGPT across different domains and among users of diverse backgrounds and expertise. The use of GAI, specifically ChatGPT, was highlighted in the areas of education, enterprise, cybersecurity, marketing and content creation, entertainment, virtual assistants, chatbots, and investment. Positive sentiments significantly outnumbered negative ones, and the public mostly expressed trust, anticipation, and joy when referring to both the general and educational use of ChatGPT. A statistically significant difference was observed, with the educational dataset containing considerably fewer neutral tweets. It can therefore be inferred that educators were more expressive and emotional when discussing the adoption and use of new technologies in their classrooms and in educational settings. The findings have two key implications. First, social media platforms, such as X, may be useful for discussing and analyzing new innovations such as ChatGPT. Second, the study reveals a disparity between the general public and the education sector in their perspectives on innovations, with educators exhibiting more extreme emotions. This finding raises concerns about the digital divide and the potential for educators to act impulsively based on emotions.
Keywords:
ChatGPT, Education, Artificial intelligence, Generative artificial intelligence, Chatbot
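As a toy illustration of the sentiment-labeling step described in the abstract above, posts can be classified with a small polarity lexicon and the label distribution tallied. The lexicon, the example posts, and the three-way labeling rule here are hypothetical stand-ins; the study’s actual text-mining pipeline is not reproduced.

```python
from collections import Counter

# Hypothetical mini-lexicon; real sentiment analysis uses far larger resources.
POSITIVE = {"trust", "great", "love", "joy", "helpful"}
NEGATIVE = {"fear", "bad", "hate", "risk", "worry"}

def label_post(text: str) -> str:
    """Assign a coarse sentiment label by counting lexicon hits."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

posts = [
    "love using ChatGPT great for lesson plans",
    "worry about ChatGPT risk in exams",
    "ChatGPT released an update today",
]
distribution = Counter(label_post(p) for p in posts)
```

Comparing such distributions between a general and an education-specific dataset (e.g., with a chi-square test) mirrors the kind of contrast the study reports, such as the smaller share of neutral tweets in the educational data.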
Jian-Wen Fang, Jing Chen, Qiu-Lin Weng, Yun-Fang Tu, Gwo-Jen Hwang and Yi-Chen Xia
Jian-Wen Fang
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // beginnerfjw@163.com
Jing Chen
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // 2594552593@qq.com
Qiu-Lin Weng
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // 17630909926@163.com
Yun-Fang Tu
Empower Vocational Education Research Center, National Taiwan University of Science and Technology, Taipei City, Taiwan // sandy0692@gmail.com
Gwo-Jen Hwang
Graduate Institute of Educational Information and Measurement, National Taichung University of Education, Taiwan // Graduate Institute of Digital Learning and Education, National Taiwan University of Science and Technology, Taiwan // Yuan Ze University, Taoyuan, Taiwan // gjhwang.academic@gmail.com
Yi-Chen Xia
Winchester School of Art, University of Southampton, United Kingdom // friedhelm3321@gmail.com
ABSTRACT:
Debugging constitutes a pivotal component of learning to program, serving not only to enhance coding proficiency but also to cultivate thinking skills. However, debugging tools embedded in integrated development environments (IDEs) often provide limited error diagnosis, which may reduce students’ engagement with coding and inhibit their learning performance. This study therefore proposed a Reflective Generative Artificial Intelligence (GenAI) Debugging (RGD) approach, an innovative debugging approach based on reflective strategies and operationalized through a GenAI tool, an intelligent conversational agent. To assess the approach, we used a quasi-experimental design and recruited 80 high school students from two classes in a province in eastern China. One class of 40 students was selected as the experimental group using the RGD approach; the other class of 40 students was the control group using the conventional coding and debugging (CGD) approach. The results showed that the RGD group enhanced students’ learning achievement more than the CGD group did. A significant difference was also found in the two groups’ computational thinking skills. The findings can serve as a reference for instructors and researchers intending to use GenAI in programming classes.
Keywords:
Generative AI, Computer programming, Computational thinking, Reflective strategy
Anna Y. Q. Huang, Chien-Chang Lin, Sheng-Yi Su, Ruo-Xuan Yen and Stephen J. H. Yang
Anna Y. Q. Huang
Department of Computer Science and Information Engineering, National Central University, Taiwan // anna.yuqing@gmail.com
Chien-Chang Lin
Department of Computer Science and Information Engineering, National Central University, Taiwan // cclin3123@gmail.com
Sheng-Yi Su
Department of Computer Science and Information Engineering, National Central University, Taiwan // zxca497@gmail.com
Ruo-Xuan Yen
Department of Computer Science and Information Engineering, National Central University, Taiwan // apple310565@gmail.com
Stephen J. H. Yang
Department of Computer Science and Information Engineering, National Central University, Taiwan // stephen.yang.ac@gmail.com
ABSTRACT:
Previous studies have demonstrated that summarization activities effectively enhance students’ reading comprehension and learning performance. With the ongoing surge in applications of artificial intelligence in education, this study introduces a chatbot with generative artificial intelligence (GenAI) to help students review Python programming concepts by refining their summaries after class. The study assesses the efficacy of the proposed GenAI-enabled chatbot in enhancing the summarization ability and learning performance of novice programmers in college. Experimental results reveal that the GenAI-enabled chatbot significantly improves not only students’ programming performance but also their summarization ability. Notably, students’ summarization proficiency was not only elevated from the outset but also progressed consistently until the end of the course. The study also identified that understanding the Python concepts of input/output operations and lists serves as a predictor of learning performance in programming concepts and coding, respectively. This underscores the significance of fundamental programming concepts and a nuanced understanding of advanced data types in learning to program.
Keywords:
Artificial intelligence in education, Natural language processing, Data science applications in education, Educational data mining
Khe Foon Hew, Weijiao Huang, Sikai Wang, Xinyi Luo and Donn Emmanuel Gonda
Khe Foon Hew
The University of Hong Kong, Hong Kong SAR // kfhew@hku.hk
Weijiao Huang
The University of Hong Kong, Hong Kong SAR // wjhuang1@connect.hku.hk
Sikai Wang
The University of Hong Kong, Hong Kong SAR // sikaiw@connect.hku.hk
Xinyi Luo
The University of Hong Kong, Hong Kong SAR // xinyiluo@hku.hk
Donn Emmanuel Gonda
The University of Hong Kong, Hong Kong SAR // dgonda@hku.hk
ABSTRACT:
Despite the prevalence of online learning, the lack of student self-regulated learning (SRL) skills continues to be a persistent issue. To support students’ SRL, teachers can prompt them with SRL-related questions and provide timely, personalized feedback. Providing timely, personalized feedback to each student in large classes, however, can be labor-intensive for teachers. This two-stage study offers a novel contribution by developing a Large Language Model (LLM)-based chatbot system that can automatically monitor students’ goal setting and planning in online learning. Goal setting and planning are two important skills that can occur in all SRL phases. In stage 1, we developed the Goal-And-Plan-Mentor ChatGPT system (GoalPlanMentor) by creating an SRL knowledge base with goal and plan indicators, using Memory-Augmented-Prompts to automatically detect student goals and plans, and providing personalized feedback. In stage 2, we compared the accuracy of GoalPlanMentor’s detection (coding) of students’ goals and plans with that of human coders, examined the quality of GoalPlanMentor’s feedback, and surveyed students’ perceptions of its usefulness. Results show substantial to near-perfect agreement between GoalPlanMentor’s and the human coders’ coding, and high-quality feedback from GoalPlanMentor in terms of providing clear directions for improvement. Overall, students perceived GoalPlanMentor as useful for setting their goals and plans, with average ratings significantly higher than the midpoint of the scale. Students who rated the system’s usefulness for goal setting highly exhibited significantly greater learning achievement than those who rated it low. Implications for future research are discussed.
Keywords:
Generative artificial intelligence, Chatbot, Self-regulated learning, Online learning, Large language models
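The goal/plan detection step the abstract above describes can be caricatured with a rule-based indicator match. The indicator phrases and labels below are invented for illustration; the actual GoalPlanMentor system uses an LLM with Memory-Augmented-Prompts and an SRL knowledge base rather than fixed string matching.

```python
# Hypothetical indicator phrases; the study's SRL knowledge base is richer.
GOAL_INDICATORS = ("i want to", "my goal is", "i aim to")
PLAN_INDICATORS = ("i will", "every day", "my plan is")

def detect(message: str) -> set:
    """Label a student message with 'goal' and/or 'plan' via indicator matching."""
    text = message.lower()
    labels = set()
    if any(phrase in text for phrase in GOAL_INDICATORS):
        labels.add("goal")
    if any(phrase in text for phrase in PLAN_INDICATORS):
        labels.add("plan")
    return labels

msg = "My goal is to pass the exam, and I will study two hours every day."
labels = detect(msg)
```

Comparing such automatic labels against human coders’ labels (e.g., with Cohen’s kappa) is the kind of agreement check stage 2 of the study performs, albeit on LLM output rather than keyword rules.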
Zhi-Qiang Ma, Xin Cui, Wen-ping Liu, Yun-Fang Tu and Gwo-Jen Hwang
Zhi-Qiang Ma
Jiangsu Research Center of “Internet Plus Education”, Jiangnan University, Wuxi City, China // mzq1213@jiangnan.edu.cn
Xin Cui
School of Design, Jiangnan University, Wuxi City, China // cuixin960220@163.com
Wen-ping Liu
Jiangsu Research Center of “Internet Plus Education”, Jiangnan University, Wuxi City, China // 771358712@qq.com
Yun-Fang Tu
Department of Educational Technology, Wenzhou University, Wenzhou, China // sandy0692@gmail.com
Gwo-Jen Hwang
Graduate Institute of Educational Information and Measurement, National Taichung University of Education, Taichung City, Taiwan // Graduate Institute of Digital Learning and Education, National Taiwan University of Science and Technology, Taipei City, Taiwan // Yuan Ze University, Taoyuan City, Taiwan // gjhwang.academic@gmail.com
ABSTRACT:
In traditional collaborative argumentation activities, students often struggle to present arguments from diverse perspectives. ChatGPT can understand user prompts, generate corresponding responses, and play different roles with diverse backgrounds to argue with students, creating the possibility of improving the quality of their argumentation. However, for ChatGPT’s responses to work well for argumentation, students need to give appropriate prompts. Therefore, this study proposed the role-playing prompt-based ChatGPT-assisted Collaborative Argumentation (CaCA) approach, and a quasi-experiment was conducted to examine its effects on students’ argumentation outcomes, processes, and perceptions. Sixty-six first-year graduate students took part: the experimental group adopted the role-playing prompt-based CaCA approach, while the control group adopted the conventional CaCA approach. Results showed that the role-playing prompt-based CaCA approach broadened students’ perspectives in their arguments and increased the connections between data and claims, forming chains of arguments centered on warrant and backing in their discourse. However, it did not significantly enhance their ability to edit ideas deeply or increase their willingness to give rebuttals. This research provides new insights into the application of ChatGPT in a micro-level collaborative argumentation context.
Keywords:
ChatGPT in education, Generative artificial intelligence, Educational prompt engineering, Chatbot, Epistemic network analysis
Ting Tian, Ching Sing Chai, Mei-Hwa Chen and Jyh-Chong Liang
Ting Tian
Department of Curriculum and Instruction, Faculty of Education, The Chinese University of Hong Kong, Hong Kong // tianting@link.cuhk.edu.hk
Ching Sing Chai
Department of Curriculum and Instruction, Faculty of Education, The Chinese University of Hong Kong, Hong Kong // CSChai@cuhk.edu.hk
Mei-Hwa Chen
Department of Computer Science, College of Nanotechnology, Science, and Engineering, University at Albany, State University of New York // mchen@albany.edu
Jyh-Chong Liang
Program of Learning Sciences, School of Learning Informatics, National Taiwan Normal University, Taiwan // aljc@ntnu.edu.tw
ABSTRACT:
The rapid development of chatbots undergirded by large language models calls for teachers and students to explore using chatbots in educational ways. Given this new phenomenon, education researchers need to explore students’ responses to chatbots to comprehensively understand their experiences. This study adopted purposive sampling to interview Taiwanese university students (N = 17) who had used ChatGPT for a semester in an open elective course. Transcripts were analyzed using the grounded theory approach. Findings indicate that while the challenges posed by ChatGPT perturbed the students, their actual interactions with the chatbot revealed that it lacks some vital epistemic aspects. ChatGPT cannot discern what is true, and cannot experience the world as humans do. Hence, while it could assist humans in some epistemic endeavors, the students concluded that they need to assume epistemic agency in its use. The findings imply that educators may need to design academic tasks that further develop undergraduates’ epistemic agency for the critical and creative use of chatbot-generated artifacts.
Keywords:
Generative artificial intelligence, Epistemic agency, Grounded theory, University students
Wangda Zhu, Wanli Xing, Eddy Man Kim, Chenglu Li, Yuanzhi Wang, Yoon Yang and Zifeng Liu
Wangda Zhu
University of Florida, USA // wangdazhu@ufl.edu
Wanli Xing
University of Florida, USA // wanli.xing@coe.ufl.edu
Eddy Man Kim
Cornell University, USA // mk369@cornell.edu
Chenglu Li
University of Utah, USA // chenglu.li@utah.edu
Yuanzhi Wang
Cornell University, USA // yw529@cornell.edu
Yoon Yang
Cornell University, USA // yy825@cornell.edu
Zifeng Liu
University of Florida, USA // liuzifeng@ufl.edu
ABSTRACT:
Although image-generative AI (GAI) has sparked heated discussion among engineers and designers, its role in computer-aided design (CAD) education, particularly during the conceptual design phase, remains insufficiently explored. To address this, we examined the integration of GAI into the early stages of design in a CAD class. Specifically, we conducted an in-class workshop introducing Midjourney for conceptual design and released a take-home assignment on mood board design for hands-on GAI practice. Twenty students from a CAD class at a research-intensive university completed the workshop and assignment. We collected and analyzed data from surveys, students’ prompts, and design artifacts to explore their perceptions of GAI, prompt behaviors, and design creativity, and conducted a correlation analysis among these variables. After the workshop, students rated GAI as significantly more useful and user-friendly in design and found it more supportive of design efficiency and aesthetics, while we found no significant difference in design creativity. By analyzing 365 prompts used to complete the home tasks, we identified three types of prompt behavior (generation, modification, and selection) and classified three types of workflow (exploration, two-step, and multi-step). The correlation analysis showed that changes in design creativity were significantly and positively correlated with prompt behaviors: students with more multi-step prompts produced more creative artifacts based on the instructors’ evaluation. This exploratory study offers valuable insights into the integration of GAI in CAD education and suggests directions for future GAI curricula and tools in design education.
Keywords:
Image-generative AI, Design education, Perceptions, Prompt behaviors, Creativity
Use me wisely: AI-driven assessment for LLM prompting skills development
Dimitri Ognibene, Gregor Donabauer, Emily Theophilou, Cansu Koyuturk, Mona Yavari, Sathya Bursic, Alessia Telari, Alessia Testa, Raffaele Boiano, Davide Taibi, Davinia Hernandez-Leo, Udo Kruschwitz and Martin Ruskov
Generative artificial intelligence in education: Theories, technologies, and applications
Xiao-Pei Meng, Xiao-Ge Guo, Jian-Wen Fang, Jing Chen and Le-Fu Huang
Xiao-Pei Meng
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // 15610147930@163.com
Xiao-Ge Guo
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // 1342777584@qq.com
Jian-Wen Fang
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // beginnerfjw@163.com
Jing Chen
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // 2594552593@qq.com
Le-Fu Huang
School of Teacher Education, Wenzhou University, Chashan Higher Education Park, Wenzhou City, Zhejiang Province, China // huanglefu123@163.com
ABSTRACT:
With the wide application of Generative Artificial Intelligence (GenAI) technology in society, GenAI literacy and critical thinking are considered vital competencies for future success. As future teachers, pre-service teachers (PSTs) must possess these key competencies. Previous research has pointed out that although learning and applying GenAI knowledge and technology can improve PSTs’ GenAI literacy, the results have not been as expected. This study incorporated a KWL (Know, Want, Learned) reflection strategy into the SCQA (Situation, Complication, Question, and Answer) model, proposing an RSCQA (Reflection, Situation, Complication, Question, and Answer) approach that aims to help PSTs develop GenAI literacy and critical thinking. A quasi-experiment was designed to verify the validity of the RSCQA approach, which was used by the experimental group, while the conventional SCQA (CSCQA) approach was used by the control group. Results demonstrated that the RSCQA approach improved PSTs’ GenAI literacy and critical thinking, with particular gains in multimodal demonstration capability. An epistemic network analysis (ENA) of the content of PSTs’ reflections further demonstrated that reflective strategies help PSTs engage in higher-order cognitive activities, and qualitative interviews indicated that the reflective strategy enhanced PSTs’ learning effectiveness and strengthened their critical thinking. Overall, the RSCQA approach helps PSTs engage in reflective learning and perform better in GenAI literacy and critical thinking. The results provide practical experience and operational examples for PST cultivation.
Keywords:
Artificial Intelligence in Education, Critical thinking, Teacher training, Information literacy, Generative artificial intelligence
De-Xin Hu, Dan-Dan Pang and Zhe Xing
De-Xin Hu
School of Education, Tianjin University, Tianjin, China // hudx@tju.edu.cn
Dan-Dan Pang
School of Education, Tianjin University, Tianjin, China // pdd_0718@tju.edu.cn
Zhe Xing
School of Education, Tianjin University, Tianjin, China // 2022212035@tju.edu.cn
ABSTRACT:
The emergence of Generative AI technologies, represented by ChatGPT, has triggered extensive discussion among scholars in the education sector. While relevant research continues to grow, a comprehensive, systematic measurement of the effects of Generative AI on student learning outcomes is lacking. This study employs meta-analysis to integrate findings from previous experimental and quasi-experimental research to evaluate that impact. The analysis of 44 effect sizes from 21 independent studies indicates that Generative AI tools, compared to traditional AI tools or no intervention, moderately enhance student learning outcomes (g = 0.572). These tools significantly improve the cognitive (g = 0.604), behavioral (g = 0.698), and affective (g = 0.478) dimensions of learning outcomes. In addition, the study identifies and examines six potential moderating variables: educational level, sample size, subject area, teaching model, intervention duration, and assessment instrument. Tests of these moderating effects reveal that sample size and assessment instrument significantly influence the effectiveness of Generative AI: the effect on small samples (g = 1.216) is greater than that on medium (g = 0.476) and large samples (g = 0.547), and the effect on self-developed tests (g = 0.984) is greater than that on standardized tests (g = 0.557). The meta-analytic results indicate that the use of Generative AI should be supplemented with detailed guidance and flexible strategies. Specific recommendations for future research and the practical implementation of Generative AI in education are discussed.
Keywords:
Education, Generative artificial intelligence, Generative AI, Learning outcomes, Meta-analysis
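The g values reported in the abstract above are Hedges’ g, a standardized mean difference with a small-sample correction. A minimal sketch of the computation, using made-up group statistics rather than any study’s data:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d scaled by the small-sample correction factor J."""
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                 # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # correction factor J (df = n1 + n2 - 2)
    return j * d

# Invented example: treatment M=10, SD=2, n=30 vs. control M=9, SD=2, n=30.
g = hedges_g(10.0, 2.0, 30, 9.0, 2.0, 30)
```

A meta-analysis then pools many such g values, typically weighting each by the inverse of its variance under a fixed- or random-effects model before testing moderators, as both meta-analyses in this issue do.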
Chiu-Lin Lai
Department of Education, National Taipei University of Education, Taiwan // jolen761002@gmail.com
ABSTRACT:
Discussing with generative artificial intelligence (GAI) has been recognized as a method for obtaining information. Researchers have noted that students’ ability to search for, select, and evaluate information (i.e., their online information-searching strategies) may affect the quality of their learning and of their discussions with GAI. In this study, we conducted an experiment in an online science course to explore the role of online information-searching strategies (OISS) in students’ discussions with GAI. A total of 46 high school students participated. In the course, students wrote question prompts to the GAI, accessed the information it provided, and composed science reports. We collected the students’ OISS tendency questionnaires, science reports, and the question prompts they wrote to the GAI. According to the results, students with higher OISS outperformed those with lower OISS in terms of content accuracy and logical description in their science reports. The ordered network analysis (ONA) results showed a significant difference in the prompting sequences of the two groups. Students with higher OISS gradually developed their knowledge models of scientific concepts by organizing information and finding connections or inconsistencies among different types of knowledge, while students with lower OISS focused more on content extension and the consolidation of independent expertise. We labeled the higher-OISS students holism learners and the lower-OISS students atomism learners. Lastly, the findings underscore the importance of guiding students to comprehensively structure and synthesize their knowledge within GAI-based learning environments.
Keywords:
Generative artificial intelligence, Online information-searching strategies, Question prompts, Science education, Holism/Atomism learning
Seyfullah Gökoğlu and Fatih Erdoğdu
Seyfullah Gökoğlu
Bartın University, Türkiye // gokogluseyfullah@gmail.com
Fatih Erdoğdu
Zonguldak Bülent Ecevit University, Türkiye // fatiherdogdu67@gmail.com
ABSTRACT:
Artificial Intelligence (AI) has become increasingly prevalent in education, and Generative AI (GenAI) has emerged as a new concept in AI technology. Educators continue to debate the advantages and disadvantages of GenAI in education, and further analysis is needed to guide these discussions by determining the impact of GenAI on educational outcomes. This study investigates the effect of GenAI on learning performance using the meta-analysis method, synthesizing the results of 31 articles involving 2,646 participants. The results show that GenAI has a moderately positive effect on learning performance (g = .689), and no publication bias was detected that would threaten the validity of this effect. The analysis included the effect sizes of eight moderator variables: sample level, sample size, research design, learning domain, research setting, intervention duration, GenAI tool, and testing format. Only intervention duration, GenAI tool, and testing format significantly moderated the effectiveness of GenAI on learning performance.
Keywords:
GenAI, Learning performance, Meta-analysis, Effect size
Leisi Pei, Morris Siu-Yung Jong, Biyun Huang, Wai-Chung Pang and Junjie Shang
Leisi Pei
Department of Curriculum and Instruction, Faculty of Education and Human Development, The Education University of Hong Kong, Hong Kong SAR // lpei@eduhk.hk
Morris Siu-Yung Jong
Centre for Learning Sciences and Technologies, and Department of Curriculum and Instruction, The Chinese University of Hong Kong, Hong Kong SAR // mjong@cuhk.edu.hk
Biyun Huang
School of Education, City University of Macau, Macau SAR // byhuang@cityu.edu.mo
Wai-Chung Pang
Division of English Education, HKFYG Lee Shau Kee College, Hong Kong SAR // wcpang@hlc.edu.hk
Junjie Shang
Lab of Learning Sciences, Graduate School of Education, Peking University, China // jjshang@pku.edu.cn
ABSTRACT:
ChatGPT, which is powered by generative artificial intelligence (AI) technology, has garnered exponential attention worldwide since its release in 2022. While there is an increasing number of studies probing into the potential benefits and challenges of ChatGPT in education, most of them confine their focus to informal learning and higher education. The formal adoption of ChatGPT in authentic classroom settings, especially in K-12 contexts, remains underexplored. In view of this gap, we conducted a quasi-experiment to study the effects of integrating ChatGPT into a compulsory Grade-10 English as a foreign language (EFL) writing course in a Hong Kong secondary school. The participants (99 Grade-10 students) were divided into treatment and control groups; the former used ChatGPT and the latter conventional media as the tool for EFL writing instruction. The analysis of the participants’ EFL writing performance at the end of the experiment shows that the treatment group outperformed the control group. Further, there was a significant interaction effect between the assigned group of participants and their baseline level of EFL proficiency. Compared to the high-achieving participants, low-achieving ones tended to benefit more from ChatGPT-supported EFL writing instruction in the experiment. These findings together suggest that ChatGPT has the potential to be applied in formal EFL writing instruction in K-12 classroom settings.
Keywords:
Artificial intelligence (AI), ChatGPT, English as a foreign language (EFL), EFL writing instruction, K-12 education, Task-based language teaching (TBLT)
Jiwon Lee and Jeongmin Lee
Jiwon Lee
Ewha Womans University, South Korea // leejiwon@ewhain.net
Jeongmin Lee
Ewha Womans University, South Korea // jeongmin@ewha.ac.kr
ABSTRACT:
Teachers’ instructional decisions rely on effective data collection, analysis, and interpretation. Generative artificial intelligence (AI) offers flexible and efficient support for these processes. However, few studies have explored its practical application in classroom decision-making. This study aimed to develop a generative AI chatbot to enhance teachers’ data literacy (DL) and data-informed decision-making (DIDM). Using Richey and Klein’s design and development research methodology, this study was conducted in four phases: analysis, design, development, and evaluation. The chatbot was reviewed by five experts and pilot-tested by four participants before being tested in an experiment. A single-group pre-post design with a paired-samples t-test was used with 25 teachers, who interacted with the chatbot on Zoom once a week for 2 weeks. Teachers’ reflective journals were examined using open coding procedures to analyze their responses. The findings revealed a significant increase in teachers’ DL and DIDM efficacy. A qualitative analysis of the teachers’ reflective journals highlighted the chatbot’s strengths, limitations, and potential improvements. Based on these findings, this study offers practical implications for developing and using generative AI chatbots to support school teachers’ DIDM processes.
Keywords:
Data-informed decision-making, Generative AI chatbot, Data literacy for teachers, Prompt engineering
An empirical study of the effects of peer intelligent agent instructional strategies on EFL university students' oral English learning performance, cognitive load, learning anxiety and technology acceptance
Gang Yang, Yudie Rong, Zhuocen Zou, Xinya Fang, Manna Yang and Qunfang Zeng
Mehmet Ali Yarım
Bursa Provincial Directorate of National Education, Bursa, Türkiye // karazeybekli@hotmail.com
ABSTRACT:
This study, which aims to measure the impact of Web 2.0 tools on the success and motivation of digital-native students in primary school English lessons, is a mixed-methods study combining quasi-experimental and qualitative research techniques. The study was conducted in Bursa, Turkey. The participants in the quantitative part were 4th-grade students with similar characteristics. The data for the qualitative part were obtained from the experimental-group teacher and parents. According to the results, Web 2.0 tools used in English lessons in primary schools increased both students’ academic success and their motivation. These tools also increased retention in learning and reduced differences in level between classes. Web 2.0 tools made positive contributions to the development of meaning, reading and speaking skills in foreign language education and to increasing interest and motivation in the course. In addition, Web 2.0 tools used in English lessons provided social benefits such as increasing communication and interaction, having fun, and developing technological intelligence. The use of Web 2.0 tools and technological resources in teaching foreign languages to today’s children, who were born into a digital world, is expected to provide important data and parameters to educators, decision makers and policy makers.
Keywords:
Foreign language, Language teaching, English, Web 2.0 tools, Technology
Kun Huang, Anita Lee-Post and Nathan R. Arnold
Kun Huang
University of Kentucky, United States // k.huang@uky.edu
Anita Lee-Post
University of Kentucky, United States // dsianita@uky.edu
Nathan R. Arnold
University of Kentucky, United States // nathan.arnold@uky.edu
ABSTRACT:
We examined students’ academic emotions and enacted behavioral regulation in time management and effort within an asynchronous online college class. Using self-reported emotions and analytic indicators of time management and effort regulation across three time periods, we identified three profiles of emotions and behavioral regulation. The relationship between these profiles and course performance was also analyzed. Behavioral regulation indicators were triangulated with self-report data, confirming both the construct validity of these indicators and their role in predicting online class performance. Implications are discussed to guide research and practice regarding online college students’ time management, effort regulation, and emotions.
Keywords:
Self-regulated learning, Learning analytics, Time management, Academic emotions, Effort regulation
Elba Gutiérrez-Santiuste, Lourdes López-Pérez, Fátima Poza-Vilches, Daniel Molina-Cabrera, Rosana Montes-Soldado and Luis Alcalá
Elba Gutiérrez-Santiuste
Department of Pedagogy, University of Granada, Spain // egutierrez@ugr.es
Lourdes López-Pérez
Consorcio Parque de las Ciencias, Spain // llopez@parqueciencias.com
Fátima Poza-Vilches
Department of Research Methods in Education, University of Granada, Spain // fatimapoza@ugr.es
Daniel Molina-Cabrera
Department of Computer Science and Artificial Intelligence, University of Granada, Spain // dmolina@decsai.ugr.es
Rosana Montes-Soldado
Department of Software Engineering, University of Granada, Spain // rosana@ugr.es
Luis Alcalá
Consorcio Parque de las Ciencias, Spain // alcala@parqueciencias.com
ABSTRACT:
This research focuses on pre-university students’ perceptions of algorithmic biases in artificial intelligence. Six types of biases (generational, gender, functional diversity, ethnicity, geographical origin and economic reasons) are examined on the basis of four variables (age, sex, educational level and academic year) of young people. A quantitative method is employed using a questionnaire, analysed with ANOVA, t-tests and Kruskal-Wallis tests. The results show statistically significant differences in the variables analysed and, in general terms, young people have a medium-high perception of possible biases. The highest number of differences between groups was found for level of education (secondary education/baccalaureate/vocational training). The fewest differences were found for age (less than 12 years/12–14 years/15–17 years/18–21 years) and sex of the participants (male/female). Students in vocational training have the highest perception of bias and those in baccalaureate have the lowest means; these differences were significant. The results also show significant differences in biases related to functional diversity, geographical origin and economic reasons. In relation to age, significant differences were found in two groups of students. According to sex, males have a higher perception of gender and ethnicity biases. These results have consequences for educational practice, as they highlight the aspects that should be addressed in the training of young people in artificial intelligence. They also have implications for research, as they open up new questions to be analysed.
Keywords:
Algorithmic biases, Artificial intelligence, Pre-university students
Ran Bao
Zhaoqing University, China // University of Macau, China // 384057908@qq.com
Sile Liang
The No.1 middle school of Zhaoqing, China // 515212519@qq.com
Baihe Chen
Zhaoqing University, China // 2246424108@qq.com
ABSTRACT:
In the digital age, English language learning supported by mobile devices is becoming common. This research explores the impact of family support on student engagement and persistence in the context of Mobile-Assisted Language Learning (MALL). We used a multi-informant approach in a high school in China, including surveys, interviews, and modeling involving 117 students, their parents, and their teachers. The study found significant differences in the perception of family support among students, parents, and teachers. Specifically, parents often overestimated the support they provided, while students tended to underestimate the support they received. Notably, teacher evaluation, as a third-party perspective, surpassed parental and student evaluations in modeling willingness and participation. The results revealed that family support, especially parental engagement, has a significant impact on students’ willingness to use MALL and their persistence in it. This study emphasizes the complexity of family support in MALL and the necessity of integrating different perspectives to obtain a comprehensive understanding. The results are significant for educational research and practice, indicating that teacher insights are valuable in strengthening MALL projects and supporting effective language acquisition.
Keywords:
Multi-informant, MALL, Social support, Middle school, Family, Educational technology
Shu-Hsuan Chang, Kun-Chou Liao, I-Cheng Lin, Tsung-Han Tsai, Pei-Ling Chien and Po-Jen Kuo
Shu-Hsuan Chang
Department of Industrial Education and Technology, National Changhua University of Education, Taiwan // shc@cc.ncue.edu.tw
Kun-Chou Liao
Department of Industrial Education and Technology, National Changhua University of Education, Taiwan // t02244@mail.dyjh.tc.edu.tw
I-Cheng Lin
Department of Finance, National Changhua University of Education, Taiwan // icliniclin@cc.ncue.edu.tw
Tsung-Han Tsai
Department of Industrial Education and Technology, National Changhua University of Education, Taiwan // kyokofukada70324@gmail.com
Pei-Ling Chien
Faculty of Engineering, Kyushu University, Japan // chien.pei.ling.019@m.kyushu-u.ac.jp
Po-Jen Kuo
Department of Industrial Education and Technology, National Changhua University of Education, Taiwan // pc7938@icloud.com
ABSTRACT:
Technology imagination is a key individual ability that promotes technological progress and innovative economic development. This study aims to achieve three purposes: (1) to develop and validate a technology imagination scale (TIS) through rigorous scale development procedures based on data collected from 553 Taiwanese college students, (2) to compare technology imagination between students from the two major types of higher education institutions in Taiwan, general higher education (GHE) and higher technical and vocational education (HTVE), and (3) to compare students’ perceptions across the different dimensions of technology imagination using PR values and radar charts. The results show: (1) The TIS includes 20 items with four factors (connectivity, transcendence, possibility, and technology utilization), demonstrating good reliability and validity. (2) Measurement invariance analysis confirmed the applicability of the TIS to both GHE and HTVE educational tracks. In the dimension of technology utilization, GHE students scored significantly higher than HTVE students, but there were no significant differences in connectivity, transcendence, and possibility. (3) PR values and radar charts visualize individual students’ performance across the four dimensions of technology imagination. This study provides a reliable and valid measurement tool to assess college students’ technology imagination, which supports the formulation of educational strategies and the cultivation of innovative talents.
Keywords:
Technology imagination, Scale development, Measurement, Factor analysis, Measurement invariance
Starting from Volume 17 Issue 4, all published articles of the journal of Educational Technology & Society are available under the Creative Commons CC BY-NC-ND 3.0 license.