Beyond Boundaries: The Role of Learning Types in Shaping MOOC Learner Engagement and Progression (Hannah John, John Kerr & Guillaume Andrieux, Glasgow)
Utilising the ABC curriculum design method (Young & Perović 2016) and the Conversational Framework (Laurillard 2012), specifically the six learning types that underpin that model, eight Massive Open Online Courses (MOOCs) hosted on Coursera or FutureLearn were examined. These MOOCs were selected because they represented a wide range of disciplines, assessment options, and course structures. This paper will demonstrate how the application of the various learning types shapes how learners engage with the material, progress through the course, and commit to continued learning (Martin and Bolliger 2018). Additionally, results from this research provide evidence of how the frequency and sequencing of learning types create opportunities for learners to engage with content in a meaningful way.
By synthesising secondary data from pre-course surveys, exit surveys, end-of-course surveys, comment sections, and several other course metrics, including but not limited to the watch-through rates and technical feedback of over 400 videos, course completion, assessment completion, and learner satisfaction, seven key areas of impactful course design were identified and will be explored throughout this paper. These key areas focus on the following elements of course design: (1) quality and duration of videos, (2) balance and distribution of acquisition learning types, (3) structure of discussions, (4) effective guidance for exploration activities, (5) balance of assessment and feedback opportunities, (6) utilisation of e-learning tools and plug-ins, and (7) successful leveraging of the synergies between learning types in online course design. This empirical research will present evidence on how learning types can be successfully deployed and sequenced in course design.
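To give a flavour of the kind of secondary metric analysis described above, the following is a minimal sketch, not the authors' actual pipeline, of how watch-through rates might be computed from a hypothetical platform export and aggregated by learning type; the column names (`video_id`, `learning_type`, `seconds_watched`, `video_length`) are assumptions for illustration.

```python
import pandas as pd

# Hypothetical per-view export from a MOOC platform; columns are illustrative.
views = pd.DataFrame({
    "video_id":        ["v1", "v1", "v2", "v2", "v3"],
    "learning_type":   ["acquisition", "acquisition", "discussion", "discussion", "practice"],
    "seconds_watched": [300, 210, 95, 120, 600],
    "video_length":    [320, 320, 130, 130, 640],
})

# Watch-through rate: fraction of the video actually watched per view.
views["watch_through"] = views["seconds_watched"] / views["video_length"]

# Average to one rate per video, then summarise by the learning type it supports.
per_video = views.groupby(["learning_type", "video_id"])["watch_through"].mean()
by_type = per_video.groupby("learning_type").mean()
print(by_type)
```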
Understanding engagement patterns of game-based learning for children in CatnClever using learning analytics (Josmario Albuquerque, Kristina Corbero, Sebastian Hahnel & Bart Rienties, OU UK)
It is widely acknowledged that games and game-based learning (GBL) opportunities can spark engaged learning (Sun et al., 2023). Well-designed games can lead to enjoyable and engaging experiences, in particular for young people and children (Maureen et al., 2022; Plass et al., 2015). Furthermore, games can be fun, interesting, motivating, and playful. However, there is mixed evidence on whether GBL has positive (e.g., on knowledge, skills, attitudes) or negative impacts for young children (Guan et al., 2024). In particular, there is a paucity of research on how young learners engage in GBL. In this explorative study we aimed to investigate how learners engaged with one specific game-based mobile app, CatnClever, designed specifically for children aged 3-6. Using principles of learning analytics and artificial intelligence (Banihashem et al., 2023), we were specifically interested in exploring whether engagement data from 8,365 German preschoolers across 60,279 activities, capturing how children progressed over time in CatnClever, could be used to predict learning performance without formal assessment data. As extensive testing of young children might not be appropriate in terms of gameplay, motivation, and data collection (Kucirkova et al., 2024), in this study we explored whether we could estimate learning performance and activity difficulty for 170 CatnClever activities in four subjects (i.e., mathematics, language, social and emotional learning, and sport) purely from children's engagement data. Findings indicate that activity difficulty aligns well with effort, suggesting appropriate challenge levels. Key engagement moments were also identified, potentially informing further interventions. Overall, we stress the potential of learning analytics to deepen our understanding of young learners' interactions in GBL, paving the way for tailored educational strategies. However, ethical considerations regarding data collection and analysis in GBL environments warrant careful attention.
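As a rough illustration of estimating activity difficulty purely from engagement traces, a minimal sketch follows; the event fields (`child_id`, `activity_id`, `attempts`, `completed`) and the difficulty proxy are hypothetical stand-ins, not the study's actual telemetry schema or model.

```python
import pandas as pd

# Hypothetical engagement log; fields are illustrative, not the study's schema.
events = pd.DataFrame({
    "child_id":    [1, 1, 2, 2, 3, 3],
    "activity_id": ["a1", "a2", "a1", "a2", "a1", "a2"],
    "attempts":    [1, 4, 2, 5, 1, 3],
    "completed":   [True, True, True, False, True, True],
})

# A simple difficulty proxy: mean attempts needed, penalised by abandonment.
per_activity = events.groupby("activity_id").agg(
    mean_attempts=("attempts", "mean"),
    completion_rate=("completed", "mean"),
)
per_activity["difficulty"] = (
    per_activity["mean_attempts"] * (2 - per_activity["completion_rate"])
)
print(per_activity.sort_values("difficulty", ascending=False))
```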
References
- Banihashem, S. K., Dehghanzadeh, H., Clark, D., Noroozi, O., & Biemans, H. J. A. (2023). Learning analytics for online game-based learning: a systematic literature review. Behaviour & Information Technology, 1-28. https://doi.org/10.1080/0144929X.2023.2255301
- Guan, X., Sun, C., Hwang, G.-j., Xue, K., & Wang, Z. (2024). Applying game-based learning in primary education: a systematic review of journal publications from 2010 to 2020. Interactive Learning Environments, 32(2), 534-556. https://doi.org/10.1080/10494820.2022.2091611
- Kucirkova, N., Livingstone, S., & Radesky, J. (2024). Advancing the understanding of children’s digital engagement: responsive methodologies and ethical considerations in psychological research [Conceptual Analysis]. Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1285302
- Maureen, I. Y., van der Meij, H., & de Jong, T. (2022). Evaluating storytelling activities for early literacy development. International Journal of Early Years Education, 30(4), 679-696.
- Plass, J. L., Homer, B. D., & Kinzer, C. K. (2015). Foundations of Game-Based Learning. Educational Psychologist, 50(4), 258-283. https://doi.org/10.1080/00461520.2015.1122533
- Sun, L., Kangas, M., Ruokamo, H., & Siklander, S. (2023). A systematic literature review of teacher scaffolding in game-based learning in primary education. Educational Research Review, 40, 100546. https://doi.org/10.1016/j.edurev.2023.100546
AI in Democratising Educational Decision Making (Anne Adams, Peter Devine, Richard Greenwood, Christothea Herodotou & Kevin Mcleod, OU UK)
For centuries, futures research and horizon scanning (HS) have been used for strategic decision making (Inayatullah, 1998). Educational HS has initiated national policies such as SATs and Apprenticeships. Local horizon scanning can change an institution's educational research and scholarship objectives. However, horizon scanning can exclude voices and is rarely evidence-based. Society requires a systematic, evidence-based horizon scanning approach that overcomes the social and technical barriers to democratising decision making.
This presentation will review the application of a new HS method across several different contexts, from research and scholarship strategic planning to innovation dissemination, with an analysis of what works and what does not work across contexts. Part of this evidence will involve the Parliamentary Office for Science and Technology (POST) national 23/24 horizon scan, focusing specifically on its education and digital innovation results. Following parliamentary Evidence Cafes, academic experts from 61 Higher Education Institutions (HEIs) across England, Scotland and Wales used nQuirePOLICY tools, built on the award-winning nQuire platform (nquire.org.uk), to identify 2,903 topics across 12 policy-led thematic areas. The evidence underpinning expert opinions was classified by participants using an evidence typology (Clough and Adams, 2020) to capture an evidence base spanning research, policy documentation, lived experiences, expert reports and media. Clustering of the topics into 10 themed areas was completed using a set of tailored OpenAI prompts, developed by aligning prior human horizon scanning decisions with AI outputs to achieve 75-80% accuracy. The AI clustering missed human assumptions that rest on a deeper understanding of zeitgeist, context and human perspective; however, it also avoided assumptions rooted in negative unconscious bias.
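By way of illustration only, the following is a minimal sketch of prompt-based topic clustering with the OpenAI Python SDK; the prompt wording, example topics, and `gpt-4o` model choice are placeholders, not the project's actual tailored prompts or configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder topics; the real study worked with 2,903 expert-submitted topics.
topics = [
    "AI marking of formative assessment",
    "Digital poverty and rural broadband access",
    "Micro-credentials for apprenticeship pathways",
]

# Illustrative prompt only: the project aligned its prompts with prior
# human clustering decisions, which this wording does not reproduce.
prompt = (
    "Cluster the following horizon-scanning topics into up to 10 themed "
    "areas. For each theme, give a short label and list its topics.\n\n"
    + "\n".join(f"- {t}" for t in topics)
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice
    messages=[
        {"role": "system", "content": "You are a policy analyst clustering expert-submitted topics."},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```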
Further parliamentary Evidence Cafes verifying the AI clustering have since identified additional issues around political sensitivity in policy language and challenges to policy assumptions. Future applications will be presented, including increased citizen engagement in devolved parliaments' horizon scanning.
References
- Clough, G., & Adams, A. (2020). Evidence Cafes: Overcoming conflicting motivations and timings. Research for All, 4(2), 145-149.
- Inayatullah, S. (1998). Macrohistory and futures studies. Futures, 30(5), 381-394.
Generative AI as your course materials writing assistant: Is it useful? (Thomas Daniel Ullmann, Duygu Bektik, Chris Edwards, Christothea Herodotou & Denise Whitelock, OU UK)
Generative AI, now widely available, is expected to make a significant impact across various sectors, including education. Its core capability, rapidly producing plausible text on a wide range of topics, together with its chat-like interface for refining content, suggests that it may have a role to play in the course content production process. In this presentation, we share insights from our recent investigation. We experimented with the use of generative AI for tasks such as outlining the big questions, creating learning activities, and enhancing inclusivity in materials. We will showcase prompts and discuss the analysis of the responses. Across all tasks, the generative AI produced content that could effectively aid in brainstorming, creating outlines, and adhering to specific writing guidelines. However, it is important to emphasise that the generated content always required adjustment and expert review.
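To suggest the kind of prompt experimented with (the actual prompts will be shown in the presentation; this template is a hypothetical reconstruction, not one used in the study), a short sketch:

```python
# Hypothetical prompt template for drafting a learning activity; the task
# framing, placeholders, and inclusivity instruction are illustrative only.
ACTIVITY_PROMPT = """\
You are assisting a course team writing distance-learning materials.
Draft a 30-minute learning activity on "{topic}" for {level} students.
Include: a learning outcome, step-by-step student instructions, and a
short reflection question. Use inclusive language and avoid examples
that assume a specific cultural background.
"""

print(ACTIVITY_PROMPT.format(topic="the water cycle", level="introductory"))
```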
Leveraging Generative AI for Enhanced Writing Instruction: A Case Study (Aysegul Liman-Kaban, Bahcesehir)
Providing formative feedback on student writing is a crucial component of writing instruction, but it places a significant time burden on educators. This study investigates the potential of generative AI, specifically ChatGPT, to serve as an automated writing evaluation tool that can alleviate this burden. We compared the quality of feedback provided by ChatGPT to that of human evaluators on 350 undergraduate student essays. The feedback was assessed against five criteria: criteria-based guidance, clarity of improvement directions, accuracy, prioritization of essential features, and supportive tone.

Our findings indicate that human evaluators generally provided higher quality feedback across most categories, except for criteria-based guidance, where ChatGPT performed comparably. Differences in feedback quality were also observed based on the initial quality of the essays, but not on the language status of the students. While well-trained human feedback remains superior, the ease and timeliness of AI-generated feedback suggest that tools like ChatGPT could be valuable in specific educational contexts, particularly for early drafts or in situations lacking sufficient human resources. Given how infrequently students substantially revise their drafts before submission, we anticipate that formative feedback from AI could inspire greater revision than the current dearth of such feedback. It might also reduce the lengthy interval between initial drafting and subsequent revision, as time-constrained secondary teachers often wait until extended breaks to address stacks of student papers. Moreover, ChatGPT's capacity to provide feedback without requiring a training set, unlike other Automated Writing Evaluation (AWE) applications, and its ability to offer feedback on specific genres (such as argument writing in history) suggest its potential applicability across various genres and contexts. However, further research or educator testing is necessary to validate these potential applications.

We argue that realizing the value of AI entails recognizing both its strengths and limitations and utilizing it in a manner that maximizes its strengths while mitigating its weaknesses. This involves educating teachers and students about AI's functionalities and promoting critical and reflective usage, alongside integrating more social aspects into writing and assessment practices (Tate, Doroudi, Ritchie, & Xu, 2023). Similar positive outcomes for student learning have been observed in studies of other AI interventions for language development, such as visual-syntactic text formatting (Tate et al., 2019) and conversational agents (Xu et al., 2022). Thus, we draw on this existing research for insights on how to approach large language models, aiming, as Grimes & Warschauer (2010) articulated, for "utility in a fallible tool".
Keywords: Formative feedback, Writing instruction, Generative AI, ChatGPT, Higher education, Writing development
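To illustrate the comparison design described above, a minimal sketch follows using fabricated toy ratings, not the study's data; the 1-5 rating scale and column names are assumptions for illustration.

```python
import pandas as pd

# Toy ratings (1-5) for illustration only; not the study's actual data.
criteria = ["criteria_based", "clarity", "accuracy", "prioritization", "supportive_tone"]
ratings = pd.DataFrame({
    "source":          ["human", "human", "chatgpt", "chatgpt"],
    "essay_id":        [1, 2, 1, 2],
    "criteria_based":  [4, 5, 4, 4],
    "clarity":         [5, 4, 3, 3],
    "accuracy":        [5, 5, 4, 3],
    "prioritization":  [4, 4, 3, 3],
    "supportive_tone": [4, 5, 4, 4],
})

# Mean score per criterion for each feedback source, mirroring the
# human-vs-ChatGPT comparison across the five criteria.
summary = ratings.groupby("source")[criteria].mean()
print(summary)
```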
Towards an EDIA-based and AI-enabled pedagogy across the curriculum (Mirjam Hauck, Rachele deFelice, Clare Horackova, Deirdre Dunlevy, Tracie Farell & Venetia Brown, OU UK)
This contribution is inspired by Tracie Farell's "Shifting Powers" project, which proposes that rather than asking whether AI is good or fair, we should look at how it "shifts power". Power relationships, we are reminded, preserve inequality within our society in real and material terms. How will AI contribute to those inequalities? Is there any chance AI can help to foster new balances of power and, if so, what will this look like in practice?
Our work is a first attempt at mapping out an agenda for learning and teaching with GenAI guided by EDIA principles. It is underpinned by a critical approach to the use of GenAI and aims to equip learners, including teachers as learners, with the skills that enable them to work with GenAI in equitable and inclusive ways and thus contribute to shifting powers in education contexts.
Using the learning and teaching of languages and cultures as a case in point, we will present and discuss the tenets of educator training informed by Sharples' (2023) framework for an AI-enabled pedagogy across the curriculum, with an added focus on social justice and inclusion.
Our insights stem from our collaboration with two Associate Lecturer (AL) colleagues who, like many others, are new to GenAI and have been trialling the so-called "protégé effect", whereby we learn best when we have to teach something to others. We will present the outline of the educator training, which will be available as a short course later this year in the OU's Open Centre for Languages and Cultures. In doing so, we will pay particular attention to the tension experienced by educators who find themselves balancing anxieties about the shortcomings and challenges of GenAI and a perceived lack of technological expertise on the one hand, against expectations to harness and promote the innovative potential of GenAI on the other.