Last week’s EDEN Research Workshop was thought-provoking in many ways. I think that was largely because of the format, which discouraged long presentations and encouraged discussion and reflection. I thought this would irritate me, but it didn’t.
One of the questions that the workshop prompted for me (and, if the ‘fishbowl’ discussion at the end is to be believed, for others too) is the extent to which our wealth of previous research into student engagement with open and distance learning (especially when online) is relevant to MOOCs. Coincidentally, my [paper!] copy of the November issue of Physics World arrived yesterday, and a little piece entitled “Study reveals value of online learning” leapt out and hit me. It’s about work at MIT that has pre- and post-tested participants on a mechanics MOOC. The details are at:
Colvin, K. F., Champaign, J., Liu, A., Zhou, Q., Fredericks, C., & Pritchard, D. E. (2014). Learning in an introductory physics MOOC: All cohorts learn equally, including an on-campus class. The International Review of Research in Open and Distance Learning, 15(4), 263–282.
They found that students, whether well or poorly prepared, learnt well. The Physics World article comments that David Pritchard, the leader of the MIT researchers, “finds it ‘rather dismaying’ that no-one else has published about whether learning takes place, given that there are thousands of MOOCs available”. I agree with Pritchard that we need more robust research into the effectiveness of MOOCs. However, I come back to the same point: to what extent does everything we know about open and distance learning apply?
I used to get really annoyed when people talked about MOOCs as if what they were doing were entirely new, and when the Physics World article goes on to compare MOOCs with traditional classroom learning, as if nothing else existed, I feel that annoyance surfacing. However, at EDEN, I suddenly realised that there are some fundamental differences. First, most people studying MOOCs are already well qualified; that is increasingly not the case for our typical Open University students. I accept that the MIT work looked at “less well prepared” MOOC-studiers, and that is very encouraging, but I wonder whether it is appropriate to generalise, or to attempt to support such a wide spectrum of different learners in the same way. Secondly, most work on the impact of educational interventions considers only those students who are retained, and the MIT study is no exception: it considered only students who had engaged with 50% or more of the tasks, which, if my maths is right, was about 6% of those initially registered. Much current work at the Open University rightly focuses on retaining our students; all our students. Then of course there are differences in length of module, typical study intensity, and so on.
I suppose an appropriate conclusion is that MOOC developers should both learn from and inform developers of more conventional open and distance learning modules. And I note that the issue of The International Review of Research in Open and Distance Learning that follows the one in which the MIT work is reported is a special issue looking at research into MOOCs. That’s good.
“…the discussion on MOOCs to-date has occurred mainly in mainstream media and trade publications. Although some peer-reviewed articles on MOOCs currently exist … the amount of available research is generally limited…. Paying greater attention to what is already known about learning in online and virtual spaces, how the role of educators and learners is transformed in these contexts, and how social networks extend a learning network will enable mainstream MOOC…”
http://jolt.merlot.org/vol9no2/siemens_editorial_0613.htm