“Imagine you are advising a funding organisation that wishes to promote activity and research in the area of open education.
Set out the three main priorities they should address, explaining each one and providing a justification for your list.”
I’ve thought about this for a few days and tried not to peek at what others are writing in response. Drawing substantially on my own experience of open learning, here is my set of three priorities.
Priority 1: Is there a correlation between open learning drop-out rates and the level of study skills of initial applicants?
I left school before completing my A-levels and found myself in my late twenties as a Project Manager with responsibility for multi-million pound budgets and teams of graduate engineers. From experience with these teams, I developed a perspective on how common sense correlated with academic attainment (see graph).
As part of my work, I found myself dealing with contract terms and felt that a law degree might be useful, so I enrolled as an external student on the University of London LLB degree programme. After about two years of struggling with this, I gave it up as a bad job. I tried again with another course from the Open University, which I completed without sitting the exam. Eventually, I figured out how to study and obtained a BSc in Physics and Mathematics, also with the OU.
Recent experience with the Edinburgh University/Coursera MOOC, eLearning and Digital Cultures, showed that of the 43,000 who registered, 2,000 completed the course. I almost wrote “only 2,000 completed”, but actually, I think 5% is a pretty good conversion rate given the absence of entry requirements.
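That “5%” figure can be checked with a quick back-of-the-envelope calculation. The snippet below (Python used purely as a calculator) takes the 43,000 and 2,000 figures quoted above:

```python
# Reported figures for the eLearning and Digital Cultures MOOC
registered = 43_000
completed = 2_000

# Completion ("conversion") rate as a percentage
rate = completed / registered * 100
print(f"{rate:.1f}%")  # prints 4.7% - close to the 5% quoted above
```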
So my question is this: if all 43,000 of those people had had the study skills needed to complete the course when they signed up, how many would have completed it? Seeking to identify any such correlation might encourage course providers to be clear about what they expect of students before they sign up.
Priority 2: Does open education have to be free?
The RAF used to make some wonderful posters for school physics departments. Initially, they would provide them for free to any school that asked for them. They discovered that most of these posters finished up in cupboards and back rooms. When they started charging a nominal fee (£15 a set, I think) for them, the posters almost always found their way to the walls of classrooms and corridors.
If education is valuable, surely it has to be given a value? How many of those 43,000 would have signed up if there were a five-dollar registration fee? My own view is that people value things they pay for. I have often suggested that disruptive pupils would be a lot less disruptive, and their parents much keener on keeping them so, if they handed over a pound every time they crossed the classroom threshold. Would this deny anyone access to education? In the classroom, those who “can’t afford” to buy a pencil often have £300 mobile devices in their pockets. If you’re signing up for a MOOC, you’re doing it on more than a few quid’s worth of computer equipment.
Should there be a registration fee as a matter of course (no pun intended) for all open courses?
Priority 3: How valid is peer assessment?
One of the most interesting things about following the dialogues in learning communities is that rarely is anything robustly critical ever said. Sugata Mitra suggests that learners need a “grandmother” figure, one who offers encouragement and praise when work is produced or the studying gets hard. Whilst I don’t disagree with him at all (and I’m a big fan of Alfie Kohn’s and Carol Dweck’s findings that praise should be for effort, not outcome), I wonder whether this leads to an overstatement of the quality of work produced in peer-assessed learning communities.
Within the open learning environment, is peer assessment skewed towards the award of praise, even where none is due? Are we afraid of critical commentary and grading because we’re too polite? And as a consequence, does peer-assessed open learning drive quality down?