Activity 5 of the H817 Open MOOC is a reading of Stephen Downes’ 2001 paper, “Learning Objects: Resources for Distance Education Worldwide”. Before reading it, I set out some thoughts from my own experience in instructional design.
Earlier in my career, I was involved in developing and manufacturing training systems that included fully immersive simulation environments, often with associated part-task or mission simulators in partial mock-ups or computer-based training suites. These were usually for pilots but also had applications for crew operators, train drivers, and even surgeons. The principle underlying these systems is that mission or procedure training on relatively cheap devices saves many hours of operation in the live environment, and for some specialisms, many lives and millions of dollars’ worth of wrecked equipment when a student gets it wrong during training. So effective have these systems become that it is possible the co-pilot of the aircraft you are flying in (who might be landing it) has never set foot inside that particular type before today. He or she will, however, have logged many hours in trainers and simulators before taking the controls of the real thing for the first time.
When costing the design of computer-based training suites, I remember we used ratios of around 20:1, meaning it would take an instructional designer 20 hours to put together 1 hour of courseware. This courseware wasn’t particularly sophisticated by the standards you might see on a tablet computer today, although there were lesson plans, outcomes, and artefacts such as images or simple semi-dynamic graphics elements. Sometimes there were efficiency gains to be made by reusing modules and components from other projects, but more often than not the models needed substantial adaptation to the specifics of the system being modelled. The development costs of these models were high, but the investment was justified by the savings in operational or flight time.
Did we get the models wrong? Almost never. Real-world consequences of failures in the training due to bad modelling or design are vanishingly rare. Why? Because the training was validated against data from the real environment wherever such data existed. In the case of the B767, for example, it took three months to prove the model against hundreds of thousands of data points from the systems we were simulating. Evidence was the key.
Why have I set these ideas out before reading Downes’ paper? Because I wanted to remind myself of my perspective on the importance of good-quality instructional design. Since entering education, I have noticed an acceptance, across sectors and national boundaries, of a wide range of standards of effectiveness, most of which fall into the “we don’t know, but we think it’s OK” category. I continue to be surprised that we have educational systems in which no evidence-based analysis of need (e.g. a training needs analysis, or TNA) or of effectiveness exists. Ben Goldacre has recently written about the importance of knowing what works, which means obtaining proper evidence from randomised trials instead of the subjective “evaluation” which pervades teaching. We are encouraged to be reflective practitioners, which is not a bad thing, but alone it is certainly not sufficient to inform the development of good system-wide practice.
Right. Now I’m ready to read Downes. The question in my head is: “What is a learning object, and how do you know it’s a good one?”