We proceeded with audience interaction regarding what metrics can be used to measure the learning effectiveness of OER. What follows is a list of key points; Patrick will have a more complete and organised list on Cloudworks (http://cloudworks.ac.uk/cloud/view/3320).
It was clear that we can only ever measure a proxy, and there were doubts over causality. The key points raised were:

- Do we need to prove that OER is "as good as" or "better"? Could we run a trial using the same material released as both open and closed? Would we have the appropriate level of control for a proper trial?
- Should we be measuring secondary and quantitative effects and tracking these back to learning outcomes? We need metrics that isolate the effect of OER from eLearning and everything else.
- Are "stories" enough to convince people of the benefits of OER, or do we need quantitative data for advocacy? Great anecdotal evidence exists; can we measure, for example, how enquiring students are becoming, and can we baseline this?
- It might be easier to look at the strategies by which students are learning and by which teachers are using materials and tools. Are materials generating more interaction and new communities of practice?
- Do learning materials have an impact on learning? Does the "open" make a difference? Can we draw on existing research?
- Qualitative data: case studies of what is working and what is not. Is learning happening for free? Are there measurable efficiencies? (Yes.) What materials do students gravitate towards?
- Teachers' behaviour: are there differences in behaviour, or in frequency of OER use? (But is this a focus on use rather than on the outcome of use?)
- The question is easier when using OER "platforms", as we have detailed student traces there (e.g. OLI).

We ended with a plea for OER stories on the Cloud.
(this material licensed cc-by 2.5 (UK))