Analytics-driven corporate MOOCs – performance indicators beyond completion rates
After the initial hype about the massive numbers of participants signing up for MOOCs, the ensuing discussion concerned equally massive drop-out rates. Researchers and practitioners now seem to agree that completion and drop-out rates are not THE crucial performance indicators of MOOCs and their learners. They argue that many learners never even strive to complete a course, because they are interested in only part of it or because they simply want to watch the course material. When drop-out rates are computed only for those participants who have committed to completing the course, either by paying for certification or by indicating completion as their objective in the registration form, drop-out is no longer alarming and is instead close to the rates seen in offline learning settings (cf. EPFL (1)).
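The effect of restricting the denominator to committed learners can be made concrete with a small sketch. The numbers below are invented for illustration only, not taken from the EPFL report:

```python
# Illustrative sketch with hypothetical numbers (not EPFL data): drop-out
# rates look very different when restricted to committed learners.

def dropout_rate(completed, enrolled):
    """Fraction of enrolled learners who did not complete the course."""
    return 1 - completed / enrolled

# All registrants, including browsers and the merely curious
overall = dropout_rate(completed=2_500, enrolled=50_000)

# Only learners who committed, e.g. paid for certification or stated
# a completion goal at registration
committed = dropout_rate(completed=2_000, enrolled=3_000)

print(f"overall drop-out:   {overall:.0%}")    # 95%
print(f"committed drop-out: {committed:.0%}")  # 33%
```

The headline figure depends entirely on who is counted as a participant, which is why the denominator choice matters more than the metric itself.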
Much effort was, and still is, spent on predicting drop-out through various indicators and on intervening proactively to keep learners active until completion. This raises the question of which indicators are most pertinent.
First, an indicator needs to be a valid source of information. That means, for instance, that empirical evidence should demonstrate the indicator’s correlation with course completion.
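Such a validity check can be sketched in a few lines. The data below is hypothetical, and "forum posts" is just one example of a candidate indicator; completion is treated as a binary outcome, so the Pearson formula here yields a point-biserial correlation:

```python
# Hypothetical sketch: validating a candidate indicator (here: forum posts)
# by checking its correlation with course completion. Data is invented.
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Per-learner indicator values (e.g. number of forum posts) ...
forum_posts = [0, 1, 0, 5, 8, 2, 7, 0, 6, 9]
# ... and binary completion outcomes (1 = completed)
completed = [0, 0, 0, 1, 1, 0, 1, 0, 1, 1]

r = pearson(forum_posts, completed)
print(f"correlation with completion: {r:.2f}")  # 0.94
```

A strong correlation on historical data would support using the indicator; on real course data one would of course also need significance tests and out-of-sample validation.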
Second, indicators are useful if their analysis can be translated into actionable solutions. That is, an indicator should help platform and course designers make decisions on features, the overall learning experience, content creation, and modes of content provision. Engagement, for instance, though clearly related to performance (e.g. Paredes 2012 (2), Wolff 2013 (3)), is not in itself an actionable indicator. We first need to understand the principles underlying engagement, on which actions can then be taken.
Third, the value-add of such information for HR should be considered. Academic grades and test scores “are worthless as a criteria for hiring” according to Laszlo Bock, Senior Vice President for People Operations at Google (cf. New York Times (4)). New solutions are emerging, such as the game-based assessment of candidates developed by Silicon Valley start-up Knack (cf. Knack (5)). Its games are designed to diagnose a set of performance indicators that were identified as crucial for job performance based on analyses of previous employees’ performance. The game score determines the candidate’s match with the job profile. While games create a scenario independent of subject matter, behavior in a job-related online training could be an even more valuable source of information about a person’s qualities. We are therefore launching a discussion about relevant indicators and their potential for corporations.
The continuation of this series will highlight alternative performance indicators.
(1) EPFL – Ecole polytechnique fédérale de Lausanne, MOOCS Annual Report 2012-2014
(2) Paredes – Walter Christian Paredes and Kon Shing Kenneth Chung. Modelling learning & performance: a social networks perspective. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, pages 34–42. ACM, 2012. http://dl.acm.org/citation.cfm?id=2330617 (abstract only, no full text)
(3) Wolff – Annika Wolff, Zdenek Zdrahal, Andriy Nikolov, and Michal Pantucek. Improving retention: predicting at-risk students by analysing clicking behaviour in a virtual learning environment. In Proceedings of the third international conference on learning analytics and knowledge, pages 145–149. ACM, 2013. http://oro.open.ac.uk/36936/1/LAK%20-%20OU%20camera%20ready.pdf
(5) Knack – https://www.knack.it/