Please see my article from the CLO magazine – Verifying Virtual Value.
I am often asked by business leaders to describe what we intend to measure in order to understand, manage, and improve the networked (or social) learning eco-system. There is interest in knowing how we will prove the value of networked learning, turn potential chaos into something more certain and efficient, and build some kind of "history" of learning and development for individuals and organizations.
We do not want to take a traditional "Learning Management System" approach and treat networked learning as formal training.
Below is a list of the metric categories (with a few examples of each) that I believe represent the minimum measurement requirements (in no particular order of priority).
- Networking patterns – the relationships between people and content categories, the network make-up or profile (business unit, job, level, etc.), key brokers and influencers by content category, and the degree of networking across silos. Is information flowing efficiently and effectively?
- Learning efficiency – time lag between when content is posted and when it is viewed, amount of time spent producing content for others to view, amount of redundant or significantly overlapping content, and the degree to which "informal" content is reused in "formal" content (perhaps reducing formal content development costs and effort). How much time are people spending looking for people and information?
- Learning needs – differences in learning needs or demand between "formal" and "social" learning (are some skills best learnt formally?), and the most popular learning needs by job, level, business unit, etc. When is social learning creating and destroying value?
- Contribution patterns – most active contributors and methods of contribution, busiest days and times for contributing, frequency and amount of contributions by job, level, business unit, etc. Are the “right” people contributing at the expected levels, at the “right” times, and using the most appropriate methods?
- Content usage patterns – preferred ways to consume various content topics, busiest days and times for viewing content, amount of time spent viewing content and participating in discussion threads and blogs, and preferred ways to "find" content. Is the utilization of methods, media, and subject areas at expected levels?
- Content quality – ratings by content category, contributor, and medium, the amount of "inappropriate" or "wrong" content reported by users, and the amount and type of content with very few or very many hits or views. Is the "community" doing a good job of managing content quality? Is there enough "good" content? Are there too many unmet learning needs?
- Return – increased productivity, improved customer service, compressed time to competence, higher reuse of shared information, improved employee engagement, and increased collaboration across silos. Are the benefits of social learning at the expected levels?
- Opportunity cost – cost avoidance, lower travel expenses, less reliance on classrooms and trainers, fewer training development projects, and lower content maintenance costs. Are we able to do more with less? What costs would we avoid by using social learning instead of formal learning?
These are just some of my initial thoughts – a starter for ten – and a work in progress. Please feel free to share your ideas and suggestions.