Thought LOL stood for 'laugh out loud'? Three million people think otherwise. I'll tell you why in just a moment.
I can heartily recommend this month's McKinsey Quarterly article 'Givers Take All', which empirically sets out why organisations that encourage employees to behave more altruistically stand to gain the most. By rewarding 'givers' and screening out 'takers', organisations can reap significant and lasting benefits.
'Evidence from studies led by Indiana University’s Philip Podsakoff demonstrates that the frequency with which employees help one another predicts sales revenues in pharmaceutical units and retail stores; creativity in consulting and engineering firms; productivity in paper mills; and revenues, operating efficiency, customer satisfaction, and performance quality in restaurants.'
The full article is available to KIN members in the Memberspace KRT Library here.
The second thing that caught my eye this month was a piece in New Scientist magazine about how online gaming communities are moderating the malevolent language that plagues their forums. The example given was 'League of Legends' (LoL), the fantasy game from Riot Games, which has 3 million online players at any given time. What's this got to do with organisational learning? Bear with me...
'Behavioural profiles are constructed for every player in the game; the profiles measure how many times users insult teammates or opponents.' The clever part is that an MIT Game Lab system called Tribunal aggregates negative behaviour cases and bubbles them to the top, where they are presented back to the community forum. The community can then vote on whether the behaviour was acceptable or not. Particularly egregious cases can lead to a player being banned. It turns out that this self-policing not only makes players think about their behaviour, it actually shifts the community's 'societal norms'.
This got me thinking about whether this community self-policing and culture-shaping technique would work for positive feedback as well as negative. If we think of behaviour as sitting on a spectrum from selfish/unacceptable right through to selfless/altruistic, why shouldn't it? We already know that those who take the trouble to help others (as in the McKinsey article examples) stand to gain the most in the long run. If surfacing 'bad' behaviour for the community to judge, and even sanction, works, then surfacing 'exemplary' behaviour and rewarding it should work too.
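For the technically minded, here is a rough sketch of what such a two-sided 'Tribunal' could look like in principle. To be clear, this is my own illustration, not Riot's or MIT Game Lab's actual implementation: the BehaviourCase class, the Kind categories, and the vote thresholds are all assumptions made up for the example.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    NEGATIVE = "negative"   # e.g. insulting teammates or opponents
    POSITIVE = "positive"   # e.g. taking the trouble to help a colleague

@dataclass
class BehaviourCase:
    """A reported instance of behaviour, surfaced for the community to judge."""
    member: str
    kind: Kind
    description: str
    votes_agree: int = 0      # community agrees the report is fair
    votes_disagree: int = 0

    def record_vote(self, agrees: bool) -> None:
        if agrees:
            self.votes_agree += 1
        else:
            self.votes_disagree += 1

    def verdict(self, min_votes: int = 10, threshold: float = 0.7) -> str:
        """Return the community's verdict once enough votes are in."""
        total = self.votes_agree + self.votes_disagree
        if total < min_votes:
            return "pending"
        if self.votes_agree / total < threshold:
            return "dismissed"
        # Confirmed by the community: sanction bad behaviour, reward exemplary behaviour
        return "sanction" if self.kind is Kind.NEGATIVE else "reward"

# Illustrative usage: a positive case confirmed by a dozen community votes
case = BehaviourCase("alex", Kind.POSITIVE, "Wrote up the fix and shared it with the whole forum")
for _ in range(12):
    case.record_vote(agrees=True)
print(case.verdict())  # -> "reward"
```

The only real design change from the gaming version is symmetry: the same surfacing-and-voting loop handles both ends of the behaviour spectrum, with the community deciding whether a case earns a sanction or a reward.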
Whilst not many organisations could boast having 3 million participants in their discussion forums at any one time, I'd like to see the MIT Game Lab Tribunal technology deployed to reinforce good behaviour in one of the KIN member organisations' community forums. Any volunteers?
Image source: http://euw.leagueoflegends.com/