Monday 27 July 2015

KRT SIG Announcement: Knowledge Transfer for Joiners - Please build me a spacecraft

Three years ago my son joined an advertising agency - it was his first job after university. Those who designed the agency's induction process had really thought about the know-how and know-who that somebody new to an organisation (or indeed to the world of work) rarely has. Each graduate was given a five-week assignment that could only be completed by talking with campaign planners, account managers, creatives and a number of the back-office functions. This wasn't make-work but a real project, which helped them feel productive as quickly as possible. The unstated aim, of course, was also to help the newbies build their personal networks.
What reminded me of this was reading about NASA's Project HOPE - 'Hands On Project Experience'. Jon Boyle and Ed Hoffman of NASA recently published a paper http://km.nasa.gov/wp-content/uploads/sites/3/2015/03/Real-Knowledge-at-NASA.pdf that describes their 'knowledge services model for the modern project environment'. The section on the HOPE program features a number of case studies in which early-entry NASA managers have to design, develop and launch a suborbital flight project within 18 months. In this way they learn quickly about the business, governance, operations policies and standards - and presumably start to build their personal networks too.
All too often, new starters are thrown in at the deep end of an existing project and expected to 'pick stuff up' as they go along. An alternative could be to:
  • Engineer an induction project for new starters so that they make a contribution as soon as possible. This also reduces the 'time to competency' for the organisation.
  • Help them make a wide variety of connections outside their immediate team/function.
  • Ask 'what else does this individual bring to the organisation, over and above fulfilling the job description?' This is particularly important and valuable for those joining from other organisations or with sector experience.
I've come across organisations that do some of these things, but none that does them all. Has anyone else come across intelligent and interesting knowledge-transfer programmes for joiners? Join in the discussion on our new LinkedIn group.

Wednesday 22 July 2015

ISO 9001 revision - what knowledge managers need to know

For those not familiar with ISO 9001, this is the overarching international standard for Quality Management. After a four-year gestation, an update http://www.iso.org/iso/iso9001_revision is due out in September. For the first time, it requires any organization adhering to the standard to demonstrate 'the management of organizational knowledge'.
I've had the devil's own job navigating the ISO website trying to find out what this really means for those of us working in organisational change and knowledge management.
This is all I have been able to glean about the new requirement:
1. Determine the knowledge necessary for the operation of processes and for achieving conformity of products and services
2. Maintain knowledge and make it available to the extent necessary
3. Consider the current organizational knowledge and compare it to changing needs and trends
4. Acquire the necessary additional knowledge.
Why pick on phrases like 'the knowledge necessary' and 'to the extent necessary'? I am a fan of Josh Bernoff's blog withoutbullshit.com. In this posting he talks about the insidiousness of the 'passive voice' and how to avoid it. The vague qualifiers above exhibit the worst of this trend. To be fair, there may be more detail on what is meant by 'the knowledge necessary' or 'the extent necessary'; however, I can't find it in the labyrinth that is the ISO Technical Committee's website. I certainly hope that the final ISO 9001 document is written in more explicit, directive and helpful terms. If not, this is a missed opportunity to leverage the global quality standard to reinforce the importance of knowledge transfer in organizations.

Friday 10 July 2015

The 'Passive Voice' Zombie Detector

In managing knowledge and encouraging organisational change, clear communication is paramount. This applies both to how you get your message across and to the techniques KIN advocates for effective know-how transfer.

I've started following Josh Bernoff's postings on withoutbullshit.com for inspiring and practical tips on improving my written communication. His recent advice has been on the scourge of the 'passive voice'. This is something we do without realising it, inadvertently and unnecessarily diluting the impact of our message - simple things, such as hiding behind generalisations like 'it is recognised that'.

Josh's excellent posting yesterday was about how to identify passive voice with a simple 'zombie test' - if you can add 'by zombies' after the verb and the sentence still makes sense ('mistakes were made by zombies'), you've written in the passive - and how to fix it.
Uncomfortable though it was, I forced myself to go back over the emails I sent this morning and apply the Bernoff Passive Voice Zombie Test. It was instructive, to say the least.
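For the mechanically minded, the gist of the test can even be roughed out in a few lines of code. What follows is only a toy sketch of my own, not Josh's method: a crude 'to be + past participle' pattern (the regex and function name are invented for illustration) that appends 'by zombies' so a human can judge the result.

```python
import re

# Crude passive-voice pattern: a form of "to be" followed by a word that
# looks like a past participle. A toy heuristic only -- it misses irregular
# participles ("mistakes were made") and will flag some false positives.
PASSIVE = re.compile(r"\b(am|is|are|was|were|be|been|being)\s+(\w+(?:ed|en))\b",
                     re.IGNORECASE)

def zombie_test(sentence):
    """Return the 'by zombies' version if the sentence looks passive, else None."""
    match = PASSIVE.search(sentence)
    if match:
        end = match.end()  # insert right after the suspect verb phrase
        return sentence[:end] + " by zombies" + sentence[end:]
    return None

for s in ["The report was written in a hurry.",
          "It is recognised that budgets are tight.",
          "We wrote the report in a hurry."]:
    print(zombie_test(s) or "(reads as active) " + s)
```

If the zombie version still makes sense, rewrite the sentence with a named actor up front.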

In a world where it is increasingly difficult to get attention (I'm currently reading Captivology: The Science of Capturing People's Attention, by Ben Parr), why communicate with one hand tied behind your back? Make your voice assertive, not passive - and listen to Josh on how to do it.


Thursday 2 July 2015

"Big data: are we making a big mistake?"

The following is probably the most useful cautionary tale written about decision-making based on big data. It is extracted from 'Big Data: are we making a big mistake?' by Tim Harford, published in FT Magazine in March 2014.

In 1936, the Republican Alfred Landon stood for election against President Franklin Delano Roosevelt. The respected magazine, The Literary Digest, shouldered the responsibility of forecasting the result. It conducted a postal opinion poll of astonishing ambition, with the aim of reaching 10 million people, a quarter of the electorate. 
After tabulating an astonishing 2.4 million returns as they flowed in over two months, The Literary Digest announced its conclusions: Landon would win by a convincing 55 per cent to 41 per cent, with a few voters favouring a third candidate.
The election delivered a very different result: Roosevelt crushed Landon by 61 per cent to 37 per cent. To add to The Literary Digest’s agony, a far smaller survey conducted by the opinion poll pioneer George Gallup came much closer to the final vote, forecasting a comfortable victory for Roosevelt. Mr Gallup understood something that The Literary Digest did not. When it comes to data, size isn’t everything.
Opinion polls are based on samples of the voting population at large. This means that opinion pollsters need to deal with two issues: sampling error and sampling bias.
Sampling error reflects the risk that, purely by chance, a randomly chosen sample of opinions does not reflect the true views of the population. The "margin of error" reported in opinion polls reflects this risk: the larger the sample, the smaller the margin of error. A thousand interviews is a large enough sample for many purposes, and Mr Gallup is reported to have conducted 3,000 interviews.
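An aside from me, not part of Harford's article: the 'larger sample, smaller margin of error' relationship is easy to make concrete with the textbook 95 per cent margin-of-error approximation for an estimated proportion, which assumes a simple random sample - exactly the assumption The Literary Digest violated. A minimal Python sketch:

```python
import math

# 95% margin of error for a proportion p estimated from a simple random
# sample of size n: approximately 1.96 * sqrt(p * (1 - p) / n).
# p = 0.5 is the worst case, hence the usual conservative default.
def margin_of_error(n, p=0.5):
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (1_000, 3_000, 2_400_000):
    print(f"n = {n:>9,}: +/- {margin_of_error(n):.2%}")

# n =     1,000: +/- 3.10%
# n =     3,000: +/- 1.79%
# n = 2,400,000: +/- 0.06%
```

So 2.4 million returns buy an apparently tiny margin of error - but only against sampling error, as the next paragraphs explain.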
But if 3,000 interviews were good, why weren’t 2.4 million far better? The answer is that sampling error has a far more dangerous friend: sampling bias. Sampling error is when a randomly chosen sample doesn’t reflect the underlying population purely by chance; sampling bias is when the sample isn’t randomly chosen at all. George Gallup took pains to find an unbiased sample because he knew that was far more important than finding a big one.
The Literary Digest, in its quest for a bigger data set, fumbled the question of a biased sample. It mailed out forms to people on a list it had compiled from automobile registrations and telephone directories – a sample that, at least in 1936, was disproportionately prosperous. To compound the problem, Landon supporters turned out to be more likely to mail back their answers. The combination of those two biases was enough to doom The Literary Digest’s poll. For each person George Gallup’s pollsters interviewed, The Literary Digest received 800 responses. All that gave them for their pains was a very precise estimate of the wrong answer.
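Another aside from me: both effects are easy to watch in a toy simulation. Every number below is an illustrative assumption rather than a figure from the article - a 61 per cent Roosevelt electorate (the actual result), and Landon supporters twice as likely to mail back their forms.

```python
import random

random.seed(1936)

TRUE_ROOSEVELT = 0.61  # actual 1936 share, used as the "true" electorate

def poll(n, biased=False):
    """Collect n responses; if biased, Landon voters reply twice as often."""
    roosevelt = 0
    collected = 0
    while collected < n:
        prefers_roosevelt = random.random() < TRUE_ROOSEVELT
        # Illustrative response bias: half of Roosevelt's supporters never
        # mail the form back, while all Landon supporters do.
        if biased and prefers_roosevelt and random.random() < 0.5:
            continue
        roosevelt += prefers_roosevelt
        collected += 1
    return roosevelt / n

print(f"small random sample (3,000):    {poll(3_000):.1%} Roosevelt")
print(f"huge biased sample (2,400,000): {poll(2_400_000, biased=True):.1%} Roosevelt")
```

The big biased sample converges very tightly on roughly 44 per cent for Roosevelt - a precise estimate of the wrong answer - while the small random sample lands within a couple of points of the true 61 per cent.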
The big data craze threatens to be The Literary Digest all over again. Because found data sets are so messy, it can be hard to figure out what biases lurk inside them – and because they are so large, some analysts seem to have decided the sampling problem isn’t worth worrying about. It is.