Sunday, 23 April 2017

The depreciating value of human knowledge

Automation is just one facet of the broader spectrum of AI and machine intelligence. Yes, it's going to affect us all (it already is, with the increasing emergence of intelligent agents and bots), but I think there is a far deeper issue here that - at least for the majority of people who haven't become immersed in the "AI" meme - is going largely unnoticed. That is, the very nature of human knowledge and how we understand the world. Machines are now doing things that - quite simply - we don't understand, and probably never will.

I think most of us are familiar with the DIKW model (an over-simplification if ever there was one), but if you subscribe to this relationship between data, information, knowledge and wisdom, I think the top layers - knowledge and wisdom - are getting compressed by our growing dependence on the bottom two layers - data and information. What will the DIKW model look like in 20 years' time? I'm thinking barely perceptible "K" and "W" layers!

If you think this is a rather outrageous prediction, I recommend reading this article from David Weinberger, who looks at how machines are rapidly outstripping our puny human ability to understand them. And it seems we're quite content with this situation: being fairly lazy by nature, we're more than happy to let them make complex decisions for us. We just need to feed them the data - and there's plenty of that about!

This quote from the piece probably best sums it up:

"As long as our computer models instantiated our own ideas, we could preserve the illusion that the world works the way our knowledge — and our models — do. Once computers started to make their own models, and those models surpassed our mental capacity, we lost that comforting assumption. Our machines have made obvious our epistemological limitations, and by providing a corrective, have revealed a truth about the universe.

"The world didn’t happen to be designed, by God or by coincidence, to be knowable by human brains. The nature of the world is closer to the way our network of computers and sensors represent it than how the human mind perceives it. Now that machines are acting independently, we are losing the illusion that the world just happens to be simple enough for us wee creatures to comprehend.

"We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it."

Should we be worried? I think so - do you?
Steve Dale

Wednesday, 12 April 2017

Hail to the Chief - creating faux senior roles is no alternative to a grounded strategy

I've been advising a client that is devising a new knowledge strategy.

Here's a snippet of a recent phone conversation...

Client: 'We're thinking of appointing a Chief Knowledge Officer. We need to show that the strategy has some real clout behind it'.

Me: 'So will this Chief Knowledge Officer have a seat on the main board? If not, how many levels down will the role be positioned?' (The board has only three members: the CEO and the Finance/HR and Operations directors.)

Client: 'No, it will be at senior manager level' (that's three levels down from the board).

Me: 'I think you should wait to see what the knowledge strategy requires, before creating roles. I'm going to send you an article from a recent Harvard Business Review. Let's have another conversation when you've read it'.

The HBR article I emailed was 'Please Don't Hire a Chief Artificial Intelligence Officer'. I asked my client simply to substitute 'KM' for 'AI' and 'Chief Knowledge Officer' for 'Chief AI Officer'.

Try this yourself with the following paragraph from the article and you'll see why...

'However, I also believe that the effective deployment of AI in the enterprise requires a focus on achieving business goals. Rushing towards an “AI strategy” and hiring someone with technical skills in AI to lead the charge might seem in tune with the current trends, but it ignores the reality that innovation initiatives only succeed when there is a solid understanding of actual business problems and goals. For AI to work in the enterprise, the goals of the enterprise must be the driving force.
This is not what you’ll get if you hire a Chief AI Officer. The very nature of the role aims at bringing the hammer of AI to the nails of whatever problems are lying around. This well-educated, well-paid, and highly motivated individual will comb your organization looking for places to apply AI technologies, effectively making the goal to use AI rather than to solve real problems'.
The problem with creating 'Chiefs' is that the title implies clout, but often carries none. Witness the number of Chief Knowledge Officer jobs that were created around the turn of the century, and how many remain today. I can't think of one.

Before any roles are created, it's essential that those with real clout understand how organisational learning or knowledge transfer can help them achieve their personal objectives and solve 'actual business problems'. Get that right and you're more than halfway to your strategy. Creating hollow roles is probably just adding unnecessary nails.

Friday, 7 April 2017

Motivating deep experts

Every now and again you hear something so simple that you wonder why you hadn't thought of it before. I had one of those moments listening to a superb Knowledge and Innovation Network (KIN) webinar yesterday. Ian Corbett was presenting on 'Helping experts become catalysts for knowledge and Innovation'. KIN members can see Ian's slides on Memberspace in the Management Buy-in special interest library.

Ian, a geologist by trade, has done a lot of research on 'expertise' and is now applying it to charitable education projects in South Africa, where he lives.

The 'aha' moment during Ian's talk came when he was explaining how to get the best from deep experts or technical teams. Their defining characteristics are:

  1. They value face-to-face interaction (it plays to their inner ego)
  2. They have a low tolerance for admin and passing fads
  3. They seek innovation, not reuse
  4. They want autonomy

Pretty obvious when you think about it eh?

Yet how often do managers acknowledge these simple needs? KIN had a good look at intrinsic motivations at the recent Spring Workshop on Behavioural Economics. These four motivational factors might nicely define what intrinsic motivation means for a deep expert.

Next time you are working with a group of experts, what will you do to act on, or at least acknowledge, these?
Source: APQC


Tuesday, 21 February 2017

Facts don't change minds - what we think we know is not what we know

Think you know how a toilet works? This article in The New Yorker shows that we really know a lot less than we think. This is noteworthy when considering how knowledge is transferred between 'experts'. It also shows the importance of communities of practice or networks in validating knowledge.

"People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins. “One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group".

Wednesday, 8 February 2017


KIN has long stressed the importance and power of learning from failure. In this short, funny and revealing post, David D'Souza publicly shares his experience of what not to do in a TV interview.
His post uses humour, it's punchy (note the bullet points), and it's written in the first person. I doubt I'll ever be on TV, but anyone can immediately relate to and learn from this.
Now that's real learning from failure - the antithesis of a dry 'lessons learned' report.

David was a speaker at the KIN Winter workshop on 'Organisational knowledge in the era of Machine Intelligence'. You can see the video of David's inspiring talk here.

Tuesday, 7 February 2017

Innovation: Eureka or Bernard?

Alexander Fleming was by all accounts a brilliant scientist, but a poor communicator. His 1928 discovery of penicillin would have gone unnoticed but for Florey and Chain later industrialising its manufacture; the three shared the 1945 Nobel Prize in Physiology or Medicine. Great innovations abound where an individual's discovery or invention is capitalised upon by others. 'If I have seen further it is because I have stood on the shoulders of giants*' is often (mis)attributed to Newton.

This trope goes some way to dispelling the 'Eureka!' myth of the lone individual conjuring up an instant solution in his/her bathtub. In reality, neither the brilliant individual nor the serial or incremental innovator reflects contemporary innovation models. Co-development and collaborative problem-solving, particularly across diverse sectors, are much more effective.

Last year I hosted a fascinating site visit to Bletchley Park, the site, now a museum, where Britain's secret WW2 code breakers worked. We saw first-hand the importance and effectiveness of diversity in collaboration: diversity of skills, social backgrounds and crafts all played a part in successfully developing innovations of remarkable importance.

Innovation is Combination is the title of a recent article by Greg Satell on modern innovation models that transcend organisations and existing networks. As Greg says:

"The 21st century, however, will give rise to a new era of innovation in which we combine not just fundamental elements, but entire fields of endeavor. As Dr. Angel Diaz, IBM’s VP of Cloud Technology & Architecture told me, “We need computer scientists working with cancer scientists, with climate scientists and with experts in many other fields to tackle grand challenges and make large impacts on the world.”
"Today, it takes more than just a big idea to innovate. Increasingly, collaboration is becoming a key competitive advantage because you need to combine ideas from widely disparate fields."

*This phrase is more correctly attributed to a chap prosaically called Bernard of Chartres.

Friday, 20 January 2017

Got a badge? Can I be your friend?

Got a Fitbit or Nike FuelBand? Your employer might soon be asking you to wear one at work.

We live in an ever more connected world, and visualising digital connections and interactions is now very easy. However, the most effective networkers instinctively know the importance of building relationships face-to-face. Until now, the only way of mapping those real-world relationships and personal networks has been through survey-based techniques such as Social Network Analysis (SNA, sometimes called Organisational Network Analysis or ONA). Indeed, KIN has conducted SNA surveys in the past and will be doing so again this summer.

The advent of cheap, wearable Bluetooth or wifi trackers now gives us the opportunity to effortlessly produce sociograms of how teams, or indeed entire organisations, interact. Of course, there are significant privacy concerns here. Firstly, how do you feel about your movements being tracked in real time? And secondly, how will that data be used? Even with assurances of anonymised data and analysis, I'm really not sure I want my bosses tracking my every move. Having said that, they are already doing just that with my 'digital footprint', particularly email and internal messaging.
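For the curious, turning tracker data into a sociogram is conceptually very simple. Here's a minimal sketch in Python using only the standard library; the names, pairings and counts are entirely invented for illustration, and a real badge system (Humanyze's included) would of course work from timestamped proximity events and far richer analytics:

```python
from collections import Counter, defaultdict

# Hypothetical badge-proximity events: each pair records that two
# badge IDs were detected near each other at some point in the day.
events = [
    ("alice", "bob"), ("alice", "bob"), ("bob", "carol"),
    ("alice", "dave"), ("alice", "bob"), ("alice", "carol"),
]

# Weight each undirected edge by how often the pair interacted
edge_weights = Counter(tuple(sorted(pair)) for pair in events)

# Build an adjacency structure for the sociogram
graph = defaultdict(dict)
for (a, b), count in edge_weights.items():
    graph[a][b] = count
    graph[b][a] = count

# Degree centrality: the fraction of other people each person is
# connected to - a crude proxy for who the 'connectors' are.
n = len(graph)
centrality = {person: len(neighbours) / (n - 1)
              for person, neighbours in graph.items()}

ranked = sorted(centrality, key=centrality.get, reverse=True)
print(ranked[0])  # the best-connected person in this toy data
```

Even a toy like this shows why the privacy question matters: the weighted edges reveal exactly who spends time with whom, and how often.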

MIT spin-off Humanyze claims to be working with several large financial and energy companies, mapping employees' movements and connections using wearable trackers or 'badges'. They claim network analysis can improve teamwork, collaboration and process optimisation. Some of these programmes are being promoted as ways of getting sedentary workers to move around more. If that's the case, why not just give them a Fitbit?

At least in being asked to wear, and display, a badge, the individual is openly participating in being tracked. I have this amusing/dystopian vision of a badge-wearer I'd rather not be associated with approaching me, and my hightailing it out of there before the system connects us. Conversely, I imagine influential and senior people being stalked by 'badgers' wanting to game the system.