Monday, 14 November 2016

Organisational Knowledge in a Machine Intelligence era


KIN Winter Workshop 2016, 7th December 2016.


According to Narrative Science, 62 per cent of organisations will be using Artificial Intelligence (AI) by 2018.

If you asked most people when they last encountered something that used artificial intelligence, they’d probably conjure up a mental image of robots, and might be hard pressed to think of something in everyday use. Machine intelligence and machine learning – the new synonyms for “artificial intelligence” – are on the rise and are going to be pervasive. Anyone using a smartphone is already using some sort of machine intelligence with Google Now’s suggestions, Siri’s voice recognition, or the Cortana personal assistant on Windows. We don’t call these “artificial intelligence”, because it’s a term that alarms some people and has earned some ridicule down the years. But it doesn’t matter what you call it; the ability to get computers to infer information that they aren’t directly supplied with, and to act on it, is already here.

But what does all this mean in a practical sense? Can we – or should we – rely on intelligent machines to do the heavy (physical and cognitive) lifting for us, and if so, what does the future hold for knowledge and information professionals?

The rise of the chatbot

It’s taken about 10 years, but social media has finally been accepted as a business tool, rather than just a means for people to waste time. If you look at any contemporary enterprise collaboration system, you’ll find social media features borrowed from Facebook or Twitter embedded into the functionality. Organisations have (finally) learnt that the goal of social technology within the workplace is not simply to maximise engagement or to facilitate collaboration, but rather to support work activities without getting in the way. Having said that, we still can’t extract ourselves from email as the primary tool for doing business. Email is dead, long live email!

Some progress then. But technology never stands still, and there’s more disruption on the way, led as usual by the consumer society. Early in 2016, we saw the introduction of the first wave of artificial intelligence technology in the form of chatbots and virtual assistants. This is being heralded as a new era in technology that some analysts have referred to as the “conversation interface”. It’s an interface that won’t require a screen or a mouse to use. There will be no need to click, swipe or type. This is an era when a screen for a device will be considered antiquated, and we won’t have to struggle with UX design. This interface will be completely conversational, and those conversations will be indistinguishable from the conversations we have with work colleagues, friends and family.

Virtual assistants are personalised, cross-platform devices that work with third-party services to respond instantly to users’ requests, which could include online searching, purchasing, monitoring and controlling connected devices, and facilitating professional tasks and interactions.

Will it be another 10 years before we see this technology accepted as a business tool? I think not, because the benefits are so apparent. For example, given a more convenient and accessible alternative, would we still use email to get things done, or would we have a real-time conversation? Rather than force workers to stop what they’re doing and open a new application, chatbots and virtual assistants inject themselves into the places where people are already communicating. Instead of switching from a spreadsheet to bring up a calendar, the worker can schedule a meeting without disrupting the flow of their current conversations.
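To make that “inject themselves into the conversation” idea concrete, here is a minimal sketch of a rule-based bot that watches messages and only interjects when it recognises a known intent. The intents and replies are invented for illustration; real assistants use far more sophisticated language understanding than pattern matching.

```python
# A minimal sketch of a chatbot sitting inside an existing conversation:
# it scans each message for a known "intent" and stays silent otherwise,
# so it supports the work without getting in the way.
import re

# Hypothetical intents for illustration: pattern -> canned action
INTENTS = {
    r"\bschedule (a )?meeting\b": "Opening your calendar to book a slot...",
    r"\bremind me\b": "Setting a reminder...",
}

def chatbot_reply(message):
    """Return a reply if the message matches a known intent, else None."""
    for pattern, action in INTENTS.items():
        if re.search(pattern, message.lower()):
            return action
    return None  # no match: the bot does not interrupt the conversation

print(chatbot_reply("Can we schedule a meeting for Thursday?"))
# -> Opening your calendar to book a slot...
print(chatbot_reply("Here are the Q3 figures."))
# -> None
```

The key design point is the `None` branch: the bot is a participant in an existing channel, not a separate application the worker has to open.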

Companies like Amazon and Google are already exploring these technologies in the consumer space, with the Amazon Echo and Google Home products; these are screenless devices that connect to Wi-Fi and then carry out services.  This seamless experience puts services in reach of the many people who wouldn’t bother to visit an App Store, or would have difficulty in using a screen and keyboard, such as the visually impaired.

We’ll be looking at some examples of how chatbots and virtual assistants are being used to streamline business processes and interface with customers at the KIN Winter Workshop on the 7th December 2016.

Machine Learning 

It is worth clarifying here what we normally mean by learning in AI: a machine learns when it changes its behaviour based on experience. It sounds almost human-like, but in reality the process is quite mechanical. Machine learning began to gain traction when the concept of data mining took off in the 1990s. Data mining uses algorithms to look for patterns in a given set of information. Machine learning does the same thing, but then goes one step further – it changes the program’s behaviour based on what it learns.
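That “changes its behaviour based on experience” can be shown in a few lines of code. Below is a toy, hand-rolled perceptron (one of the oldest machine-learning algorithms, not any particular product) that learns the logical AND function purely from labelled examples: every time it answers wrongly, it nudges its internal weights, so its behaviour after training differs from its behaviour before.

```python
# A toy illustration of "learning as changing behaviour with experience":
# a one-neuron perceptron adjusts its weights after each mistake, so its
# answers change as it works through the labelled examples.

def train_perceptron(examples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # weights start out knowing nothing
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - predicted
            # The "learning" step: nudge behaviour in response to experience
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach the logical AND function from examples rather than explicit rules
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Note how mechanical it is: no rule for AND is ever written down; the correct behaviour emerges from repeated correction.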

One application of machine learning that has become very popular is image recognition. These applications must first be trained – in other words, humans have to look at a bunch of pictures and tell the system what is in each picture. After thousands and thousands of repetitions, the software learns which patterns of pixels are generally associated with dogs, cats, flowers, trees, etc., and it can make a pretty good guess about the content of images.
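The train-then-guess loop can be sketched in miniature. The example below uses four-pixel “images” and a simple nearest-average classifier – production systems use millions of images and deep neural networks, and the labels and pixel values here are entirely made up – but the shape of the process is the same: humans label examples, the system extracts a pattern per label, and new images are matched to the closest learned pattern.

```python
# A drastically simplified sketch of supervised image recognition:
# humans label example "images" (tiny brightness grids here), the system
# averages the labelled pixels per class, and a new image is assigned to
# whichever learned pattern it most closely resembles.

def train(labelled_images):
    """Average the pixel values of the training examples for each label."""
    by_label = {}
    for pixels, label in labelled_images:
        by_label.setdefault(label, []).append(pixels)
    return {
        label: [sum(col) / len(imgs) for col in zip(*imgs)]
        for label, imgs in by_label.items()
    }

def classify(centroids, pixels):
    """Guess the label whose average pattern is closest to these pixels."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: distance(centroids[label], pixels))

# Hand-labelled training data: 4-pixel "images" of two made-up classes
training = [
    ([0.9, 0.8, 0.1, 0.2], "dog"),
    ([0.8, 0.9, 0.2, 0.1], "dog"),
    ([0.1, 0.2, 0.9, 0.8], "cat"),
    ([0.2, 0.1, 0.8, 0.9], "cat"),
]
model = train(training)
print(classify(model, [0.85, 0.75, 0.15, 0.25]))  # -> dog
```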

This approach has delivered language translation, handwriting recognition, face recognition and more. Contrary to the assumptions of early research into AI, we don’t need to precisely describe a feature of intelligence for a machine to simulate it.

Thanks to machine learning and the availability of vast data sets, AI has finally been able to produce usable vision, speech, translation and question-answering systems. Integrated into larger systems, those can power products and services ranging from Siri and Amazon’s Echo to the Google self-driving car.

The interesting – or worrying, depending on your perspective – aspect of machine learning is that we don’t know precisely how the machine arrives at any particular solution. Can we trust the algorithms that the machine has developed for itself? So much can affect accuracy: data quality, interpretation and bias. This is just one facet of a broader discussion we will be exploring at the KIN Winter Workshop, specifically deployments of machine learning for decision making and decision support.
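A toy example makes the bias point tangible. Suppose a “model” learns nothing more than a pass/fail threshold: the midpoint between the average score of past passes and past fails it has seen. The numbers and scenario below are invented for illustration, but they show how the same algorithm, fed a skewed sample of experience, reaches a different decision.

```python
# A toy illustration of how biased training data skews a learned rule.
# The "model" learns a decision threshold: the midpoint between the
# average score of the "pass" cases and the "fail" cases it has seen.

def learn_threshold(examples):
    passes = [score for score, passed in examples if passed]
    fails = [score for score, passed in examples if not passed]
    return (sum(passes) / len(passes) + sum(fails) / len(fails)) / 2

balanced = [(30, False), (40, False), (60, True), (70, True)]
# Biased sample: only the strongest past passes happened to be recorded
biased = [(30, False), (40, False), (85, True), (95, True)]

print(learn_threshold(balanced))  # -> 50.0
print(learn_threshold(biased))    # -> 62.5

# A candidate scoring 55 is accepted by the balanced model but rejected
# by the biased one -- same algorithm, different "experience".
```

Neither model can explain itself; the skew is visible only if someone inspects the data it learned from.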

Jobs and Skills

The one issue that gets most people agitated about AI is the impact on jobs and skills. A recent survey by Deloitte suggested 35% of UK jobs would be affected by automation over the next two decades. However, many counter this by saying the idea is to free up people’s time to take on more customer-focused, complex roles that cannot be done by machines.

I think this video from McKinsey puts the arguments into perspective by differentiating between activities and jobs. Machines have a proven track record of automating repetitive, rule-driven or routine tasks. That’s not the same as replacing jobs, where routine processes are only part of a wider job function. According to McKinsey, taking a cross-section of all jobs, 45% of activities can be automated, and we’re not just talking about predominantly manual labour. They go on to say that up to a third of a CEO’s time could be automated.

Other research, by the Pew Research Center, found that 53% of experts think AI will actually create more jobs.

The question we need to be asking ourselves is: what knowledge and skills do we need to develop now in order to make the most of this technology revolution happening around us, and to ensure we remain relevant? If organisations don’t find out more about these technologies and how they can be used to improve efficiency or productivity, they can be sure their competitors will!

If you haven’t yet registered for the KIN Winter Workshop – “Knowledge Organisation in the ‘Machine Intelligence’ era” (KIN Member link) – do so soon! If you’re not currently being affected by AI, you soon will be. Make sure you’re ready!

Steve Dale
KIN Facilitator


Wednesday, 9 November 2016

Buying-in to the Nudge Business

'I'm trying to get really busy people, resistant to change, to do things differently. I have no team, little or no budget and struggle to get buy-in from senior people'. I frequently hear variants of this refrain from clients embarking on knowledge or organisational learning programmes. 

It was refreshing to hear an inspiring success story this week that illustrated what can be achieved with little resource. The Freakonomics podcast 'The White House Gets Into the Nudge Business' features Maya Shankar describing the evolution and eventual success of her behavioural science unit. She tells how she went from convincing sceptical (and sometimes hostile) US government agencies to being in demand everywhere. The programme is interesting for two reasons: firstly, her patient and smart approach to generating senior management buy-in (and getting resource along the way); secondly, the stories of their creative behavioural 'nudges'.

You'll be inspired by the podcast and the real-world examples Shankar shares. Here is a short extract from the programme transcript. Highlights in bold are mine.



STEPHEN DUBNER: You had been mostly a student for the previous bunch of years, so you weren’t a practitioner of either behavioral sciences or a longtime practitioner of policy making, and yet you are made head of this new White House unit...I guess from the outside you could argue it’s a signal that maybe the White House wasn’t very serious about this or wasn’t really expecting all that much out of a unit like that. 

MAYA SHANKAR:  ...I came in without a mandate and without having the authority to simply create this team. What happened as a result is the team ended up being far more organic. And I remember at the time thinking, “Man, this is kind of frustrating.” Right? I wish there was an easier way for us to get to 'yes', and that I could simply tell our agency projects, “Please take this risk. Run this early pilot with me.” 

But, you know we’ve come to see longer term value in this organic approach. Actually having to convince our agency colleagues to run behavioral projects with us, doing the upfront work to convince them early on that there was inherent value in what we were proposing. We organized 'brown bags' on behavioral insights 101, giving examples of success stories in which behavioral insights were applied to policy. Making sure that we aligned our recommendations with their existing priorities and goals. 

That all has helped in the longer term because I think it’s actually fostering true cultural change and buy-in in agencies. And for that reason, many of our early pilots with agencies have effortlessly led to longer-term collaborations at the request of our agency partners. So you can easily imagine that if I came in and I was able to order these pilots – well, as soon as I left those pilots, that work would probably leave with me. But because we were required to get their buy in, they started demanding this work. They started becoming internal champions for the work. And now we have a government that has a number of internal champions within the agencies that see the value and hopefully help the whole effort persist. 
------------------------------------

At the KIN Spring 2017 workshop on 7th March we will take a look at behavioural economics and how it can be utilised to effect change in knowledge sharing. We already have a great line-up: practitioners from PwC & UBS, an experiential learning exercise on bias from The Chemistry Group, and Prof Peter Ayton, Deputy Dean of Social Sciences at City University. 

An mp3 copy of the Freakonomics podcast, along with other behavioural economics material, will be available from the KIN event page.