The Innovation Network (KIN) is a members-only community; however, this blog reflects the musings and interests of KIN facilitators and members that may be of interest to the wider world.
Thursday, 15 December 2016
This Harvard Business Review article from Professor Neal Roese of Northwestern University suggests an alternative to learning from failure. His five-step process uses visualisation techniques to imagine alternative scenarios that help you recover from a setback.
Thursday, 1 December 2016
The Hippocratic Oath - as applied to the modern learning organisation
Most of us have heard of the Hippocratic Oath - the 2,500-year-old text solemnly recited by newly qualified doctors. I had never actually read it until today. I was struck by how appropriate it is when applied to knowledge sharing and learning in modern organisations.
I have reproduced the oath below, adapted by me for workers in complex, knowledge-based organisations. I have substituted only a handful of words (in italics); these replace a few specific medical terms, and I have omitted three sentences that are wholly medical in nature.
- I will respect the hard-won gains of those colleagues in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.
- I will apply, for the benefit of my colleagues, all measures which are required, avoiding those twin traps of hubris and nihilism.
- I will not be ashamed to say "I know not," nor will I fail to call in my colleagues when the skills of another are needed to solve a problem.
- I will respect the privacy of my colleagues, for their problems are not disclosed to me that the world may know. Most especially must I tread with care in confidential matters. If it is given me to save a project, all thanks. This responsibility must be faced with great humbleness and awareness of my own frailty.
- I will prevent decisions that are not based on evidence whenever I can, for prevention is preferable to cure.
- I will remember that I remain a member of the organization for which I work, with special obligations to all my colleagues.
- If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest standards of knowledge and learning and may I long experience the joy of helping those who seek my know-how.
It is almost uncanny how appropriate the Hippocratic Oath is for those working in knowledge-based, modern organisations. That said, I am under no illusion that getting knowledge workers to recite it at their annual appraisal would make the slightest difference.
Credit: The 1964 modern version of the Hippocratic Oath on which this post is based was written by Dr Louis Lasagna, Academic Dean of the School of Medicine at Tufts University.
Photo credit: Hippocrates by Rubens - Wikipedia
Monday, 14 November 2016
Organisational Knowledge in a Machine Intelligence era
KIN Winter Workshop 2016, 7th December 2016.
According to Narrative Science, 62 per cent of organisations will be using Artificial Intelligence (AI) by 2018.
If you asked most people when they last encountered something that used artificial intelligence, they’d probably conjure up a mental image of robots, and might be hard pressed to think of something in everyday use. Machine intelligence and machine learning – the new synonyms for “artificial intelligence” – are on the rise and are going to be pervasive. Anyone using a smartphone is already using some sort of machine intelligence with Google Now’s suggestions, Siri’s voice recognition, or the Windows Cortana personal assistant. We don’t call these “artificial intelligence”, because it’s a term that alarms some people and has earned some ridicule down the years. But it doesn’t matter what you call it; the ability to get computers to infer information that they aren’t directly supplied with, and to act on it, is already here.
But what does all this mean in a practical sense? Can we – or should we – rely on intelligent machines to do the heavy (physical and cognitive) lifting for us, and if so, what does the future hold for knowledge and information professionals?
The rise of the chatbot
It’s taken about 10 years, but social media has finally been accepted as a business tool, rather than just a means for people to waste time. If you look at any contemporary enterprise collaboration system, you’ll find social media features borrowed from Facebook or Twitter embedded into the functionality. Organisations have (finally) learnt that the goal of social technology within the workplace is not simply to maximise engagement or to facilitate collaboration, but rather to support work activities without getting in the way. Having said that, we still can’t extract ourselves from email as the primary tool for doing business. Email is dead, long live email!
Some progress then. But technology never stands still, and there’s more disruption on the way, led as usual by the consumer society. Early in 2016, we saw the introduction of the first wave of artificial intelligence technology in the form of chatbots and virtual assistants. This is being heralded as a new era in technology that some analysts have referred to as the “conversation interface”. It’s an interface that won’t require a screen or a mouse to use. There will be no need to click, swipe or type. This is an era when a screen for a device will be considered antiquated, and we won’t have to struggle with UX design. This interface will be completely conversational, and those conversations will be indistinguishable from the conversations we have with work colleagues, friends and family.
Virtual assistants are personalised, cross-platform devices that work with third-party services to respond instantly to users’ requests, which could include online searching, purchasing, monitoring and controlling connected devices, and facilitating professional tasks and interactions.
Will it be another 10 years before we see this technology accepted as a business tool? I think not, because the benefits are so apparent. For example, given the choice of convenience and accessibility, would we still use email to get things done, or would we have a real-time conversation? Rather than force workers to stop what they’re doing and open a new application, chatbots and virtual assistants inject themselves into the places where people are already communicating. Instead of switching from a spreadsheet to bring up a calendar, the worker can schedule a meeting without disrupting the flow of their current conversations.
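To make that concrete, here is a toy sketch in Python of how a rule-based chatbot might spot a scheduling request inside an ordinary chat message. It is purely illustrative: the pattern and the schedule_meeting helper are invented for this post, not taken from any vendor's product.

```python
import re

# Toy, rule-based "chatbot": it watches an ordinary chat message and reacts to
# a scheduling request without the user having to open a calendar application.
# The pattern and the schedule_meeting() helper are invented for illustration.
MEETING_PATTERN = re.compile(
    r"book (?:a )?meeting with (?P<person>\w+) (?:on|at) (?P<when>.+)",
    re.IGNORECASE,
)

def schedule_meeting(person: str, when: str) -> str:
    # A real assistant would call a calendar API here; we just confirm.
    return f"OK - I've pencilled in a meeting with {person} for {when}."

def handle_message(message: str) -> str:
    match = MEETING_PATTERN.search(message)
    if match:
        return schedule_meeting(match.group("person"), match.group("when").rstrip("?.! "))
    return "Sorry, I didn't catch that. Try: 'book a meeting with Alice on Friday at 10am'."

print(handle_message("Could you book a meeting with Alice on Friday at 10am?"))
# -> OK - I've pencilled in a meeting with Alice for Friday at 10am.
```

A real assistant would use natural-language understanding and a calendar API rather than a regular expression and a print statement, but the "conversation as the interface" idea is the same.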
Companies like Amazon and Google are already exploring these technologies in the consumer space with the Amazon Echo and Google Home products: screenless devices that connect to Wi-Fi and then carry out services. This seamless experience puts services within reach of the many people who wouldn’t bother to visit an app store, or who would have difficulty using a screen and keyboard, such as the visually impaired.
We’ll be looking at some examples of how chatbots and virtual assistants are being used to streamline business processes and interface with customers at the KIN Winter Workshop on the 7th December 2016.
Machine Learning
It is worth clarifying here what we normally mean by learning in AI: a machine learns when it changes its behaviour based on experience. It sounds almost human-like, but in reality the process is quite mechanical. Machine learning began to gain traction when the concept of data mining took off in the 1990s. Data mining uses algorithms to look for patterns in a given set of information. Machine learning does the same thing, but then goes one step further – it changes its program's behaviour based on what it learns.
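As a toy illustration of that "one step further", here is a minimal sketch (written for this post, not taken from any real system) of a program that literally changes its own behaviour – a weight and a bias – every time experience shows it was wrong:

```python
# Toy illustration of "changing behaviour based on experience": a one-weight
# learner that nudges its weight and bias whenever its prediction is wrong.
# Entirely mechanical - no understanding involved, just error-driven adjustment.
def train(examples, learning_rate=0.1, epochs=100):
    weight, bias = 0.0, 0.0
    for _ in range(epochs):
        for x, target in examples:               # target is a human-supplied label
            prediction = 1 if weight * x + bias > 0 else 0
            error = target - prediction          # the "experience": how wrong were we?
            weight += learning_rate * error * x  # change behaviour accordingly
            bias += learning_rate * error
    return weight, bias

# The rule the machine is never told explicitly: label is 1 when x > 5, else 0.
examples = [(x, int(x > 5)) for x in range(11)]
weight, bias = train(examples)
print([1 if weight * x + bias > 0 else 0 for x in (2, 7, 9)])   # -> [0, 1, 1]
```

The rule it ends up encoding was never written into the program; it was inferred from the labelled examples.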
One application of machine learning that has become very popular is image recognition. These applications first must be trained – in other words, humans have to look at a bunch of pictures and tell the system what is in the picture. After thousands and thousands of repetitions, the software learns which patterns of pixels are generally associated with dogs, cats, flowers, trees, etc., and it can make a pretty good guess about the content of images.
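For the technically curious, here is a minimal sketch of that train-on-labelled-examples idea using the freely available scikit-learn library and its small built-in collection of labelled digit images (this assumes scikit-learn is installed; real image-recognition systems use far larger datasets and deep neural networks, but the principle is the same):

```python
# Minimal sketch of supervised image recognition with scikit-learn:
# humans have already labelled each 8x8 pixel image with the digit it shows,
# and the classifier learns which pixel patterns go with which label.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()                      # ~1,800 labelled images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)   # a simple classifier is enough for this toy dataset
model.fit(X_train, y_train)                 # "training": show it the labelled examples

predictions = model.predict(X_test)         # guess the content of unseen images
print(f"Accuracy on unseen images: {accuracy_score(y_test, predictions):.2%}")
```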
This approach has delivered language translation, handwriting recognition, face recognition and more. Contrary to the assumptions of early research into AI, we don’t need to precisely describe a feature of intelligence for a machine to simulate it.
Thanks to machine learning and the availability of vast data sets, AI has finally been able to produce usable vision, speech, translation and question-answering systems. Integrated into larger systems, these can power products and services ranging from Siri and the Amazon Echo to the Google self-driving car.
The interesting – or worrying, depending on your perspective – aspect of machine learning is that we don’t know precisely how the machine arrives at any particular solution. Can we trust the algorithms that the machine has developed for itself? There is so much that can affect accuracy, e.g. data quality, interpretation and biased data.
This is just one facet of a broader discussion we will be exploring at the KIN Winter Workshop, looking specifically at deployments of machine learning for decision making and decision support.
Jobs and Skills
The one issue that gets most people agitated about AI is the impact on jobs and skills. A recent survey by Deloitte suggested 35% of UK jobs would be affected by automation over the next two decades. However, many counter this by saying the idea is to free up people’s time to take on more customer-focused, complex roles that cannot be done by machines.
I think this video from McKinsey puts the arguments into perspective by differentiating between activities and jobs. Machines have a proven track record of being able to automate repetitive, rule-driven or routine tasks. That’s not the same as replacing jobs, where routine processes are only part of a wider job function. According to McKinsey, taking a cross-section of all jobs, 45% of activities can be automated, and we’re not just talking about predominantly manual labour. They go on to say that up to a third of a CEO’s time could be automated.
Other research, by the Pew Research Center, found that 53% of experts think that AI will actually create more jobs.
The question we need to be asking ourselves is: what knowledge and skills do we need to develop now in order to make the most of the technology revolution happening around us and ensure we remain relevant? If organisations don’t find out more about these technologies and how they can be used to improve efficiency or productivity, they can be sure their competitors are!
If you haven’t yet registered for the KIN Winter Workshop – “Knowledge Organisation in the ‘Machine Intelligence’ era” (KIN Member link) – do so soon! If you’re not currently being affected by AI, you soon will be. Make sure you’re ready!
Steve Dale
KIN Facilitator
Wednesday, 9 November 2016
Buying-in to the Nudge Business
'I'm trying to get really busy people, resistant to change, to do things differently. I have no team, little or no budget and struggle to get buy-in from senior people'. I frequently hear variants of this refrain from clients embarking on knowledge or organisational learning programmes.
It was refreshing to hear an inspiring success story this week that illustrated what can be achieved with little resource. The Freakonomics podcast 'The White House Gets Into the Nudge Business' features Maya Shankar describing the evolution and eventual success of her behavioural science unit. She tells how she went from convincing sceptical (and sometimes hostile) US government agencies to being in demand everywhere. The programme is interesting for two reasons: firstly, her patient and smart approach to generating senior management buy-in (and getting resource along the way); secondly, the stories of their creative behavioural 'nudges'.
You'll be inspired by the podcast and the real-world examples Shankar shares. Here is a short extract from the programme transcript. Highlights in bold are mine.
STEPHEN DUBNER: You had been mostly a student for the previous bunch of years, so you weren’t a practitioner of either behavioral sciences or a longtime practitioner of policy making, and yet you are made head of this new White House unit...I guess from the outside you could argue it’s a signal that maybe the White House wasn’t very serious about this or wasn’t really expecting all that much out of a unit like that.
MAYA SHANKAR: ...I came in without a mandate and without having the authority to simply create this team. What happened as a result is that the team ended up being far more organic. And I remember at the time thinking, “Man, this is kind of frustrating.” Right? I wish there was an easier way for us to get to 'yes', and that I could simply tell our agency projects, “Please take this risk. Run this early pilot with me.”
But, you know we’ve come to see longer term value in this organic approach. Actually having to convince our agency colleagues to run behavioral projects with us, doing the upfront work to convince them early on that there was inherent value in what we were proposing. We organized 'brown bags' on behavioral insights 101, giving examples of success stories in which behavioral insights were applied to policy. Making sure that we aligned our recommendations with their existing priorities and goals.
That all has helped in the longer term because I think it’s actually fostering true cultural change and buy-in in agencies. And for that reason, many of our early pilots with agencies have effortlessly led to longer-term collaborations at the request of our agency partners. So you can easily imagine that if I came in and I was able to order these pilots – well, as soon as I left those pilots, that work would probably leave with me. But because we were required to get their buy in, they started demanding this work. They started becoming internal champions for the work. And now we have a government that has a number of internal champions within the agencies that see the value and hopefully help the whole effort persist.
------------------------------------
At the KIN Spring 2017 workshop on 7th March we will take a look at behavioural economics and how it can be utilised to effect change in knowledge sharing. We already have a great line-up: practitioners from PwC and UBS, an experiential learning exercise on bias from The Chemistry Group, and Prof Peter Ayton, Deputy Dean of Social Sciences at City University.
An MP3 copy of the Freakonomics podcast, along with other behavioural economics material, will be available from the KIN event page.
Wednesday, 26 October 2016
Big Data, Data Analytics and AI
Image source: livemint.com
Big Data, Data Analytics and AI are topics and trends that I've been keeping a "layman's" eye on for several years, mainly because I don't like surprises. If I'm going to be replaced at some point by a machine, I'd like to see it coming from a distance rather than have it sneak up behind me!
One of the issues I have with Big Data is just that – the term “Big Data”. It’s fairly abstract and defies a precise definition. I’m guessing the name began as a marketing invention, and we’ve been stuck with it ever since. I’m a registered user of IBM’s Watson Analytical Engine, and their free plan has a dataset limit of 500 MB. So is that ‘Big Data’? In reality it’s all relative. To a small accountancy firm of 20 staff, their payroll spreadsheet is probably big data, whereas the CERN research laboratory in Switzerland probably works in units of terabytes.
Eric Schmidt (Google) was famously quoted in 2010 as saying “There were 5 exabytes of information created between the dawn of civilisation through 2003, but that much information is now created in 2 days”. We probably don’t need to understand exactly what an ‘exabyte’ is (it’s a billion gigabytes) to get a sense that it’s very big. More to the point, it gives a sense of the velocity of information: by Schmidt’s reckoning we now create as much data every two days as humanity did up to 2003, and the pace has only increased in the six years since his statement.
It probably won’t come as a surprise to anyone that most organisations still don’t know what data they actually have, or what they’re creating and storing on a daily basis. Some are beginning to realise that these massive archives of data might hold useful information that could potentially deliver some business value. But it takes time to access, analyse, interpret and act on the results of that analysis, and in the meantime the world has moved on.
According to the “Global Databerg Report” by Veritas Technologies, 55% of all information is considered to be ‘dark’ – in other words, of unknown value. The report goes on to say that, where information has been analysed, 33% is considered to be “ROT” – redundant, obsolete or trivial. Hence the credibility gap between the rate at which information is being created and our ability to process and extract value from it before it becomes ROT.
But the good news is that more organisations are recognising that there is some potential value in the data and information that they create and store, with growing investment in people and systems that can make use of this information.
The PwC Global Data & Analytics Survey 2016 emphasises the need for companies to establish a data-driven innovation culture – but there is still some way to go. Those using data and analytics are focused on the past, looking back with descriptive (27%) or diagnostic (28%) methods. The more sophisticated organisations (a minority at present) use a forward-looking predictive and prescriptive approach to data.
What is becoming increasingly apparent is that C-suite executives who have traditionally relied on instinct and experience to make decisions now have the opportunity to use decision-support systems driven by massive amounts of data. Sophisticated machine learning can complement experience and intuition. Today’s business environment is not just about automating business processes – it’s about automating thought processes. Decisions need to be made faster in order to keep pace with a rapidly changing business environment. So decision making based on a mix of mind and machine is now coming into play.
One of the most interesting by-products of this Big Data era is ‘machine learning‘ – mentioned above. Machine learning’s ability to scale across the broad spectrum of contract management, customer service, finance, legal, sales, pricing and production is attributable to its ability to continually learn and improve. Machine learning algorithms are iterative in nature, constantly learning and seeking to optimise outcomes. Every time a miscalculation is made, machine learning algorithms correct the error and begin another iteration of the data analysis. These calculations happen in milliseconds, which makes machine learning exceptionally efficient at optimising decisions and predicting outcomes.
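As a simple illustration of that iterate-measure-correct loop, here is a toy Python sketch (with made-up numbers, not any production system) of gradient descent fitting a one-variable model, nudging its estimate after every miscalculation:

```python
# Toy sketch of the iterate-measure-correct loop described above:
# gradient descent fitting y = w*x to a handful of (x, y) observations.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]   # made-up observations, roughly y = 2x

w = 0.0                                     # initial guess at the slope
learning_rate = 0.01
for step in range(1000):                    # each pass measures the error and corrects it
    gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * gradient           # the "correction" applied after each miscalculation

print(f"Learned slope: {w:.2f}")            # ends up close to 2.0
```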
So, where is all of this headed over the next few years? I can’t recall the provenance of the quote “never make predictions, especially about the future”, so treat these predictions with caution:
- Power to business users: Driven by a shortage of big data talent and the ongoing gap between needing business information and unlocking it from the analysts and data scientists, there will be more tools and features that expose information directly to the people who use it. (Source: Information Week 2016)
- Machine-generated content: Content that is based on data and analytical information will be turned into natural-language writing by technologies that can proactively assemble and deliver information through automated composition engines. Documents currently written by people, such as shareholder reports, legal documents, market reports, press releases and white papers, are prime candidates for these tools. (Source: Gartner 2016)
- Embedding intelligence: On a mass scale, Gartner identifies “autonomous agents and things” as one of the up-and-coming trends, which is already marking the arrival of robots, autonomous vehicles, virtual personal assistants, and smart advisers. (Source: Gartner 2016)
- Shortage of talent: Business consultancy A.T. Kearney found that 72% of market-leading global companies have a hard time hiring data science talent. (Source: A.T. Kearney 2016)
- Machine learning: Gartner said that an advanced form of machine learning called deep neural nets will create systems that can autonomously learn to perceive the world on their own. (Source: Ovum 2016)
- Data as a service: IBM’s acquisition of the Weather Company — with all its data, data streams, and predictive analytics — highlighted something that’s coming. (Source: Forrester 2016)
- Real-time insights: The window for turning data into action is narrowing. The next 12 months will be about distributed, open source streaming alternatives built on open source projects like Kafka and Spark. (Source: Forrester 2016)
- Roboboss: Some performance measurements can be consumed more swiftly by smart machine managers aka “robo-bosses,” who will perform supervisory duties and make decisions about staffing or management incentives. (Source: Gartner 2016)
- Algorithm markets: Firms will recognise that many algorithms can be acquired rather than developed – “just add data”. Examples of services available today include Algorithmia, Data Xu, and Kaggle. (Source: Forrester 2016)
The one thing I have taken away from the various reports, papers and blogs I’ve read as part of this research is that you can’t think about Big Data in isolation. It has to be coupled with cognitive technologies – AI, machine learning or whatever label you want to give it. Information is being created at an ever-increasing velocity. The window is getting ever narrower for decision making. These demands can only be met by coupling Big Data and Data Analytics with AI.
A summary of all the above is included in these slides.
Tuesday, 18 October 2016
What modern organisations can learn from the Bletchley Park code-breakers
A few weeks ago I wrote about my eye-opening visit to Bletchley Park in a post called 'Huts and Silos'. This inspired me to arrange a KIN site visit to the home of WW2 codebreaking. The idea was to see what modern organisations could learn from Bletchley Park's innovation, collaboration and organisational set-up. On Friday, 13 of us had an inspiring tour of the site; this is the result of our reflections at the end of the day.
| # | Participant observations of the Bletchley Park operation | Possible lessons for modern organisations |
|---|---|---|
| 1 | Diversity of backgrounds and professions represented. Unusually, class distinctions were immaterial. | Different perspectives & backgrounds = higher likelihood of finding solutions to problems. Complementary skill sets. |
| 2 | ‘Silo’ working at Bletchley Park was a necessity for security reasons. | Sometimes there is a good reason for clear separation of operations, for example Chinese walls for financial operations. |
| 3 | Despite much of the work being tedious and the workers conscripted, morale was high and ambitious targets were achieved. | Intrinsic motivation (having a goal that workers believe in and work that plays to strengths) can compensate for difficult circumstances. It’s not all about pay and rations (literally!). |
| 4 | Socialisation and relaxation were seen by senior management as an important factor in managing stress and keeping productivity high, e.g. tennis, dances, beer! | Informal spaces to relax and converse with co-workers are vital in building relationships, trust and the exchange of ideas (clearly the latter didn’t apply at Bletchley Park). |
| 5 | Unusually for the time, female staff at Bletchley Park (2/3 of the total) received equal pay to men. Note: we are unsure if this applied just to the code-breakers, or to all female staff. | One hopes that equal pay is no longer an issue, but we must be vigilant with regard to biases. The KIN Spring 2017 Workshop will include this issue. |
| 6 | Individuals with specialist skills were given very specific tasks; they were not asked to be generalists. | Too often experts are asked to take on generalist roles (such as managing teams or budgets). This can be a distraction, or cause stress or under-performance. |
| 7 | There were many failed attempts at problem solving. This was anticipated, and processes were in place to understand the root cause of failure. In one instance, the Navy code-breakers took nine months of repeated failure before cracking a problem. | We need to have a defined level of tolerance for failure, and ensure processes are in place to take action as a result. ‘Anyone who has not experienced failure has never tried anything new’ – A. Einstein |
| 8 | The code-breakers had to deal with up to an astonishing 6,000 messages per day. These had to be processed before midnight every day, when the Enigma settings changed. The industrialisation of the processing and analysis may be the first example of Big Data and Data Analytics. | Processes and skills for the analysis of huge volumes of real-time data are becoming ever more important. AI may be a way of understanding hidden patterns and inferences (see KIN Winter Workshop, 7th December). |
| 9 | The actors in ‘The Imitation Game’ spent time talking directly with Bletchley Park veterans, to understand what it was like to work there. | First-hand, verbatim knowledge is vital in understanding context and nuance for handovers and other knowledge-transfer situations. |
| 10 | Having tough targets and working under critical time constraints can sometimes foster ingenious solutions, for example the ‘cribs’ shortcuts. | Sometimes disturbing the status quo or adopting counter-intuitive approaches can foster innovation. |
| 11 | A good source of personnel was cryptic crossword puzzle fanatics and other critical thinkers. | Do we encourage critical thinking and individualism sufficiently in our education systems? |