Friday, November 16, 2018

Building in organisational adaptability

The way in which most businesses organise for value has a fundamental 20th century mass production design flaw. It constrains them from reaching both the capacity and capability to respond to the market at the speed our digital world demands.

It is specialisation.

Specialisation made a great deal of sense in a mass production world - enshrined in Adam Smith's pin factory, through Henry Ford's Model T and beyond.
But it presents a problem of friction for the modern, responsive organisation.

For example - if you have wonderful developers in one silo, and wonderful business leaders in another, you introduce a brake on responsiveness.
While you may have optimised your tech teams to develop, and your business teams to generate ideas, to accelerate the generation of value these 'roles' should be more closely connected (as we see in Agile, Design Thinking, Lean Start-up, Holacracy and combinations thereof).
Specialisation made sense when neither party could easily access the skills of the other.

That has changed. Technology now enables business ideation. Technologies that enable one-to-one marketing, for example, can inspire businesses to ideate on the new models this supports. But the friction comes when tech delivery capacity and capability remain a bottleneck.

Those choke-points all too often force businesses back to 'as usual' when striving for new.
What is required is a way of giving business people technology skills and technology people business skills. In this way the organisation becomes more responsive to upturns and dips in the requirement for both. It becomes built to adapt.

Pipe dream? Hardly - there are already many no-to-low code platforms and technologies which enable pretty much anyone with a laptop to develop their own applications and business process automation with nothing more than the lightest layer of additional coding*.
Business folk can be technology folk, too. At low cost. Minimum Viable Products can be crafted by the people with the vision and the insight - closing the strategy gap through direct and hands-on involvement.

And this cuts both ways. By applying managed innovation as a way of working (Agile, Design Thinking et al), technology folk can become as enabled to access insight, set vision and focus on value as their business colleagues.

Bringing both no-to-low code platforms and insight-led, human-centred managed innovation into your org is a huge leap toward becoming a truly responsive organisation.

*I do not dismiss the importance of technologists. Even the best of these platforms 1) had to be coded in the first place and 2) create their biggest bang for buck when some customisation is applied.

Wednesday, October 31, 2018

Coding the economy to cut the high price of low cost

Photo by Ray Hennessey on Unsplash

The allocation of resources - and the cost of optimising that allocation - has always been humanity's most profound challenge. Artificial Intelligence may offer our best solution - but only if we are prepared to think globally.

Capital, and the market model it sustains, has been held up as our best bet for resource allocation for centuries. Yet capitalism and its governance are not necessary states by any means. They are emergent properties of the complex adaptive system we describe as the economy.

One transaction allocating value does not capitalism make. Multiple ones at scale can. Just as a molecule of water is not a wave, let alone a tidal force. These are emergent properties of complex adaptive systems.

The interactions within the system are what create those emergent properties. Change those interactions and the properties change.

The best modifier we have had to date has been government intervention - taxes, laws, welfare etc. But complex adaptive systems are notoriously difficult to tinker with. Think of the butterfly effect as applied to weather (another complex adaptive system).

So a Trump here or a Brexit there is going to have some impacts, but predicting exactly who, where and what feels the chill wind is a somewhat more exacting science than knowing that change will come.

As our economy has become more connected (and decentralised) it is becoming more antifragile. Make no mistake: the economy will keep on functioning no matter what governments do. The question is whether it can be controlled somehow so that its emergent properties are desirable for humanity.

As we head towards quantum computing and ever more intelligent automation the point looms at which decisions to allocate resource will patently be better handled by machines.

Our machines will be better able to calculate the global economic costs of each transaction. They will factor for the environment as much as for human need. If we code that in.

But this is going to be a greater race for power than military applications of AI. Those who choose not to take part will be at a significant competitive disadvantage to rivals.

So the pressures to allocate to meet immediate human gratification will be huge. Only through global agreement on total costs will we give our children a chance.

Monday, September 24, 2018

The strategic importance of managed innovation in digital transformation

A couple of thoughts in visual form on the strategic importance of managed innovation in digital transformation - and the behaviours required of the people involved:

First - the strategic importance:


Second - the behaviours


Thursday, August 23, 2018

Enabling achievement vs hitting your KPIs

Photo by Jonas Jacobsson on Unsplash
GCSE exam results filled the UK media today, telling its once-a-year story of joy and heartbreak. The arguments over the KPIs have been more intense this year amid changes in the way exam results are calculated.
Which prompted me to return to a regular question when faced with how to measure something.
I asked a teenager what she thought education was for.
'To help you pass exams,' she said.
But it's not, is it?
Education is a lifelong thing. We acquire new skills and capabilities to be able to achieve things. Education is to enable us to achieve the things we seek to achieve.
The exam result is not the thing we are seeking to achieve.
The same is true of so much poor wisdom applied to the selection of our business KPIs. Too often they provide a distraction from the thing we are seeking to achieve and become an end in themselves.
Next time you are tasked with designing or setting KPIs, remember how exams can so easily fail the goals of education.

Friday, August 17, 2018

Crushed by scale

Photo by Mikito Tateisi on Unsplash
What if your process is simply scaling up doing the wrong thing?
What if your improved technology enables you to do that wrong thing even faster?

We often talk about the economies large organisations gain through scaling. But doing more of the wrong thing, that's the diseconomy of scale - and the crippling drag on the value of change.
So while we marvel at the new things, we must never be distracted from the need for new ways.


Digital Transformation is little more than a new thing to marvel at (an expensive tech upgrade) - unless it is accompanied by a shift to insight-led, value focused innovation as the organisation’s default way of working.
And while ideas are great, value is better. And continuous value creation is best.
To get to best requires tested frameworks, the right expertise, accelerators and approaches. And they must be delivered in a repeatable, human-centred and transferable way.

And only once you are proving value... then you scale.


Tuesday, July 17, 2018

The Digital Customer exposes the need for value in all interactions


We already have digital versions of ourselves populating our increasingly digital world: Your Linkedin, Facebook and Twitter profiles, your Amazon and Google footprints are all examples.
For the most part they are not yet autonomous. But it cannot be long before the 'MeBot' - an autonomous and intelligent you - becomes a ubiquitous part of our daily interaction with people, things and data.
All of which strongly suggests that brands and organisations must start developing strategies that place the digital customer at their heart.
Let me be clear: that digital version of you will always be informed by and learning from the real you. But increasingly it will be the digital rather than analogue version of you who will be making the transactions (tilting, as these things are, to online more heavily by the day).
And if Digital You has got the spends - Digital You is going to be the target.
So what does advertising/targeting/relationship-building/comms/PR/you-name-it look like when it is aimed at our MeBot?
Well - I suspect MeBots will rapidly learn which lies to ignore, which content sources to trust, which deals are for real. They may even be less swayed by the herd mentality humans find it so hard to resist (think of the impact on the stock markets...).
This is likely to starkly expose some of the realities and truths of relationships of trust - such as...

  1. Customers are not inhabitants of your omnichannels waiting to be managed from one to the next. They live in a 4D world with limitless touchpoints. The analogue-digital combination will evidence that by the truck-load. Map that!
  2. Customers are not waiting to be engaged, made your friend, or have anything else 'done' to them. They need a reason to interact with you... which leads us to point 3.
  3. Customers are not loyal. Forget loyalty - focus on proof of value. Unless you are offering a good enough value proposition your wheels will just keep on spinning.

Thursday, June 21, 2018

The road to frictionless has hardly begun

Image via: https://amckinnis.com/3-ways-to-create-frictionless-transactions/
A fascinating evening at Imperial College last night. An opportunity to hear from Google, Amazon and others on Voice.

Voice is becoming increasingly important - with 85 per cent of brands now actively working on their voice strategies (according to Vaice - a voice tech agency offering pro bono help to brands and agencies to engage in voice).

We heard about encouraging efforts from the likes of Snips (who have a blockchain-supported edge-computing solution to the data-grab dilemma many businesses, orgs and people may fear from the increasingly dominant platforms such as Google, Amazon, Apple, Microsoft and Facebook) and a personal favourite, Voiceitt, which is out to make voice accessible to those whose speech may be challenged by stroke, cerebral palsy and other debilitating conditions (including age).

Amazon shared the model it uses to make decisions about Alexa Skills to build. Unsurprisingly it starts with customer value...

Customer Value / Complexity x Frequency Potential x Frequency Maximisers

To be honest, that's pretty much the formula for success applied since widgets became apps, on web or mobile. I could argue it's a pretty solid formula for success in pretty much anything.

But it has been for a long time.
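As a sketch, that model can be expressed as a simple prioritisation score. All the figures, factor scales and candidate skills below are invented for illustration - Amazon publishes no units for these factors:

```python
# A minimal sketch of the prioritisation formula quoted above:
# Customer Value / Complexity x Frequency Potential x Frequency Maximisers.
# The scales (1-10 scores, multiplier for maximisers) are assumptions.

def skill_score(customer_value, complexity, frequency_potential, frequency_maximisers):
    """Higher value and frequency raise the score; complexity drags it down."""
    if complexity <= 0:
        raise ValueError("complexity must be positive")
    return (customer_value / complexity) * frequency_potential * frequency_maximisers

# Rank some hypothetical skill ideas (all figures invented):
candidates = {
    "check bank balance": skill_score(8, 2, 9, 1.5),
    "one-off quiz game": skill_score(5, 3, 2, 1.0),
    "daily commute briefing": skill_score(7, 1, 8, 1.2),
}
ranking = sorted(candidates, key=candidates.get, reverse=True)
```

On these invented numbers, the low-complexity, high-frequency idea wins - which is exactly the bias the formula is designed to build in.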

And that's the bit I think the excitement about Voice is missing currently. There was a lot of focus on the value of content and the continued broadcasting of it. There was talk about designing for personas, but none of this addresses the shift we should be looking for in business models.

When the web arrived, this was also the first reaction; how can we make money with this novelty?

Its real impact is how it shifts the way we can organise, cut out traditional supply chains etc. That wasn't identified immediately for the most part.
Then apps - what new capabilities could we play with?

We will also see a repeat of BYOD - my home is full of voice devices. My office (apart from our Collab) isn't. A generation of kids is growing up right now using voice for search, to learn, to discover music, to play games, to do the stuff they want to do with technology. Alexa for Business is already live in the US.

The big stuff, the new business models, the real impacts on how we behave (and since we are social beings, build relationships and organise), these come when we start considering what it means to have ubiquity with the new technology:

  • How will we behave when voice is everywhere in everything (and I will package personal recognition without the need for a screen with this)? 
  • How quickly can the AI behind voice learn enough about our emotional state to make use of it? The reasons behind our behaviour are somewhat more complex than current marketing typically grasps (See Behave for a crash course).
  • Do we need new rules to cope with the fact that voice literally speaks to our most instinctual selves (bypassing much of the frontal cortex activity where our logic and judgment are most developed)?

Voice strategies must go beyond a tone of voice for a brand. They must look to a future in which the majority of information exchanges are done out loud, where a few clicks is friction too far, where single sign-in is in the dustbin of history and customer intimacy is of the highest fidelity and at ubiquitous scale.

Here is a world in which, from that learned intimacy, prediction must follow.

The immediate battlefield is on two fronts: first to intimacy, and first to prediction built on it. The road to frictionless has hardly begun.

Friday, May 25, 2018

Actions speak louder than faux rationality

Photo by taha ajmi on Unsplash
When trying to understand human behaviour our biggest mistake is to seek the rational in what we think or what we believe. 

"Rationality resides in what you do" - Nassim Nicholas Taleb.

If you ask people why they do something, or even ask them to accurately describe what they do, the answers are clouded by self-justification and self-protection - obscuring the actual. They will tell you what they think they do (or even what they think you want to hear). They will tell you what they believe is accurate.

Observing what they do often reveals significant differences between their belief and the real.

This has obvious and applicable benefit in reducing the risk in innovation. Respond not to what people say they do or think they do, but what they actually do. A-B testing, Design Thinking and Lean Start-up methodologies are all rooted in this.

The faux rationality of conclusions drawn from our constructs of behaviour and abstractions thereon (replete with our own cognitive biases) is where we reintroduce risk. When this doesn't fail, we should count ourselves lucky rather than seek to repeat it. No-one stays lucky forever.

Mathematics does not allow for such constructs or abstraction. It demands precisely defined objects and relations, without which no algorithm can function.

Nvidia's research into teaching robots to perform tasks by having them observe humans illustrates again how the pursuit of AI is revealing to us what is really rational about people.



Thursday, May 24, 2018

Predicting how long your job will last

If you want to predict the future, look at what has stood the test of time.

When we talk about the future of work, naturally there are going to be new roles. There may be fewer or different tasks - the latter more likely.

But what you can bank on is that the roles that were here 100 years ago are far more likely to be here in another 100 years than those roles that have been with us for just a few short years.

That's not to say none of the new ones will stand the test of time, but a far higher percentage of the old ones will.

The longer anything lasts - the longer it is likely to continue to last. This is one of the lessons we can draw from Antifragility and other work by Nassim Nicholas Taleb. He would say it is a lesson we can learn from the wisdom of our grandparents.
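Taleb's heuristic (the Lindy effect) can be expressed as a toy model. The proportional rule and the constant below are a simplifying assumption for illustration, not Taleb's own formulation:

```python
# A toy illustration of the Lindy effect described above: for
# non-perishable things (ideas, technologies, roles), expected
# remaining lifespan is roughly proportional to current age.
# The proportionality constant k=1.0 is an assumption.

def expected_remaining_years(age_years, k=1.0):
    """Lindy heuristic: the older it is, the longer it is expected to last."""
    return k * age_years

# A role that has existed for a century vs one a few years old:
teacher = expected_remaining_years(100)                 # 100.0 more years expected
social_media_strategist = expected_remaining_years(5)   # 5.0 more years expected
```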

Will a teacher's job exist in 100 years' time? 90% yes. Will a social media strategist's? 90% no.

We are often dazzled by the new and make projections into the future on very shallow data. This fails. As AI is proving all over again.

To create value with AI the proposition needs to be reframed in terms of prediction. But unless the correct weighting of what has come before is built into the programming, researchers hit the problem they call 'catastrophic forgetting'. The solution is to build in virtual memories (eg DeepMind's Differentiable Neural Computer).

For AI to succeed it has to factor for what you and I instinctively know: the longer something has lasted, the longer it is likely to last.

Wednesday, April 11, 2018

The professional touch: And other ways to kill your effectiveness

What does being professional mean in a digital world?
Once it was 'professional' to complete and perfect products to the nth degree before releasing them from secrecy. Now it is the norm to release Betas and Alphas to learn from doing, to measure response, to shape to adoption and behaviour.
In the same way professional writing, publications, presentations and thought leadership once served to provide a clear and final view. This was how it was. No argument expected and few tolerated.
Blogging, and other social media changed that - a space was created where ideas were kept alive rather than shot dead to be stuffed and placed on the wall as some kind of trophy.
The digital world requires a different kind of 'professionalism' - one which is less about the refinement of the 'professional touch' and more about using your skills to engage, learn and iterate.
This is a world in which a messy kitchen is more valuable than a neat and tidy professional one. Clay Shirky came up with the analogy in an interview I conducted with him 10 years ago. He argued that people felt less comfortable about joining in if they entered a kitchen in which nothing was out of place. If there was a little bit of mess, we were more likely to pick up a utensil and help out.
There's evidence from the world of dodgy fonts, too. While the professional approach is to use clear fonts with limited variation in size, emphasis or hue, what actually works is somewhat different.
Studies at Princeton found that hard-to-read - or what designers call 'ugly' - fonts deliver greater retention of what has been read. If you want to land the message, make it ugly, messy and... naturally, add some emotion.
Professional detachment is of low value in this last case. Creating memories does not simply rely on the world we experience externally. It is as much about our feelings at the time of experiencing. Let the excitement shine out.

Maya Angelou — 'At the end of the day people won't remember what you said or did, they will remember how you made them feel.'
If we accept that we can scale our effectiveness as professionals when we gain engagement, and in so doing can better land an idea, then what passes for professional communication demands to be reappraised.

Monday, April 09, 2018

Avoiding the holes in your future

"Planning involves conjectures about the future and hypothetical cases. They are so busy with actual cases that they are reluctant to take on theoretical ones."


Henry Kissinger was talking about US Government policy makers when he wrote the above - in his doctoral thesis. The insight is equally valid for businesses today.

When 'planning' what is often conducted is an exercise in reactive tactic making - trying to solve the problems facing you right now, usually in the order they appear to be impacting. It is planning in the rear-view mirror.

The result is a drag on future capability, with resource allocation skewed toward problems that have already happened versus preparation to take advantage of emerging opportunity.

Kissinger advises you must create the time to lift your eyes and look as far ahead as you can see.

Words like 'hypothetical' and 'theoretical' are too often dismissed in the daily challenge of dealing with what's right in front of you (by which we usually mean, what is immediately behind us).

So for those who are still more concerned about their to-do list (let's call it our 'should've already done list'), here's a real-world example from the world of advanced motorcycle riding: The further ahead you look, the faster you can go.

Looking further ahead when riding allows you to:

a) Identify risks, and plan to deal with them
b) Get a long view of the road ahead and plan your best position not just for the first bend you can see, but for a series of twists and turns.

The opposite leads to 'target fixation'. This is a known phenomenon beyond motorcycling.

From Wikipedia: Target fixation is an attentional phenomenon observed in humans in which an individual becomes so focused on an observed object (be it a target or hazard) that they inadvertently increase their risk of colliding with the object.

Simply: You End Up Going Where You Look.

If you focus too much on the pothole in front of you, you are more likely to end up riding straight into it.

Look away from the danger and towards your opportunities and you are better able to make a plan to avoid the hole in your future.

That's planning.

Thursday, March 29, 2018

What if Facebook is doing us the biggest favour of all?

What if Facebook's scooping up of our personal data is doing us a huge favour?
How can that be? Let's imagine, and think, really big for a moment.

Humans as corporeal beings may be facing an extinction event. We are destroying our eco-system at an alarming rate, making large tracts of land uninhabitable. Sperm count has fallen in developed countries by 50% in four decades. If the rates of decline continue we'll be hitting 'The Handmaid's Tale' scenarios before we run out of Earth to live on.

There are those who argue (Life 3.0) that far from dying out, we may be about to evolve. That evolution would see us abandon our bodies and attain consciousness as digital beings.

To do so would free us from the challenges of keeping our bodies in a decent state (alive, for example) and enable us to explore the universe, giving meaning to the vast tracts of it that currently have none (because there is no consciousness out there to experience it).

With me so far? Ok. So how does that mean Facebook is doing us a favour?

AI needs a lot of data to start learning and doing things humans do. It will need even more to recreate conscious versions of ourselves to live in infinity as zeros and ones.

What if this is Facebook, Google, Baidu, Yandex, Amazon's real mission - even if they don't realise it themselves? They are gathering and storing the data - to enable our evolution-as-upload as part of (rather than subject to) The Singularity.

Someone has to do it. If Facebook wants to make use of my data in the meantime to personalise an ad or two - I think that's a very reasonable exchange.

Happy Ishter!

Friday, March 23, 2018

Keep Calm And Get A Relationship

The whole Facebook-Cambridge Analytica debacle can be read as a lot of wailing and gnashing of teeth from people who like to see the internet as a wild west awaiting their control. But there is an important lesson for anyone using data.


First - why the fuss? There are already plenty of laws and forthcoming rules to prevent the misuse of data.

The General Data Protection Regulation explicitly states that someone's data cannot be used or stored without their express permission, for example.

So, even if you were to grant a company permission to use your data, you can't grant them permission to use your friends' data. A company can't ask for that or use that. Even Facebook realised this was a share too far in 2014 and ended the practice (which had until then been employed by 'abusive apps').

However, the argument is that all that data has already been hoarded by the bad guys. But GDPR will make every item they hoard subject to compliance. So even in the case of old data (which loses its salience by the second in any event) the hoarder must make it easy for anyone to remove their consent and retrieve their data.

That's going to be a challenge for bad actors. And when the auditors come calling they will face fines for every single data point. And these are fines at the scale of 'put you out of business'.

The short term issue for Facebook and, therefore, for much of digital marketing and communications, is the breach of trust. This is based on the notion that we didn't understand the scale of what could be done with the posts and likes and comments we gave away in exchange for better connection with people and information that was useful or interesting to us.

Facebook could act on this, at least re the instance of Fake News. They could set their engineers to work creating an algorithm to automatically add links to fact-checking or cross-checking validated sites.

They could of course do the same for their adverts. Imagine the potential to cut through the lies...

However, these are only solutions if you have difficulty filtering truth from deceit. In reality we humans have a brilliantly well-developed ability to see through bull.

Large parts of our brains are dedicated to sorting the trustworthy from the cheats. (Martin Nowak's SuperCooperators argues this was essential to our ability to live in co-operative societies.) Target me with all the propaganda you like, I won't be voting Nazi.

So we do have a responsibility in this as individuals. We choose what we are willing to believe, and we must ensure we apply our innate abilities to spot the fraudulent at all times.

And naturally - any business or organisation handling data must do so with care and with all due respect for the owner. It is this respect for the owner that points to the most critical learning.

If the digital industry takes one thing from Facebook's woes, it should be this:

Since the value of data rapidly decays, the relationship with the human behind the data is always going to be of far greater value than the data assets themselves.

Data is not the relationship. It is the output of a relationship. Get one.


Thursday, March 22, 2018

Minimum Viable Job Descriptions

To make truly responsive digital organisations requires an insight-led approach to creating new value, shifting the way we work and the technologies we need to support us. But it also requires a change in us.

One thing the AI revolution is teaching us is that work (at least the bit left for humans) is less about repeating tasks and much more about striving for goals.
Yet job descriptions, which function as both recruitment rule book and measurement and guidance tool, have mostly remained prescriptive and remarkably static.

Most of us know job descriptions rapidly become unsatisfactory. Today it seems they become a poor fit with reality faster than ever. Perhaps it is time to formalise and recognise this? Understanding what we do often helps us improve it.

So let's abandon the job description in favour of the Job Hypothesis (or at least pass it to the task-compiling world the bots better manage). The Job Hypothesis - a minimum viable job description - should be our start point. It should provide a framework within which to seek candidates and to give new starters an initial steer.

The hypothesis should emerge from your initial insights; about the market, the needs of customers and trends emerging from impacting technologies.
Add an understanding of the organisation's Why and What and you can start to work out How the role in question should support these.

From these insights you can shape the purpose of the role; the parts it plays in supporting the organisation's Go To Market strategies (by audience); and - crucially - its responsibilities for gathering further insight into all of the above.
These will help define what success will look like (ie meeting requirements in x way or to y degree).

But this first draft must only be described as a hypothesis. Success is in further, continuous refinement - making the role flexible to live market and business need and to emerging trends. This places the focus on change and rewards and formalises constant learning about the market, customers, technology and other drivers - with the intended benefit of iterating roles towards greater market fit - now and in preparation for the future.

This will not be helpful to those who are only comfortable being told what to do, or who want to do the same old things day after day. But you probably aren't recruiting many of those. Bots have that covered.

Friday, March 02, 2018

Agile Democracy

Image via http://cesran.org
Whatever your politics, whatever your position, there can be no doubt that the decision over Brexit - the UK's vote to leave the European Union - leaves a feeling of unease. It feels somehow unfit for purpose.

The problem with a vote - much like any decision - is that we can only commit to an intention. We do not vote for the consequences.

In the Brexit case (and, for the record, I remain, a remainer) the national vote was for the intention to leave. It cannot have been for the consequences. These - as with many of our decisions - contained a very great many unknowns which are only unearthed in the practice of following your intent.

There are lessons here for anyone trying to make decisions in conditions of ambiguity (by which I mean pretty much anyone in pretty much any live circumstance today in which clear and definable constraints are absent).

Dealing with ambiguity requires a much more agile approach - a willingness to respond to additional insight learned from your rapid prototyping and testing with those for whom the results really matter.

Even strategy work today is conducted in rapid iterative cycles rather than the big bang of old. That's because even at the strategic level, elements are moving so fast that the only way to proceed is in rapid, insight-driven increments. A Minimum Viable Strategy is tested for fitness for purpose in measurable steps, sometimes pivoting towards what evidentially works rather than what the strategy document insists.

This insight-to-value approach is increasingly applied in industry. Decisions aren't of the one-time only variety. Decisions are made based on insights drawn from your last response. You move forward fast, but built on truths. This is how we are dealing with the rapid-shifting realities of a world that can change at the speed of digital (versus that of atoms).

And in this world we ask ourselves a one-time-only, never-mind-the-consequences question when it comes to Brexit?

It's clearly time for a new kind of democratic process - insight-led, rapid, iterative democracy.

Tuesday, February 27, 2018

Engines of enhanced customer experience

We have the technology to provide one-to-one intimacy in marketing with engines of enhanced, in context, real-time and predictive customer experiences.
Neuroscience, behavioural economics and psychology combine to make the automated promise of AI one which delivers the truly effective next best offer or action.
The world of one-to-one CRM is here, with the capability to predict your needs and respond to them as they arise. We can apply Robotic Process Automation to deliver this efficiently and at scale.
And yet - when NatWest brings us Cora, an AI-driven, human-faced service bot, it creates an experience which could cut down the time required by a teller to serve you. But the upside for the customer is strangely limited. Despite our being able to speak with this voice-recognising, screen-based bot, it responds by telling us to log in and complete a form.
It's an early test. I suspect it won't get deployed unless and until the team applies a little human-centred thought. If it recognises voice it could recognise YOUR voice - removing the need for log-in. If it recognises what you are asking, it could work out which of the records it holds on you it needs to access to complete the form for 'I've lost my credit card' or similar.
The same thinking can be applied to the very long queue in my local branch of Lloyds on Saturday morning.
Today's queue at the bank is made up of people who either do not or will not use internet services, and those who need some kind of physical exchange. The other folk, looking for mortgages etc., have nice places to sit and wait for their appointments.
There was one teller on duty. Another employee fluttered around the queue asking what we were trying to achieve today, leading some folk off to machines if they found that was relevant - trying to lever some behaviour change into them.
My need was for physical exchange - converting unspent holiday currency into GBP - so I was left in the queue.
The experience illustrated much that is wrong with automating customer experience.
At the counter, I handed over my bank account card and my currency. The teller then had to fill in a form by hand to confirm she was handing over the currency exchanged and I was accepting the rate etc. She got to the point when she asked for a contact number and I was half way through responding when I said - "hang on a minute. You've got my bank account card, surely from that you can tell my name, address, contact number, bank account numbers etc. Why are we filling all that in again now?"
No doubt the poor teller's hand-written form will be typed in to create a digital record at some point further down the line.
If a written record is essential, surely it could be auto-created - saving time for both customer and teller - and cutting that queue. And a little bit of intelligence would identify that I regularly return from a trip with excess currency. Why doesn't my bank - which knows when I am back from my spending patterns - send me an invite with a rate for exchange (which I could compare with others). I could confirm an appointment to make the swift handover with form pre-completed and ready to roll.
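The 'back from a trip' trigger described above is simple enough to sketch. This is a toy illustration under loud assumptions: the transaction fields, the trigger rule (foreign-currency spending has stopped, home-currency spending has resumed) and returned_from_trip itself are all invented, and a real bank's signal would be far richer.

```python
# Hypothetical sketch: spotting a customer's return from abroad in their
# card transactions and triggering a pre-filled currency-exchange invite.

from datetime import date

transactions = [
    {"date": date(2018, 2, 10), "currency": "EUR", "merchant": "Cafe Roma"},
    {"date": date(2018, 2, 11), "currency": "EUR", "merchant": "Hotel Sol"},
    {"date": date(2018, 2, 13), "currency": "GBP", "merchant": "Tesco"},
    {"date": date(2018, 2, 14), "currency": "GBP", "merchant": "Boots"},
]

def returned_from_trip(txns, home="GBP"):
    """Crude 'back from holiday' signal: there was foreign-currency
    spending, and the most recent transactions are home-currency again."""
    ordered = sorted(txns, key=lambda t: t["date"])
    currencies = [t["currency"] for t in ordered]
    had_foreign = any(c != home for c in currencies)
    back_home = all(c == home for c in currencies[-2:])
    return had_foreign and back_home

if returned_from_trip(transactions):
    # In a real system this would pull today's rate and the customer's
    # details to pre-complete the form, as argued above.
    invite = {"offer": "currency buy-back",
              "form": "pre-completed from account records"}
    print("Send invite:", invite)
```

One rule like this, attached to the records the bank already holds, is all the anecdote above is asking for.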
Processes like these are easy to set up in self-learning, AI-powered workflow tools. With some platforms the set-up doesn't even require external expertise: the users simply do their jobs and the AI learns which parts can be automated and/or optimised.
But for all this, unless the creation of value for the end-using human is the focus, each application of our engines of enhanced customer experience will only improve our efficiency at doing the wrong thing.

Friday, February 16, 2018

Should we code humanity into evolution?

Via:  https://news.vanderbilt.edu/vanderbiltmagazine/robot-evolution/
AI - if you take the leap to superintelligence and the singularity - may be our next and massively accelerating evolution. In the decades to come that evolution is likely to decide how many of the 'flaws' of humanity have a place in our/the future.

If we do have any control over it, how can we hard code our nobler selves into a new version of the Three Laws of Robotics? The evolutionarily advantageous urge to co-operate; our empathy (leading to altruism, care for others, love); the value we place on trust (and our innate ability to sense deception).

These are not questions of the far future. If we believe there is something worth protecting about humanity now is the time to consider it.

No-one and nothing survives the process of evolution indefinitely. We are in the unique position of both creating our replacement and having an opportunity to set its behaviours for the future.

The challenge when trying to set rules for behaviour though is the huge cultural weight shaping our view of wrong and right. That view varies from culture to culture and through time.

Do we have the right to set the rules for how our replacements must behave?

Or should we leave it to evolutionary forces among competing super-intelligences?

We have that choice.

Wednesday, February 07, 2018

The problem with loyalty

Image from AirlineRatings.com

The problem with most loyalty programs is that they equate frequency with loyalty.
These are two very different things in customers' heads - and need treating and responding to very differently.
This becomes abundantly apparent if you take the time and trouble to contextualise your relationship with customers - but is easily missed if you charge headlong towards one-size-fits-all operations in which the customer is simply a cash output device.
Let me give you an example. Imagine I have a strong affinity with an airline brand. Imagine that every time I fly long haul I choose them over all rivals. I'll even happily pay more for the brand satisfaction I get from the reassurance of my choice.
But I'm not a frequent flyer.
In loyalty scheme terms I struggle to get off base.
But in actual loyalty - I'm the one who will be thrilled with the upgrade, I'm the one who will advocate to my peers how great the brand is and why they should follow my lead.
The frequent flyer has a sense of entitlement. Typically she is flying at least every week on business. If 30 per cent of those flights are with 'my' brand, the airline will see her as more deserving of special treatment - even though she doesn't see the treatment as in any way special.
She is in no way loyal - playing the varying airline loyalty status cards to get her the best deals. She cares much less which airline, more which rewards she can muster.
In summary: the frequent flyer doesn't care about your brand, expects you to go above and beyond for her (and will share negatively with her peers when you don't), and is not making you her default choice when flying. Yet these are your focus?
The loyal consumer chooses you by default every time they get to choose. They advocate for you. Going the extra mile for them creates massive value that they will talk about.
So isn't it time loyalty grew up a bit and started recognising where rewards really create value? Lifetime Value has to factor in advocacy, in a real relationship with the brand - one which runs far deeper than promiscuous frequency.
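A back-of-envelope calculation shows why advocacy belongs in the Lifetime Value sum. The function, the weights and both customer profiles below are invented for illustration - this is not a published LTV model - but the shape of the argument holds: share of wallet and referrals can outweigh raw frequency.

```python
# Hypothetical sketch: a lifetime-value estimate that counts advocacy
# (referred customers) alongside direct spend, rather than frequency alone.

def lifetime_value(flights_per_year, share_of_wallet, referrals_per_year,
                   avg_spend, years=5, referral_value=2000):
    """Direct spend with this brand, plus the assumed lifetime spend
    (referral_value) of each customer brought in through advocacy."""
    direct = flights_per_year * share_of_wallet * avg_spend * years
    advocacy = referrals_per_year * referral_value * years
    return direct + advocacy

# The 'disloyal' frequent flyer: 50 flights a year, only 30% with this
# brand, and no advocacy.
frequent = lifetime_value(50, 0.30, 0, avg_spend=400)

# The loyal occasional flyer: two flights a year, all with this brand,
# paying a premium, and referring three peers a year.
loyal = lifetime_value(2, 1.00, 3, avg_spend=600)

print(frequent, loyal)
```

With these (invented) numbers the occasional-but-loyal advocate is worth more than the promiscuous frequent flyer - exactly the comparison most points-for-miles schemes never make.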

Tuesday, February 06, 2018

Imagination beyond experience - the leap to Super-Human

Image from the movie Superman
Humanising AI is a worthy and common dream. And it is where and how we should focus to create value in the near term. But a greater challenge looms.

While we always seem to want to make AI 'think like a human', we know that when it doesn't, it can outperform us (in narrow fields, where ambiguity is constrained, at least) - in the games of chess and Go, for example.

While we always seem to want to make bots look like humans, we know that there are many more efficient designs to meet specific needs. The human body is a bit of a jack of all trades, master of none (compare us with the highest performers on any particular parameter from the animal kingdom).

And while we always seem to want to make AI behave like humans, we know humans behave irrationally and often against our best interests.

Imagining super-human (ie outside of human) thinking, design and behaviours will be our next great challenge. And for that we are going to have to truly partner the machines because this will take us beyond our own experience.

Wednesday, January 24, 2018

If you build it, who wins what?

From the movie - Field of Dreams
Digital is the creation of value from connecting people, data and devices.
You can't create value for a device (that would require conscious machines and we are still some distance from that). You can't create value for data.
You can only create value for people.
People feel stuff.
If I instrument machines to automate their optimisation, improve their effectiveness and extend their lives, the machinery really doesn't care. It feels nothing.
The engineer who now doesn't have to tweak it to balance loads or speed up the run every few moments, or take time out to order parts, and fit them - she's happier. She now has more time to think about how this machine could be improved, where else a machine could be applied, what other aspects of the business around her could be automated, for example.
Creating value for people should be an absolutely natural part of any digital development (and by extension, any AI deployment).
Who wins what?
Only when we find that value and build to deliver it do we create technological solutions that matter.
The rest is just built on the assumption 'they will come'. And we now have much evidence that this is the road to expensive failure.
I read somewhere once that the average number of members of online message boards is a somewhat lonely one. They built, but nobody came.

FasterFuture.blogspot.com

The rate of change is so rapid it's difficult for one person to keep up to speed. Let's pool our thoughts, share our reactions and, who knows, even reach some shared conclusions worth arriving at?