We have the technology to provide one-to-one intimacy in marketing, with engines of enhanced, in-context, real-time and predictive customer experiences.
Neuroscience, behavioural economics and psychology combine to make the automated promise of AI one that delivers the truly effective next best offer or action.
The world of one-to-one CRM is here, with the capability to predict your needs and respond to them as they arise. We can apply Robotic Process Automation to deliver this efficiently and at scale.
And yet - when NatWest brings us Cora, an AI-driven, human-faced service bot, it creates an experience which could cut the time a teller needs to serve you. But the upside for the customer is strangely limited. Despite our being able to speak with this voice-recognising, screen-based bot, it responds by telling us to log in and complete a form.
It's an early test. I suspect it won't be deployed unless and until the team apply a little human-centred thought. If it recognises voice, it could recognise YOUR voice - removing the need to log in. If it recognises what you are asking, it could work out which of the records it holds on you it needs to access to complete the form for 'I've lost my credit card' or similar.
The same thinking can be applied to the very long queue in my local branch of Lloyds on Saturday morning.
Today's queue at the bank is made up of people who either do not or will not use internet services, and those who need some kind of physical exchange. The other folk - those in looking for mortgages etc - have nice places to sit and wait for their appointments.
There was one teller on duty. Another employee fluttered around the queue asking what we were trying to achieve today, leading some folk off to machines if they found that was relevant - trying to lever some behaviour change into them.
My need was for physical exchange - converting unspent holiday currency into GBP - so I was left in the queue.
The experience illustrated much that is wrong with automating customer experience.
At the counter, I handed over my bank account card and my currency. The teller then had to fill in a form by hand to confirm she was handing over the currency exchanged and I was accepting the rate etc. She got to the point where she asked for a contact number, and I was halfway through responding when I said - "hang on a minute. You've got my bank account card; surely from that you can tell my name, address, contact number, bank account numbers etc. Why are we filling all that in again now?"
No doubt the poor teller's hand-written form will be typed in to create a digital record at some point further down the line.
If a written record is essential, surely it could be auto-created - saving time for both customer and teller - and cutting that queue. And a little bit of intelligence would identify that I regularly return from a trip with excess currency. Why doesn't my bank - which can tell from my spending patterns when I am back - send me an invite with a rate for exchange (which I could compare with others)? I could confirm an appointment to make a swift handover with the form pre-completed and ready to roll.
Processes like these are easy to set up in self-learning, AI-powered workflow tools. With some platforms the set-up work doesn't even require external expertise: the users simply do their job, and the AI learns which bits can be automated and/or optimised.
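To make the point concrete, here is a minimal sketch of the kind of rule involved: pre-filling the exchange form from records the bank already holds, so the teller only confirms the cash details. Every name here (the record shape, the form fields) is an illustrative assumption, not any bank's actual system.

```python
# Hypothetical sketch: auto-create the currency-exchange form from the
# account record the bank already holds, instead of asking the customer
# to repeat details they have already given. All names are illustrative.

from dataclasses import dataclass


@dataclass
class AccountRecord:
    name: str
    address: str
    phone: str
    account_number: str


def prefill_exchange_form(record: AccountRecord, currency: str,
                          amount: float, rate: float) -> dict:
    """Build the form so only the cash details need confirming at the counter."""
    return {
        "customer_name": record.name,
        "address": record.address,
        "contact_number": record.phone,      # no need to ask for it again
        "account_number": record.account_number,
        "currency": currency,
        "amount": amount,
        "agreed_rate": rate,
        "gbp_value": round(amount * rate, 2),
    }
```

The design point is simply that the customer-supplied inputs shrink to the currency and amount; everything else is looked up, which is exactly the time saving the queue needed.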
But for all this, unless the creation of value for the end-using human is the focus, each application of our engines of enhanced customer experience will only improve our efficiency at doing the wrong thing.
Tuesday, February 27, 2018
Friday, February 16, 2018
If we do have any control over it, how can we hard-code our nobler selves into a new version of the Three Laws of Robotics? The evolutionarily advantageous urge to co-operate, our empathy (leading to altruism, care for others, love), the value we place on trust (and our innate ability to sense deception).
These are not questions of the far future. If we believe there is something worth protecting about humanity, now is the time to consider it.
No-one and nothing survives the process of evolution indefinitely. We are in the unique position of both creating our replacement and having an opportunity to set its behaviours for the future.
The challenge when trying to set rules for behaviour though is the huge cultural weight shaping our view of wrong and right. That view varies from culture to culture and through time.
Do we have the right to set the rules for how our replacements must behave?
Or should we leave it to evolutionary forces among competing super-intelligences?
We have that choice.
Wednesday, February 07, 2018
|Image from AirlineRatings.com|
The problem with most loyalty programs is that they equate frequency with loyalty.
These are two very different things in customers' heads - and need treating and responding to very differently.
This becomes abundantly apparent if you take the time and trouble to contextualise your relationship with customers - but is easily missed if you charge headlong towards one-size-fits-all operations in which the customer is simply a cash output device.
Let me give you an example. Imagine I have a strong affinity with an airline brand. Imagine that every time I fly long haul I choose them over all rivals. I'll even happily pay more for the brand satisfaction I get from the reassurance of my choice.
But I'm not a frequent flyer.
In loyalty scheme terms I struggle to get off base.
But in actual loyalty - I'm the one who will be thrilled with the upgrade, I'm the one who will advocate to my peers how great the brand is and why they should follow my lead.
The frequent flyer has a sense of entitlement. Typically she is flying at least every week on business. If 30 per cent of those flights are with 'my' brand, the airline will see her as more deserving of special treatment - even though she doesn't see the treatment as in any way special.
She is in no way loyal - playing the varying airline loyalty status cards to get her the best deals. She cares much less which airline, more which rewards she can muster.
In summary: the frequent flyer doesn't care about your brand, expects you to go above and beyond for her (and will share negatively with her peers when you don't), and is not making you her default choice when flying. Yet these are your focus?
The loyal consumer chooses you by default every time they get to choose. They advocate for you. Going the extra mile for them creates massive value that they will talk about.
So isn't it time Loyalty grew up a bit and started recognising where rewards really create value? Lifetime Value has to factor in advocacy - a real relationship with the brand, one which runs far deeper than promiscuous frequency.
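The distinction above can be sketched as a toy scoring model - the argument is the author's, but the formula and numbers are my illustrative assumptions, not an established Lifetime Value calculation: score loyalty as share of choice plus advocacy, rather than raw flight count.

```python
# Toy model (illustrative, not an industry formula): loyalty measured as
# share-of-choice plus advocacy, instead of raw frequency.

def loyalty_score(flights_with_brand: int, total_flights: int,
                  referrals: int, referral_weight: float = 1.0) -> float:
    """Higher when the brand is the default choice and the customer advocates."""
    share_of_choice = flights_with_brand / total_flights if total_flights else 0.0
    return share_of_choice + referrals * referral_weight


# The occasional but devoted flyer: 2 of 2 flights with the brand, 3 referrals.
devoted = loyalty_score(2, 2, 3)
# The frequent but promiscuous flyer: 15 of 50 flights, no referrals.
frequent = loyalty_score(15, 50, 0)
```

Under this weighting the devoted flyer outscores the frequent one, which is exactly the inversion of what a miles-based scheme would report.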
Tuesday, February 06, 2018
|Image from the movie Superman|
While we always seem to want to make AI 'think like a human', we know that when it doesn't, it can outperform us (in narrow fields where ambiguity is constrained, at least) - for example in the games of chess and Go.
While we always seem to want to make bots look like humans, we know that there are many more efficient designs to meet specific needs. The human body is a bit of a jack of all trades, master of none (compare us with the highest performers in any particular parameter from the animal kingdom).
And while we always seem to want to make AI behave like humans, we know humans behave irrationally and often against our best interests.
Imagining super-human (ie outside of human) thinking, design and behaviours will be our next great challenge. And for that we are going to have to truly partner with the machines, because this will take us beyond our own experience.