Friday, February 16, 2018

Should we code humanity into evolution?

Via:  https://news.vanderbilt.edu/vanderbiltmagazine/robot-evolution/
AI - if you take the leap to superintelligence and the singularity - may be our next, and massively accelerating, stage of evolution. In the decades to come that evolution is likely to decide how many of the 'flaws' of humanity have a place in the future.

If we do have any control over it, how can we hard-code our nobler selves into a new version of the Three Laws of Robotics? The evolutionarily advantageous urge to co-operate; our empathy (leading to altruism, care for others, love); the value we place on trust (and our innate ability to sense deception).

These are not questions of the far future. If we believe there is something worth protecting about humanity, now is the time to consider it.

No-one and nothing survives the process of evolution indefinitely. We are in the unique position of both creating our replacement and having an opportunity to set its behaviours for the future.

The challenge in trying to set rules for behaviour, though, is the huge cultural weight shaping our view of wrong and right. That view varies from culture to culture and through time.

Do we have the right to set the rules for how our replacements must behave?

Or should we leave it to evolutionary forces among competing super-intelligences?

We have that choice.

Wednesday, February 07, 2018

The problem with loyalty

Image from AirlineRatings.com

The problem with most loyalty programs is that they equate frequency with loyalty.
These are two very different things in customers' heads - and need treating and responding to very differently.
This becomes abundantly apparent if you take the time and trouble to contextualise your relationship with customers - but it is easily missed if you charge headlong towards one-size-fits-all operations in which the customer is simply a cash output device.
Let me give you an example. Imagine I have a strong affinity with an airline brand. Imagine that every time I fly long haul I choose them over all rivals. I'll even happily pay more for the brand satisfaction I get from the reassurance of my choice.
But I'm not a frequent flyer.
In loyalty scheme terms I struggle to get off base.
But in actual loyalty - I'm the one who will be thrilled with the upgrade, I'm the one who will advocate to my peers how great the brand is and why they should follow my lead.
The frequent flyer has a sense of entitlement. Typically she is flying at least every week on business. If 30 per cent of those flights are with 'my' brand, the airline will see her as more deserving of special treatment - even though she doesn't see the treatment as in any way special.
She is in no way loyal - playing the various airlines' loyalty status cards to get the best deals. She cares much less about which airline, more about which rewards she can muster.
In summary: the frequent flyer doesn't care about your brand, expects you to go above and beyond for her (and will share negatively with her peers when you don't), and is not making you her default choice when flying. Yet these are the customers you focus on?
The loyal consumer chooses you by default every time they get to choose. They advocate for you. Going the extra mile for them creates massive value that they will talk about.
So isn't it time loyalty grew up a bit and started recognising where rewards really create value? Lifetime value has to factor in advocacy and a real relationship with the brand - one which runs far deeper than promiscuous frequency.
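To make that concrete, here is a minimal sketch - in Python, with invented field names, weights and numbers - of the difference between scoring customers on raw frequency and scoring them on share of choice plus advocacy. It is not any real scheme; it simply shows how the two measures can rank the same two flyers in opposite orders.

```python
# Hypothetical illustration only: field names, weights and numbers are invented.
from dataclasses import dataclass

@dataclass
class Customer:
    flights_with_us: int     # bookings made with our brand
    flights_total: int       # all the flights the customer took
    referrals: int           # customers they brought us - a crude advocacy proxy
    avg_ticket_value: float  # average spend per booking

def frequency_score(c: Customer) -> float:
    """What most loyalty schemes reward: sheer volume with us."""
    return c.flights_with_us * c.avg_ticket_value

def loyalty_score(c: Customer) -> float:
    """Weights share of choice (do they pick us whenever they can?) and advocacy."""
    share_of_choice = c.flights_with_us / max(c.flights_total, 1)
    advocacy_value = c.referrals * c.avg_ticket_value
    return share_of_choice * frequency_score(c) + advocacy_value

# The devoted-but-infrequent flyer: 2 of 2 long-haul trips with us, 3 referrals.
devoted = Customer(flights_with_us=2, flights_total=2, referrals=3, avg_ticket_value=900.0)
# The frequent-but-promiscuous flyer: 15 of 50 trips with us, no referrals.
frequent = Customer(flights_with_us=15, flights_total=50, referrals=0, avg_ticket_value=400.0)

print(frequency_score(devoted), frequency_score(frequent))  # 1800.0 vs 6000.0 - frequency favours her
print(loyalty_score(devoted), loyalty_score(frequent))      # 4500.0 vs 1800.0 - loyalty reverses it
```

However you choose to weight it, the point stands: the scheme needs to be able to tell the difference between someone who flies a lot and someone who chooses you every time they can.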

Tuesday, February 06, 2018

Imagination beyond experience - the leap to Super-Human

Image from the movie Superman
Humanising AI is a worthy and common dream. And it is where and how we should focus to create value in the near term. But a greater challenge looms.

While we always seem to want to make AI 'think like a human', we know that when it doesn't, it can outperform us (in narrow fields, where ambiguity is constrained, at least) - in the games of chess and Go, for example.

While we always seem to want to make bots look like humans, we know that there are many more efficient designs to meet specific needs. The human body is a bit of a jack of all trades, master of none (compare us with the highest performers in any particular parameter from the animal kingdom).

And while we always seem to want to make AI behave like humans, we know humans behave irrationally and often against our best interests.

Imagining super-human (ie beyond human) thinking, design and behaviours will be our next great challenge. And for that we are going to have to truly partner with the machines, because this will take us beyond our own experience.

Wednesday, January 24, 2018

If you build it, who wins what?

From the movie - Field of Dreams
Digital is the creation of value from connecting people, data and devices.
You can't create value for a device (that would require conscious machines and we are still some distance from that). You can't create value for data.
You can only create value for people.
People feel stuff.
If I instrument machines to automate their optimisation, improve their effectiveness and extend their lives, the machinery really doesn't care. It feels nothing.
The engineer who no longer has to tweak it every few moments to balance loads or speed up the run, or take time out to order parts and fit them - she's happier. She now has more time to think about how this machine could be improved, where else a machine could be applied, and what other aspects of the business around her could be automated.
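For illustration, here is a toy sketch of what 'instrumenting the machine' might look like in code. The sensor names, thresholds and parts-ordering call are all hypothetical; the point is simply that the routine tweaks and purchase orders now happen without the engineer.

```python
# Hypothetical sketch: sensor names, thresholds and APIs are invented for illustration.
LOAD_LIMIT = 0.85   # utilisation above which work gets rebalanced
WEAR_LIMIT = 0.70   # wear level at which a replacement part is ordered

def monitor_cycle(machine, parts_system):
    readings = machine.read_sensors()        # e.g. {"load": 0.9, "bearing_wear": 0.75}

    if readings["load"] > LOAD_LIMIT:
        machine.rebalance_load()             # previously a manual tweak every few moments

    if readings["bearing_wear"] > WEAR_LIMIT:
        parts_system.order("bearing")        # previously a manual purchase order

    # The machine feels nothing either way; the freed-up time is the engineer's.
```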
Creating value for people should be an absolutely natural part of any digital development (and by extension, any AI deployment).
Who wins what?
Only when we find that value and build to deliver it do we create technological solutions that matter.
The rest is just built on the assumption 'they will come'. And we now have much evidence that this is the road to expensive failure.
I read somewhere once that the average number of members of an online message board is a somewhat lonely one. They built, but nobody came.

Tuesday, December 19, 2017

What do the public need to know about AI?

2018 will be nothing like 2017. Just as 2017 was nothing like 2016. We are living in a period of unprecedented accelerated change.
2017, with its geo-political cataclysms (Brexit, Trump), showed that change is becoming more radical.
I have written and spoken previously about the lull we have been in, a time since the early 70s in which innovation has primarily filled time for us rather than provided time for us (it is perhaps not coincidental that wages in The West have fallen in real terms during the same period - and wealth has concentrated ever more in ever fewer people).
The lull is over - the promise of AI is starting to deliver.
Large organisations all around the world will be deploying AI in 2018 (at least in narrow-focused form) to tackle tasks where:
  • Creative thought is rarely required (or simply introduces risk)
  • Ambiguity can be constrained 
  • The requirement for human interaction is minimal
By many calculations this covers from 20 to 65 per cent of what many white-collar clerical and management roles perform. It can be applied to many of the tasks required when checks and filters are applied to requests from people (job applications, loan applications, insurance forms, RFP responses and so on) - right the way through to automated ordering systems (powering new efficiencies in supply chains).
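As an illustration of that kind of check-and-filter task, here is a small sketch of triaging loan applications where the rules are explicit, ambiguity is constrained, and anything unclear is escalated to a person. The field names and thresholds are invented for the example, not taken from any real lender.

```python
# Illustrative only: field names and thresholds are invented.
def triage_loan_application(app: dict) -> str:
    """Return 'approve', 'decline' or 'refer_to_human'."""
    ratio = app["loan_amount"] / max(app["annual_income"], 1)

    # Clear-cut cases: no creative thought required, no human interaction needed.
    if app["credit_score"] >= 720 and ratio <= 3:
        return "approve"
    if app["credit_score"] < 550 or ratio > 6:
        return "decline"

    # Anything in between is ambiguous by these rules, so it goes to a person.
    return "refer_to_human"

print(triage_loan_application(
    {"credit_score": 740, "annual_income": 48000, "loan_amount": 90000}
))  # -> approve
```

The clear-cut cases are handled automatically; the ambiguous middle is exactly where the human stays in the loop.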
A well-data-fed AI should be able to predict my choice from a menu (a constraint on ambiguity) or from an e-commerce site. Right now it would struggle to come up with a creative addition. But the smartest AI is already providing evidence that it can also 'out-imagine' us.
I'm thinking of the example in which DeepMind's AlphaGo beat one of the world's strongest Go players. One move it made was so far beyond anything a human had ever played that its opponent had to leave the room to compose himself - before returning to be defeated.

Lots of jobs - lots of people. Millions globally. Lives will change. Wealth and time will be created. How it is controlled becomes a huge question for society - particularly as we head to the point at which a General AI could become more intelligent than any of us.

How will that superintelligence view us? As pets? As workhorses? Could it be controlled to deliver against our goals? Can a horse control you?

Big questions face us all. You can join in TODAY with a briefing prepared by the UK House of Lords at which some of the deepest thinkers on the subject will share their views.

A session at 3.30pm UK time on December 19, 2017 was streamed live online here. (This has now passed, but you can find resources via the links below.)

  1. You can watch the session live on the internet at www.parliamentlive.tv. Sessions can also be viewed back at any time after the event and it is now possible to clip parts of evidence sessions and share them on social media and third-party websites. 
  2. You can keep up to date with the Committee’s work on its website or Twitter.
