There is so much we still have to learn about the workings of our brains (let alone our minds) that I wonder how close we really are to creating a machine capable of learning in quite the way we do.
2017 seems very likely to be the year of AI (though more likely we will see implementations of its less 'intelligent' bedfellow, Deep Learning, in platforms of Cognitive Computing).
Robert Epstein (a senior research psychologist at the American Institute for Behavioral Research and Technology in California) reminds us that throughout history we have tried to understand how we think in the metaphors of the latest technological understanding. The six major ones over the past 2,000 years have been spirit, humors, automata, electricity, telecommunication and finally digital.
He argues this final construction, with its language of uploads and storage and information processing and retrieval, has given rise to an unreal view.
Instead, he states:
"As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types:
(1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens);
(2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars);
(3) we are punished or rewarded for behaving in certain ways.

We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.
Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.
When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary."

I am interested in this for two reasons:
1) Professionally, for its impact on the weighting we should give each of The 4 Dimensions of Experience I am developing for use in improving Customer Experience.
If we can't be sure how the brain works, we certainly can't be sure of an algorithm gathering so complete a set of data about our preferences and needs that it could make better decisions for us than we could. I don't argue that a technical replication of the brain's functions is impossible, but it remains improbable while we don't know what it is we are trying to replicate. We can approximate intelligence in this respect (quite literally, developing proxies for it) but we can't create a functional copy of it.
So what does this mean for the value of the Experiencing Self (the one behind the third of my four dimensions, Sensitivity)?
We can argue it remains important because our Sensitivity has been shaped by the total of our experiences (gathered by our Experiencing Self and conceivably far better stored by digital means than by patchy human memory).
That Sensitivity - whether we remember how it was derived or not - is the base setting against which our Narrative Self does its peak-end rule calculations when we recall an experience.
Therefore striving to improve experiences for the Experiencing Self (i.e. at each step) will still have an impact on the overall experience recalled by the Narrative Self - even if that impact may not be as great as changes made at the peak and end points of the experience.
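To make that arithmetic concrete, here is a minimal sketch in Python. The -5 to +5 moment scores, the example journeys and the simple average of peak and end are my own illustrative assumptions, not a formal model from the literature:

```python
def peak_end_score(moment_scores):
    """Approximate the Narrative Self's recalled rating of an experience.

    The peak-end rule says recall is dominated by the most intense
    moment and the final moment, not by the sum of every moment.
    """
    if not moment_scores:
        raise ValueError("an experience needs at least one moment")
    peak = max(moment_scores, key=abs)  # most intense moment, good or bad
    end = moment_scores[-1]             # the final moment
    return (peak + end) / 2

# Three versions of the same five-step journey, scored -5 (awful) to +5 (great).
baseline = [1, 2, -3, 1, 0]
better_every_step = [2, 3, -2, 2, 1]    # Experiencing Self: every step improved a little
better_peak_and_end = [1, 2, -1, 1, 4]  # only the worst moment and the end improved

print(peak_end_score(baseline))             # -1.5
print(peak_end_score(better_every_step))    #  2.0
print(peak_end_score(better_peak_and_end))  #  4.0
```

Under these assumptions, improving every step does lift the recalled score, but targeted changes at the peak and end move it further - which is the weighting point made above.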
2) Philosophically. I have, for example, argued that if an algorithm were better able to know what is best for us, perhaps we should let it vote for us. Or even govern us?
We have to consider what measures should be applied to 'best for us'. Algorithms could manage our calorie intake to match our output and only ever suggest the 'right' thing to do for our safety, longevity and even our sanity. But here I am using right rather than best. What the algorithm can't know - because we don't know how we do this ourselves - is how we acquire tastes and proclivities: why some love and some hate Marmite, what we find attractive, funny, challenging, boring. An algorithm can copy the outputs, but it would struggle to originate the collection of concepts that makes us uniquely human.
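To make the right-versus-best distinction concrete, a deliberately trivial sketch; the function, the 1,200 kcal safety floor and the numbers are invented for illustration:

```python
def right_intake_kcal(expenditure_kcal):
    """Suggest a 'right' calorie target: match intake to measured output.

    Defensible on safety and longevity grounds, and computed purely
    from measurable data - but whether the 'best' dinner is the one
    we would actually enjoy (the Marmite question) is outside
    anything this data can express.
    """
    SAFETY_FLOOR = 1200  # invented minimum; a real system would need medical input
    return max(SAFETY_FLOOR, expenditure_kcal)

print(right_intake_kcal(2400))  # 2400 - 'right', but silent on taste
```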
The algorithm could learn to approximate an understanding of us (e.g. at its most basic: presented with object A, subject 1 did not purchase, therefore offer object B next time) but this is not knowing what is best for us - it's simply learning how we have behaved in the past.
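At its most basic, that learning could be a bare lookup over past behaviour. A minimal sketch; the object names, the single substitution rule and the catalogue are all invented for illustration:

```python
# The crudest possible 'understanding': remember what each subject declined
# and offer a substitute next time. It models past behaviour only; it has
# no idea why the subject declined, or what would actually be best for them.
substitutes = {"object A": "object B", "object B": "object C"}  # assumed catalogue
declined = {}  # subject -> last object they did not purchase

def record_decline(subject, obj):
    declined[subject] = obj

def next_offer(subject, default="object A"):
    return substitutes.get(declined.get(subject), default)

record_decline("subject 1", "object A")
print(next_offer("subject 1"))  # object B - a behavioural proxy, not insight
```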
So maybe this gives us a hint about the kinds of fulfilling roles that will be left for us humans when the machines are running flat-out to make all the wealth: craft and artisanal manufacture - things with limited but genuine appeal to a few (the ad hoc, self-forming groups of interest the web allows to form globally serve this well, too); art and literature; film and drama; sport and sculpture; fashion; architecture (the interesting bits); and of course the most interesting, inspired and inspiring bits of science, maths, geography, history, economics, politics and more.
Wherever the expression of what it is to be the human you are offers an advantage, that will remain safe from the algorithm - at least until we really understand how our brains work.