Philosopher Galen Strawson challenges the idea that we have a cohesive, narrative self living in a structurally robust setting, and suggests that for many of us the self is episodic at best – and that there may be no real experience of self at all. The discussion of the self – from a stream of moments to a story to deep identity – is relevant to any discussion of artificial general intelligence for a couple of reasons. Perhaps the most important one is that if we want to create something that is intelligent, or perhaps even conscious, we need to understand what in our human experience constitutes a flaw or a design inefficiency, and what is actually a necessary feature.
It is easy to suspect that a strong, narrative, cohesive self would be an advantage – and that we should aim for one if we recreate man in machine. That, however, underestimates the value of change. A fragmented, scattered, episodic self is much better able to navigate a highly complex reality. A narrative self would have to spend a lot of energy integrating experiences and events into a single schema in order to understand itself. An episodic and fragmented self only needs to build islands of self-understanding, and these islands do not even need to be coherent with each other.
A narrative self would also be brittle, unable to cope with changes that challenge the key elements and conflicts of the narrative governing its self-understanding. Our selves seem able to absorb even the deepest conflicts and challenges in ways that are astounding, even somewhat upsetting. We associate identity with integrity, and something that lacks a strong identity feels undisciplined, unprincipled. But again: that seems a mistake – the real integrity lies in the ability to absorb and deal with an environment that is ultimately not narrative.
We have to make a distinction here. Narrative may not be part of the structure of our internal selves, but that does not mean it is useless or unimportant. One reason narratives matter – and one reason any AGI needs a strong capacity to create and manage them – is that they are tools, filters, through which we understand complexity. Narrative compresses information and reduces complexity in a way that allows us to navigate a world that is increasingly complex.
We end up, then, suspecting that what we need is an intelligence that does not understand itself narratively, but that can make sense of the world in polyphonic narratives that both explain and organize that reality. Artificial narrativity and the artificial self are challenges that are far from solved, and yet we often seem to assume that they will emerge naturally from simpler capacities that we can design.
This “threshold view” of AGI – where we accomplish the basic steps and the rest emerges from them – is just one model among many, and arguably needs to be both challenged and examined carefully. Vernor Vinge notes, in one of his Long Now talks, that one way in which we may fail to create AGI is by not being able to “put it all together”. Thin slices of human capacity, carefully optimized, may not gel into a general intelligence at all – and may not form the basis for capacities like our ability to narrate ourselves and our world.
Back to the self: what do we believe the self does? Dennett suggests that it is part of a user illusion – an interface, like the graphical icons on your computer desktop. Here, interestingly, Strawson lands in the other camp. He suggests that the belief that consciousness is an illusion is the “silliest” idea and argues forcefully for the existence of consciousness. That suggests a distinction between self and consciousness, or a complexity around the two concepts, that is also worth exploring.
If you believe in consciousness as a special quality (almost like a persistent musical note) but do not believe in anything more than a fragmented self, and resist the idea of a narrated or narrative life, you are stuck with an ambient atmosphere as your identity and anchor in experience. There is a there there, but it is going nowhere. While challenging, I find that an interesting thought – that we are stuck in a Stimmung, as Heidegger called it, a mood.
Self, mood, consciousness and narrative – there is no reason to think that any of these concepts can be reduced to constituent parts, or that they should be seen as secondary to other human mental capacities, so we should think hard about how to design and understand them as we continue to develop theories of the human mind. That emotions play a key part in learning (pain is the motivator) we already knew, but these more subtle nuances and complexities of human existence are just as important. Creating artificial selves with artificial moods, capable of episodic and fragmented narratives through a persistent consciousness – that is the challenge if we are really interested in re-creating the human.
And, of course, at the end of the day that suggests that we should not focus on re-creating the human, but on creating something else – well aware that we may want to design simpler versions of all of these capacities in order to enhance the functionality of the technologies we build. Artificial Eros and Thanatos may ultimately turn out to be efficient software for allowing robots to prioritize.
Douglas Adams, a deep thinker in these areas as in so many others, of course knew this when he designed Marvin the Paranoid Android and the moody elevators in his work. They are emotional robots whose moods make them more effective and more dysfunctional at the same time.
Just like the rest of us.