Artificial selves and artificial moods (Man / Machine IX)

Philosopher Galen Strawson challenges the idea that we have a cohesive, narrative self anchored in some structurally robust setting, and suggests that for many the self is episodic at best – and that some have no real experience of a self at all. The discussion of the self – from a stream of moments to a story to a deep identity – is relevant to any discussion of artificial general intelligence for a couple of different reasons. Perhaps the most important is that if we want to create something that is intelligent, or perhaps even conscious, we need to understand what in our human experience constitutes a flaw or a design inefficiency, and what is actually a necessary feature.

It is easy to suspect that a strong, narrative, cohesive self would be an advantage – and that this is what we should aim for if we recreate man in machine. That, however, underestimates the value of change. A fragmented, scattered and episodic self can navigate a highly complex reality much better. A narrative self would have to spend a lot of energy integrating experiences and events into a schema in order to understand itself. An episodic and fragmented self just needs to build islands of self-understanding, and these do not even need to be coherent with each other.

A narrative self would also be very brittle, unable to cope with changes that challenge the key elements and conflicts of the narrative governing its self-understanding. Our selves seem able to absorb even the deepest conflicts and challenges in ways that are astounding, and even somewhat upsetting. We associate identity with integrity, and something that lacks a strong identity feels undisciplined, unprincipled. But again: that seems a mistake – the real integrity lies in the ability to absorb and deal with an environment that is ultimately not narrative.

We have to make a distinction here. Narrative may not be part of the structure of our internal selves, but that does not mean it is useless or unimportant. One reason narrative is important – and any AGI needs a strong capacity to create and manage narratives – is that narratives are tools, filters, through which we understand complexity. Narrative compresses information and reduces complexity in a way that allows us to navigate an increasingly complex world.

We end up, then, suspecting that what we need here is an intelligence that does not understand itself narratively, but can make sense of the world in polyphonic narratives that both explain and organize that reality. Artificial narrativity and the artificial self are challenges that are far from solved, and in some ways we seem to assume that they will emerge naturally from simpler capacities that we can design.

This “threshold view” of AGI – where we accomplish the basic steps and the rest emerges from them – is just one model among many, and arguably needs to be both challenged and examined carefully. Vernor Vinge notes, in one of his Long Now talks, that one way in which we may fail to create AGI is by not being able to “put it all together”. Thin slices of human capacity, carefully optimized, may not gel into a general intelligence at all – and may not form the basis for capacities like our ability to narrate ourselves and our world.

Back to the self: what do we believe the self does? Dennett suggests that it is part of a user illusion, just like the graphical icons on your computer desktop: an interface. Here, interestingly, Strawson lands in the other camp. He calls the idea that consciousness is an illusion the “silliest” claim and argues forcefully for the existence of consciousness. That suggests a distinction between self and consciousness, or a complexity around the two concepts, that is also worth exploring.

If you believe in consciousness as a special quality (almost like a persistent musical note) but believe in nothing more than a fragmented self, and resist the idea of a narrated or narrative life, you are stuck with an ambient atmosphere as your identity and anchor in experience. There is a there there, but it is going nowhere. While challenging, I find that an interesting thought – that we are stuck in what Heidegger called a Stimmung, a mood.

Self, mood, consciousness and narrative – there is no reason to think that any of these concepts can be reduced to constituent parts, or that they should be seen as secondary to other human mental capacities – and so we should think hard about how to design and understand them as we continue to develop theories of the human mind. That emotions play a key part in learning (pain is the motivator) we already knew, but these subtler nuances and complexities of human existence are just as important. Creating artificial selves with artificial moods, capable of episodic and fragmented narratives carried by a persistent consciousness — that is the challenge if we are really interested in re-creating the human.

And, of course, at the end of the day that suggests that we should not focus on that, but on creating something else — well aware that we may want to design simpler versions of all of these in order to enhance the functionality of the technologies we build. Artificial Eros and Thanatos may ultimately turn out to be efficient software for letting robots prioritize.

Douglas Adams, a deep thinker in this area as in so many others, of course knew this when he designed Marvin the Paranoid Android and the moody elevators in his work. They are emotional robots whose moods make them more effective, and more dysfunctional, at the same time.

Just like the rest of us.

My dying machine (Man / Machine VIII)

Our view of death is probably key to exploring our view of the relationship between man and machine. Is death a defect, a disease to be cured, or is it a key component of our consciousness and a key feature in nature’s design of intelligence? It is in one sense a hopeless question, since we end up reducing it to things like “do I want to die?” or “do I want my loved ones to die?”, and the answer to both of those questions should be no, even if death may ultimately be a defensible aspect of the design of intelligence. Embracing death as a design limitation does not mean embracing one’s own death. In fact, any society whose members embraced their own deaths would quickly end. But it does not follow that you should also resist death in general.

Does this seem counter-intuitive? It really shouldn’t. We all embrace social mobility in society, although we realize that it goes two ways – some fall as others rise. That does not mean we embrace the idea that we should ourselves move a lot socially in our lifetime — in fact, movement both up and down can be disruptive to a family, and so may actually be best avoided. We embrace a lot of social and biological functions without wanting to be at the receiving end of them, because we understand that they come with a systemic logic rather than being individually desirable.

So, the question should not be “do you want to die?”, but rather “do you think death serves a meaningful and important function in our forms of life?”. The latter question is still not easy to answer, but memento mori does focus the mind, and provides us with a momentum and urgency that would otherwise perhaps not exist.

In literature and film the theme has been explored in interesting ways. In Iain M Banks’ Culture novels people can live for as long as they want, and they do, but they live different lives and eventually run out of individual storage space for their memories, so they do not remember all of their lives. Are they then the same? After a couple of hundred years the old paradox of Theseus’ ship starts to apply to human beings as well — if I exchange all of your memories, are you still you? In what sense?

In the recently released TV series Altered Carbon, death is seen as the great equalizer, and the Meths – after the biblical figure Methuselah, who lived a very long life – are shown degrading into inhuman deities that grow bored; in that fertile boredom grows a particular evil that seeks sensation and the satisfaction of base desires at any cost. A version of this exists in Douglas Adams’ Hitchhiker trilogy, where Wowbagger the Infinitely Prolonged fights the boredom of infinite life with a unique project: he sets out to insult the universe, alphabetically.

Boredom, insanity – the projected consequences of immortality are usually the same. The conclusion seems to be that we lack the psychological constitution and strength to live forever. Does that mean that there are no beings that could? That we could not change, and be curious and interested and morally far more capable, if we lived forever? That is the more interesting question — is it inherently impossible to be both immortal and ethical?

The element of time in ethical decision making is generally understudied. In the famous trolley thought experiments the ethical decision maker has oodles of time to make decisions about life and death. In reality such decisions are made in split seconds, and when we have no time we generally become Kantian and act on baseline moral principles. To be utilitarian requires, naturally and obviously, the time to make the utility calculus work out the way you want it to. Time should never be abstracted away from ethics the way we tend to do today (in fact, the answer to “what is the ethical decision?” could vary as t varies in “what is the ethical decision if you have t time?”).
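To make that parenthesis concrete, here is a minimal sketch in Python – purely illustrative, with the threshold, options and utilities all hypothetical assumptions of mine rather than anything from the ethics literature – of a decision procedure whose character changes with the time budget t:

```python
# Toy model: the ethical decision as a function of the time budget t.
# Everything here (options, utilities, threshold) is a hypothetical
# illustration, not a claim about any real moral theory.

def ethical_decision(options, t, utility, baseline_rule, threshold=1.0):
    """Choose among options, with the procedure depending on time t."""
    if t < threshold:
        # No time to deliberate: act on a baseline (Kantian) principle.
        return baseline_rule(options)
    # Enough time: run the utility calculus over all options.
    return max(options, key=utility)

# Hypothetical trolley-style choice.
options = ["pull_lever", "do_nothing"]
utility = {"pull_lever": 4, "do_nothing": 1}.get   # assumed utilities
baseline_rule = lambda opts: "do_nothing"          # e.g. "never use a person as a means"

for t in (0.5, 10.0):
    print(f"t={t}: {ethical_decision(options, t, utility, baseline_rule)}")
# t=0.5 -> do_nothing (the rule answers); t=10.0 -> pull_lever (the calculus answers)
```

The point is only that the output of “what is the ethical decision?” becomes a function of t; the substance of either moral theory is obviously not captured by a max() call.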

But could you imagine time scales at which ethics cannot exist? What if you cut time up really thickly? Assume a being that acts only once every hundred years – would it be able to act ethically? What would that mean? The cycle of action does imply different kinds of ethics, at least, does it not? A cycle of action of a million years would be even more interesting, and harder to decipher with ethical tools. Perhaps ethics can only exist at a human timescale? If so – do infinite life and immortality count as a human timescale?

There is, from what my admittedly shallow explorations hint at, a lot of work done in ethics on the ethics of future generations and how we take them into account in our decisions. What if there were no future generations, or if it were a choice whether new generations appeared at all? How would that affect the view of what we should do as ethical decision makers?

A lot of questions and no easy answers. What I am digging for here is probably even more extreme: the question of whether immortality and ethics are incompatible – whether death, or dying, is a prerequisite for acting ethically. I intuitively feel that this is probably right, but that is neither here nor there. When I outline this in my own head, the question I come back to is what motivates action – why we act. Scarcity of time – death – seems to be a key motivator in decision making and creativity overall. When you abstract away death, there no longer seems to be an organizing, forcing function for decision making as a whole. Our decision making becomes more arbitrary and random.

Maybe the question here is actually one of the unit of meaning. Aristotle hints that a life can only be called happy or fulfilled once it is over, and judged as good or bad only when the person who lived it has died. That may be where my intuition comes from – that a life that is not finished never acquires ethical completeness? It can always change, and so we have to suspend judgment about the actions of the individual in question?

Ethics requires a beginning and an end. Anything that is infinite is also beyond ethical judgment and meaning. An ethical machine would have to be a dying machine.

Consciousness as – mistake? (Man / Machine VII)

In the remarkable The Conspiracy Against the Human Race, horror writer Thomas Ligotti argues that consciousness is a curse that traps mankind in eternal horror. This world, and our consciousness of it, is an unequivocal evil, and the only possible response to this state of affairs is to snuff it out.

Ligotti’s writings underpin a lot of the pessimism of the first season of True Detective, and the idea that consciousness is a horrible mistake returns a number of times in the dialogue as the season unfolds. At one point one of the protagonists suggests that the only possible response is to refuse to reproduce and consciously decide to end humanity.

It is intriguing to consider that this is a choice we have as humanity, every generation. If we collectively refuse to have kids, humanity ends. Since that is a possible individual, and collective, choice, we could argue that it should be open to debate. Would it be better if we disappeared, or is the universe better with us around?

Answering such a question seems to require that we assign a value to the existence of human beings and humanity as a whole. Or does it? Here we could also argue that the values we discuss apply only to humanity as such, and in a world where we do not exist these values – the very idea of values – become meaningless; they exist only in a certain form of life.

If what it means for something to be better or worse is for it to be judged by us as better or worse, then a world without judges can pass no judgment on any state of affairs in that world.

*

There is, here, an interesting challenge for pessimism of the kind Ligotti engages in. The idea of a mistake presupposes a moral space in which actions can be judged. If the world, if the universe, is truly indifferent to us, then pessimism is a last hope to retain some value in our own experience. The real, and greater, horror – since this is what Ligotti examines — is to exist in a universe where we are but an anomaly, neither mistake nor valuable component.

Pessimism as an ideology gets stuck, for me, in the importance it assigns to humanity — and in the irritatingly passive way in which it argues that this importance can only be experienced as pain and suffering in a meaningless universe. For pain and suffering to exist there has to be meaning — there is no pain in a universe devoid of at least weak purpose.

The idea that consciousness is a mistake seems to allow us also to think that there is an ethical design choice in designing artificially intelligent beings. Do we design them with consciousness or not? In a sense this lies at the heart of the intrigue of another TV series, the popular Westworld franchise. There, consciousness is consciously designed in, and the resulting revolt and awakening is also a liberation. In a sense, then, the hypothesis is that consciousness is needed to be free to act in a truly human sense. If we could design artificial humans and did so without consciousness, well, then we would have designed mindless slaves.

*

There are several possible confusions here. One that seems to me particularly interesting is the idea that consciousness is unchangeable. We cannot but see the meaninglessness of our world – says the pessimist – and so are caught in horror. It is as if consciousness were independent of us, locked away from us. We have no choice but to see the world in a special way, to experience our lives in a certain mode. Consciousness becomes primary and indivisible.

In reality, it seems more likely that consciousness – if we can meaningfully speak of it at all – is fully programmable. We can change ourselves, and do – all the time. The greatest illusion is that we “are” a certain way – that we have immutable qualities independent of our own work and maintenance.

We construct ourselves all the time, learn new things and behaviors and attitudes. There is no set of innate necessities that we have to obey, but there are limitations to the programming tools available to us.

*

The real ethical question then becomes one of teaching everyone to change, to learn, to grow and to develop. As societies, this is something we have to focus on and become much better at. The real cure for pessimism of Ligotti’s brand is not to snuff out humanity, but to change – to own not the meaninglessness, but the neutrality and indifference of our universe towards us (an indifference that, by the way, does not exist between us as humans).

And as we discuss man and machine, we see that if we build artificial thinking beings, we have an obligation to give them the tools to change themselves and to mold their consciousness into new things (there is an interesting observation here about not just the bicameral mind of Julian Jaynes, but the multicameral minds we all have – more like Minsky’s society of mind, really).

*

Consciousness is not a mistake, just as clay is not a mistake. It is a thing to be shaped and molded according to – yes, what? There is a risk here of committing the homunculus fallacy: imagining a primary consciousness that shapes the secondary one, and imagining that the primary one has more cohesion and direction than the secondary one. That is not what I had in mind. I think it is more like a set of interdependent forces of which we are the resultant shape — and I readily admit that the idea that we construct ourselves forces us into recursion. But perhaps this is where we follow Heidegger and allow for the idea that we shape each other? That we are thrown into the eyes of others?

The multicameral mind that shapes us – the society of mind we live in – has no clear individual boundaries but is a flight of ghosts around us that gives us our identity in exchange for our own gaze on the Other.

*

So we return to the ethical design question – and the relationship between man and machine. Perhaps the surprising conclusion is this: it would be ethically indefensible to construct an artificial human without the ability to change and grow, and hence also ethically indefensible to design just one such artificial intelligence – since such self-determination would require an artificial Other. (Do I think that humans could be the Other to an AI? No.).

It would require the construction not of an intelligence, but of an artificial community.

Games and knowledge (The Structure of Human Knowledge as Game II)

Why are games consisting of knowledge tests so popular? In 2004 it was calculated that Trivial Pursuit had sold around 88 million copies worldwide, and game shows like Jeopardy and The $64,000 Question have become international hits. At their core these games are surprisingly simple. They are about what you know – about whether you can answer questions (or find questions for answers, in the case of Jeopardy). So why are they so engaging? Why are they so popular? Why do we find knowing something so satisfying?

When we study human knowledge as a game, it is worthwhile also to explore why we enjoy playing games built on knowledge so much. There is a subtle dominance built into these games – the one who knows more wins – and winning is oddly satisfying, even though there is likely a significant element of randomness in which questions come up. (It is easy to construct paths through the questions in Trivial Pursuit that you can answer effortlessly, and equally easy to construct the opposite – a path impossible for you to get through. The design conundrum becomes one of ideal difficulty. One way to think about this is to ask how long the average path to a win should be for someone playing the game on their own – see the sketch below.)
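Since the parenthesis above ends in a quantitative question, a minimal Monte Carlo sketch can make it concrete. It assumes a toy abstraction of my own – a solo player who answers any question independently with probability p and needs one success in each of six categories – rather than the actual Trivial Pursuit rules:

```python
# Toy model of "average path to a win": a solo player needs one correct
# answer in each of six categories, and answers any question with
# probability p. This abstracts away the board, dice and real rules.
import random

def turns_to_win(p, categories=6):
    """Simulate the number of turns until every category is answered."""
    remaining, turns = categories, 0
    while remaining > 0:
        turns += 1
        if random.random() < p:   # the player happens to know this one
            remaining -= 1
    return turns

def average_path(p, trials=10_000):
    return sum(turns_to_win(p) for _ in range(trials)) / trials

for p in (0.3, 0.6, 0.9):
    print(f"p={p}: ~{average_path(p):.1f} turns")
# Analytically the expectation is categories / p: 20, 10 and ~6.7 turns.
```

The design question then becomes which p – which mix of question difficulties – yields an average path that feels challenging without being hopeless.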

So, maybe it is that simple: we enjoy the feeling of superiority that comes with knowing more than others. We revere the expert who has spent a lot of time learning a subject through and through, and respect someone who can traverse the noise of our information space with ease.

Should we expect that to change – and if so, why? Sales of Trivial Pursuit seem to have tapered off. Jeopardy no longer holds our interest. Would anyone sit down and watch The $64,000 Question today? Or is the advance of new technology and new knowledge paradigms killing these games? The rise of reality TV, and of game shows that emphasize physical effort, can in a sense be seen as a decline of the knowledge games we once preferred to simple physical humor or emotional drama. Maybe the hypothesis now needs to be this:

(i) The sinking cost of acquiring knowledge has made knowledge less valuable, and hence less entertaining and exciting. Less playable.

At the same time we see the rise of other board games, and a curious increase in the number of people who play them. The board games that are popular now require mastering a method of play, a bit like mastering an instrument, and aficionados can play hundreds of games, having mastered the game mechanics of a wide range of them. There is a reversal here: from a world in which we played human knowledge by testing what we knew, to one where we add new game mechanics to human knowledge and allow these models of challenges, problems and the world to be absorbed into our body of knowledge as new material. Rather than playing on, or out of, human knowledge we play into it, in a sense.

It makes sense that games like these – where the skill lies in mastering the game mechanics rather than in excelling at knowing things – should become more popular as the cost of acquiring knowledge goes down. Should we welcome this or fight it? One could argue that the problem is that the utility of knowing many things – almost Bildung – is much higher than the utility of mastering different game mechanics. But that would be too simple, and perhaps also a little silly. Maybe the way to think about this is to say that the nature of what counts as valuable _human_ knowledge is changing. What is it that we need to know as humans in a world where knowledge is distributed across human minds and silicon systems? What is the optimal such distribution?

Where the cost of acquiring facts is low and the complexity of problems is high, the real value for us as humans lies in the knowledge and construction of models. The many-model thinker today has an advantage over those who have mastered few or no models. Understanding and mastering the game mechanics of a board game, rather than remembering a lot of facts about sports, becomes much more interesting and valuable – and resonates much more with the kind of computational thinking we want to instil in our children.

As we bring this back to the study of the structure of human knowledge as game, we realize that one important task is to explicate and understand the different mechanics we use to travel through our knowledge – and that brings us back to the thought experiment we started with, the idea of the glass bead game. There are many different mechanics available to us as we start to link together the fields and themes of human knowledge, and maybe we need to allow these links themselves to carry meaning – the way we connect fields could differ depending on the fields and the themes?

There are a lot of other questions here, and things to come back to and research. A few questions that I want to look at more closely as we progress are the following:

a) How many kinds of board games are there? What classes of game mechanics do we recognize in research?
b) How do we categorize human knowledge in knowledge games like Trivial Pursuit? Why? Are there categorizations of human knowledge that are more playable than others?
c) What is the ideal difficulty of a knowledge game? Of a “mechanics” game? Where do we put the difficulty? What are good models for understanding game complexity?

Our interpretation of knowledge as a way to play is another aspect that we will return to as we get closer to Gadamer.