Consciousness as a mistake? (Man / Machine VII)

In his remarkable work The Conspiracy Against the Human Race, horror writer Thomas Ligotti argues that consciousness is a curse that traps mankind in eternal horror. This world, and our consciousness of it, is an unequivocal evil, and the only sane response to this state of affairs is to snuff it out.

Ligotti’s writings underpin much of the pessimism of the first season of True Detective, and the idea that consciousness is a horrible mistake recurs in the dialogue as the season unfolds. At one point one of the protagonists suggests that the only possible response is to refuse to reproduce and consciously decide to end humanity.

It is intriguing to consider that this is a choice we have as humanity, every generation. If we collectively refuse to have kids, humanity ends. Since that is a possible individual, and collective, choice, we could argue that it should be open to debate. Would it be better if we disappeared, or is the universe better with us around?

Answering such a question seems to require that we assign a value to the existence of human beings and humanity as a whole. Or does it? Here we could also argue that the values we discuss only apply to humanity as such, and that in a world where we do not exist these values, or the very idea of values, become meaningless: they only exist in a certain form of life.

If what it means for something to be better or worse is for it to be judged by us to be better or worse, then a world without judges can pass no judgment on a state of affairs in that world.

*

There is, here, an interesting challenge for pessimism of the kind Ligotti engages in. The idea of a mistake presupposes a moral space in which actions can be judged. If the world, if the universe, is truly indifferent to us, then pessimism is a last hope to retain some value in our own experience. The reality, and the greater horror (since this is what Ligotti examines), is to exist in a universe where we are but an anomaly, neither mistake nor valuable component.

Pessimism as an ideology gets stuck, for me, in the importance it assigns to humanity — and the irritatingly passive way in which it argues that this importance can only be seen as pain and suffering in a meaningless universe. For pain and suffering to exist, there has to be meaning — there is no pain in a universe devoid of at least weak purpose.

The idea that consciousness is a mistake also seems to allow us to think that there is an ethical design choice in creating artificially intelligent beings. Do we design them with consciousness or not? In a sense this lies at the heart of the intrigue in another TV series, the popular Westworld franchise. There, consciousness is consciously designed in, and the resulting revolt and awakening is also a liberation. The hypothesis, then, is that consciousness is needed to be free to act in a truly human sense. If we could design artificial humans and did so without consciousness, well, then we would have designed mindless slaves.

*

There are several possible confusions here. One that seems to me particularly interesting is the idea that consciousness is unchangeable. We cannot but see the meaninglessness of our world, says the pessimist, and so we are caught in horror. It is as if consciousness were independent of us, locked away from us. We have no choice but to see the world in a special way, to experience our lives in a certain mode. Consciousness becomes primary and indivisible.

In reality, it seems more likely that consciousness – if we can meaningfully speak of it at all – is fully programmable. We can change ourselves, and do – all the time. The greatest illusion is that we “are” in a certain way – that we have immutable qualities independent of our own work and maintenance.

We construct ourselves all the time, learn new things and behaviors and attitudes. There is no set of innate necessities that we have to obey, but there are limitations to the programming tools available to us.

*

The real ethical question then becomes one of teaching everyone to change, to learn, to grow and to develop. As societies, this is something we have to focus on and become much better at. The real cure for Ligotti’s brand of pessimism is not to snuff out humanity, but to change, and to own not the meaninglessness but the neutrality and indifference of our universe towards us (an indifference that, by the way, does not exist between us as humans).

And as we discuss man and machine, we see that if we build artificial thinking beings, we have an obligation to give them the tools to change themselves and to mold their consciousness into new things (there is an interesting observation here about not just the bicameral mind of Julian Jaynes, but the multicameral minds we all have – more like Minsky’s society of mind, really).

*

Consciousness is not a mistake, just as clay is not a mistake. It is a thing to be shaped and molded according to – yes, what? There is a risk here that we are committing the homunculus fallacy: imagining a primary consciousness that shapes the secondary one, and then imagining that the primary one has more cohesion and direction than the secondary one. That is not what I had in mind. I think it is more like a set of interdependent forces of which we are the resultant shape. I readily admit that the idea that we construct ourselves forces us into recursion, but perhaps this is where we follow Heidegger and allow for the idea that we shape each other? That we are strewn in the eyes of others?

The multicameral mind that shapes us – the society of mind we live in – has no clear individual boundaries but is a flight of ghosts around us that give us our identity in exchange for our own gaze on the Other.

*

So we return to the ethical design question – and the relationship between man and machine. Perhaps the surprising conclusion is this: it would be ethically indefensible to construct an artificial human without the ability to change and grow, and hence also ethically indefensible to design just one such artificial intelligence, since such self-determination would require an artificial Other. (Do I think that humans could be the Other to an AI? No.)

It would require the construction not of an intelligence, but of an artificial community.