Stanislaw Lem, Herbert Simon and artificial intelligence as a broad social technology project (Man / Machine X)

Why do we develop artificial intelligence? Is it merely out of an almost Faustian curiosity? Is it because of an innate megalomania that suggests that we could, if we wanted to, become gods? The debate today is rife with examples of risks and dangers, but the argument for the development of this technology is curiously weak.

Some argue that it will help us with medicine and improve diagnostics; others dutifully remind us of the productivity gains that could be unleashed by deploying these technologies in the right way; and some even suggest that there is a defensive aspect to the development of AI: if we do not develop it, we will see an international imbalance in which the nations that have AI are akin to the nations that have nuclear capabilities, technologically superior and capable of dictating the fates of the countries that lag behind (some of this language is emerging in the ongoing geopoliticization of artificial intelligence among the US, Europe and China).

Things were different in the early days of AI, back in the 1960s. The idea of artificial intelligence was then more closely connected with a social and technical project: a distinct response to a set of challenges that seemed increasingly serious to writers of the age. Two very different examples support this observation: Stanislaw Lem and Herbert Simon.

Simon, in attacking the challenge of information overload – or information wealth, as he prefers to call it – suggests that the only way we will be able to deal with the complexity and the rich information produced in the information age is to invest in artificial intelligence. The purpose, for him, is to help us learn faster – and if we take into account Simon’s definition of learning, which is very close to classical Darwinian adaptation, we realize that for him the development of artificial intelligence was a way to ensure that we can continue to adapt to an information-rich environment.

Simon does not call this out, but it is easy to read between the lines and see what the alternative is: a growing inability to learn and adapt that generates increasing costs and vulnerabilities – the emergence of a truly brittle society that collapses under its own complexity.

Stanislaw Lem, the Polish science fiction author, suggests a very similar scenario (in his famously unread Summa Technologiae), but his is more general. We are, he argues, running out of scientists, and we need to ensure that we can continue to drive scientific progress, since the alternative is not stability but stagnation. He views the machine of progress as a homeostat that needs to be kept in constant operation in order to produce, in 30-year increments, a doubling of scientific insights and discoveries. Even if, he argues, we force people to train as scientists, we will not be able to grow the scientific workforce fast enough to meet the need for continued progress.
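To make Lem’s arithmetic concrete (a minimal sketch – the 30-year doubling time is Lem’s figure, the formalization is ours): if the stock of scientific results S(t) doubles every 30 years, then S(t) = S(0) · 2^(t/30), and if the output of each scientist stays roughly constant, the number of scientists required grows at the same exponential rate. Populations grow far more slowly than that, so the demand for researchers must eventually outrun the supply of people who can be trained, no matter how aggressively we recruit.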

Both Lem and Simon suggest the same thing: we are facing a shortage of cognition, and we need to develop artificial cognition or stagnate as a society.

*

The idea of a scarcity or shortage of cognition as a driver of artificial intelligence is much more fundamental than any of the ideas we quickly reviewed at the beginning. What we find here is an existential threat to mankind, and a need to build a technological response. The line of thought, the structure of the argument, is almost reminiscent of the environmental debate: we are exhausting a natural resource, and we need innovation to help us continue to develop.

One could imagine an alternative: if we say that we are running out of cognition, we could argue that we need to ensure the analogue of energy efficiency. We need cognition efficiency. That view is not completely insane, and in a certain way it is what we are already developing through stories, theories and methods in education. The connection with energy is also quite direct, since artificial intelligence will consume energy as it develops, and a lot of research is currently directed at the energy consumption of computation. There is a boundary condition here: a society that builds out its cognition through technology does so, at some level, at the cost of energy, and the cognition / energy yield will become absolutely essential. There is also a more philosophical point in all of this, and that is the question of renewable cognition, sustainable cognition.
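One way to make that yield precise (the notation here is ours, not Simon’s or Lem’s): define the cognition / energy yield as η = C / E, where C is some measure of useful cognitive work delivered and E the energy consumed in delivering it. The human brain runs on roughly 20 watts; a society that scales its cognition through machines is, in effect, betting that the η of artificial systems can approach, and eventually exceed, that biological benchmark.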

Cognition cost is a central element in understanding Simon’s and Lem’s challenge.

*

But is it true? Are we running out of cognition? How would you measure that? And is the answer really a technological one? What about educating and discovering the talent of the billions of people who today live in poverty, or without any chance of an education that would grow their cognitive abilities? If you have 100 dollars – what buys you the most cognition (all other moral issues aside): investing in development aid or in artificial intelligence?

*

Broad social technological projects are usually motivated by competition, not by environmental challenges. One reason – probably not the dominant one, but perhaps a contributing factor nonetheless – that climate change seems to inspire so little action in spite of the threat is this: there is no competition at all. The whole world is at stake, and so nothing is at stake for anyone relative to anyone else. The conclusion usually drawn from that observation is that we should all come together. What ends up happening is that we get weak engagement from everyone.

Strong social engagement in technological development – what are the examples? The race for nuclear weapons, the race for the moon. In one sense the early conception of the project to build artificial intelligence was as a global, non-competitive project. Has it slowly changed to become an analogue of the space race? The way China is now approaching the issue is, to some, reminiscent of the Manhattan Project. [1]

*

If we follow that analogy a bit further – what comes next? What is the equivalent of the moon landing for artificial intelligence? Surely not the Turing test – it has been passed multiple times in multiple versions, and as such has lost much of its salience as a test of progress. What, then, would be the alternative? Is there a new test?

One quickly realizes that it probably is not the emergence of an artificial general intelligence, since that seems to be decades away, and a questionable project at best. So what would be a moon landing moment? Curing cancer (too broad – there are many kinds of cancer)? Eliminating crime (a scary target for many reasons)? Sustained economic growth powered by both capital investment strategies and the deployment of AI in industry?

An aside: far too often we talk about moonshots without talking about what the equivalent of the moon landing would be. It is one thing to shoot for the moon, another to walk on it. Defined outcomes matter.

*

Summing up: we could argue that artificial intelligence was conceived of, early on, as a broad social project responding to a shortage of cognition. It then lost that narrative, and today it is becoming more and more enmeshed in a geopolitical, competitive narrative. That will likely increase the speed with which a narrow set of applications develops, but there is still no single moon landing moment associated with the field that stands out as the object of competition between the US, the EU and China. Maybe we should expect the construction of such a moment in medicine, military affairs or economics? So far, admittedly, it is games that have provided the defining moments – tic-tac-toe, chess, Go – but what is next? And if there is no single such moment, what does that mean for the social narrative, the speed of development and the evolution of the field?


[1] https://www.technologyreview.com/s/609038/chinas-ai-awakening/