Energy and complexity (Philosophy of Complexity II)

A brief note today, about something to look into more.

Could the energy consumption of a civilization be a measure of its complexity? If so, we could easily say that our civilization is becoming more and more complex – since we are consuming more energy all the time. There is something intriguing about this measure – it relates the complexity of a phenomenon to the amount of heat it produces, and so to the entropy it drives.

It seems an obvious metric, but it also seems to suggest that there is nothing structural about complexity – by this measure, the sun is more complex than we are. But then again, we could argue that there is a difference here between a natural phenomenon like the sun and a constructed artifact.

Can we say, then, that for artifacts the heat they generate is a good proxy for their complexity? A car generates more heat than a computer, does it not? Consumes more energy? Yet few would call it the more complex artifact. So again, it seems, the measure is shaky. But the attraction of this kind of metric remains: our civilization is more complex than that of the Egyptians, and we consume much more energy.

A variation on this theme is to look at the energy we can produce, harness — that would connect this measure to the Kardashev scales. Maybe there is something there.
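
To make that connection concrete, here is a rough back-of-the-envelope sketch using Carl Sagan's interpolation formula, which turns harnessed power into a continuous Kardashev rating, K = (log10 P - 6) / 10 with P in watts. The figures below for humanity's power use and the sun's output are ballpark assumptions, included only for illustration.

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Carl Sagan's interpolation formula: K = (log10(P) - 6) / 10,
    where P is the power a civilization harnesses, in watts."""
    return (math.log10(power_watts) - 6) / 10

# Assumed ballpark: humanity currently uses on the order of 2e13 W.
print(f"Humanity: K ≈ {kardashev_rating(2e13):.2f}")        # roughly 0.73
# The sun's total output, roughly 3.8e26 W, for comparison.
print(f"Sun's output: K ≈ {kardashev_rating(3.8e26):.2f}")  # roughly 2.06
```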

Progress and complexity (Philosophy of Complexity I)

I have heard it said, and have argued myself, that complexity is increasing in our societies, and that evolution leads to increasing complexity. I have also long been aware that this is an imprecise statement, one that needs some examination – or a lot of examination – in order to understand exactly how it can be corroborated or supported.

The first, obvious, problem is how we measure complexity. There are numerous mathematical proposals, such as algorithmic metrics (how long is the shortest program that would describe system A? If that program length grows over time, then A is becoming more complex), but they require quite a lot of modeling: how do you reduce society, or evolution, to a piece of software? Suddenly you run into other interesting problems, such as whether society and evolution are algorithmic at all.
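
To make the algorithmic metric slightly more tangible: Kolmogorov complexity itself is uncomputable, but compressed length gives a crude upper bound, so a toy sketch might look like the one below. The 'descriptions' are invented strings, which is exactly the modeling problem noted above – everything hinges on how society or evolution gets serialized into a description in the first place.

```python
import zlib

def complexity_proxy(description: str) -> int:
    """Crude upper-bound proxy for algorithmic (Kolmogorov) complexity:
    the length, in bytes, of the zlib-compressed description. The true
    shortest program is uncomputable; compression only approximates it."""
    return len(zlib.compress(description.encode("utf-8"), 9))

# Toy, hand-written 'descriptions' of a system at two points in time;
# the hard modeling question is precisely where such strings come from.
earlier = "village, farm, temple, harvest. " * 40
later = "city, factory, network, market, court, parliament, university, press. " * 40

print(complexity_proxy(earlier))  # small vocabulary: compresses to little
print(complexity_proxy(later))    # more varied description: larger compressed size
```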

The second problem is to understand whether this increase in complexity is constant and linear, or whether it is non-linear. It could be argued that human society plateaued for thousands of years after organizing around cities and leaving our nomadic state – but is this true? And if it is true, what makes a society suddenly break free from such plateaus? This starts to look like a question of punctuated equilibria.

So, let’s invert and ask what we would like to say – what our intuition tells us – and then try to examine if we can find ways of falsifying it. Here are a few things that I think I believe:

(I) Human society becomes more complex as it progresses economically, socially and technologically.

(II) Evolution leads to increasing complexity.

(III) Technology is the way we manage complexity, and technological progress internalizes complexity in new devices and systems – leaving the total increase intact, not halting it, but redistributing it across different systems.

These guesses are just that, guesses, but they deserve examination and exploration, so that is what we will spend time looking at in this series of blog posts. The nature of any such investigation is that it meanders, and finds itself stalled or locked into certain patterns — we will learn from where this happens.

This seems important.

A good, but skeptical, note on Sandboxes

The idea of regulatory sandboxes is gaining traction as legislators try to grapple with regulating new technology while still allowing it to develop in unexpected ways. These sandboxes present a number of problems (not least: how do you graduate from them?), but they are worth thinking about. This is a useful, critical piece with which to start exploring the idea in more detail.

One thought, though: innovation hubs – suggested as an alternative – are really in a different category, and seem incommensurable with the sandbox concept.

Simone Weil’s principles for automation (Man / Machine VI)

Philosopher and writer Simone Weil laid out a few principles on automation in her fascinating and often difficult book The Need for Roots. Her view was positive, and she noted that, among factory workers, the happiest seemed to be those who worked with machines. She had strict views on the design of these machines, however, and they can be summarized in three general principles.

First, these tools of automation need to be safe. Safety comes first, and should also be weighed when deciding what to automate first – the idea that automation can be used to protect workers is an obvious but sometimes neglected one.

Second, the tools of automation need to be general purpose. This is an interesting principle, and one that is not immediately obvious. Weil felt that this was important – when it came to factories – because general-purpose machines could be repurposed for new social needs and respond to changing social circumstances – most pressingly, and in her time acutely, war.

Third, the machine needs to be designed so that it is used and operated by man. The idea that you would substitute machine for man she found ridiculous for several reasons, not least because we need work to find purpose and meaning, and any design that eliminates us from the process of work would be socially detrimental.

All of Weil’s principles are applicable and up for debate in our time. I think the safety principle is fairly widely accepted, but we should note that she speaks of individual safety and not our collective safety. In cases where automation technology could pose a challenge to broader safety concerns, Weil does not provide us with a direct answer. These need not be apocalyptic scenarios at all; they could simply be questions of systemic failures of connected automation technologies, for example. Systemic safety, individual safety and social safety are all interesting dimensions to explore here – are silicon / carbon hybrid models always safer, more robust, more resilient?

The idea of machines that are general purpose and easy to repurpose is something that I think is reflected in how we have seen 3D printing evolve. One idea behind 3D printing is exactly this: generic factories that can manufacture anything. But another observation that is close at hand here is that you could read Weil’s principle as an argument for general artificial intelligence. Admittedly this is taking it very far, but there is something to it: a general AI/ML model can be broadly and widely taught, and we would avoid narrow guild experts emerging in our industries. That would, in turn, allow for quick learning and evolution as technologies, needs and circumstances change. General purpose technologies for automation would allow us to change and adapt faster to new ideas, challenges and selection pressures – and would serve us well in a quickly changing environment.

The last point is one that we will need to examine closely. Should we consider it a design imperative to design for complementarity rather than substitution? There are strong arguments for this, not least cost arguments. Any analysis of a process that we want to automate will yield a silicon–carbon cost function that gives us the cost of the process as different parts of it are performed by machines and humans. A hypothesis would be that for most processes this cost function will favour a distribution across the two, and that only for very few will we see a cost equation where the human component is zeroed out – not least because human intelligence is produced at extraordinarily low energy cost and with great resilience. There is even a risk mitigation argument here: you could argue that always including a human element, or designing for complementarity, necessarily generates more resilient and robust systems, since the failure paths of AIs and of human intelligence look different and are triggered by different kinds of factors. If, for any system, you can allow for different failure triggers and paths, you seem to ensure that the system self-monitors effectively and reduces risk.
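
As a purely hypothetical sketch of such a silicon–carbon cost function, consider the toy model below: the cost of a process as a function of the share that is automated, with a fragility penalty that grows steeply as the human element is zeroed out. All parameters are invented; the point is only to show how an interior optimum – complementarity – can beat full substitution under these assumptions.

```python
def process_cost(automation_share: float,
                 machine_unit_cost: float = 1.0,
                 human_unit_cost: float = 3.0,
                 fragility_penalty: float = 5.0) -> float:
    """Hypothetical silicon-carbon cost function.

    automation_share: fraction of the process performed by machines (0..1).
    The fragility term grows steeply near full automation, standing in for
    the loss of a second, differently-failing monitor of the system."""
    a = automation_share
    direct_cost = a * machine_unit_cost + (1 - a) * human_unit_cost
    fragility = fragility_penalty * a ** 4
    return direct_cost + fragility

# Scan automation shares and find the cheapest mix under these assumptions.
shares = [i / 100 for i in range(101)]
best = min(shares, key=process_cost)
print(f"Cheapest mix: {best:.0%} automated, cost {process_cost(best):.2f}")
print(f"Fully automated: {process_cost(1.0):.2f}")
print(f"Fully human:     {process_cost(0.0):.2f}")
```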

Weil’s focus on automation is also interesting. Today, in many policy discussions, we see the emergence of principles on AI. One could argue that this is technology-centric principle making, and that ethical and philosophical principles are better applied to the use of a technology – that use-centric principles are more interesting. The use case of automation is admittedly a broad one, but it is an interesting one to test this on and see if salient differences emerge. How we choose to think about principles also forces us to think about how we test them. An interesting exercise is to compare with other technologies that have emerged historically. How would we think about principles on electricity, computation, steam? Or principles on automobiles, telephones and telegraphs? Where do we most effectively place principles to construct normative landscapes that benefit us as a society? Principles for driving, for communicating, for selling electricity (and for using it, certifying devices, and so on – we could actually have a long and interesting discussion about what it would mean to certify different ML models!).

Finally, it is also interesting to think about the function of work from the standpoint of moral cohesion. Weil argues that we have no rights but for the duties we assume. Work, we could add, is a foundational duty that allows us to build those rights. There is a complicated and interesting argument here that, from a sociological standpoint, ties rights to duties and duties to human work in societies. The discussions about universal basic income are often conducted in sociological isolation, without thinking about the network of social concepts tied up in work. If there is, as Weil assumes, a connection between our work and duties and the rights a society upholds on an almost metaphysical level, we need to re-examine our assumptions here – and look carefully at complementarity design as a foundational social design imperative for just societies.

Notes on attention, fake news and noise #3: The Noise Society 10 years later

This February it is 10 years since I defended my doctoral thesis on what I then called the Noise Society. The main argument was that the idea of an orderly, domesticated and controllable information society – modeled on the post-industrial visions of Bell and others – was probably wrongheaded, and that we would instead see a much wilder society characterized by an abundance of information and a lack of control – and, in fact, that we would see information grow to a point where its value collapsed as the information itself collapsed into noise. Noise, I felt then, was a good description not only of individual disturbances in the signal, but also of the overall cost of signal discovery. A noise society would face very different challenges than an information society.

Copyright in a noise society would not be an instrument of encouraging the production of information so much as a tool for controlling and filtering information in different ways. Privacy would not be about controlling data about us as much as having the ability to consistently project a trusted identity. Free expression would not be about the right to express yourself, but about the right not to be drowned out by others. The design of filters would become key in many different ways.

Looking back now, I feel that I was right in some ways and wrong in many, but that the overall conclusion – that the increase in information and the consequences of this information wealth are at the heart of our challenges with technology – was not far off target. What I am missing in the thesis is a better understanding of what information does. My focus on noise was a consequence of accepting that information was a “thing” rather than a process. Information looks like a noun, but it is really a verb.

Revisiting these thoughts, I feel that the greatest mistake was not including Herbert Simon’s analysis of attention as a key concept in understanding information. If I had done that I would have been able to see that noise also is a process, and I would have been able to ask what noise does to a society, theorize that and think about how we would be able to frame arguments of policy in the light of attention scarcity. That would have been a better way to get at what I was trying to understand at the time.

But, luckily, thought is about progress and learning, and not about being right – so what I have been doing in my academic reading and writing for the last three years at least is to emphasize Herbert Simon’s work, and the importance of understanding his major finding that with a wealth of information comes a poverty of attention and a need to allocate attention efficiently.

I believe this can be generalized, and that the information wealth we are seeing is just one aspect of an increasing complexity in our societies. The generalized Simon theorem is this: with a wealth of complexity comes a poverty of cognition and a need to learn efficiently. Simon, in his 1969 talk on this subject, notes that it is only by investing in artificial intelligence that we can do this, and he says that it is obvious to him that the purpose of all of our technological endeavours is to ensure that we learn faster.

Learning, adapting to a society where our problems are an order of magnitude more complex, is key to our survival as a species. It follows that I think the current focus on digitization and technology is a mere distraction. What we should be doing is reorganizing our institutions and societies to learn more, and faster. This is where the theories of Hayek and others on knowledge coordination become helpful and important for us, and our ideological discussions should focus on whether we are learning as a society or not. There is a wealth of unanswered questions here, such as how we measure the rate of learning, what the opposite of learning is, how we organize for learning, how technology can help and how it harms learning — questions we need to dig into and understand at a very basic level, I think.

So, looking back at my dissertation – what do I think?

I think I captured a key way in which we were wrong, and I captured a better model – but the model I was working with then was still fatally flawed. It focused on information as a thing, not a process, and construed noise as gravel in the machinery. The focus on information also detracts from the real use cases and the purpose of all the technology we see around us. If we were, for once, to take our ambitions “to make the world a better place” seriously, we would have to think about what it is that makes the world better. What is the process that does that? It is not innovation as such; innovation can go both ways. The process that makes our worlds better – individually and as societies – is learning.

In one sense I guess this is just an exercise in conceptual modeling, and the question I seem to be answering is what conceptual model is best suited to understand and discuss issues of policy in the information society. That is fair, and a kind of criticism that I can live with: I believe concepts are crucially important and before we have clarified what we mean we are unable to move at all. But there is a risk here that I recognize as well, and that is that we get stuck in analysis-paralysis. What, then, are the recommendations that flow from this analysis?

The recommendations could be surprisingly concrete for the three policy areas we discussed, and I leave it as an exercise for the reader to think about them. How would you change the data protection frameworks of the world if the key concern was to maximize learning? How would you change intellectual property rights? Free expression? All are interesting to explore and to solve in the light of that one goal. I tend to believe that the regulatory frameworks we would end up with would be very different from the ones we have today.

As one part of my research as an adjunct professor at the Royal Institute of Technology I hope to continue exploring this theme and others. More to come.