Energy and complexity (Philosophy of Complexity II)

A brief note today, about something to look into more.

Could the energy consumption of a civilization be a measure of its complexity? If so, we could easily say that our civilization is becoming more and more complex – since we are consuming more energy all the time. There is something intriguing about this measure – it relates the complexity of a phenomenon to the amount of heat it produces, and so the entropy it drives.

It seems an obvious metric, but it also seems to suggest that there is nothing structural about complexity – by this metric, the sun is more complex than we are. But then again, we could argue that there is a difference here between natural phenomena like the sun and a constructed artifact.

Can we say, then, that for artifacts the heat they generate is a good proxy for complexity? A car generates more heat than a computer, does it not? Consumes more energy? So again, it seems, the measure is shaky. But the attraction of this kind of metric remains: our civilization is more complex than that of the Egyptians, and we consume much more energy.

A variation on this theme is to look at the energy we can produce and harness — that would connect this measure to the Kardashev scale. Maybe there is something there.
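
To make the connection concrete, here is a back-of-the-envelope sketch using Carl Sagan's interpolation of the Kardashev scale, K = (log10 P − 6) / 10 with P in watts. The consumption figures are rough assumptions, and the Egyptian one no more than an order-of-magnitude guess.

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Rough, assumed figures for total power use, in watts:
civilizations = {
    "Ancient Egypt (order-of-magnitude guess)": 1e9,
    "Humanity today (~18 TW primary energy)": 1.8e13,
    "Type I (planetary energy budget)": 1e16,
}

for name, watts in civilizations.items():
    print(f"{name}: K = {kardashev(watts):.2f}")
```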

Progress and complexity (Philosophy of Complexity I)

I have heard it said, and have argued myself, that complexity is increasing in our societies, and that evolution leads to increasing complexity. I have also long known that this is an imprecise statement that needs some examination – or a lot of examination – in order to understand exactly how it can be corroborated or supported.

The first, obvious, problem is how we measure complexity. There are numerous mathematical proposals, such as algorithmic metrics (how long is the shortest program that would describe system A? If that program length grows over time, A is becoming more complex), but they require quite some modeling: how do you reduce society or evolution to a piece of software? Suddenly you run into other interesting problems, such as whether society and evolution are algorithmic at all.
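
One way to play with the algorithmic intuition without solving the modeling problem is to use compressed length as a crude upper bound on algorithmic (Kolmogorov) complexity — a standard trick. A minimal sketch, where the ”descriptions” of a society are of course toy stand-ins:

```python
import zlib

def complexity_proxy(description: str) -> int:
    """Compressed length as a crude upper bound on algorithmic complexity."""
    return len(zlib.compress(description.encode("utf-8"), 9))

# Toy stand-ins for descriptions of a society at two points in time:
early = "farm, temple, market, " * 50
later = "farm, temple, market, court, guild, bank, press, grid, network, " * 50

print(complexity_proxy(early))  # shorter description ~ less complex
print(complexity_proxy(later))  # longer description ~ more complex
```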

The second problem is to understand if this increase in complexity is constant and linear, or if it is non-linear. It could be argued that human society plateaued for thousands of years after leaving our nomadic state and organizing around cities – but is this true? And if it is true, what makes a society suddenly break free from such plateaus? This looks like a question of punctuated equilibria.

So, let’s invert and ask what we would like to say – what our intuition tells us – and then try to examine if we can find ways of falsifying it. Here are a few things that I think I believe:

(I) Human society becomes more complex as it progresses economically, socially and technologically.

(II) Evolution leads to increasing complexity.

(III) Technology is the way we manage complexity, and technological progress internalizes complexity in new devices and systems – it does not stop the increase, the sum total keeps growing – but redistributes it across different systems.

These guesses are just that, guesses, but they deserve examination and exploration, so that is what we will spend time looking at in this series of blog posts. The nature of any such investigation is that it meanders, and finds itself stalled or locked into certain patterns — we will learn from where this happens.

This seems important.

A good, but skeptical, note on Sandboxes

The idea of regulatory sandboxes is getting more traction as legislators try to grapple with regulating new technology while still allowing it to develop in unexpected ways. These sandboxes present a number of problems (i.a. how do you graduate from them?), but are worth thinking about. This is a useful, critical piece with which to start exploring the idea in more detail.

One thought, though: innovation hubs – suggested as an alternative – really belong in a different category and seem incommensurable with the sandbox concept.

What a year it has been

As I return to this notebook, or collection of musings, I find that everything has changed. Not in the dramatic way of “everything has changed!” but in the rather more subtle way of everything quietly having changed.

A shift in the way we see ourselves and the societies we are in.

The day before yesterday I celebrated my 49th birthday, and I reflected on what I would like to do with my life in general going forward. And one of the things I would really like to do is to write more, simply because writing is thinking. I do write a fair bit in my day job, but that writing is often concentrated on the issues and challenges facing the tech industry in different ways – albeit from a futures studies standpoint – and so not something that I can always share.

But what I would like to do is to write more widely, think through things and develop this as a commonplace book, a Zibaldone. A place to collect thoughts and find over time if there are patterns in them, ideas, stories, theories – a good example is H.P. Lovecraft’s collection of story ideas, sketching other uncharted parts of the Mythos.

So – a commitment to write, then. We will see how that goes.

Hartmut Rosa and the acceleration of our lives (Rosa I)

Hartmut Rosa has observed, in numerous essays and texts, that it is useful to analyze our age with a mental model built around acceleration. He finds that we accelerate along three different axes — technological, social and subjective — and that this acceleration has a profound impact on the way we can live our lives.

It is, for example, hardly viable to have a life plan if you know that the world is changing so fast that you will have to change jobs four or five times over your active career. It also seems hard to innovate in a world where the future is a moving target and you are not sure how to invest your energies. Any intergenerational projects will seem vain and increasingly all of our thinking becomes intragenerational.

This will, among other things, make it harder for us to tackle long-term problems like climate change, since the future horizon we operate against is closing in on the present all the time.

Rosa’s model is compelling and probably resonates with most of us, but there are a couple of questions that we need to ask when we start to examine it closer.

First, it seems that any claim of acceleration needs to be qualified by a metric of some kind. What is it that is getting faster? And relative to what? If we only look at technology, we find that there are competing claims here: while a lot of voices will argue that things are changing faster than ever before, it is also true that a growing set of voices now claim that innovation has all but died down in the West (Thiel et al). So which is it? And by what metric?

Let’s first eliminate a few metrics that we know are quite useless. No-one should get away with measuring the speed of innovation by looking at the number of patents filed. This was always a noisy signal, but with the increase in defensive and performative patents (where the patent is filed to give the impression of great waves of innovation in official statistics from some countries) the signal is now almost completely useless.

The other set of metrics that should at least be viewed with suspicion are all metrics that have to do with the increase in a particular technology’s capacity. If we argue that we should be seeing reductions in, say, international flight times, we assume that the pace of technology needs to be measured in relation to individual technologies, not to how things change overall. This ignores things like the possibility of being connected to the Internet while flying – technical change that is related to, but not confined to, a specific technology.

Connectivity is interesting because it happens across the board: it is a ”horizontal” innovation in the sense that it affects all technology across the technosphere. The improvements in an engine are vertical to that technology (even if the web of technologies related to an engine will be affected in different ways).

This raises the more complex question of whether we should speak of the pace of innovation, or if it is more accurate to speak of the pace of Innovation as the sum total of different innovation vectors. The latter is not easy to even approximate, however, and so we end up as lost as if we were asked what the pace of evolution is. This should not surprise us, since technology is closely connected to evolution in different ways and can indeed be described as a kind of evolving system (see W. Brian Arthur’s work).

What all of this means is that the notion of acceleration is not as clear as Rosa’s model seems to assume. Of the three kinds of acceleration he studies it is the third that is most clearly evident: the subjective feeling of acceleration, of things speeding up. Many people undoubtedly seem to share a sense of increasing speed all around them. But could we find other causes for that?

One strong candidate that I feel Rosa should have looked more closely at is complexity. Our world is increasingly connected and complexity is increasing. This can be perceived as acceleration, but is very different. Imagine that you are playing a tune. Now, acceleration would be asking you to play it faster. Complexification would be asking you to play a second and third melody at the same time.

So is the change we are experiencing more like being asked to play a tune faster, or like being asked to play a fugue?

This matters when we start looking at the broader social consequences and how they play out.

Models of speech (Fake News Notes XI)

One thing that has been occupying me recently is the question of what speech is for. In some senses this is a heretical question – many would probably argue that speech is an inalienable right, and so it really does not have to be for anything at all. I find that unconvincing, especially in a reality where we need to balance speech against a number of other rights. I also find it helpful to think through different mental models of speech in order to really figure out how they come into conflict with each other.

Let me offer two examples of such models and the function each has speech serve – they are, admittedly, simplified, but they tell an interesting story that can be used to understand and explore part of the pressure that free expression and speech is under right now.

The first model is one in which the primary purpose of speech is discovery. It is through speech we find and develop different ideas in everything from art to science and politics. The mental model I have in mind here is a model of “the marketplace of ideas”. Here the discovery and competition between ideas is the key function of speech.

The second model is one in which speech is the means through which we deliberate in a democracy. It is how we solve problems, rather than how we discover new ideas. The mental model I have in mind here is Habermas’ public sphere. Here speech is collaborative and seeks solutions from commonly agreed facts.

So we end up with, in a broad-strokes, coarse-grained kind of way, these two different functions: discovery and deliberation.

Now, as we turn to the Internet and ask how it changes things, we can see that it really increases discovery by an order of magnitude – but that it so far seems to have done little (outside of the IETF) to increase our ability to deliberate. If we now generalise a little bit and argue that Europeans think of speech as deliberative and Americans think of speech as discovery, we see a major fault line open up between those different perspectives.

This is not a new insight. One of the most interesting renditions of this is something we have touched on before – Simone Weil’s notion of two spheres of speech. In the first sphere anything would be allowed, with absolutely no limitations. In the second sphere you would be held accountable for the opinions you really intended to advance as your own. Weil argued that there was a clear, and meaningful, difference between what one says and what one means.

The challenge we have is that while technology has augmented our ability to say things, it has not augmented our ability to mean them. The information landscape is still surprisingly flat, and no particular rugged landscapes seem to be available for those who would welcome a difference between the two modes of speech. But that should not be impossible to overcome – in fact, one surprising option that this line of argument seems to suggest is that we should look to technical innovation to see how we can create much more rugged information landscapes, with clear distinctions between what you say and what you mean.

*

The other mental model that is interesting to examine more closely is the atomic model of speech, in which speech is considered mostly as a set of individual propositions or statements. The question of how to delineate the rights of speech then becomes a question of adjudicating different statements and determining which ones should be deemed legal and which ones should be deemed illegal – or, with a more fine-grained resolution, which ones should be legal, which ones should be removed out of moral concerns and which ones can remain.

The atom of speech in this model is the statement or the individual piece of speech. This propositional model of speech has, historically, been the logical way to approach speech, but with the Internet there seems to be an alternative and complementary model of speech that is based on patterns of speech rather than individual pieces. We have seen this emerge as a core individual concern in a few cases, and then mostly to identify speakers who through a pattern of speech have ended up being undesirable on a platform or in a medium. But patterns of speech should concern us even more than they do today.

Historically we have only been concerned with patterns of speech when we have studied propaganda. Propaganda is a broad-based pattern of speech where all speech is controlled by a single actor, and the resulting pattern is deeply corrosive, even if individual pieces of speech may still be fine and legitimate. In propaganda we care also about that which is being suppressed as well as what is being fabricated. And, in addition to that, we care about the dominating narratives that are being told, because they create the background against which all other statements are interpreted. Propaganda, Jacques Ellul teaches us, always comes from a single center.

But the net provides a challenge here. The Internet makes possible a weird kind of poly-centric propaganda that originates in many different places, and this in itself lends the pattern credibility and power. The most obvious example of this is the pattern of doubt that is increasingly eroding our common baseline of facts. This pattern is problematic because it contains no single statement that is violative, but it opens up our common shared baseline of facts to completely costless doubt. That doubt has become both cheap to produce and distribute is a key problem that precedes that of misinformation.

The models we find standing against each other here can be called the propositional model of speech and the pattern model of speech. Both ask hard questions, but in the second model the question is less about which statements should be judged to be legal or moral, and more about what effects we need to look out for in order to be able to understand the sum total effect of the way speech affects us.

Maybe one reason we focus on the first model is that it is simpler; it is easier to debate and discuss if something should be taken down based on qualities inherent in that piece of content, than to debate if there are patterns of speech that we need to worry about and counteract.

Now, again, coming back to the price of doubt, I think we can say that doubt is cheap because we operate in an entirely flat information landscape where doubt is equally cheap for all statements. There is no one imposing a cost on you for doubting that we have been to the moon, that vaccines work or any other thing that used to be fairly well established.

You are not even censured by your peers for this behaviour anymore, because we have, oddly, come to think of doubt as a virtue in the guise of “openness”. Now, what I am saying is not that doubt is dangerous or wrong (cue the accusations about a medieval view of knowledge), but that when the pendulum swings the other way and everything is open to costless doubt, we lose something important that binds us together.

Patterns of speech – perhaps even a weaker version, such as tone of voice – remain interesting and open areas to look at more closely as we try to assess the functions of speech in society.

*

One last model is worth looking at more closely, and that is the model of speech as a monologic activity. When we speak about speech we rarely speak about listeners. There are several different possibilities here to think carefully about the dialogic nature of speech, as this makes speech into an n-person game, rather than a monologic act of speaking.

As we do that we find that different pieces of speech may impact and benefit different groups differently. If we conceive of speech as an n-person game we can, for example, see that anti-terrorist researchers benefit from pieces of speech that let them study terrorist groups more closely, that vulnerable people who have been radicalised in different ways may suffer from exposure to that same piece of speech, and that politicians may gain in stature and importance from opposing that same piece of speech.

The pieces of speech we study become more like moves on a chess board with several different players. A certain speech act may threaten one player, weaken another and benefit a third. If we include counter speech in our model, we find that we are sketching out the early stages of speech as a game that can be played.

This opens up interesting ideas. Can we find an optimisation criterion for speech, and perhaps build a joint game with recommendation algorithms, moderator functions and different consumer software – and then play that game a million times to find strategies for moderating and recommending content that fulfil that optimisation criterion?

Now, then, what would that criterion be? If we wanted to let an AI play the Game of Speech – what would we ask that it optimise? How would we keep score? That is an intriguing question, and it is easy to see that there are different options: we could optimise for variance in the resulting speech, or for agreement, or for solving any specific class of problems, or for learning (as measured by accruing new topics and discussing new things?).
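
Here is a deliberately toy sketch of what playing such a game many times might look like. Everything in it — the agents, the topics, the random moderation policy, and diversity as the scoring criterion — is invented for illustration:

```python
import random
from collections import Counter

TOPICS = ["climate", "economy", "art", "science", "sports"]

def play_round(n_agents=100, removal_rate=0.2, bias=0.6):
    """One round of the toy game: biased agents mostly repeat one topic,
    and a crude moderation policy removes statements at random."""
    statements = [
        "climate" if random.random() < bias else random.choice(TOPICS)
        for _ in range(n_agents)
    ]
    return [s for s in statements if random.random() > removal_rate]

def diversity_score(statements):
    """Assumed optimisation criterion: distinct topics still in play."""
    return len(Counter(statements))

scores = [diversity_score(play_round()) for _ in range(10_000)]
print(sum(scores) / len(scores))  # average score for this moderation strategy
```

Comparing average scores across different moderation and recommendation strategies is the (very rough) sense in which the game could be ”played a million times”.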

Speech as Game is an intriguing model that would take some fleshing out to be more than an interesting speculative thought experiment – but it could be worth a try.

Jottings III: the problem with propositions

In a previous post we discussed computational vs ”biological thinking” and the question of why we assume that chunking the world in a specific way is automatically right. The outcome was that it is not obvious why the sentence

(i) Linda is a bank teller and a feminist

should always be analysed as containing two propositions that each can be assessed for truth and probability. It is quite possible that, given the description we are offered, the sentence actually is indivisible and should be assessed as a single proposition. When asked, then, to assess the probability of this sentence and the sentence

(ii) Linda is a bank teller

we would argue that we do not compare p & q with p, but x with p, where both sentences carry a probability and where the probability of x is higher than the probability of p. Now, this raises the question of why the probability for x – Linda is a bank teller and a feminist – is higher.

One possibility is that our assessment of probability is multidimensional – we assess fit rather than numerical probability. Given the story we are told in the thought experiment, the fit of x is higher than that of p.

A proposition’s fit is a compound of probability and connection with the narrative logic of what preceded it. So far, so good: this is in fact where the bias lies, right? That we consider narrative fit rather than probability, and hence are being irrational – right? Well, perhaps not. Perhaps the idea that we should assess fragmented propositions for probability without looking at narrative fit is what is irrational.
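
To make the distinction concrete: a toy scoring sketch, in which the conjunction loses on raw probability but wins on ”fit” once narrative coherence is weighted in. The numbers and the weighting are pure assumptions:

```python
# Toy numbers, purely illustrative.
p_teller = 0.05      # P(bank teller | description)
p_feminist = 0.95    # P(feminist | description)

p_conjunction = p_teller * p_feminist  # 0.0475 < 0.05: less probable, as logic demands

def fit(probability: float, narrative_coherence: float, w: float = 0.7) -> float:
    """Assumed compound: fit weighs narrative coherence against raw probability."""
    return (1 - w) * probability + w * narrative_coherence

# The conjunction matches the story about Linda far better:
print(fit(p_teller, narrative_coherence=0.1))       # 'bank teller' alone: ~0.09
print(fit(p_conjunction, narrative_coherence=0.9))  # 'teller and feminist': ~0.64
```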

There is something here about propositions necessarily being abbreviations, answers and asymmetric.

Jottings II: Style of play, style of thought – human knowledge as a collection of local maxima

Pursuant to the last note, it is interesting to ask the following question: if human discovery of a game space like the one in go centers around what could be a local maximum, and computers can help us find other maxima and so play in an ”alien” way — i.e. a way that is not anchored in human cognition and ultimately perhaps in our embodied, biological cognition — should we then not expect the same to be true for other bodies of thought?

Let’s say that a ”body of thought” is the accumulated games in any specific game space, and that we agree we have discovered that human-anchored ”bodies of thought” seem to be quietly governed by our human nature — is the same then true for philosophy? Anyone reading a history of philosophy is struck by the way concepts, ideas, arguments and methods of thinking remind you of different games in a vast game space. We don’t even need to deploy Wittgenstein’s notion of language games to see the fruitful application of that analogy across different domains of knowledge.

Can, then, machine learning help us discover ”alien” bodies of thought in philosophy? Or is there a requirement that a game space can be reduced to a set of formalized rules? If so – imagine a machine programmed to play Hermann Hesse’s glass bead game; how would that work out?

In sum: have we underestimated the limiting effect that our nature has on thinking across domains? Is there a real risk that what we hail as human knowledge and achievement is a set of local maxima?


Jottings I: What does style of play tell us?

If we examine the space of all possible chess games we should be able to map out all games actually played and look at how they are distributed in the game space (what are the dimensions of a game space, though?). It is possible that these games cluster in different ways, and we could then term these clusters ”styles” of play. We at least have a naive understanding of what this would mean.

But what about the distribution of these clusters overall in a game space – are they evenly distributed? Are they parts of mega clusters that describe ”human play”, clusters that orient around some local optimum? And if so, do we now have tools to examine other mega clusters around other optima?
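
A minimal sketch of what the first step could look like: represent each game as a feature vector and cluster. The features and the two ”styles” below are invented; a real attempt would have to embed full move sequences:

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented three-dimensional 'style' features per game,
# e.g. (aggression, material balance, king safety):
rng = np.random.default_rng(0)
human_games = rng.normal(loc=[0.6, 0.5, 0.7], scale=0.1, size=(500, 3))
engine_games = rng.normal(loc=[0.2, 0.9, 0.3], scale=0.1, size=(500, 3))
games = np.vstack([human_games, engine_games])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(games)

# If the two styles really occupy different regions of game space,
# the clusters should recover them almost perfectly:
print(np.bincount(kmeans.labels_[:500], minlength=2))
print(np.bincount(kmeans.labels_[500:], minlength=2))
```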

Is there a connection to non-ergodicity here? A flawed image: game style as collections of non-ergodic paths (how could paths be non-ergodic?) in a broader space? No. But there is something here – a question about why we traverse probabilities in certain ways, why we cluster, the role of human nature and cognition. The science fiction theme of cognitive clusters so far apart that they cannot connect. Styles that are truly, and necessarily, alien.

How would we answer a question about how games are distributed in a game space? Surely this has been done. Strategies?

Innovation III: What is the price of a kilo of ocean plastic?

A thought experiment. What would happen if we crowdsourced a price – not just a sum – per kilo of ocean plastic retrieved? This would require solving a few interesting problems along the way but would not be impossible.

First, we would need to develop a means to crowdsource prices rather than sums. We would then need to require the contributors to pay a part of some price – per kilo, hour etc – and define some upper limit for their engagement. This would of course equate to a sum, but the point would be to highlight that the crowd is setting a price, not collecting a sum.
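
A minimal sketch of how such pledges might aggregate — each pledge being a price per kilo plus a total cap, so that the crowd's effective price declines as volume grows. The mechanism and the pledges are assumptions, one reading of the idea:

```python
def effective_price(pledges, kilos):
    """Each pledge is (price_per_kilo, total_cap). A contributor pays their
    per-kilo price until their cap is exhausted, so at a given volume the
    crowd's effective price per kilo is the sum of the still-affordable parts."""
    return sum(min(price, cap / kilos) for price, cap in pledges)

pledges = [(2.0, 10_000), (5.0, 2_000), (0.5, 50_000)]  # invented pledges (USD)

for kilos in (100, 1_000, 10_000, 100_000):
    print(f"{kilos:>7} kg retrieved -> {effective_price(pledges, kilos):.2f} USD/kg")
```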

Second, we would need to be able to verify the goods or services bought. How would we, for example, determine if a kilo of ocean plastic really is from the ocean? This may require a few process innovations but surely is not impossible.

With these problems solved we can start asking interesting questions. At what price do we begin seeing progress? At what price may we solve the problem in its entirety?

What if we committed to paying 150, 1,500 or 15,000 USD per kilo of ocean plastic? At what point does this turn into a natural resource to be mined like any other? At what point do oil companies start filtering the ocean for plastic?

This suggests that we should also examine moving from innovation prizes to innovation prices.

Future of work – second take

When we speak about the future of work we often do this: we assume that there will be a labor market much like today, and that there will be jobs like the ones we have today, but that they will just be different jobs. It is as if we think we are moving from wanting bakers to wanting more doctors, and well, what should the bakers do? It is really hard to become a doctor!

There are other possible perspectives, however. One is to ask how both the market and the jobs will change under a new technological paradigm.
First, the markets should become much faster at detecting new tasks and the skills needed to perform them. Pattern scans across labor market information make it possible to construct a kind of “skills radar” that will allow us to tailor and offer new skills much like you are recommended new movies when you use Netflix. Not just “Others with your title are studying this” but also “Others on a dynamic career trajectory are looking into this”. We should be able to build skill forecasts that are a lot like weather forecasts, and less like climate forecasts.
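
At its simplest, the radar could be little more than co-occurrence counting. A toy sketch with invented data — recommend the skills most common among peers on similar trajectories that you do not yet hold:

```python
from collections import Counter

# Invented data: skill sets of workers on 'dynamic' career trajectories.
peers = [
    {"python", "sql", "ml_ops"},
    {"python", "sql", "prompting"},
    {"sql", "ml_ops", "prompting"},
]
me = {"sql", "excel"}

# Count the skills my trajectory-peers hold that I lack:
radar = Counter(skill for peer in peers for skill in peer - me)
print(radar.most_common(3))  # a crude 'others on your trajectory study this'
```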

Second, we should be able to distinguish surface skills from deep skills — by mining data about labor markets we should be able to understand what general cognitive skills underpin the surface skills, which are what changes fastest. Work has layers – using Excel is a surface skill, being able to abstract a problem into a lattice of mental models is a deep skill. Today we assume a lot about these deep skills – that they have to do with problem solving and mental models, for example – but we do not know yet.

Now, if we turn to look at the jobs themselves, a few things suggest themselves.

First, jobs today are bundles of tasks – and social status and insurance and so on. These bundles are wholly put together by a single employer who will guesstimate what kinds of skills they need and then hire for those assumed skills. This is not the only possible way to bundle tasks. You could imagine using ML to ask what skills are missing across the organisation and generate new jobs on the basis of those skills; there may well be hidden jobs – unexpected bundles of skills – that would improve your organisation immeasurably!

Second, the cost of assessing and bundling tasks is polarised. It is either wholly put on the employer, or – in the gig economy – on the individual worker. This seems arbitrary. Why shouldn’t we allow for new kinds of jobs that bundle tasks from Uber, Lyft and others and add on a set of insurances to create a job? A platform solution for jobs would essentially allow you to generate jobs out of available tasks – and perhaps even do so dynamically, so that you can achieve greater stability in the flow of tasks, and hence more economic value out of the bundle than out of the individual tasks. This latter point is key to building social benefits and insurance solutions into the new “job”.
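
A toy sketch of such a bundler — greedily preferring low-variability task streams until the bundle reaches stable full-time hours. All numbers and sources are invented; a real platform would also price in insurance and benefits:

```python
# Invented task streams: (source, expected_hours_per_week, variability)
tasks = [
    ("uber_driving", 14, 0.30),
    ("lyft_driving", 10, 0.35),
    ("delivery", 12, 0.25),
    ("data_labelling", 8, 0.10),
]

def bundle_job(tasks, target_hours=35):
    """Greedy sketch: prefer stable (low-variability) streams, add until
    the bundle reaches the target weekly hours."""
    bundle, hours = [], 0
    for source, h, _var in sorted(tasks, key=lambda t: t[2]):
        if hours >= target_hours:
            break
        bundle.append(source)
        hours += h
    return bundle, hours

print(bundle_job(tasks))  # one generated 'job' out of available tasks
```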

Third, it will be important to follow the evolution of centaur jobs. These may just be jobs where you look for someone who is really good at working with one set of neural networks or machine learning systems of a certain kind. These will, over time, become so complex as to almost exhibit “personalities” of different kinds – and you may temperamentally or otherwise be a better fit for some of these systems than others. It is also not impossible that AI/ML systems follow the individual to a certain degree – that you offer the labor market your joint centaur labor.

Fourth, jobs may be collective and collaborative and you could hire for collective skills that today you need to combine yourselves. As coordination costs sink you can suddenly build new kinds of “macro jobs” that need to be performed by several individuals AND systems. The 1:1 relationship between an individual and a job may well dissolve.

Short term, the future of work lies in the new jobs we need on an existing market; long term, we should look more into the changing nature of both those jobs and those markets to understand where we might want to move things. The way things work now was itself, after all, once an entirely new and novel way to think about things.

Innovation and evolution I: Speciation rates and innovation rates

As we explore analogies between innovation and evolution, there are some concepts that present intriguing questions. The idea of a speciation rate is one of these concepts and it allows us to ask questions about the pace of innovation in new ways.

Are speciation rates constant or rugged? That is: should we expect bursts of innovation at certain points? Cambrian explosions seem different from purely vertical evolution, from single cell to multi-cell etcetera.

Are speciation rates related to extinction rates? Will increases in extinction rates trigger increases in speciation? If these are entirely decoupled in a system it will have states with high extinction / low speciation that can be existentially threatening if they persist for too long. And what is extinction in innovation?
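
A toy birth-death sketch of that worry — speciation and extinction as fully decoupled per-capita rates, with extinction slightly higher. The rates are invented; the point is only that a decoupled system can drift toward collapse:

```python
import random

def simulate(speciation=0.10, extinction=0.12, species=100, steps=200):
    """Toy decoupled birth-death process for species (or technologies)."""
    history = [species]
    for _ in range(steps):
        born = sum(random.random() < speciation for _ in range(species))
        died = sum(random.random() < extinction for _ in range(species))
        species = max(species + born - died, 0)
        history.append(species)
    return history

random.seed(1)
h = simulate()
print(h[0], min(h), h[-1])  # with extinction > speciation: drift toward collapse
```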

Are there measures of technical diversity alongside biological diversity and, if so, what is it that these measure?

Food for thought.

There are no singular facts (Questions II)

There is more to explore here, and more thoughts to test. Let’s talk more about knowledge, and take two really simple examples. We believe we know the following.

(i) The earth is round.
(ii) Gravity is 9.8 m/s².

Our model here is one of knowledge as a set of propositions that can be justified and defended as knowledge – they can be deemed true or false, and the sum total of that body of propositions is all we know. We can add to it by adding new propositions and we can change our mind by throwing old propositions out and replacing them with new ones.

This model is incredibly strong, in the sense that it is often confused with reality (at least this is one way in which we can speak of the strength of a model – the probability p that it is mistaken for reality and not seen as a model at all), but it is just a model. A different model would say that everything you know is based on a question and the answer you provide for it — just as Plato has Socrates suggest. We can then reconstruct the example above in an interesting way.

(i) What is the best approximate geometrical form for representing the Earth in a simple model? The Earth is round.
(ii) What is the average gravitational acceleration on planet Earth? 9.8 m/s².

Once you explicate the question that the proposition is an answer to, you suddenly also realize the limits of the answer. If we are looking for the gravity at a specific place on earth, say the top of Mount Everest, the answer may be different. If we are looking for a more exact representation of the earth, with all the topographical and geological data exact, the round model will not suffice. Articulating the question that the proposition you say you know is an answer to opens up the proposition and your knowledge, and helps you see something potentially fundamental, if it holds up to closer scrutiny.

There are no isolated facts.

Facts, in this new model, are always answers to questions, and if you do not know the question you do not really understand the limits and value of a fact. This is one alternative way of addressing the notion of “a half-life of facts” as laid out by Sam Arbesman in his brilliant book on how facts cease being facts over time. The reality is that they do not cease being facts, but the questions we are asking change subtly over time with new knowledge.

Note that this model is in no way a defense of relativism. It is the opposite: questions and answers provide a strong bedrock on which we can build our world, and we can definitely say that not every answer suffices to answer a question. There are good and bad answers to questions (although more rarely bad questions).

So, then, when Obama says that we need to be operating our political discussion and debates from a common baseline of facts, or when Senator Moynihan argued that you are entitled to your opinions but not your own facts, we can read them under the new model as saying something different.

Obama’s statement turns into a statement about agreeing on questions and what the answers to those questions are – and frankly that may be the real challenge we face with populism: a mismatch between the questions we ask and those the populists ask.

Senator Moynihan’s point is that if we agree on the questions you don’t get to invent answers – but your opinions matter in choosing what questions we ask.

So, what does the new model suggest? It suggests the following: you don’t have knowledge. There are no facts. You have and share with society a set of questions and answers and that is where we need to begin all political dialogue. These provide a solid foundation – an even more solid foundation – for our common polis than propositions do, and a return to them may be the long term cure for things like fact resistance, fake news, propaganda, polarization and populism. But it is no quick fix.

Strong claims, but interesting ones – and ones worthy of more exploration as we start digging deeper.

Socratic epistemology, Hintikka, questions and the end of propositional logic (Questions I)

The question of what knowledge is can be understood in different ways. One way to understand it is to focus on what it means to know something. The majority view here is that knowledge is about propositions that we can examine from different perspectives. Examples would include things like:

  • The earth is round.
  • Gravity is a force.
  • Under simple conditions demand and supply meet in a market.

These propositions can then be true or false and the value we assign to them decides if they are included in our knowledge. The way we assign truth or falsity can vary. In some theories truth is about correspondence with reality, and in some it is about coherence in the set of propositions we hold to be true.

Now, admittedly this is a quick sketch of our theory of knowledge, but it suffices to ask a very basic question. Why do we believe that propositions are fundamental to knowledge? Why do we believe that they are the atoms of which knowledge is constituted?

Philosopher and historian of ideas R.G. Collingwood thought the explanation for this was simple: logic and grammar grew up together, as sciences, so we ended up confusing one with the other. There are, Collingwood asserts, no reasons for assuming that knowledge breaks down into propositions. There are no grounds for asserting that propositions are more basic than other alternatives. The reason we have propositional logic is just that logic is so entwined with grammar.

That leaves us with an interesting problem: what, then, is knowledge made of?

*

Socrates was clear. In Plato’s Theaetetus we find the following discussion in passing:

I mean the conversation which the soul holds with herself in considering of anything. I speak of what I scarcely understand; but the soul when thinking appears to me to be just talking—asking questions of herself and answering them, affirming and denying. And when she has arrived at a decision, either gradually or by a sudden impulse, and has at last agreed, and does not doubt, this is called her opinion. I say, then, that to form an opinion is to speak, and opinion is a word spoken,—I mean, to oneself and in silence, not aloud or to another: What think you?

This idea, that knowledge may be dialogical, that it may consist in a set of questions and answers to those questions is key to open another perspective on knowledge. It also, potentially, explains the attraction of the dialogue form for the Greeks: what better way to structure philosophical debate than in the same way knowledge is structured and produced? Why state propositions, when dialogue mimics the way we ourselves arrive at knowledge?

It is worthwhile taking a moment here. In one way this all seems so evident: of course we ask ourselves questions to know! That is how we arrive at the propositions we hold true! But this is exactly where we need to pause. The reality is that the leap from questions and answers to propositions is uncalled for – a leap that fools us into believing that questions are merely tools with which we uncover our propositions. Shovels that shovel aside the falsity from the truth. But knowledge is not like nuggets of gold buried in the earth – knowledge is the tension between answer and question in equilibrium. If you change the question, the balance of the whole thing changes as well – and your knowledge is changed.

As an aside: that is why, in belief revision, we often are interested in generating surprise in the person whose views we want to change. One way to describe surprise is as the unexpected answer to a question, that then forces a new question to be asked and the network of questions and answers is then updated to reflect a new belief – a new pair of questions and answers.

This minority view is found again in people like R.G. Collingwood, who writes extensively about the fundamental nature of questions, and it has been explicated at length by Jaakko Hintikka, who in his later philosophy developed what he called Socratic epistemology. In the next couple of posts we will examine what this could mean for our view of the conscious mind, and perhaps also for our view of artificial intelligence.

I think it will allow us to say that the Turing test was the wrong way around: that the questions should have been asked by the human subject and the computer, and put to the test leader. It will also allow us to understand why human questioning is so surprisingly efficient, and why randomly generating queries is a horrible way to learn any subject. Human questions shape the field of knowledge in an interesting way, and we see this in the peculiar shape of human go games in the overall game space of go, but equally in the shape of human knowledge in chess.

*

When new models for learning are devised they are able to explore completely different parts of the problem space, parts you don’t easily reach with the kinds of questions that we have been asking. Questions have a penumbra of possible knowledge, and I suspect – although this will be good to explore further – that our ability to question is intrinsically human, and perhaps in some sense even biological. Here I would point to the excellent work of professor Joseph Jordania on questions and evolutionary theory, in his work Who Asked The First Question?.

This is an area of exploration that I have been mining for some time now with a close collaborator in professor Fredrik Stjernberg, and we are getting ready to sum up the first part of our work soon, I hope. It is not just theoretical, but suggests interesting possibilities like dialogical networks (rather than adversarial ones) and a science of possible categories of questions and ways to ask new questions, or better questions.

Weil’s paradox: intention and speech (Fake News Notes #8)

Simone Weil, in her curious book Need for Roots, notes the following on the necessity for freedom of opinion:

[…] it would be desirable to create an absolutely free reserve in the field of publication, but in such a way as for it to be understood that the works found therein did not pledge their authors in any way and contained no direct advice for readers. There it would be possible to find, set out in their full force, all the arguments in favour of bad causes. It would be an excellent and salutary thing for them to be so displayed. Anybody could there sing the praises of what he most condemns. It would be publicly recognized that the object of such works was not to define their authors’ attitudes vis-à-vis the problems of life, but to contribute, by preliminary researches, towards a complete and correct tabulation of data concerning each problem. The law would see to it that their publication did not involve any risk of whatever kind for the author.

Simone Weil, Need for Roots, p. 22

She is imagining here a sphere where anything can be said, any view expressed and explored, all data examined — and it is interesting that she mentions data, because she is aware that a part of the challenge is not just what is said, but what data is collected and shared on social problems. But she also recognizes that such a complete free space needs to be distinguished from the public sphere of persuasion and debate:

On the other hand, publications destined to influence what is called opinion, that is to say, in effect, the conduct of life, constitute acts and ought to be subjected to the same restrictions as are all acts. In other words, they should not cause unlawful harm of any kind to any human being, and above all, should never contain any denial, explicit or implicit, of the eternal obligations towards the human being, once these obligations have been solemnly recognized by law.

Simone Weil, Need for Roots, ibid.

This category – ”publications destined to influence what is called opinion” – she wants to treat differently. Here she wants the full machinery of not just law, but also morals, to apply. Then she notes, wryly one thinks, that this will present some legal challenges:

The distinction between the two fields, the one which is outside action and the one which forms part of action, is impossible to express on paper in juridical terminology. But that doesn’t prevent it from being a perfectly clear one.

Simone Weil, Need For Roots, ibid.

This captures, in a way, the challenge that faces platforms today. The inability to express this legally is acutely felt by most who study the area, and Weil’s articulation of the two competing interests – free thought and human responsibility – is clean and clear.

Now, the question is: can we find any other way to express this than in law? Are there technologies that could help us here? We could imagine several models.

One would be to develop a domain for the public sphere, for speech that intends to influence – to develop an ”on the record” mode for the flat information surfaces of the web. You could do this trivially by signing your statements in different ways, and statements could be signed by several different people as well – the ability to support a statement in a personal way is inherent in the often cited disclaimers on Twitter, where we are always told that RT does not equal endorsement. But the really interesting question is how we do endorse something, and if we can endorse statements and beliefs with different force.

Imagine a web where we could choose not just to publish, but publish irrevocably (this is surely connected to discussions around blockchain) and publish with the strength of not just one individual, but several. Imagine the idea that we could replicate editorial accountability not just in law, but by availing those who seek it of a mode of publishing, a technological way of asserting their accountability. That would allow us to take Weil’s clear distinction and turn it into a real one.
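
A minimal sketch of what ”publishing with the strength of several” could look like, using Ed25519 signatures from the third-party cryptography package. The idea of a declared endorsement strength — ”say” versus ”mean” — is my invention here, a toy rendering of Weil’s two spheres:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

statement = b"I assert this on the record, as something I mean."

def endorse(statement: bytes, strength: str):
    """Sign the statement together with a declared endorsement strength.
    The 'say'/'mean' vocabulary is invented, echoing Weil's two spheres."""
    key = Ed25519PrivateKey.generate()
    payload = statement + b"|strength=" + strength.encode()
    return key.public_key(), strength, key.sign(payload)

# Several people endorse the same statement, with different force:
endorsements = [endorse(statement, s) for s in ("say", "mean", "mean")]

for public_key, strength, signature in endorsements:
    payload = statement + b"|strength=" + strength.encode()
    public_key.verify(signature, payload)  # raises InvalidSignature if tampered with

print(len(endorsements), "verified endorsements")
```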

It would require, of course, that we accept that there is a lot of ”speech” – if we use that as the generic term for the first category of expression that Weil explores – that we disagree with. But we would be able to hold those that utter ”opinions” – the second category, speech intended to influence and change minds – accountable.

One solution to the issue of misinformation or disagreeable information or speech is to add dimensionality to the flat information surfaces we are interacting with today.

Lessons from Lucy Kellaway

I have been following, with increasing interest, Lucy Kellaway’s second career as a teacher, and the movement she has started around a second career aimed at giving back. It makes a lot of sense. In her latest column she muses on what happens with status as you change from high-power jobs to become a teacher, and she notes that it depends on if you derive your sense of self-worth from external or internal sources. Perhaps, she argues, older people can drop the need for external validation and instead build their sense of self-worth on their own evaluation of themselves.

As I tread closer to my 50s, I find I think more and more about what it is that I want to spend the next 10-20 years doing and how I want to approach them. It is not a simple question, and I like my current work – but there is something intriguing in the notion of a second career. If health and circumstance allow I think it could be worthwhile exploring options and ideas around at least a project or some kind of work that would be different from what I have done so far.

We are all after all just experiments in living, so maybe we should embrace that more. Now, this is a metacomment, but I wanted to make a note of these thoughts to make sure that I come back to them and perhaps even hold myself accountable for thinking this through properly. Sometimes we need to write things down to seed a change. In due time, without any hurry, but rigorously and with a certain slowness.

What is your cathedral?

Time is a funny thing, and the perspectives that you can get if you shift time around are extraordinarily valuable. Take a simple example: not long ago it was common to engage in building things that would take more than one generation to finish – giant houses, cathedrals and organizations. Today we barely engage in projects that take longer than a year – in fact, that seems long to some people. A three month project, a three week sprint is preferable.

And there is some truth to this. Slicing time finely is a way to ensure that progress is made – even in very long projects. But the curious effect we are witnessing today where the slicing of time into finer and finer moments also shortens the horizons of our projects seems unfortunate.

Sir Martin Rees recently gave a talk at the Long Now Foundation where one of the themes he mused on was this. He offered a theory for why we find ourselves in this state, and the theory was this: the pace of change is such that it makes no sense to undertake very long projects. We can build cathedrals in a year if we want to, and the more powerful our technology becomes the faster we will be able to do so. The extreme case? Starting to build a cathedral in an age where you know that within a short time frame – years – you will be able to 3D-print one quickly and at low cost makes no sense — better then to wait for the technology to reach a stage where it can solve the problem for you.

If we dig here we find a fundamental observation:

(i) In a society where technology develops fast it always makes sense to ask whether the time t1 it takes to create something now is greater than the waiting time t2 plus the much shorter build time t3 that waiting buys you – that is, whether t1 > t2 + t3.

If you want to construct something that would take 5 years to build, but think you will be able to build it in two years if you wait one year – well, since 1 + 2 < 5, the rational thing to do is simply to wait and then do it – right?
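
The observation condenses to a one-line decision rule; here it is with the numbers from the example:

```python
def build_now(t1: float, t2: float, t3: float) -> bool:
    """Build now only if that finishes before waiting t2 and then building in t3."""
    return t1 < t2 + t3

# The example above: 5 years to build now, or wait 1 year and then build in 2.
print(build_now(t1=5, t2=1, t3=2))  # False: waiting finishes in 3 years, not 5
```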

That sentiment or feeling may be a driving factor, as Sir Martin argues, behind the collapse of our horizons to short-term windows. But it seems also to be something that potentially excludes us from the experience of being a part of something greater that will be finished not with you, but by generations to come.

The horizon of your work matters. It is fine to be ”productive” in the sense that you finalize a lot of things, but maybe it would also be meaningful and interesting to have a Cathedral-project. Something you engage in that will live on beyond you, that will take a hundred or a thousand years to complete, if it is completed at all.

We have far too few such projects today. Arguably science is such a practice – but it is not a project. Think about it: if you were to start such a project or find one — what would it be? The Long Now Foundation has certainly found such a project in its clock, but that remains one of the few examples of ”cathedral”-projects today (Sagrada Familia is also a good example – it is under way and is a proper cathedral, but we cannot all build cathedrals proper).

Books: Semiosis by Sue Burke

Just finished this excellent and surprising science fiction book. It explores several different themes – our ability to start anew on a new planet, our inherent nature, our relationship to nature and plants (!) and the growing suspicion that we are always doing someone else’s bidding. It is also beautifully written, with living characters and original ideas.

One of the themes that will stay with me is how nature always plays a dominance game, and that the Darwinian struggle in some way is a ground truth that we have to understand and relate to. I have always felt somewhat uneasy with that conclusion, but I think it ultimately is because there is a mono-semiosis assumption there: all things must be interpreted in light of this fact. They must not, and Burke highlights how dominance strategies may evolve into altruistic strategies, almost in an emergent fashion. I found that striking, and important.

Overall, we should resist the notion that there are ground truths that are more true than other things; truth is a coherence space of beliefs and interpretations. Not in a postmodern way, but in a much more complicated way — this is why I often return to the Wittgensteinian notion of a ”form of life”. Only within that can sense be made of anything.

(Is this not also then a ”ground truth”? You could make that argument I suppose, but at some point you just reach not truths but the event horizon of axiomatic necessity. We are not infinite and cannot extend reason infinitely).

So – a recommended read, and an interesting set of issues and questions.

Computational vs Biological Thinking (Man / Machine XII)

Our study of thinking has so far been characterised by a need to formalize thinking. Ever since Boole’s ”Laws of Thought” the underlying assumption and metaphor for thinking has been mathematical or physical – even mechanical, and always binary. Logic has been elevated to the position of pure thought, and we have even succumbed to thinking that if we deviate from logic or mathematics in our thinking, then that is a sign that our thinking is flawed and biased.

There is great value to this line of study and investigation. It allows us to test our own thinking in a model and evaluate it from the perspective of a formal model for thinking. But there is also a risk associated with this project, a risk that may become more troubling as our surrounding world becomes more complex, and it is this: that we neglect the study of biological thinking.

One way of framing this problem is to say that we have two different models of thinking: computational and biological. The computational is mathematical and follows the rules of logic – and the biological is different: it forces us to ask things about how we think that are assumed in computational thinking.

Let’s take a very simple example – the so-called conjunction fallacy. The simplest rendition of this fallacy is a case often called ”Linda the bank teller”.

This is the standard case:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

Linda is a bank teller.

Linda is a bank teller and is active in the feminist movement.

https://en.wikipedia.org/wiki/Conjunction_fallacy

What computational thinking tells us is that the first proposition is always at least as probable as the second. It follows from the fact that the probability p is always at least as large as the probability p × q, and strictly larger whenever q is less than 1 (and p greater than 0).

Yet a surprising number of people seem to think that it is more likely that Linda is a bank teller and active in the feminist movement. Are they wrong? Or are they just thinking in a different mode?

We could argue that they are simply chunking the world differently. The assumption underlying computational thinking is that it is possible to formalize the world into single statement propositions and that these formalizations are obvious. We thus take the second statement to be a compound statement – p AND q – and so we end up saying that it is necessarily less probable than just p. But we could challenge that and simply say that the second proposition is as elementary as the first.

What is at stake here is the idea of atomistic propositions or elementary statements. Underlying the idea of formalized propositions is the idea that there is a hierarchy of statements or propositions starting from ”single fact”-propositions like ”Linda is a bank teller” and moving on to more complex compound propositions like ”Linda is a bank teller AND active in the feminist movement”.

Computational thinking chunks the world this way, but biological thinking does not. One way to think about it is to say that for computational thinking a proposition is a statement about the state of affairs in the world for a single variable, whereas for biological thinking it is a statement about the state of affairs for multiple related variables that are neither separable nor possible to chunk into individuals.

What sets up the state space we are asked to predict is the premises, and they define that state space as one that contains facts about someone’s activism. The premises determine the chunking of the state space, and the proposition ”Linda is a bank teller and active in the feminist movement” is a singular, elementary proposition in the state space set up by the premises — not a compound statement.

What we must challenge here is the idea that chunking state spaces into elementary propositions is the same as chunking them into the smallest possible propositions. For computational thinking this holds true – but not for biological thinking.

The result of this line of arguing is intriguing: it suggests that what is commonly identified as a bias here is in fact just a bias if you assume that computational thinking is the ideal to which we are all to be held — but that is itself a value judgment. Why is one way of chunking the state space better than another?

Another version of this argument is to say that the premises set up a proposition chunk that contains a statement about activism, so that the suppressed second part of ”Linda is a bank teller” is ”and NOT active in the feminist movement” – and that part cannot be excluded. That you do not write it out does not mean that the chunk does not contain it; the premises set that up as the natural chunking of the state space we are asked to predict.
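
Enumerating a toy joint distribution makes the two chunkings visible side by side. The probabilities are invented, but they respect the logic: the conjunction can never beat the marginal, yet within the cells the premises make salient, ”teller and feminist” beats ”teller and NOT feminist”:

```python
# Invented joint probabilities over the state space the premises set up:
joint = {
    ("teller", "feminist"): 0.04,
    ("teller", "not_feminist"): 0.01,
    ("not_teller", "feminist"): 0.80,
    ("not_teller", "not_feminist"): 0.15,
}

# Computational chunking: marginal vs conjunction.
p_teller = sum(v for (job, _), v in joint.items() if job == "teller")  # 0.05
p_teller_and_feminist = joint[("teller", "feminist")]                  # 0.04
assert p_teller >= p_teller_and_feminist  # the conjunction can never win here

# Biological chunking: compare the elementary cells the premises make salient.
print(joint[("teller", "feminist")] > joint[("teller", "not_feminist")])  # True
```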

The real failure, then, is to assume that ”Linda is a bank teller” is the most probable statement – and that is not a failure of bias as such, but an interesting kind of thinking-frame failure: the inability to move away from the computational thinking instilled through study and application.

It is well known that economists become more rational than others – that they are infected with mathematical rationality through study. Maybe there is a larger distortion in psychology, where tests are infected with computational thinking? Are there other biases that are just examples of being unable to move away from the biological frame of thinking?

Digital legal persons? Fragments (Man / Machine XI and Identity / Privacy III)

The following are notes ahead of a panel discussion this afternoon, where we will discuss the need for a legal structure for digital persons in the wake of the general discussion of artificial intelligence. 

The idea of a digital assistant seems to suggest a world in which we will see new legal actors. These actors will buy, order, negotiate and represent us in different ways, and so will have a massive impact on the emerging legal landscape. How do we approach this in the best possible way?

One strawman suggestion would be to propose a new legal construct in addition to natural and legal persons, people and companies, and introduce a new legal category for digital persons. The construct could be used to answer questions like:

  • What actions can a digital person perform on behalf of another person and how is this defined in a structured way?
  • How is the responsibility of the digital person divided across the four Aristotelian causes? Hardware error, software error, coder error and objective error all seem to point to different responsible actors behind the digital person. Hardware manufacturers would be responsible for malfunctions, software producers for errors in the software, and coders for errors that fall outside the scope of the software company’s responsibility — finally, the one asking the assistant to perform a task would be responsible for clearly defining that task and its objective. (A minimal sketch of such a mapping follows this list.)
  • In n-person interactions between digital persons with complex failures, who is then responsible?
  • Is there a preference for human / digital person responsibility?
  • What legal rights and legal capacities does a digital person have? This one may still seem to belong in the realm of science fiction – but remember that legal rights can also include the right to incur a debt on behalf of a non-identified actor, and we may well see digital persons that perform institutional tasks rather than just representative tasks.
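
As promised above, here is a minimal sketch of how the division of responsibility across failure modes might be given a structured form. The category names and the mapping are my own illustrative assumptions, not a proposal drawn from any existing doctrine:

```python
# Hypothetical sketch only: mapping the four failure modes discussed
# above to a primarily responsible actor. The category names are
# illustrative, not drawn from any existing legal framework.
RESPONSIBILITY = {
    "hardware_error":  "hardware_manufacturer",  # device malfunction
    "software_error":  "software_producer",      # defect in the shipped software
    "coder_error":     "individual_coder",       # error outside the producer's scope
    "objective_error": "principal",              # badly specified task or objective
}

def responsible_actor(failure_mode: str) -> str:
    """Return the primarily responsible actor for a given failure mode."""
    return RESPONSIBILITY.get(failure_mode, "undetermined")

print(responsible_actor("objective_error"))  # -> principal
```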

There are multiple other questions here as well, that would need to be examined more closely. Now, there are also questions that can be raised about this idea, and that seem to complicate things somewhat. Here are a few of the questions that occur to me.

Dan Dennett has pointed out that one challenge with artificial intelligence is that we are building systems that have amazing competence without the corresponding comprehension. Is comprehension not a prerequisite for legal capacity and legal rights? Perhaps not, but we would do well to examine the nature of legal persons – of companies – when we dig deeper into the need for digital persons in law.

What is a company? It is a legal entity defined by a founding document of some kind, with a set of responsible natural persons clearly identified under the charter and operations of that company. In a sense that makes it a piece of software. A legal person, as identified today, is at least an information processing system with human elements. It has no comprehension as such (in fact, legal persons are reminiscent of Searle’s Chinese room: they can act intelligently without us being able to locate the intelligence anywhere specific in the organization). So – maybe we could say that the law already recognizes algorithmic persons, because that is exactly what a legal entity like a company is.

So, you can have legal rights and legal capacity based on a system of significant competence but without individual comprehension. The comprehension in the company sits in the specific institutions where responsibility is located, e.g. the board. The company is held responsible for its actions through holding the board responsible, and the board is made up of natural persons – so maybe we could say that legal persons have derived legal rights, responsibilities and capacities?

Perhaps, but it is not crystal clear. In the US there is an evolving notion of corporate personhood that actually situates the rights and responsibilities within the corporation as such, and affords it constitutional protection. At the center of this debate over the last few years has been the issue of campaign finance, and Citizens United.

At this point it seems we could suggest that the easiest way to deal with the issue of digital persons would be to simply incorporate digital assistants and AIs as they take on more and more complex tasks. Doing this would also allow for existing insurance schemes to adapt and develop around digital persons, and would resolve many issues by ”borrowing” from the received case law.

Questions around free expression for digital assistants would be resolved by reference to Citizens United, for example, in the US. Now, let’s be clear: this would be tricky. In fact, it would arguably mean that incorporated bot networks had free speech rights, something that flies in the face of how we have viewed election integrity and fake news. But incorporation would also place duties on these digital persons in the shape of economic reporting, transparency and the possibility of legal dissolution if the digital persons in question behaved illegally. Turning digital persons into property would also allow for a market in experienced neural networks, in a way that could be intriguing to examine more closely.

An interesting task, here, would also be to examine how rights – such as privacy – would apply to these new corporations. Privacy, purely from an instrumental perspective, would be important for a digital person to be able to conceal certain facts and patterns about itself, retaining the ability to act freely and negotiate. Is there, then, such a thing as digital privacy that is distinct from natural privacy?

This, then, is perhaps a track worth exploring more – knowing full well the complexities it seems to imply (not least the proliferation of legal persons and what that would do to existing institutional frameworks).

Another, separate, track of investigation would be to look at a different concept – digital agency. Here we would not frame the assistants as ”persons” at all, admitting that that framing flows from analogy rather than from any closer analysis. When we speak of artificial intelligence as a separate thing, as some entity, we are lazily following along with a series of unchallenged assumptions. The more realistic scenarios are all about augmented intelligence, and so about an extended penumbra of digital agency on top of our own human agency – and the real question then becomes how we integrate that extended agency into our analysis of contract law, tort law and criminal law.

There is – we would say – no such thing as a separate digital person, but just a person with augmented agency, and the better analysis would be to examine how that can be represented well in legal analysis. This is no small task, however, since a more and more networked agency dissolves the idea of legal personhood to a large degree, in a way that is philosophically interesting.

Much of the legal system has required the identification of a responsible individual. Where no such individual can be identified, no one has been held responsible, even if it is quite possible to say that there is a class of people, or a network, that carries distributed responsibility. We have, for classical liberal reasons, been hesitant to accept any criminal judgment based on joint responsibility in cases where the defendants identify each other as the real criminal. There are many different philosophical questions that need to be examined here – starting with the differences between augmented agency, digital agency, individual agency, networked agency, collective agency and similar concepts. Other issues revolve around whether we believe we can pulverize legal rights and responsibility and say that someone is 0.5451 responsible for a bad economic decision. A distribution of responsibility that equates to the probability that you should have caught the failure, multiplied by the cost for you to do so, would introduce an ultra-rational approach to legal responsibility – perhaps fairer from an economic standpoint, but more questionable in criminal cases.
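
Taken at face value, that pulverized-responsibility formula (the probability that an actor should have caught the failure, multiplied by that actor’s cost of doing so) could be sketched like this. The actors, the numbers and the normalization step are all invented for illustration:

```python
# Illustrative sketch only: "pulverized" responsibility shares, computed
# as (probability the actor should have caught the failure) x (that
# actor's cost of catching it), normalized to sum to 1. All numbers are
# invented, and the formula is simply the text's suggestion taken at
# face value.
actors = {
    # actor: (probability of catching the error, cost of catching it)
    "board_member": (0.90, 100.0),
    "auditor":      (0.70, 250.0),
    "assistant_ai": (0.99, 5.0),
}

raw = {name: prob * cost for name, (prob, cost) in actors.items()}
total = sum(raw.values())

for name, score in raw.items():
    print(f"{name} is {score / total:.4f} responsible")
```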

And where an entire network has failed a young person subsequently caught for a crime – could one sentence the whole network? Are there cases where we are all somewhat responsible, because of actions or inactions? The dissolution of agency raises questions an order of magnitude more complex than those raised by simply introducing a new kind of person, but it is still an intriguing avenue to explore.

As the law of artificial intelligence evolves, it is also interesting to take its endpoint into account. If we assume that we will one day reach artificial general intelligence, then what we will most likely have done is create something towards which we have what Wittgenstein called an attitude towards a soul. At that point, any such new entities likely are, in a legal sense, human, if we interact with them as human. And then no legal change at all is needed. So what do we say about the intermediate stages and steps, and the need for a legal evolution that ultimately – we all recognize – will just bring us back to where we are today?

 

The free will to make slightly worse choices (Man / Machine XI)

In his chapter on intellectronics – his word for what most closely resembles artificial intelligence – Stanislaw Lem suggests an insidious way in which the machine could take over. It would not be, he says, because it wants to terrorize us, but more likely because it will try to be helpful. Lem develops the idea of the control problem, and the optimization problem, decades before they were rediscovered by Nick Bostrom and others, and he runs through the many different ways in which a benevolent machine may manipulate us just in order to get better results for us.

This, however, is not the worst scenario. At the very end of the chapter, Lem suggests something much more interesting and – frankly – hilarious. He says that another, more credible, version of the machines taking over would look like this: we develop machines that are simply better at making decisions for us than we would be at making those very same decisions ourselves.

A simple example: your personal assistant can help you book travel, and knowing your preferences, and being able to weigh them against those of the rest of the family, the assistant has always booked top-notch vacations for you. Now, you crave your personal freedom, so you book the trip yourself, and naturally – since you lack the combinatorial intelligence of an AI – the result is worse. You did not enjoy it as much, and the restaurants were not as spot-on as they usually are. The bookstores you found were closed or not very interesting, and of the three museums you went to, only one really captured the whole family’s interest.

But you made your own decision. You exercised your free will. And what happens, says Lem, when that free will is nothing but the free will to make decisions that are always slightly worse than the ones the machine would have made for you? When your autonomy always comes at the cost of less pleasure? That – surmises Lem – would be a tyranny as insidious as any control environment or Orwellian surveillance state.

A truly intriguing thought, is it not?

*

As we examine it closer we may want to raise objections: we could say that making our own decisions, exercising our autonomy, in fact always means that we enjoy ourselves a little bit more – that there is utility in the choice itself – so we will never end up with a benevolent dictator machine. But does that ring true? Is it not rather the case that a lot of people feel there is real utility in not having to choose at all, as long as they feel they could have made a choice? Have we not seen sociological studies arguing that we live in a society that imposes so many choices on us that we all feel stressed by the plethora of alternatives?

What if the machine could tell you which breakfast cereal, out of the many hundreds on the supermarket shelf, will taste best to you and at the same time be healthy? Would it not be great not to have to choose?

Or is there value in self-sabotage that we are neglecting to take into account here? That thought – that there is value in making worse choices, not because we exercise our will, but because we do not like ourselves, and are happy to be unhappy – well, it seems a little stretched. For sure, there are people like this – but as a general rule I don’t find that argument credible.

Well, we could say, our preferences change so much that it is impossible for a machine to know what I will want tomorrow – so the risk is purely fictional. I am not so sure that is true. I would suggest we are much more patterned than we like to believe. We live, as Dr Ford in Westworld notes, in our little loops – just like his hosts. We are probably much more predictable than we would like to admit, in a large set – although not all – of cases. It is unlikely, admittedly, that a machine would be better at making life choices around love, work and career – these are choices in which it is hard to establish a pattern (in fact, we arguably only establish those patterns in retrospect, when we tell ourselves autobiographical stories about our lives).

There is also the possibility that the misses would be so unpleasant that the hits would not matter. This is an interesting argument, and I think there is something to it. If you knew that your favorite candy tasted fantastic in 9 cases out of 10 and tasted like garbage every tenth time, without any chance of predicting when that would be, would you still eat it? Where would you draw the line? Every second piece of candy? 99 out of 100? There is such a thing as disappointment cost, and if the machine is right on the money in 999 out of 1,000 cases — is the miss such that we would stop using it, or prefer our own slightly worse choices? In the end – probably not.
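
The disappointment-cost argument is, at bottom, a small expected-utility calculation. A rough sketch, with utilities invented purely for illustration:

```python
# Back-of-the-envelope version of the disappointment-cost argument.
# All utilities are invented for illustration.
hit_rate = 0.999        # the machine is "right on the money" 999 times in 1,000
u_machine_hit = 1.0     # a spot-on recommendation
u_machine_miss = -5.0   # a rare but memorable disappointment
u_own_choice = 0.8      # our own, consistently slightly worse, pick

ev_machine = hit_rate * u_machine_hit + (1 - hit_rate) * u_machine_miss
ev_own = u_own_choice

print(f"machine: {ev_machine:.3f}, own choice: {ev_own:.3f}")
# machine: 0.994, own choice: 0.800 -- on these numbers the misses would
# have to be far more painful before we preferred our own choices.
```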

*

The free will to make slightly worse choices. That is one way in which our definition of humanity could change fundamentally in a society with thinking machines.

Stanislaw Lem, Herbert Simon and artificial intelligence as broad social technology project (Man / Machine X)

Why do we develop artificial intelligence? Is it merely out of an almost Faustian curiosity? Is it because of an innate megalomania that suggests that we could, if we wanted to, become gods? The debate today is rife with examples of risks and dangers, but the argument for the development of this technology is curiously weak.

Some argue that it will help us with medicine and improve diagnostics; others dutifully remind us of the productivity gains that could be unleashed by deploying these technologies in the right way; and some even suggest that there is a defensive aspect to the development of AI — if we do not develop it, we invite an international imbalance where the nations that have AI are akin to the nations that have nuclear capabilities: technologically superior and capable of dictating the fates of the countries that lag behind (some of this language is emerging in the ongoing geo-politicization of artificial intelligence between the US, Europe and China).

Things were different in the early days of AI, back in the 1960s, and the idea of artificial intelligence was actually more connected then with the idea of a social and technical project, a project that was a distinct response to a set of challenges that seemed increasingly serious to writers of that age. Two very different examples support this observation: Stanislaw Lem and Herbert Simon.

Simon, in attacking the challenge of information overload – or information wealth, as he prefers to call it – suggests that the only way we will be able to deal with the complexity and rich information produced in the information age is to invest in artificial intelligence. The purpose, to him, is to help us learn faster – and if we take into account Simon’s definition of learning, which is very close to classical Darwinian adaptation, we realize that for him the development of artificial intelligence was a way to ensure that we can continue to adapt to an information-rich environment.

Simon does not call this out, but it is easy to read between the lines and see what the alternative is: a growing inability to learn and adapt, generating increasing costs and vulnerabilities – the emergence of a truly brittle society that collapses under its own complexity.

Stanislaw Lem, the Polish science fiction author, suggests a very similar scenario (in his famously unread Summa Technologiae), but his is more general. We are, he argues, running out of scientists, and we need to ensure that we can continue to drive scientific progress, since the alternative is not stability but stagnation. He views the machine of progress as a homeostat that needs to be kept in constant operation in order to produce, in 30-year increments, a doubling of scientific insights and discoveries. Even if we force people to train as scientists, he argues, we will not be able to grow fast enough to meet the need for continued scientific progress.
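
Lem’s 30-year doubling requirement is easy to put in rough numbers, and the numbers show why he thinks training more scientists cannot keep up. A small back-of-the-envelope sketch (the figures follow mechanically from the doubling assumption, nothing more):

```python
# Lem's homeostat in rough numbers: a doubling of scientific output
# every 30 years implies a fixed compound growth rate. The figures
# below follow mechanically from that assumption.
annual_growth = 2 ** (1 / 30) - 1
print(f"implied annual growth: {annual_growth:.2%}")      # about 2.34%

# Sustained over three centuries, the requirement compounds to:
print(f"factor over 300 years: {2 ** (300 / 30):,.0f}x")  # 1,024x
```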

Both Lem and Simon suggest the same thing: we are facing a shortage of cognition, and we need to develop artificial cognition or stagnate as a society.

*

The idea of a scarcity or shortage of cognition as a driver of artificial intelligence is much more fundamental than any of the ideas we quickly reviewed in the beginning. What we find here is an existential threat against mankind, and a need to build a technological response. The lines of thought, the structure of the argument, here almost remind us of the environmental debate: we are exhausting a natural resource and we need innovation to help us continue to develop.

One could imagine an alternative: if we say that we are running out of cognition, we could argue that we need the analogue of energy efficiency. We need cognition efficiency. That view is not completely insane, and in a certain way it is what we are developing through stories, theories and methods in education. The connection with energy is also quite direct, since artificial intelligence will consume energy as it develops. A lot of research is currently being directed at the energy consumption of computation. There is a boundary condition here: a society that builds out its cognition through technology does so at the cost of energy at some level, and the cognition/energy yield will become absolutely essential. There is also a more philosophical point in all of this, and that is the question of renewable, sustainable cognition.

Cognition cost is a central element in understanding Simon’s and Lem’s challenge.

*

But is it true? Are we running out of cognition? How would you measure that? And is the answer really a technological one? What about educating and discovering the talent of the billions of people who today live in poverty, without any chance of an education to grow their cognitive abilities? If you have 100 dollars – what buys you the most cognition (all other moral issues aside): investing in development aid or in artificial intelligence?

*

Broad social technological projects are usually motivated by competition, not by environmental challenges. One reason – probably not the dominating one, but perhaps a contributing factor nonetheless – that climate change seems to inspire so little action in spite of the threat is this: there is no competition at all. The world is at stake, and so nothing is at stake relative to one another. The conclusion usually drawn from that observation is that we should all come together. What ends up happening is that we get weak engagement from all.

Strong social engagement in technological development – what are the examples? The race for nuclear weapons, the race for the moon. In one sense the early conception of the project to build artificial intelligence was as a global, non-competitive project. Has it slowly changed to become an analogue of the space race? The way China is now approaching the issue is, to some, reminiscent of the Manhattan Project. [1]

*

If we follow that analogy a bit further — what comes next? What is the equivalent of the moon landing for artificial intelligence? Surely not the Turing test – it has been passed multiple times in multiple versions, and has thereby lost much of its salience as a test of progress. What would the alternative be? Is there a new test?

One quickly realizes that it probably is not the emergence of an artificial general intelligence, since that seems to be decades away, and a questionable project at best. So what would be a moon-landing moment? Curing cancer (too broad – there are many kinds of cancer)? Eliminating crime (a scary target for many reasons)? Sustained economic growth powered by both capital investment strategies and the deployment of AI in industry?

An aside: far too often we talk about moonshots without talking about what the equivalent of the moon landing would be. It is one thing to shoot for the moon, another to walk on it. Defined outcomes matter.

*

Summing up: we could argue that artificial intelligence was conceived of, early on, as a broad social project responding to a shortage of cognition. It then lost that narrative, and today it is becoming more and more enmeshed in a geopolitical, competitive narrative. That will likely increase the speed at which a narrow set of applications develops, but there is still no single moon-landing moment associated with the field that stands out as the object of competition between the US, the EU and China. But maybe we should expect the construction of such a moment in medicine, military affairs or economics? So far, admittedly, it is games that have provided the defining moments – tic-tac-toe, chess, go – but what is next? And if there is no single such moment, what does that mean for the social narrative, the speed of development and the evolution of the field?

 

[1] https://www.technologyreview.com/s/609038/chinas-ai-awakening/

Law, technology and time (Law and Time I)

I just got a copy of the latest Scandinavian Studies in Law, no. 65. I contributed a small piece on law, technology and time — examining how the different ways in which time is made available by technology change the demands on law and legislation. It is a first sketch of a very big area, and something I aim to dig deeper into. I am very grateful for the chance to start laying out my thoughts here, especially since it was on the occasion of celebrating that the Swedish Law and Informatics Research Institute is now 50 years young!

As I continue to think about this research project, I would like to explore things like ”long law” – contracts and legal rules that extend into the Long Now (−10,000 years to +10,000 years) – as well as different new modes of time: concurrent time, sequenced time, etc. There may also be connections here to the average age of legal entities, and to the changing nature of law in cities, corporations, foundations and other similar structures. I really like the idea of exploring time and law thoroughly, from a number of different angles.

Stay tuned.

A small note on cost and benefit

I have picked up Cass Sunstein’s latest book on cost/benefit trade-offs, and am enjoying it. But it seems to me that there is a fundamental problem with the framing. The model being put forward is one in which we straightforwardly calculate the costs and benefits of any choice and then make the right, informed and rational decision. Yet we know that this model breaks down in two significant cases – when the costs or the benefits become very large.

At that point, the probability is subsumed by the gravity of the cost or benefit and deemed unimportant. These decision spaces – let’s call them the ”rights” space and the ”risk” space – are spaces where we navigate in a mostly rule-based fashion, and where deontological, Kantian methods apply.

We do not calculate the benefit of sacrificing human lives, because people have a right to their own lives and the individual benefit of that is vast. We do not calculate the cost of a nuclear breakdown accurately, because if it happens it carries such a great potential cost. Even where the probability is minuscule and the expected cost and benefit could be calculated well, we don’t. Rationality breaks down at the event horizon of these two decision singularities.
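
One way to picture the two decision singularities is as a switch between two regimes: an ordinary expected-value calculus for everyday stakes, and a rule that fires once the stakes pass some threshold, regardless of probability. A minimal sketch, with the threshold chosen arbitrarily for illustration:

```python
# Minimal sketch of the two decision regimes described above. The
# threshold is chosen arbitrarily for illustration.
RIGHTS_OR_RISK_THRESHOLD = 1_000_000  # stake size, in arbitrary units

def evaluate(probability: float, stake: float) -> str:
    """Ordinary expected-value calculus -- unless the stake is vast."""
    if abs(stake) >= RIGHTS_OR_RISK_THRESHOLD:
        # The probability is "subsumed by the gravity of the cost or
        # benefit": we switch to a rule, whatever the probability is.
        return "rule-based: forbidden (or required) regardless of probability"
    return f"expected value: {probability * stake:.2f}"

print(evaluate(0.30, 100.0))     # everyday choice: calculate away
print(evaluate(1e-9, -1e12))     # minuscule probability, vast cost: the rule fires
```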

Now, you could argue that this is just a human thing, and that we need to get over it. Or you could say that this is a really interesting characteristic of decision space, and study it. I find that far fewer take the second approach, and this exposes an interesting trait: rationality blindness. A striving for rationality that leads to a blindness to human nature.

If we were to develop a philosophy of decisions, one thing we would need to do is show that not all decisions are the same – that there is a whole taxonomy of decisions that needs to be explicated, examined and explored. As this example shows, there are decisions that do not admit of a probability calculus in the normal way.

(Is this not Kahneman’s and Tversky’s project? No, it is in fact the opposite. Showing that the idea of decisional bias actually reveals a catalogue of different categories of decisions – not weaknesses in human rationality.)

Memory, Ricoeur, History, Identity, Privacy and Forgetting (Identity and Privacy II)

In the literature on memory it is almost mandatory to cite the curious case of the man who, after an accident, could remember no more than a few minutes at a time before resetting and forgetting everything again. He had retained long-term memory from before the accident, but lacked the ability to form any new long-term memories at all.

His was a tragic case, and it is impossible to read about it without being gripped by both a deep sorrow for the man and a fear that something like this could happen to someone close to us, or to ourselves. Memory is an essential part of identity.

The case also highlights a series of complexities in the concept of privacy that are interesting to consider more closely.

First, the obvious question is this: what does privacy mean for someone who has no long-term memory? There are the obvious answers – that he will still care about wearing clothes, that he will want to sleep in solitude, that there are conversations he will want to have with some and not others – but does the lack of any long-term memory change the concept of privacy?

What this question brings out, I think, is that privacy is not a state but a relationship. Not a new observation as such, but one often underestimated in the legal analysis of privacy-related problems. Privacy is a negotiation of narrative identity between individuals. That negotiation breaks down completely when one party has no long-term memory. We end up with a strange situation in which everyone around the person in question may feel that his or her privacy is being infringed upon, while no such infringement is felt or experienced by the subject himself. Privacy is, in this sense, perception.

This follows from our first observation: that identity is collective narration (that may be a pleonasm – how could narration be individual?) and that privacy is about the shaping of that story. When one lacks the ability to hold the story in memory, both identity and privacy fade out.

Second, the case raises an interesting question about privacy and time. We can bring it to a point and ask — how long is privacy? European legislation has a peculiar answer – it seems to hold that privacy belongs only to natural, living persons, and that death is a breaking point after which privacy no longer applies. But if there was ever a case for a right extending beyond the end of life, privacy is probably a good candidate. Should it be possible to reveal everything about an individual at the very moment of that person’s death? Why is death a relevant moment in determining the existence of the right at all? And what would a society look like that entertained eternal privacy? What shared history could such a society have?

We run into another aspect of privacy here – that it is limited by legitimate interests: journalism, art, literature. So in a very real sense, privacy cannot be used to protect against an unauthorized biography, or against infringements on the story we tell about ourselves. This, too, is peculiar; it seems to fly in the face of the realization that identity is story, and suggests that if anyone really tells a story about you through the established vehicles of storytelling, then you are defenseless from a privacy perspective. There is an inconsistency here, born out of the realization that storytelling may well be a value more important than privacy in our societies. That the value of history is greater than the value of privacy, and that control over the narrative ultimately has to give way to the transformation of individual memory into history.

Time, memory, identity and history. All of them are essential in the language game of privacy, and need to be explored more deeply. Ricoeur’s thinking is key here, and his exploration of these themes appears more and more as a prolegomenon to any serious discussion of privacy.

What has been written here has largely concerned the right to be forgotten, but that is just a narrow application of the body of thought Ricoeur has offered on these themes. So we will need to return to this anew in a later post.

The Narrated Self (Identity and Privacy I)

The discussions and debates about privacy are key to trust in the information society. Yet our understanding of the concept of privacy is still in need of further exploration. This short essay is an attempt to highlight one aspect of the concept that seems crucial, and a few observations about what we could conclude from studying it.

Privacy is not a concept that can be studied in isolation. It needs to be understood as a concept strongly related to identity. Wittgenstein notes that doubt is impossible to understand without having a clear concept of belief, since doubt as a concept is dependent on first believing something. You have to have believed something to be able to doubt something. 

The same applies for privacy. You have to have an identity in order to have privacy, and in order to have that privacy infringed upon in some way. Theories of identity, then, are key to theories of privacy. 

So far nothing new or surprising. As we then turn to theories of identity, we find that there are plenty to choose from. Here are a few, eclectically collected, qualities of identity that I think are rather basic.

1. Identity is not a noun but a verb; it exists not as a quality in itself but as a relationship with someone else. You find yourself strewn in the eyes of the Others, to paraphrase (badly) Heidegger. Your identity is constructed, changed and developed over time. A corollary of this is that if you were the last human in the universe you would have no identity. And you would not enjoy any privacy.

2. The means through which we create identity are simple, and were best laid out by philosopher Paul Ricoeur. We narrate our identity, we tell stories about ourselves, and others tell stories about us. That is how our identity is constituted. 

These two qualities then imply a few interesting observations about privacy. 

First, privacy is also relational: it is the negotiation of identity with different audiences and constituencies. At least this is how it has been. One of the key challenges with technology is that it flattens the identity landscape, unifying the islands of identity that you could previously enjoy. What was once a natural fragmentation of identity is flattened and clustered as the information sphere grows larger and information about us more prevalent. Our ability to tell different stories to different people almost disappears.

An aside: this observation – that privacy is the telling of different stories about ourselves – has led some economists, like Richard Posner, to state that privacy enables lying, and that transparency would therefore be preferable, since it would allow people to minimize risk. The flaw in the argument is that it assumes there is a single true identity, and that this identity is revealed in the flattening of the information space and the transparency it brings about. This is not necessarily true: there may not be any “true identity” in any meaningful sense. Just as there is no absolute privacy. An infringement of privacy is not so much revealing a truth about you as negating your ability, your autonomy, in telling stories about yourself.

Second, this means that any right to privacy is synonymous with a right to the narration of our identities. This is what several writers have observed when they have equated privacy and autonomy, I think, but the focus on autonomy easily devolves into a discussion about the autonomy of will, rather than the autonomy of identity narration. 

Third, a society with the strongest privacy protections would be one in which no one is allowed to narrate your identity other than yourself. It seems self-evident that this creates a tension with free expression in different ways, but it highlights the challenging and changing nature of privacy infringements in an age where everyone is telling stories about us on social media.

To sum up, then: privacy is a concept secondary to identity, and identity is best understood as the narratives of the self. Privacy then becomes the right to narrate yourself, to tell your own story. The political control and power over the stories we tell is a key problem in the study of the information society. One could even imagine a work written entirely focusing on the power over stories in a technological world, and such a work could encompass controversial content, fake news, hate speech, defamation, privacy and perhaps even copyright — we have here a conceptual model that allows us to understand and study our world from a slightly different vantage point. 

*

Sad songs (Notes on Culture I)

A cursory examination of the landscape of sad songs suggests that they fall into a number of categories: break-up songs, songs about missing someone, songs about falling apart — but the best ones probably mix all of these categories and are about the sudden loss of meaning. Think of ”Disintegration” by The Cure, and its despair:

[…]But I never said I would stay to the end
I knew I would leave you and fame isn’t everything
Screaming like this in the hope of sincerity
Screaming it’s over and over and over
I leave you with photographs, pictures of trickery
Stains on the carpet and stains on the memory
Songs about happiness murmured in dreams
When we both of us knew how the end always is
How the end always is

How the end always is
How the end always is
How the end always is
How the end always is

A good sad song allows for the complete collapse of concepts and truths around us, and captures that feeling of semantic uncertainty, our inability to assign meaning to what is happening, our lack of pattern. There is something there – the lack of patterns, the inability to make sense of the world, and the feeling that meaning is seeping away.

I think one of the best examples of this feeling in general – a kind of Weltschmerz – is Nine Inch Nails’ ”Right Where It Belongs”. Here the world is slipping away, the interpretations like claws on a smooth rock surface (this version is even scarier than the album one):

[…]What if all the world’s inside of your head?
Just creations of your own
Your devils and your gods all the living and the dead
And you really oughta know
You can live in this illusion
You can choose to believe
You keep looking but you can’t find the woods
While you’re hiding in the trees
What if everything around you
Isn’t quite as it seems?
What if all the world you used to know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see? 

The calmness with which the lyrics are delivered, and the understated use of questions, makes the doubt all the more personal and close. As the song slides into the last verse it comes closer, drowned in the noise of a concert in the background, and we are invited to share the doubt carefully constructed throughout the song.

A variation on this theme of uncertainty, but brought home to a much more personal setting and therefore so much worse in a sense, is found in The National’s ”About Today” (this version is perhaps the best one – but beware, it is 8 minutes long). The lyrics sketch out, in the darkest possible way, the uncertainty – and it is a lack of certainty about exactly what the title says: about today. What happened, how it will affect us all, what it means for the future. The breakup is there, but radiating from it are the cracks and fault lines throughout our lives:

Today
You were far away
And I
Didn’t ask you why
What could I say
I was far away
You just walked away
And I just watched you
What could I say
How close am I
To losing you
Tonight
You just close your eyes
And I just watch you
Slip away
How close am I
To losing you
Hey, are you awake
Yeah I’m right here
Well can I ask you
About today
How close am I
To losing you
How close am I
To losing 

The haunting drummer’s rhythm and drifting violin just add to the uncertainty, the first beginnings of fear in the way the singer almost doesn’t dare to ask, but murmurs the words.

There is a difference between sad songs and songs of sorrow that is hard to articulate, but it can be clearly discerned from some of Nick Cave’s works. His ”Push the sky away” is fundamentally a sad song:

[…]And if you feel you got everything you came for
If you got everything and you don’t want no more
You’ve got to just keep on pushing it
Keep on pushing it
Push the sky away

And some people say it’s just rock and roll
Ah but it gets you right down to your soul
You’ve got to just keep on pushing it
Keep on pushing it
Push the sky away

You’ve got to just keep on pushing it
Keep on pushing it
Push the sky away

This song is all the more horrible because it deals with a variation on the theme of meaninglessness – the feeling of being finished. All has been done. There is a certain kind of sadness that follows on completing complicated tasks or reaching one’s goals, and the sense one gets from this song is that for this one person that sadness has spilled over, and now everything seems finished. They got everything they came for. They don’t want no more.

The album that followed was the one Cave made after a horrendous personal tragedy, and I find it almost impossible to listen to – because those songs are not sad, they are filled with sorrow – and it is not that they are too private, it is just that the sorrow is so real that it cuts through. ”Girl in Amber” is one example.

Songs of sorrow are songs that seek to construct meaning; sad songs are about meaning slipping away. Songs of sorrow have thin strands of hope in them. Sad songs come from a point of hopelessness, of determinism. Songs of sorrow look backward, and sad songs look forward.

The grammar of sadness is fundamentally distinct from that of sorrow.

Artificial selves and artificial moods (Man / Machine IX)

Philosopher Galen Strawson challenges the idea that we have a cohesive, narrative self that lives in a structurally robust setting, and suggests that for many the self will be episodic at best, with no real experience of a continuous self at all. The discussion of the self – from a stream of moments, to a story, to deep identity – is relevant to any discussion of artificial general intelligence for a couple of reasons. Perhaps the most important one is that if we want to create something that is intelligent, or perhaps even conscious, we need to understand what in our human experience constitutes a flaw or a design inefficiency, and what is actually a necessary feature.

It is easy to suspect that a strong, narrative and cohesive self would be an advantage – and that we should aim to achieve that if we recreate man in machine. That, however, underestimates the value of change. If our self is fragmented, scattered and episodic, it can navigate a highly complex reality much better. A narrative self would have to spend a lot of energy integrating experiences and events into a schema in order to understand itself. An episodic and fragmented self just needs to build islands of self-understanding, and these don’t even need to be coherent with each other.

A narrative self would also be very brittle, unable to cope with changes that challenge the key elements and conflicts in the narrative governing self-understanding. Our selves seem able to absorb even the deepest conflicts and challenges in ways that are astounding and even seem somewhat upsetting. We associate identity with integrity, and something that lacks strong identity feels undisciplined, unprincipled. But again: that seems a mistake – the real integrity is in your ability to absorb and deal with an environment that is ultimately not narrative.

We have to make a distinction here. Narrative may not be a part of the structure of our internal selves, but that does not mean that it is useless or unimportant. One reason narrative is important – and why any AGI needs a strong capacity to create and manage narratives – is that narratives are tools, filters, through which we understand complexity. Narrative compresses information and reduces complexity in a way that allows us to navigate a world that is increasingly complex.

We end up, then, suspecting that what we need here is an intelligence that does not understand itself narratively, but can make sense of the world in polyphonic narratives that will both explain and organize that reality. Artificial narrativity and artificial self are challenges that are far from solved, and in some ways we seem to think that they will emerge naturally from simpler capacities that we can design.

This “threshold view” of AGI – where we accomplish the basic steps and the rest emerges from them – is just one model among many, and arguably needs to be both challenged and examined carefully. Vernor Vinge notes, in one of his Long Now talks, that one way in which we may fail to create AGI is by not being able to “put it all together”. Thin slices of human capacity, carefully optimized, may not gel into a general intelligence at all – and may not form the basis for capacities like our ability to narrate ourselves and our world.

Back to the self: what do we believe the self does? Dennett suggests that it is a part of a user illusion, just like the graphic icons on your computer desktop, an interface. Here, interestingly, Strawson lands in the other camp. He suggests that to believe that consciousness is an illusion is the “silliest” idea and argues forcefully for the existence of consciousness. That suggests a distinction between self and consciousness, or a complexity around the two concepts, that also is worth exploring.

If you believe in consciousness as a special quality (almost like a persistent musical note) but do not believe in anything but a fragmented self, and resist the idea of a narrated or narrative life – you’re stuck in an ambient atmosphere as your identity and anchor in experience. There is a there there, but it is going nowhere. While challenging, I find that an interesting thought – that we are stuck in a Stimmung, as Heidegger called it, a mood.

Self, mood, consciousness and narrative – there is no reason to think that any of these concepts can be reduced to constituent parts, or treated as secondary to other human mental capacities – so we should think hard about how to design and understand them as we continue to develop theories of the human mind. That emotions play a key part in learning (pain is the motivator) we already knew, but these subtler nuances and complexities of human existence are each as important. Creating artificial selves with artificial moods, capable of episodic and fragmented narratives through a persistent consciousness — that is the challenge if we are really interested in re-creating the human.

And, of course, at the end of the day that suggests that we should not focus on that, but on creating something else — well aware that we may want to design simpler versions of all of these in order to enhance the functionality of the technologies we design. Artificial Eros and Thanatos may ultimately turn out to be efficient software to allow robots to prioritize.

Douglas Adams, a deep thinker in these areas as in so many others, of course knew this as he designed Marvin, the Paranoid Android, and the moody elevators in his work. They are emotional robots with moods that make them more effective, and more dysfunctional, at the same time.

Just like the rest of us.

My dying machine (Man / Machine VIII)

Our view of death is probably key to exploring our view of the relationship between man and machine. Is death a defect, a disease to be cured, or is it a key component of our consciousness and a key feature of nature’s design of intelligence? It is in one sense a hopeless question, since we end up reducing it to things like ”do I want to die?” or ”do I want my loved ones to die?”, and the answer to both should be no, even if death may ultimately be a defensible aspect of the design of intelligence. Embracing death as a design limitation does not mean embracing one’s own death. In fact, any society that embraced individual death would quickly end. But it does not follow that you should also resist death in general.

Does this seem counter-intuitive? It really shouldn’t. We all embrace social mobility in society, although we realize that it goes two ways – some fall and others rise. That does not mean that we embrace the idea that we should ourselves move a lot socially in our lifetime — in fact, movement both up and down can be disruptive to a family, and so may actually be best avoided. We embrace a lot of social and biological functions without wanting to be at the receiving end of them, because we understand that they come with a systemic logic rather than being individually desirable.

So, the question should not be “do you want to die?”, but rather “do you think death serves a meaningful and important function in our forms of life?”. The latter question is still not easy to answer, but “memento mori” does focus the mind, and provides us with momentum and urgency that would otherwise perhaps not exist.

In literature and film the theme has been explored in interesting ways. In Iain M. Banks’ Culture novels people can live for as long as they want, and they do, but they live different lives, and eventually they run out of individual storage space for their memories, so they do not remember all of their lives. Are they then the same? After a couple of hundred years the old paradox of Theseus’ ship really starts to apply to human beings as well — if I exchange all of your memories, are you still you? In what sense?

In the recently released TV series Altered Carbon, death is seen as the great equalizer, and the meths – after the biblical figure Methuselah, who lived a very long life – are seen to degrade into inhuman deities that grow bored, and in that fertile boredom a particular evil grows that seeks sensation and the satisfaction of base desires at any cost. A version of this exists in Douglas Adams’ Hitchhiker trilogy, where Wowbagger the Infinitely Prolonged fights the boredom of infinite life with a unique project – he sets out to insult the universe, alphabetically.

Boredom, insanity – the projected consequences of immortality are usually the same. The conclusion seems to be that we lack the psychological constitution and strength to live forever. Does that mean that there are no beings that could? That we could not change and be curious and interested and morally much more efficient if we lived forever? That is a more interesting question — is it inherently impossible to be immortal and ethical?

The element of time in ethical decision making is generally understudied. In the famous trolley thought experiments the ethical decision maker has oodles of time to make decisions about life and death. In reality these decisions are made in split seconds, and generally we become Kantian when we have no time, acting on baseline moral principles. To be utilitarian requires, naturally and obviously, the time to make your utility calculus work out the way you want it to. Time should never be abstracted away from ethics in the way we often tend to do today (in fact, the answer to the question ”what is the ethical decision?” could vary as t varies in ”what is the ethical decision if you have t time?”).

But could you imagine time scales at which ethics cannot exist? What if you cut time up really thickly? Assume a being each of whose acts takes place once every hundred years – would it be able to act ethically? What would that even mean? The cycle of action does imply different kinds of ethics, at least, does it not? A cycle of action of a million years would be even more interesting, and harder to decipher with ethical tools. Perhaps ethics can only exist at a human timescale? If so – does infinite life and immortality count as a human timescale?

There is, from what my admittedly shallow explorations hint at, a lot of work in ethics on future generations and how we take them into account in our decisions. What if there were no future generations, or if it were a choice whether new generations appear at all? How would that affect the view of what we should do as ethical decision makers?

A lot of questions and no easy answers. What I am digging for here is probably even more extreme: the question of whether immortality and ethics are incompatible – whether death, or dying, is a prerequisite for acting ethically. I intuitively feel that this is probably right, but that is neither here nor there. When I outline this in my own head, the question I keep coming back to is what motivates action – why we act. Scarcity of time – death – seems to be a key motivator in decision making and creativity overall. When you abstract death away, it seems as if there no longer is an organizing, forcing function for decision making as a whole. Our decision making becomes more arbitrary and random.

Maybe the question here is actually one of the unit of meaning. Aristotle hints that a life can only be called happy or fulfilled once it is over, and judged good or bad only when the person who lived it has died. That may be where my intuition comes from – that a life that is not finished never acquires ethical completeness? It can always change, and the result is that we have to suspend judgment about the actions of the individual in question?

Ethics requires a beginning and an end. Anything that is infinite is also beyond ethical judgment and meaning. An ethical machine would have to be a dying machine.