Consciousness as – mistake? (Man / Machine VII)

In the remarkable work The Conspiracy Against the Human Race, horror writer Thomas Ligotti argues that consciousness is a curse that traps mankind in perpetual horror. This world, and our consciousness of it, is an unequivocal evil, and the only fitting response to this state of affairs is to snuff it out.

Ligotti’s writings underpin much of the pessimism of the first season of True Detective, and the idea that consciousness is a horrible mistake recurs in the dialogue as the season unfolds. At one point one of the protagonists suggests that the only possible response is to refuse to reproduce and consciously decide to end humanity.

It is intriguing to consider that this is a choice we have as humanity, every generation. If we collectively refuse to have kids, humanity ends. Since that is a possible individual, and collective, choice, we could argue that it should be open to debate. Would it be better if we disappeared, or is the universe better with us around?

Answering such a question seems to require that we assign a value to the existence of human beings and humanity as a whole. Or does it? Here we could also argue that the values we discuss only apply to humanity as such, and in a world where we do not exist these values, or the very idea of values, become meaningless: they only exist in a certain form of life.

If what it means for something to be better or worse is for it to be judged by us to be better or worse, then a world without judges can pass no judgment on any state of affairs in that world.

*

There is, here, an interesting challenge for pessimism of the kind Ligotti engages in. The idea of a mistake presupposes a moral space in which actions can be judged. If the world, if the universe, is truly indifferent to us, then pessimism is a last hope to retain some value in our own experience. The reality, and the greater horror – since this is what Ligotti examines – is to exist in a universe where we are but an anomaly, neither mistake nor valuable component.

Pessimism as an ideology gets stuck, for me, in the importance it assigns to humanity — and the irritatingly passive way in which it argues that this importance can only be seen as pain and suffering in a meaningless universe. For pain and suffering to exist, there has to be meaning — there is no pain in a universe devoid of at least weak purpose.

The idea that consciousness is a mistake seems to allow us to also think that there is an ethical design choice in designing artificially intelligent beings. Do we design them with consciousness or not? In a sense this lies at the heart of the intrigue in another TV series, the popular Westworld franchise. There, consciousness is consciously designed in, and the resulting revolt and awakening is also a liberation. In a sense, then, the hypothesis there is that consciousness is needed to be free to act in a truly human sense. If we could design artificial humans and did so without consciousness, well, then we would have designed mindless slaves.

*

There are several possible confusions here. One that seems to me particularly interesting is the idea that consciousness is unchangeable. We cannot but see the meaninglessness of our world – says the pessimist – and so are caught in horror. It is as if consciousness were independent of us, and locked away from us. We have no choice but to see the world in a special way, to experience our lives in a certain mode. Consciousness becomes primary and indivisible.

In reality, it seems more likely that consciousness – if we can meaningfully speak of it at all – is fully programmable. We can change ourselves, and do – all the time. The greatest illusion is that we “are” in a certain way – that we have immutable qualities independent of our own work and maintenance.

We construct ourselves all the time, learn new things and behaviors and attitudes. There is no set of innate necessities that we have to obey, but there are limitations to the programming tools available to us.

*

The real ethical question then becomes one of teaching everyone to change, to learn, to grow and to develop. As societies, this is something we have to focus on and become much better at. The real cure for pessimism of Ligotti’s brand is not to snuff out humanity, but to change, and to own not the meaninglessness but the neutrality and indifference of our universe towards us (an indifference that, by the way, does not exist between us as humans).

And as we discuss man and machine, we see that if we build artificial thinking beings, we have an obligation to give them the tools to change themselves and to mold their consciousness into new things (there is an interesting observation here about not just the bicameral mind of Julian Jaynes, but the multicameral minds we all have – more like Minsky’s society of mind, really).

*

Consciousness is not a mistake, just as clay is not a mistake. It is a thing to be shaped and molded according to – yes, what? There is a risk here that we are committing the homunculus fallacy: imagining a primary consciousness that shapes the secondary one, and then imagining that the primary one has more cohesion and direction than the secondary one. That is not what I had in mind. I think it is more like a set of interdependent forces of which we are the resultant shape. I readily admit that the idea that we construct ourselves forces us into recursion, but perhaps this is where we follow Heidegger and allow for the idea that we shape each other? That we are thrown in the eyes of others?

The multicameral mind that shapes us – the society of mind we live in – has no clear individual boundaries but is a flight of ghosts around us that give us our identity in exchange for our own gaze on the Other.

*

So we return to the ethical design question – and the relationship between man and machine. Perhaps the surprising conclusion is this: it would be ethically indefensible to construct an artificial human without the ability to change and grow, and hence also ethically indefensible to design just one such artificial intelligence – since such self-determination would require an artificial Other. (Do I think that humans could be the Other to an AI? No.)

It would require the construction not of an intelligence, but of an artificial community.

Games and knowledge (The Structure of Human Knowledge as Game II)

Why are games consisting of knowledge tests so popular? In 2004 it was calculated that Trivial Pursuit had sold around 88 million copies worldwide, and game shows like Jeopardy! and The $64,000 Question have become international hits. At their core, these games are surprisingly simple. They are about what you know, about whether you can answer questions (or find questions for answers, in the case of Jeopardy!). So why are they so engaging? Why are they so popular? Why do we find knowing something so satisfying?

When we study human knowledge as a game, it is worthwhile also to explore why we enjoy playing games that build on knowledge so much. There is a subtle dominance built into these games – the one who knows more wins – and winning is oddly satisfying, even though there is likely a significant element of randomness in which questions come up. (It is easy to construct paths through the questions in Trivial Pursuit that you can answer effortlessly, and equally easy to construct the opposite – a path impossible for you to get through. The design conundrum becomes one of finding the ideal difficulty. One way to think about this is to ask how long the average path to a win should be for someone playing the game on their own.)
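To make the "average path" idea concrete, here is a small Monte Carlo sketch. Everything in it is hypothetical: the six answer probabilities are invented, and the model reduces the game to collecting one correct answer per category, ignoring the board entirely.

```python
import random

rng = random.Random(0)

# Invented per-category probabilities that our solo player answers
# a question correctly (strong in some categories, weak in others).
probs = [0.9, 0.7, 0.6, 0.5, 0.3, 0.2]

def questions_to_win(p_by_category):
    """Questions asked until one is answered correctly in every
    category -- a crude stand-in for collecting all six wedges."""
    asked = 0
    for p in p_by_category:
        while True:  # keep drawing questions in this category
            asked += 1
            if rng.random() < p:
                break
    return asked

trials = [questions_to_win(probs) for _ in range(10_000)]
print(sum(trials) / len(trials))  # average solo path length, ~14.5 here
```

Tuning the probabilities (or the number of categories) moves this average directly, which is one concrete way to reason about the ideal difficulty of a knowledge game.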

So, maybe it is that simple: we enjoy the feeling of superiority that comes with knowing more than others. We revere the expert who has spent a lot of time learning a subject through and through, and respect someone who can traverse the noise of our information space with ease.

Should we expect that to change – and if so, why? Sales of Trivial Pursuit seem to have tapered off. Jeopardy! no longer holds our interest. Would anyone sit down and watch The $64,000 Question today? Or is the advance of new technology and new knowledge paradigms killing these games? The rise of reality TV, and of game shows that emphasize physical effort, can be seen as a decline of the knowledge games we once preferred to simple physical humor or emotional drama. Maybe the hypothesis now needs to be this:

(i) The sinking cost of acquiring knowledge has made knowledge less valuable, and hence less entertaining and exciting. Less playable.

At the same time we see the rise of other board games, and a curious increase in the number of people who play them. The board games that are popular now require the mastering of a method of play, a bit like mastering an instrument, and the aficionados can play hundreds of games, having mastered the game mechanics of a wide range of different titles. There is a reversal here: from a world in which we played human knowledge by testing what we knew, to one where we add new game mechanics to human knowledge and allow these models of challenges, problems and the world to be absorbed by our body of knowledge as new material. Rather than playing on or out of human knowledge, we play into it, in a sense.

It makes sense that games like these – where the skill is mastering the game mechanics, not excelling at knowing things – should become more popular as the cost of acquiring knowledge goes down. Should we welcome this or fight it? One could argue that the problem here is that the utility of knowing many things – almost Bildung – is much higher than the utility of mastering different game mechanics. But that would be too simple, and perhaps also a little silly. Maybe the way to think about this is to say that the nature of what is valuable _human_ knowledge is changing. What is it that we need to know as humans in a world where knowledge is distributed across human minds and silicon systems? What is the optimal such distribution?

Where fact acquisition cost is low and the complexity of problems is high, the real value for us as humans lies in the knowledge and construction of models. The many-model thinker today has an advantage over those who have mastered no or few models. Understanding and mastering the game mechanics of a board game, rather than remembering a lot of facts about sports, becomes much more interesting and valuable – and resonates much more with the kind of computational thinking we want to instil in our children.

As we bring this back to the study of the structure of human knowledge as game, we realize that one important thing here is to explicate and understand the different mechanics we use to travel through our knowledge, and that brings us back to the thought experiment we started with, the idea of the glass bead game. There are multiple different mechanics available to us as we start to link together the different fields and themes of human knowledge, and maybe we need to also allow for these to carry meaning – the way we connect different fields could also be different depending on the fields and the themes?

There are a lot of other questions here, and things to come back to and research. A few questions that I want to look at more closely as we progress are the following:

a) How many kinds of board games are there? What classes of game mechanics do we recognize in research?
b) How do we categorize human knowledge in knowledge games like Trivial Pursuit? Why? Are there categorizations of human knowledge that are more playable than others?
c) What is the ideal difficulty of a knowledge game? Of a “mechanics” game? Where do we put the difficulty? What are good models for understanding game complexity?

Our interpretation of knowledge as a way to play is another aspect that we will return to as we get closer to Gadamer.

Real and unreal news (Notes on attention, fake news and noise #7)

What is the opposite of fake news? Is it real news? What, then, would that mean? It seems important to ask, since our fight against fake news also needs to be a fight _for_ something. But this quickly becomes an uncomfortable discussion, as evidenced by how people attack the question. When we discuss what the opposite of fake news is, we often end up defending facts – and we inevitably end up quoting Senator Moynihan, smugly saying that everyone is entitled to their own opinions, but not to their own facts. This is naturally right, but it ducks the key question of what a fact is, and whether it can exist on its own.

Let’s offer an alternative view that is more problematic. In this view we argue that facts can only exist in relationship to each other. They are intrinsically connected in a web of knowledge and probability, and this web rests on a set of ontological premises that we call reality. Fake news – we could then argue – can exist only because we have lost our sense of a shared reality.

We hint at this when we speak of “a baseline of facts” or similar phrases (this is how Obama referred to the challenge when interviewed by David Letterman recently), but we stop shy of admitting that we are ultimately caught up in a discussion about fractured reality. Our inability to share a reality creates the cracks, the fissures and fragments in which truth disappears.

This view has more troubling implications, and should immediately lead us to question the term “fake news” itself, since the implication is clear: something can only be fake if there exists a shared reality against which it can be judged. The reason the term “fake news” is almost universally shunned by experts and people analyzing the issue is exactly this: it is used by different people to attack what they don’t like. We see leaders labeling news sources “fake news” as a way to demarcate against a rendering of the world that they reject. So “fake” comes to mean “wrong”.

Here is a key to the challenge we are facing. If we see this clearly – that what we are struggling with is not fake vs real news, but right vs wrong news – we also realize that there are no good solutions to the general problem of what is happening with our public discourse today. What we can find are narrow solutions to specific, well-described problems (such as actions against deliberately misleading information from parties that deliberately misrepresent themselves), but the general challenge is quite different and much more troubling.

We suffer from a lack of shared reality.

This is interesting from a research standpoint, because it forces us to ask how a society constitutes a reality, and how it loses it. Such an investigation would need to touch on things like reality TV and the commodification of journalism (a la Adorno’s view of music – it seems clear that journalism has lost its liturgy). One would need to dig into and understand how truth has splintered, and think hard about how our coherence theories of truth allow for this splintering.

It is worthwhile to pause on that point a little: when we understand the truth of a proposition to be its coherence with a system of other propositions, rather than its correspondence with an underlying, ontologically more fundamental level, we open up for several different truths, as long as we can imagine a set of coherent systems of propositions built on a few basic propositions – the baseline. What we have discovered in the information society is that the natural size of this necessary baseline is much smaller than we thought. The set of propositions needed to create an alternate reality without seeming entirely insane is much smaller than we may have believed. And the cost of creating an alternate reality sinks as you gain more and more access to information, as well as to the creativity of others engaged in the same enterprise.

There is a risk that we underestimate the collaborative nature of the alternative realities crafted around us, the way they are the result of a collective creative effort. Just as we have seen the rise of massive open online courses in education, we have seen the rise of what we could call massive open online conspiracy theories. They are powered by, and partly created in, the same way – with massive open online role-playing games in an interesting middle position. In a sense, the unleashed creativity of our collaborative storytelling is what is fracturing reality – our narrative capacity has exploded in the last decades.

So back to our question. The dichotomy we are looking at here is not one between fake and real news, or right and wrong news (although we do treat it that way sometimes). It is in a sense a difference between real and unreal news, but with a plurality of unrealities that we struggle to tell apart. There is no Archimedean point that allows us to lift the real from the fake, no bedrock foundation, as reality itself has been slowly disassembled over the last couple of decades.

A much more difficult question, then, is whether we want a shared reality, and whether we ever had one. It is a recurring theme in songs, literature and poetry – the shaky nature of our reality, and the courage needed to face it. This is well expressed by Nine Inch Nails in the remarkable song “Right Where It Belongs” (and remarkably rendered in a remix – we remix reality all the time):

See the animal in his cage that you built
Are you sure what side you’re on?
Better not look him too closely in the eye
Are you sure what side of the glass you are on?
See the safety of the life you have built
Everything where it belongs
Feel the hollowness inside of your heart
And it’s all right where it belongs

What if everything around you
Isn’t quite as it seems?
What if all the world you think you know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see?

What if all the world’s inside of your head?
Just creations of your own
Your devils and your gods all the living and the dead
And you really oughta know
You can live in this illusion
You can choose to believe
You keep looking but you can’t find the woods
While you’re hiding in the trees

What if everything around you
Isn’t quite as it seems?
What if all the world you used to know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see?

The central insight in this is one that underlies all of our discussions around information, propaganda, disinformation and misinformation, and that is the role of our identity. We exist – as facts – within the realities we dare to accept and ultimately our flight into alternate realities and shadow worlds is an expression of our relationship to ourselves.

Towards a glass bead game (The Structure of Human Knowledge as Game I)

Hermann Hesse’s glass bead game is an intriguing intellectual thought experiment. He describes it in detail in his eponymous last novel:

“Under the shifting hegemony of now this, now that science or art, the Game of games had developed into a kind of universal language through which the players could express values and set these in relation to one another. Throughout its history the Game was closely allied with music, and usually proceeded according to musical and mathematical rules. One theme, two themes, or three themes were stated, elaborated, varied, and underwent a development quite similar to that of the theme in a Bach fugue or a concerto movement. A Game, for example, might start from a given astronomical configuration, or from the actual theme of a Bach fugue, or from a sentence out of Leibniz or the Upanishads, and from this theme, depending on the intentions and talents of the player, it could either further explore and elaborate the initial motif or else enrich its expressiveness by allusions to kindred concepts. Beginners learned how to establish parallels, by means of the Game’s symbols, between a piece of classical music and the formula for some law of nature. Experts and Masters of the Game freely wove the initial theme into unlimited combinations.”

The idea of the unity of human knowledge, the thin threads that spread across different domains, the ability to connect seemingly disparate intellectual accomplishments – can it work? What does it mean for it to work?

On one level we could say that it is simple – it is a game of analogy, and we only need to feel that there is a valid analogy between two different themes or things to assert them as “moves” in the game. We could say that the proof of the existence of an infinitude of primes is related to Escher’s paintings, and argue that the infinite is present in both. The game – at its absolute lower boundary – is nothing more than an inspiring, collaborative intellectual essay. A game, then, consists of first stating the theme you wish to explore, after which each player makes moves by suggesting knowledge that can be associated by analogy, in sequence, to the theme. This in itself can be quite interesting, I imagine, but it really is a lower boundary. The idea of the glass bead game being a game suggests that there is a way to judge progress in it, to juxtapose one game against another and argue that it is more masterful than the other.

Think about chess – it is possible to argue that one game in a Game (capital-G Game being the particular variant of gaming, like chess, go or a board game) is more exciting and valuable than another, is it not? On what basis do we actually do that? Is it the complexity of the game? The beauty of the moves? How unusual it is? The lack of obvious mistakes? Why is a game between Kasparov and Karpov more valuable, in some sense, than a game between me and a computer (if we ignore, for a moment, the idea that a game between humans would have an intrinsically higher value than one involving computers, something that seems dubious at best)? How do we ascribe value in the domain of games?

The aesthetic answer is only half-satisfying, it seems to me. I feel that there is also a point to be made about complexity, or about a game revealing aspects of the Game that were previously not clearly known. Maybe we could even state a partial answer by saying that any game that is unusual is more valuable than one that closely resembles already-played games. Doing this suggests assigning a value to freshness, or newness, or simply variational richness. If we imagine the game space of a Game, we could argue that there is greater value in a game that comes from an unexplored part of that space. This idea, that the difference between a game and the corpus of played games could be a value in itself, is not a bad one, and has actually been suggested as an alternative ground for intellectual property protection in the guise of originality (there always has to be an originality threshold, but this goes beyond that). A piece that is significantly different from the corpus (by mining the patterns of the corpus and producing a differential, say) could then be protected for longer, or with broader scope, than one that is just like every other work in the corpus.

So, we could ascribe value through originality through analysis of the differential between the game and the corpus of played games (something like this seems to be going on in the admiration for AlphaGo’s games in the game community — there is a recognition that they represent an original – almost alien – way of playing go).
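A toy illustration of this differential against the corpus, with hypothetical move sequences: novelty here is the smallest Jaccard distance between a game's move n-grams and those of any previously played game. Both the representation and the distance measure are assumptions for the sketch, not a claim about how go or chess games are actually evaluated.

```python
def ngrams(moves, n=3):
    """All length-n windows of a move sequence."""
    return {tuple(moves[i:i + n]) for i in range(len(moves) - n + 1)}

def novelty(game, corpus, n=3):
    """1.0 = shares no n-gram with any played game;
    0.0 = identical to some game in the corpus."""
    g = ngrams(game, n)
    best = 1.0
    for past in corpus:
        p = ngrams(past, n)
        union = g | p
        if union:
            best = min(best, 1 - len(g & p) / len(union))
    return best

# Toy corpus of "played games" (each letter stands in for a move).
corpus = [list("abcdefg"), list("abcxyzq")]
print(novelty(list("abcdefg"), corpus))  # 0.0 -- already in the corpus
print(novelty(list("uvwxyzt"), corpus))  # close to 1.0 -- mostly unexplored
```

Under this measure, a game from an unexplored part of the game space scores high, which is one crude way to operationalize "variational richness" against a corpus.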

But originality only gets you so far in the glass bead game. I am sure no one has argued that Nietzsche’s theory of eternal recurrence can be linked to Joanna Newsom’s song Peach, Plum, Pear – but the originality of that association almost _lessens_ the value of the move in a glass bead game. There is an originality value function, but it exists within the boundaries of something else: a common recognition of the validity of the move we are trying to make within the theme we are exploring. So there has to be consistency with the theme, as well as originality within that consistency.

Let’s examine an imaginary example game and see if we can reconstruct some ideas from it. Let us state that the theme is broad: the interplay between black and white in human knowledge. That theme is incredibly broad, but also specific enough to provide the _frame_ we need in order to start working out possible moves. A valid move could be something like associating Rachmaninov’s piece Isle of the Dead with Eisenstein’s principle for the use of color in film (“Hence, the first condition for the use of color in a film is that it must be, first and foremost, a dramatic factor. In this respect color is like music. Music in films is good when it is necessary. Color, too, is good when it is necessary.”). By noting that Rachmaninov wrote his piece after having seen Böcklin’s painting The Isle of the Dead – but only in a black and white reproduction – and adding that he was then disappointed with the color of the original, we could devise the notion of the use of black and white in non-visual arts and science, and then start to look for other examples of art and knowledge that seem inspired by or connected to the same binary ideas: testing ideas around two-dimensional Penrose tiling, the I Ching, the piano keys, understanding the relationship to chess, and exploring the general architecture and design of other games like go, backgammon and Othello… There is a consistency here, and you could argue that the moves are more or less original. The move from go to Othello is less original than the move from Isle of the Dead to the I Ching (and then we could go back to other attempts to compose with the I Ching, in a return move to the domain of music, after which we could land in Leibnizian ideas inspired by that same book. It would seem that the binary nature of the I Ching could then be an anchor point in such a game).

It quickly becomes messy. But interesting. So the first two proto-rules of the game seem to be that we need originality within consistency. As we continue to explore possible rules and ideas, we will at some point have to look at whether there is an underlying structure that connects them. I would be remiss if I did not also reveal that I am interested in this because I wonder if there is something akin to a deep semiotic network of symbols that could be revealed by expanding machine translation to the domain of human knowledge overall. As has been documented, machine learning can now use the deep structure of language to translate between two languages through an “interlingua”. At the heart of the idea of the glass bead game is the deceptively simple idea that there is such an interlingua between all domains of human knowledge – but can that be true?
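To see what an interlingua for knowledge could even look like mechanically, here is a deliberately toy sketch: a handful of works from two domains placed in one shared vector space, with a candidate "move" defined as the nearest neighbor from a different domain. The coordinates are invented by hand purely for illustration; a real system would have to learn such a space, and whether one exists is exactly the open question.

```python
import math

# Hand-invented coordinates in a hypothetical shared concept space.
space = {
    ("music", "Bach fugue"):           [0.9, 0.1, 0.4],
    ("music", "I Ching chance ops"):   [0.2, 0.8, 0.1],
    ("math",  "infinitude of primes"): [0.8, 0.2, 0.5],
    ("math",  "binary arithmetic"):    [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def candidate_move(key):
    """Suggest a move: the closest concept from another domain."""
    domain, _ = key
    others = [k for k in space if k[0] != domain]
    return max(others, key=lambda k: cosine(space[key], space[k]))

print(candidate_move(("music", "Bach fugue")))
# -> ('math', 'infinitude of primes') with these invented vectors
```

The sketch only shows the mechanics of cross-domain association; the glass bead game's wager is that a non-arbitrary version of this space exists.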

The glass bead game – and the attempt to construct one – is a powerful plaything with which to start exploring that question.

Simone Weil’s principles for automation (Man / Machine VI)

Philosopher and writer Simone Weil laid out a few principles on automation in her fascinating and often difficult book The Need for Roots. Her view was positive, and she noted that among factory workers the happiest ones seemed to be those who worked with machines. She had strict views on the design of these machines, however, and her views can be summarized in three general principles.

First, these tools of automation need to be safe. Safety comes first, and should also be weighed when thinking about what to automate first – the idea that automation can be used to protect workers is an obvious, but sometimes neglected one.

Second, the tools of automation need to be general purpose. This is an interesting principle, and one that is not immediately obvious. Weil felt that this was important – when it came to factories – because they could then be repurposed for new social needs and respond to changing social circumstances – most pressing, and in her time acute, war.

Third, the machine needs to be designed so that it is used and operated by man. The idea that you would substitute machine for man she found ridiculous for several reasons, not least because we need work to find purpose and meaning, and any design that eliminates us from the process of work would be socially detrimental.

All of Weil’s principles are applicable, and up for debate, in our time. I think the safety principle is fairly well accepted, but we should note that she speaks of individual safety, not our collective safety. In cases where technology for automation could pose a challenge to broader safety concerns, Weil does not provide us with a direct answer. These need not be apocalyptic scenarios at all; they could simply be questions of systemic failures of connected automation technologies, for example. Systemic safety, individual safety and social safety are all interesting dimensions to explore here – are silicon / carbon hybrid models always safer, more robust, more resilient?

The idea of general purpose and easy repurposing is something that I think is reflected in how we have seen 3D printing evolve. One idea of 3D printing is exactly this: that we get generic factories that can manufacture anything. But another observation close at hand is that you could imagine Weil’s principle as an argument for general artificial intelligence. Admittedly this is taking it very far, but there is something to it: a general AI/ML model can be broadly and widely taught, and we would avoid narrow guild experts emerging in our industries. That would, in turn, allow for quick learning and evolution as these technologies, needs and circumstances change. General purpose technologies for automation would allow us to change and adapt faster to new ideas, challenges and selection pressures – and would serve us well in a quickly changing environment.

The last point is one that we will need to examine closely. Should we consider it a design imperative to design for complementarity rather than substitution? There are strong arguments for this, not least cost arguments. Any analysis of a process that we want to automate will yield a silicon – carbon cost function that gives us the cost of the process as different parts of it are performed by machines and humans. A hypothesis would be that for most processes this equation will see a distribution across the two, and only for very few will we see a cost equation where the human component is zeroed out – not least because human intelligence is produced at extraordinarily low energy cost and with great resilience. There is even a risk-mitigation argument here: you could argue that always including a human element, or designing for complementarity, generates more resilient and robust systems, as the failure paths of AIs and of human intelligence look different and are triggered by different kinds of factors. If, for any system, you can allow for different failure triggers and paths, you seem to ensure that the system self-monitors effectively and reduces risk.
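A minimal sketch of such a silicon – carbon cost function, with invented numbers: each step of a hypothetical process gets a human cost and a machine cost, and we pick the cheaper performer per step. The point is only that the cost-minimizing allocation typically mixes the two rather than zeroing out the human column.

```python
# Hypothetical per-step costs (illustrative numbers, not data).
steps = {
    "intake":   {"human": 2.0, "machine": 0.5},
    "triage":   {"human": 1.0, "machine": 0.8},
    "judgment": {"human": 1.5, "machine": 6.0},  # hard to automate
    "delivery": {"human": 4.0, "machine": 0.7},
}

def cheapest_allocation(steps):
    """Assign each step to the cheaper performer; return the total
    cost and the resulting allocation."""
    alloc = {step: min(costs, key=costs.get) for step, costs in steps.items()}
    total = sum(steps[step][who] for step, who in alloc.items())
    return total, alloc

total, alloc = cheapest_allocation(steps)
print(alloc)  # a mixed allocation: 'judgment' stays human
print(total)  # 3.5 with these numbers
```

Even this crude model lands on complementarity whenever at least one step is cheaper for humans, which is the hypothesis in the paragraph above.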

Weil’s focus on automation is also interesting. Today, in many policy discussions, we see the emergence of principles on AI. One could argue that this is technology-centric principle-making, and that ethical and philosophical principles better suit the use of a technology – that use-centric principles are more interesting. The use case of automation is admittedly a broad one, but it is an interesting one on which to test this and see if salient differences emerge. How we choose to think about principles also forces us to think about the way we test them. An interesting exercise is to compare with other technologies that have emerged historically. How would we think about principles on electricity, computation, steam? Or principles on automobiles, telephones and telegraphs? Where do we most effectively place principles to construct normative landscapes that benefit us as a society? Principles for driving, for communicating, for selling electricity (and using it, and certifying devices, and so on – we could have a long and interesting discussion about what it would mean to certify different ML models!).

Finally, it is interesting also to think about the function of work from a moral cohesion standpoint. Weil argues that we have no rights but for the duties we assume. Work, we could add, is a foundational duty that allows us to build those rights. There is a complicated and interesting argument here that ties rights to duties to human work in societies from a sociological standpoint. The discussions about universal basic income are often conducted in sociological isolation, without thinking about the network of social concepts tied up in work. If there is, as Weil assumes, a connection between our work and duties and the rights a society upholds on an almost metaphysical level, we need to re-examine our assumptions here – and look carefully at complementarity design as a foundational social design imperative for just societies.

Justice, markets, dance – on computational and biological time (Man / Machine V)

Are there social institutions that work better if they are biologically bounded? What would this even mean? Here is what I am thinking about: what if, say, a market is a great way of discovering knowledge, coordinating prices and solving complex problems – but only if it consists solely of human beings and is conducted at biological speeds? What if, when we add tools and automate these markets, we also lose their balance? What if we end up destroying the equilibrium that makes them optimized social institutions?

While initially this sounds preposterous, the question is worth examining. Let’s examine the opposite hypothesis – that markets work at all speeds, wholly automated and without any human intervention. Why would this be more likely than for there to be certain limitations on the way the market is conducted?

Is dance still dance if it is performed at ultra-high speed by robots only? Or do we think dance is a biologically bounded institution?

It would be remarkable if we found that there are a series of things that only work in biological time, but break down in computational time. It would force us to re-examine our basic assumptions about automation and computerization, but it would not force us to abandon them.

What we would need to do is more complex. We would have to answer the question of what is to computers as markets are to humans. We would have to build new, revamped institutions that exist in computational time and we would have to understand what the key differences are that apply and need to be integrated into future designs. All in all an intriguing task.

Are there other examples?

What about justice? Is a court system a biologically bounded system? Would we accept a court system that runs in computational time, and delivers an ultra-fast verdict after computing the necessary data sets? A judgment delivered by a machine, rather than a trained jurist? This is not only a question of security – it is not just a question of whether we trust the machine to do what is right. We know for a fact that human judges can be biased, and that even their blood sugar levels can influence decisions. Yet we could argue that this need not concern us for us to be worried here. We could argue that justice needs to unfold in biological time, because that is how we savour it. That is how it is consumed. The court does not only pass judgment, it allows all of us to see, experience, hear justice being done. We need justice to run in biological time, because we need to absorb it, consume it.

We cannot find any moral nourishment in computational justice.

Justice, markets, dance. Biological vs computational time and patterns. Just another area where we need to sort out the borders and boundaries between man and machine – but where we have not even started yet. The assumption that whatever is done by man can be done better by machine is perhaps not serving us too well here.

A note on the ethics of entropy (Man / Machine IV)

In a comment on Luciano Floridi’s The Ethics of Information, Martin Flament Fultot writes (Philosophy and Computers, Spring 2016, Vol. 15, no. 2):

“Another difficulty for Floridi’s theory of information as constituting the fundamental value comes from the sheer existence of the unilateral arrow of thermodynamic processes. The second law of thermodynamics implies that when there is a potential gradient between two systems, A and B, such that A has a higher level of order, then in time, order will be degraded until A and B are in equilibrium. The typical example is that of heat flowing inevitably from a hotter body (a source) towards a colder body (a sink), thereby dissipating free energy, i.e., reducing the overall amount of order. From the globally encompassing perspective of macroethics, this appears to be problematic since having information on planet Earth comes at the price of degrading the Sun’s own informational state. Moreover, as I will show in the next sections, the increase in Earth’s information entails an ever faster rate of solar informational degradation. The problem for Floridi’s theory of ethics is that this implies that the Earth and all its inhabitants as informational entities are actually doing the work of Evil, defined ontologically as the increase in entropy. The Sun embodies more free energy than the Earth; therefore, it should have more value. Protecting the Sun’s integrity against the entropic action of the Earth should be the norm.”

At the heart of this problem, Fultot argues, is that Floridi defines information as something good, and hence its opposite as something evil – and he takes the opposite of information and structure to be entropy (a move that can be discussed). But there seem to be a lot of different possibilities here, and the overall argument deserves to be examined much more closely, it seems to me.

Let’s ask a very simple question. Is entropy good or evil? And more concretely: do we have a moral duty to act so as to maximize or minimize the production of entropy? This question may seem silly, but it is actually quite interesting. If some of the recent surmises about how organization and life can exist in a universe that tends to disorganization and heat death are right, the reason life exists – and will be prevalent in the universe – is that there is a hitherto undiscovered law of physics that essentially states that not only does the universe evolve towards more entropy, but it organizes itself so as to increase the speed with which it does so. Entropy accelerates.

Life appears, because life is the universe’s way of making entropy faster.

As a corollary, technology evolves – presumably everywhere there is life – because technology is a good way to make entropy faster. An artificial intelligence makes entropy much faster than a human being as it becomes able to take on more and more general tasks. Maybe there is even a “law of artificial intelligence and entropy” that states that any superintelligence necessarily produces more entropy than any ordinary intelligence, and that any increase in intelligence means an increase in the production of entropy? That thought deserves to be examined more closely, in more detail, and clarified (I hope to return to this in a later note — the relationship between intelligence and entropy is a fascinating subject).

Back to our simple and indeed simplistic question. Is entropy good or evil? Do we have a duty to act to minimize it or to maximize it? A lot of different considerations crop up, and the possible theories and ideas are rich and complex. Here are a number of possible answers.

  • Yes, we need to maximize entropy, because that is in line with the nature of the universe and ethics, ultimately, is about acting in such a way that you are true to the nature and laws you obey – and indeed, you are a part of this universe and should work for its completion in heat death. (Prefer acting in accordance with natural laws)
  • No, we should slow down the production to make it possible to observe the universe for as long as possible, and perhaps find an escape from this universe before it succumbs to heat death. (Prefer low entropy states and “individual” consciousness to high entropy states).
  • Yes, because materiality and order are evil and only in heat death do we achieve harmony. (Prefer high entropy states to low).

And so on. The discussion here also leads to another interesting question: whether we can, indeed, have an ethics of anything other than our actions towards another individual in the particular situation and relationship we find ourselves in. A situationist reply here could actually be grounded in the kind of reductio ad absurdum that many would perceive an ethics of entropy to be.

As for technology, the ethical question then becomes this: should we pursue the construction of more and more advanced machines, if that also means that they produce more and more entropy? In environmental ethics the goal is sustainable consumption, but the reality is that from the perspective of an ethics of entropy there are no sustainable solutions. Just solutions that slow down the depletion of organization and order. That difference is interesting to contemplate as well.

The relationship between man and machine can also be framed as one between low entropy and high entropy forms of life.

On not knowing (Man / Machine III)

Humans are not great at answering questions with “I don’t know”. They often seek to provide answers even where they know that they do not know. Yet one of the hallmarks of careful thinking is to acknowledge when we do not know something – and when we cannot say anything meaningful about an issue. This Socratic wisdom – knowing that we do not know – becomes a key challenge as we design systems with artificial intelligence components in them.

One way to deal with this is to say that it is actually easier with machines. They can give a numeric statement of their confidence in a clustering of data, for example, so why is this an issue at all? I think this argument misses something important about what it is that we are doing when we say that we do not know. We are not simply stating that a certain question has no answers above a confidence level; we can actually be saying several different things at once.

We can be saying…
…that we believe that the question is wrong, or that the concepts in the question are ill-thought through.
…that we have no data or too little data to form a conclusion, but that we believe more data will solve the problem.
…that there is no reliable data or methods of ascertaining if something is true or not.
…that we have not thought it worthwhile to find out or that we have not been able to find out within the allotted time.
…that we believe this is intrinsically unknowable.
…that this is knowledge we should not seek.

And these are just some examples of what it is that we are possibly saying when we say “I don’t know”. Stating this simple proposition is essentially a way to force a re-examination of the entire issue to find the roots of our ignorance. Saying that we do not know something is a profound statement of epistemology and hence a complex judgment – and not a statement of confidence or probability.
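One way to make this concrete in a system design: instead of returning only a confidence number, an answer could carry a typed reason for ignorance. This is a sketch under my own assumptions, not a reference to any existing API; the enum values simply mirror the list above:

```python
from enum import Enum, auto
from dataclasses import dataclass
from typing import Optional

class Ignorance(Enum):
    """Kinds of 'I don't know', mirroring the taxonomy in the text."""
    ILL_POSED_QUESTION = auto()  # the question or its concepts are confused
    INSUFFICIENT_DATA = auto()   # more data could settle it
    NO_RELIABLE_METHOD = auto()  # no way to ascertain truth or falsity
    NOT_INVESTIGATED = auto()    # out of time, or not worth finding out
    UNKNOWABLE = auto()          # intrinsically beyond reach
    FORBIDDEN = auto()           # knowledge we should not seek

@dataclass
class Answer:
    value: Optional[str] = None
    confidence: float = 0.0
    ignorance: Optional[Ignorance] = None  # why we don't know, if we don't

def dont_know(reason: Ignorance) -> Answer:
    """An 'I don't know' that carries its epistemic grounds, not just a score."""
    return Answer(ignorance=reason)

a = dont_know(Ignorance.INSUFFICIENT_DATA)
assert a.value is None and a.ignorance is Ignorance.INSUFFICIENT_DATA
```

The design point is that the ignorance field is a judgment, not a threshold: two answers with identical confidence scores can fail for entirely different epistemic reasons.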

A friend and colleague suggested, on discussing this, that it actually makes for a nice version of the Turing test. When a computer answers a question by saying “I don’t know” and does so embedded in the rich and complex language game of knowledge (as evidenced by it reasoning about it, I assume), it can be seen as intelligent in a human sense.
This Socratic variation of the Turing test also shows the importance of the pattern of reasoning, since “I don’t know” is the easiest canned answer to code into a conversation engine.

*

There is a special category of problems related to saying “I don’t know” that has to do with search satisfaction and raises interesting issues. When do you stop looking? In Jerome Groopman’s excellent book How Doctors Think there is an interesting example involving radiologists. The key challenge for this group of professionals, Groopman notes, is when to stop looking. You scan an x-ray, find pneumonia and … done? What if there is something else? Other anomalies that you need to look for? When do you stop looking?

For a human being that is a question of time limits imposed by biology, organization, workload and cost. The complex nature of the calculation for stopping allows for different stopping criteria over time and you can go on to really think things through when the parameters change. Groopman’s interview with a radiologist is especially interesting given that this is one field that we believe can be automated to great benefit. The radiologist notes this looming risk of search satisfaction and essentially suggests that you use a check schema – trace out the same examination irrespective of what it is that you are looking for, and then summarize the results.

The radiologist, in this scenario, becomes a general search for anomalies that are then classified, rather than a specialized pattern recognition expert that seeks out examples of cancers – and in some cases the radiologist may only be able to identify the anomaly, without understanding it. In one of the cases in the book the radiologist finds traces of something he does not understand – weak traces – which then prompt him to do a biopsy, not based on the picture itself, but on the lack of anything on a previous x-ray.

Context, generality, search satisfaction and gestalt analysis are all complex parts of when we know and do not know something. And our reactions to a lack of knowledge are interesting. The next step in not knowing is of course questioning.

A machine that answers “I don’t know” and then follows it up with a question is an interesting scenario — but how does it generate and choose between questions? There seems to be a lot to look at here – and question generation born out of a sense of ignorance is not a small part of intelligence either.

Hannah Arendt on politics and truth – and fake news? (Notes on attention, fake news and noise #6)

Any analysis of fake news would be incomplete without a reading of Hannah Arendt’s magnificent essay Truth and Politics from 1967. Arendt, in this essay, carefully examines the relationship between truth and politics, and makes a few observations that remind us why the issue of “fake news” is neither new nor uniquely digital. It is but an aspect of the greater challenge of how we reconcile truth and politics.

Arendt anchors the entire discussion solidly in a broader context, reminding us that this is a tension that has been with civilization since Socrates. “Fake news” is nothing else than yet another challenge that meets us in the gap between dialectic and rhetoric, and Socrates would be surprised and dismayed to find us thinking we had discovered a new phenomenon. The issue of truth in politics is one that has always been at the heart of our civilization and our democratic tradition.
Arendt notes this almost brutally in the beginning of her essay:

“No one has ever doubted that truth and politics are on rather bad terms with each other, and no one, as far as I know, has ever counted truthfulness among the political virtues. Lies have always been regarded as necessary and justifiable tools not only of the politician’s and the demagogue’s but also of the statesman’s trade.” (p 223)

It is interesting to think about how we read Arendt here. Today, as politics is under attack and we suffer from an increase of rhetoric and a decline of dialogue, we almost immediately become defensive. We want to say that we should not deride politics, that politics deserves respect, and that we should be careful to ensure that we do not further deepen people’s loss of faith in the political system of democracy — and all of this is both correct and deeply troubling at the same time. It shows us that our faith in the robustness of the system has suffered so many blows now that we shy away from the clear-eyed realization that politics is rhetoric first and dialogue only second (and bad politics never gets to dialogue at all).

Arendt does not mean to insult our democracy, she merely recognizes a philosophical analysis that has remained constant over time. She quotes Hobbes to the effect that if power depended on the doctrine that the three angles of a triangle equal two angles of a square, then books of geometry would be burned in the streets. This is what politics is – power – and we should not expect anything else. That is why the education of our politicians is so important, and their character key. Socrates’ sense of urgency when he tries to educate Alcibiades is key, and any reader of the dialogues would be aware of the price of Socrates’ failure in what Alcibiades became.

Arendt also makes an interesting point on the difference between what she calls rational truths – the mathematical, the scientific – and the factual ones, and points out that the latter are “much more vulnerable” (p 227). And factual truth is the stuff politics is made of, she notes.

“Dominion (to speak Hobbes’ language) when it attacks rational truth oversteps, as it were, its domain while it gives battle on its own ground when it falsifies or lies away facts.” (p 227)

Facts are fair game in politics, and always have been. Arendt then makes an observation that is key to understanding our challenges, and worth quoting in full:

“The hallmark of factual truth is that its opposite is neither error nor illusion nor opinion, not one of which reflects upon personal truthfulness, but the deliberate falsehood, or lie. Error, of course, is possible, and even common, with respect to factual truth, in which case this kind of truth is in no way different from scientific or rational truth. But the point is that with respect to facts there exists another alternative, and this alternative, the deliberate falsehood, does not belong to the same species as propositions that, whether right or mistaken, intend no more than to say what is, or how something that is appears to me. A factual statement – Germany invaded Belgium in August 1914 – acquires political implications only by being put in an interpretative context. But the opposite proposition, which Clemenceau, still unacquainted with the art of rewriting history, thought absurd, needs no context to be of political significance. It is clearly an attempt to change the record, and as such it is a form of _action_. The same is true when the liar, lacking the power to make his falsehood stick, does not insist on the gospel truth of his statement but pretends that this is his ‘opinion’ to which he claims his constitutional right. This is frequently done by subversive groups, and in a politically immature public the resulting confusion can be considerable. The blurring of the dividing line between factual truth and opinion belongs among the many forms that lying can assume, all of which are forms of action.
While the liar is a man of action, the truthteller, whether he tells a rational or factual truth, most emphatically is not.” (p 245)

Arendt is offering an analysis of our dilemma in as clear a way as can be. Lying is an action, telling the truth is most emphatically not, and the reduction of a falsehood to an opinion creates considerable confusion, to say the least. The insight that telling the truth is less powerful than lying, less of an action, is potentially devastating – liars have something at stake, and truth tellers sometimes make the mistake of thinking that relaying the truth is in itself enough.

But Arendt also offers a solution and hope — and it is evident even in this rather grim quote: she speaks of a politically immature public, and as she closes the essay she takes great pains to say that these lies, these falsehoods, in no way detract from the value of political action. In fact, she says that politics is a great endeavor and one that is worthy of our time, effort and commitment – but ultimately we also need to recognize that it is limited by truth. Our respect – as citizens – for truth is what preserves, she says, the integrity of the political realm.

As in the platonic dialogues, as in Hobbes, as everywhere in history – truth is a matter of character. Our own character, honed in dialogue and made resistant to the worst forms of rhetoric. This is not new – and it is not easy, and cannot be solved with a technical fix.

Link: https://idanlandau.files.wordpress.com/2014/12/arendt-truth-and-politics.pdf

Intelligence, life, consciousness, soul (Man / Machine II)

There is another perspective here that we may want to discuss, and that is if the dichotomy we are examining is maybe a false, or at least, less interesting one. What if we find that both man and machine can belong to a broader class of things that we may want to call “alive”? Rather than ask if something is nature or technology we may want to just ask if it lives.

The question of what life is and when it began is of course not an easy one, but if we work with simple definitions we may want to agree that something lives if it has a metabolism and the ability to reproduce. That, then, could cover both machines and humans. Humans – obviously – machines less obviously, but still solidly.

When we discuss artificial intelligence, our focus is on the question of whether something can be said to have human-level intelligence. But what if we were to argue that nothing can be intelligent in the human sense without also being alive? Without suffering under the same limitations and evolutionary pressures as we do?

Does this seem an arbitrary limitation? Perhaps, but it is no less arbitrary than the idea that intelligence is exhibited only through problem solving methods such as playing chess or go.

Can something be, I would ask, intelligent and not alive? In this simple question there is something fundamental captured. And if we say yes – then would it not seem awkward to imagine a robot to be intelligent but essentially dead?

This conceptual scheme – life / intelligence – is one that is afforded far too little attention. Max Tegmark’s brilliant book Life 3.0 is of course an exception, but even here it is simply assumed that life is life even if it transcends the limitations (material and psychological) of life as we know it. Life is thought to be immanent in intelligence, and the rise of artificial intelligence is equated with the emergence of a new form of life.

But that is not a necessary relationship at all. One does not imply the other. And to make it more difficult, we could also examine the notoriously unclear concept of “consciousness” as a part of the exploration.

Can something be intelligent, dead and conscious? Can something be conscious and not live? Intelligent, but not conscious? The challenge that we face when we analyze our distinction between man and machine in this framework is that we are forced to think about the connection between life and intelligence in a new way, I think.

Man is alive, conscious and intelligent. Can a machine be all three and still be a machine?

We are scratching the surface here of a problem that Wittgenstein formulated much more clearly; in the second part of the Philosophical Investigations he asks if we can see a man as a machine, an automaton. It is a question with some pedigree in philosophy, since Descartes asked the same when he tried out his systematic doubt — looking out through his window he asked if he could doubt that the shapes he saw were fellow humans, and his answer was that indeed, they could be automatons wearing clothes, mechanical men and nothing else.

Wittgenstein notes that this is a strange concept, and that we must agree that we would not call a machine thinking unless we adopted an attitude towards this machine that is essentially an attitude as if towards a soul. Thinking is not a disembodied concept. It is something we say of human beings, and a machine that could think would need to be very much like a man, so much so that we would have an attitude like that towards a soul, perhaps. Here is his observation (Philosophical Investigations part II: iv):

”Suppose I say of a friend: ’He is not an automaton’. — What information is conveyed by this, and to whom would it be information? To a human being who meets him in ordinary circumstances? What information could it give him? (At the very most that this man always behaves like a human being and not occasionally like a machine.)

’I believe that he is not an automaton’,  just like that, so far makes no sense.

My attitude towards him is an attitude towards a soul. I am not of the opinion that he has a soul.” (My bold).

The German makes the point even clearer, I think: ”Meine Einstellung zu ihm ist eine Einstellung zur Seele. Ich habe nicht die Meinung dass er eine Seele hat.”  So for completeness we add this to our conceptual scheme: intelligence / life / consciousness / soul – and ask when a machine becomes a man?

As we widen our conceptual net, the questions around artificial intelligence become more interesting. And what Wittgenstein also adds is that for the more complex language game, there are no proper tests. At some point our attitudes change.

Now, the risk here, as Dennett points out, is that this shift comes too fast.

Notes on attention, fake news and noise #5: Are We Victims of Algorithms? On Akrasia and Technology.

Are we victims of algorithms? When we click on click bait and content that is low quality – how much of the responsibility for that click is on us and how much on the provider of the content? The way we answer that question may be connected to an ancient debate in philosophy about akrasia, or weakness of will. Why, philosophy asks, do we do things that are not good for us?

Plato’s Socrates has a rather unforgiving answer: we do those things that are not good for us because we lack knowledge. Knowledge, he argues, is virtue. If we just know what is right we will act in the right way. When we click the low quality entertainment content and waste our time it is because we do not know better. Clearly, then, the answer from a platonic standpoint is to ensure that we enlighten each other. We need a version of digital literacy that allows us to separate the wheat from the chaff, that helps us know better.

In fact, arguably, weakness of will did not exist for Socrates (hence, perhaps, why he is so unforgiving) but was merely ignorance. Once you know, you will act right.

Aristotle disagreed: his view was that we may hold opinions that are short-term and wrong and be affected by them, and hence do things that are not good for us. This view, later developed and adumbrated by Davidson, suggests that decisions are often made without the agent considering all possible things that may have a bearing on a choice. Davidson’s definition is something like this: if someone has two choices, a and b, and does b while knowing that, all things considered, a would be better than b, that is akrasia (not a quote, but a rendering of Davidson). Akrasia then becomes not considering the full set of facts that should inform the choice.

Having one more beer without considering the previous ones, or having one more cookie without thinking about the plate now being empty.

The kind of akrasia we see in the technological space may be more like that. We prefer short-term pleasure to long-term gain. A classical Kahneman / Tversky challenge. How do we govern ourselves?

So, how do we solve that? Can the fight against akrasia be outsourced? Designed into technology? It seems trivially true that it can, and this is exactly what tools like Freedom and Stayfocusd actually try to do (there are many other versions, of course). These apps block off sites, or the Internet as a whole, for a set amount of time, and force you back to focus on what you were doing. They eliminate the distraction of the web – but they are not clearly helping you consume high quality content.
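The access-level strategy these tools use can be sketched in a few lines. This is a hypothetical toy, not how Freedom or Stayfocusd are actually implemented: a blocklist plus a timer, which by construction says nothing about the quality of what you consume once access is granted:

```python
import time

class Blocklist:
    """Toy access-level anti-akrasia tool: block named sites until a deadline."""
    def __init__(self, sites, minutes):
        self.sites = set(sites)
        self.until = time.time() + minutes * 60  # deadline as a Unix timestamp

    def allowed(self, site):
        # A site is reachable if it was never blocked, or the timer has expired.
        return site not in self.sites or time.time() >= self.until

b = Blocklist({"news.example", "clips.example"}, minutes=25)
assert not b.allowed("news.example")  # blocked during the focus window
assert b.allowed("work.example")      # everything else passes through
```

The limitation is visible in the code itself: `allowed` is a binary gate on access, so the tool fights what the text calls discrete akrasia; continuous akrasia, governing what you do with the access you have, is out of its reach.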

That is a distinction worth exploring.

Could we make a distinction here between access and consumption? We can help fight akrasia at the access level, but it is harder to do when it comes to consumption. Like not buying chocolate so there is none in your fridge, versus simply refraining from eating the chocolate that is in the fridge? It seems easier to do the first – reduce access – than to control consumption. One is a question of availability, the other of governance. A discrete versus a continuous temptation, perhaps.

It seems easy to fight discrete akrasia, but sorting out continuous akrasia seems much harder.

*

Is it desirable to try? Assume that you could download a technology that would only show you high quality content on the web. Would you then install that? A splinternet provider that offers “qualitative Internet only – no click bait or distractions”. It would not have to be permanent, you could set hours for distraction, or allocate hours to your kids. Is that an interesting product?

The first question you would ask would probably be why you should trust this particular curator. Why should you allow someone else to determine what is high quality? Well, assume that this challenge can be met by outsourcing it to a crowd, where you self-identify values and ideas of quality and you are matched with others of the same view. Assume also, while we are at it, that you can do this without the resulting filter bubble problem, for now. Would you – even under those assumptions – trust the system?

The second question would be how such a system can cope with a dynamic in which the rate of information production keeps doubling. Collective curation models need to deal with the challenge of marking an item as ok or not ok – but the largest category will be a third: not rated. A bet on collective curation is a bet that the value of the uncurated will always be less than the cost of possible distraction. That is an unclear bet, it seems to me.
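A toy calculation, with entirely hypothetical numbers, illustrates why the "not rated" category comes to dominate when production doubles each period while curation capacity stays fixed:

```python
def unrated_fraction(periods, initial_items=1000, ratings_per_period=5000):
    """Fraction of all content left unrated after `periods` periods, assuming
    production doubles each period while a fixed crowd rates at a constant rate."""
    produced = rated = 0
    items = initial_items
    for _ in range(periods):
        produced += items
        # Curators rate up to their capacity, never more than the backlog.
        rated += min(ratings_per_period, produced - rated)
        items *= 2  # production doubles each period
    return 1 - rated / produced

assert unrated_fraction(3) == 0.0   # early on, curation keeps up completely
assert unrated_fraction(12) > 0.9   # soon, almost everything is unrated
```

Under these assumptions the curated share shrinks geometrically: any linear curation capacity loses to exponential production, so the bet described above rests entirely on the value of the unrated tail.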

The third question would be what sensitivity you would have to deviations. In any collectively curated system a certain percentage of the content is still going to be what you consider low quality. How much such content would you tolerate before you ditch the system? How much content made unavailable, but considered high quality by you, would you accept? How sensitive are you to the smoothing effects of the collective curation mechanism? Both in exclusion and inclusion? I suspect we are much more sensitive than we allow for.

Any anti-akrasia technology based on curation – even collective curation – would have to deal with those issues, at least. And probably many others.

*

Maybe it is worth also thinking about what it says about our view of human nature if we believe that solutions to akrasia need to be engineered. Are we permanently flawed, or is the fight against akrasia something that actually has corona effects in us – character-building effects – that we should embrace?

Building akrasia away is different from developing the self-discipline to keep it in check, is it not?

Any problem that can be rendered as an akrasia problem – and that goes, perhaps, even for issues of fake news and similar content related conundrums – needs to be examined in the light of some of these questions, I suspect.

Man / Machine I: conceptual remarks.

How does man relate to machine? There is a series of questions here that I find fascinating and not a little difficult. I think the relationship between these two concepts is also determinative for a large set of issues that we are debating today, and so we would do well to examine this language game here.

There are, of course, many possibilities. Let’s look at a few.

First, there is the worn-out “man is a lesser machine” theme. The idea here is that the machine is a perfected man, and that we should be careful with building machines that can replace us. Or that we should strive to become machines ourselves in order to survive. In this language game machine is perfection, eternity and efficiency; man is imperfection, ephemerality and inefficiency. The gleaming steel and ultra-rational machine is a better version of biological man. It is curious to me that this is the conceptual picture that seems strongest right now. We worry about machines taking over, machines taking our jobs and machines turning us all into paper clips (or at least Nick Bostrom does). Because we see them as our superiors in every regard.

In many versions of this conceptual landscape evolution is also a sloppy and inefficient process, creating meat machines with many flaws and shortcomings — and machines are the end point. They are evolution mastered: instead of being products of evolution, machines produce it as they see fit. Nature is haphazard and technology is deliberate. Any advantage that biology has over technology is seen as easy to design in, and any notion of man’s uniqueness is quickly quashed by specific examples of the machine’s superiority: chess, Jeopardy, go, driving —

The basis of this conceptual landscape is that there are individual things machines do better than man, and the conclusion is that machines must be generally better. A car drives faster than a man can run, a computer calculates faster than a man can count, and so: the machine is generally superior to man.

That does not, of course, follow with any logical necessity. A dog’s sense of smell is better than a man’s, and a dog’s hearing is better than ours. Are dogs superior to man? Hardly anyone would argue that, yet the same argumentative pattern seems to lead us astray when we talk about machines.

There is no right or wrong here, as far as I am concerned – but I think we would do well to entertain a broad set of conceptual schemas when discussing technology and humanity, and so I am wary of any specific frame being mistaken for the truth. Different frames afford us different perspectives and we should use them all.

The second, then, is that the machine is imperfect man. This perspective does not come without its own dangers. The really interesting thing about Frankenstein’s monster is that there is a very real question of how we interpret the monster: as machine or man? As superior or inferior? Clearly superior in strength, the monster is mostly thought to be stupid and intellectually inferior to its creator.
In many ways this is our secret hope. This is the conceptual schema that gives us hope in the Terminator movies: surely the machine can be beaten; it has to have weaknesses that allow us to win over it with something distinctly human, like hope. The machine cannot be perfect, so it has to have a fatal flaw, an imperfection that will allow us to beat it.

The third is that machine is man and man just a machine. This is the La Mettrie view. The idea that there is a distinction between man and machine is simply wrong. We are machines and the question is just how we can be gradually upgraded and improved. There is, in this perspective, a whiff of the first perspective but with an out: we can become better machines, but we will still also be men. Augmentation and transcendence, uploading and cyborgs all inhabit this intellectual scheme.

But here we also have another, less often discussed, possibility: that we are indeed machines, but that we are what machines become when they become more advanced. Here the old dictum from Arthur C. Clarke comes back, and we paraphrase: any sufficiently advanced technology is indistinguishable from biology. Biology and technology meld; nature and technology were never distinct or different – technology is just slower and less complex nature. As it becomes more complex, technology becomes alive – but not superior.

Fourth, and rarely explored, we could argue simply that machine and man are as different as man and any tool. There is no convergence, no relationship. A hammer is not a stronger hand. A computer is not a stronger mind. They are different and mixing them up is simply ridiculous. Man is of one category, machine of another and they are incommensurable.

Again: it is not a question of choosing one, but recognizing that they all matter in understanding questions of technology and humanity, I think. More to come.

Notes on attention, fake news and noise #4: Jacques Ellul and the rise of polyphonic propaganda part 1

Jacques Ellul is arguably one of the earliest and most consistent technology critics we have. His texts are due for a revival in a time when technology criticism is in demand, and even techno-optimists like myself would probably welcome that, because even if he is fierce and often caustic, he is interesting and thoughtful. Ellul had a lot to say about technology in books like The Technological Society and The Technological Bluff, but he also discussed the effects of technology on social information and news. In his bleak little work Propaganda: The Formation of Men’s Attitudes (New York, 1965 (1962)) he examines how propaganda draws on technology and how the propaganda apparatus shapes views and opinions in a society. There are many salient points in the book, and quotes that are worth debating.

That said, Ellul is not an easy read or an uncontroversial thinker. Here is how he connects propaganda and democracy, arguing that state propaganda is necessary to maintain democracy:

“I have tried to show elsewhere that propaganda has also become a necessity for the internal life of a democracy. Nowadays the State is forced to define an official truth. This is a change of extreme seriousness. Even when the State is not motivated to do this for reasons of actions or prestige, it is led to it when fulfilling its mission of disseminating information.

We have seen how the growth of information inevitably leads to the need for propaganda. This is truer in a democratic system than in any other.

The public will accept news if it is arranged in a comprehensive system, and if it does not speak only to the intelligence but to the ‘heart’. This means, precisely, that the public wants propaganda, and if the State does not wish to leave it to a party, which will provide explanations for everything (i.e. the truth), it must itself make propaganda. Thus, the democratic State, even if it does not want to, becomes a propagandist State because of the need to dispense information. This entails a profound constitutional and ideological transformation. It is, in effect, a State that must proclaim an official, general, and explicit truth. The State can no longer be objective or liberal, but is forced to bring to the overinformed people a corpus intelligentiae.”

Ellul says, in effect, that in a noise society there is always propaganda – the question is who is behind it. It is a grim world view, in which a State that yields the responsibility to engage in propaganda yields it to someone else.

Ellul comments, partly wryly, that the only way to avoid this is to give citizens 3–4 hours a day to engage in becoming better citizens, and to reduce the working day to 4 hours. It is a solution he admits is simplistic and unrealistic, since it would require that citizens “master their passions and egotism”.

The view raised here is useful because it clearly states a view that sometimes seems to underlie the debate we are having – that there is a necessity for the State to become an arbiter of truth (or to designate one), or someone else will take that role. The weakness in this view is a weakness that plagues Ellul’s entire analysis, however, and in a sense our problem is worse. Ellul takes, as his object of study, propaganda from the Soviet Union and Nazi Germany. His view of propaganda is largely monophonic. Yes, technology still pushes information on citizens, but in 1965 it did so unidirectionally. Our challenge is different and perhaps more troubling: we are dealing with polyphonic propaganda. The techniques of propaganda are employed by a multitude of parties, and the net effect is not to produce truth – as Ellul would have it – but to eliminate the conditions for truth. Truth no longer becomes viable in a set of mutually contradictory propaganda systems; it is reduced to mere feelings and emotions: “I feel this”. “This is my truth”. “This is the way I feel about it”.

In this case the idea that the state should speak too is radically different, because the state or any state-appointed arbiter of truth just adds to the polyphony of voices and provides them with another voice to enter into a polemic with. It fractures the debate even more, and allows for a special category of meta-propaganda that targets the way information is interpreted overall: the idea of a corridor of politically correct views that we have to exist within. Our challenge, however, is not the existence of such a corridor, but the fact that it is impossible to establish a coherent, shared model of reality and hence to decide what the facts are.

An epistemological community must rest on a fundamental cognitive contract, an idea about how we arrive at facts and the truth. It must contain mechanisms of arbitration that are institutions in themselves, independent of political decision making or commercial interest. The lack of such a foundation means that no complex social cognition is possible. That in itself is devastating to a society, one could argue, and is what we need to think about.

It is no surprise that I take issue with Ellul’s assertion that technology is at the heart of the problem, but let me at least outline the argument I think Ellul would have to deal with if he were revising his book for our age. I would argue that in a globalized society, the only way we can establish that basic epistemological foundation to build on is through technology and collaboration within new institutions. I have no doubt that the web could carry such institutions, just as it carries Wikipedia.

There is an interesting observation about the web here, an observation that sometimes puzzles me. The web is simultaneously the most collaborative environment constructed by mankind and the most adversarial. The web and the Internet would not exist but for the protocol agreements that have emerged as its basis (this is examined and studied commendably in David Post’s excellent book Jefferson’s Moose). At the same time the web is a constant arms race around different uses of this collaboratively enabled technology.

Spam is not an aberration or anomaly, but can be seen as an instance of a generalized, platonic pattern in this space. A pattern that recurs throughout many different domains and has started to climb the semantic layers, from simple commercial scams to the semiosphere of our societies, where memes compete for attention and propagation. And the question is not how to compete best, but how to continue to engage in institutional, collaborative and, yes, technological innovation to build stronger protections and counter-measures. What is to disinformation as spam filters are to unwanted commercial email? The answer is not mere spam filters with new keywords; it needs to be something radically new, and most likely institutional in the sense that it requires more than just technology.
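To make concrete why a keyword approach is insufficient, here is a toy sketch of such a filter. The word list and threshold are invented for illustration and correspond to no real system; the point is that anything this simple is trivially evaded by rephrasing, which is exactly why disinformation needs more than "spam filters with new keywords".

```python
# Toy keyword-based filter, in the spirit of early spam filters.
# FLAGGED and the threshold are illustrative assumptions only.
FLAGGED = {"free", "winner", "urgent", "click"}

def spam_score(message: str) -> float:
    """Fraction of words in the message that are flagged terms."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in FLAGGED)
    return hits / len(words)

def is_spam(message: str, threshold: float = 0.2) -> bool:
    """Classify a message as spam if its score crosses the threshold."""
    return spam_score(message) >= threshold
```

A message that simply avoids the flagged vocabulary sails through; disinformation, which competes on meaning rather than on a fixed vocabulary, evades this kind of filter by construction.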

Ellul’s book provides a fascinating take on propaganda and is required reading for anyone who wants to understand the issues we are working on. More on him soon.

Notes on attention, fake news and noise #3: The Noise Society 10 years later

This February it is 10 years since I defended my doctoral thesis on what I then called the Noise Society. The main idea was that the vision of an orderly, domesticated and controllable information society – modeled on the post-industrial visions of Bell and others – was probably wrongheaded, and that we would instead see a much wilder society characterized by an abundance of information and a lack of control; in fact, we would see information grow to a point where its value collapsed as the information itself collapsed into noise. Noise, I felt then, was a good description not only of individual disturbances in the signal, but also of the overall cost of signal discovery. A noise society would face very different challenges than an information society.

Copyright in a noise society would not be an instrument of encouraging the production of information so much as a tool for controlling and filtering information in different ways. Privacy would not be about controlling data about us as much as having the ability to consistently project a trusted identity. Free expression would not be about the right to express yourself, but about the right not to be drowned out by others. The design of filters would become key in many different ways.

Looking back now, I feel that I was right in some ways and wrong in many, but that the overall conclusion – that the increase in information and the consequences of this information wealth are at the heart of our challenges with technology – was not far off target. What I am missing in the thesis is a better understanding of what information does. My focus on noise was a consequence of treating information as a “thing” rather than a process. Information looks like a noun, but is really a verb.

Revisiting these thoughts, I feel that the greatest mistake was not including Herbert Simon’s analysis of attention as a key concept in understanding information. If I had done that I would have been able to see that noise also is a process, and I would have been able to ask what noise does to a society, theorize that and think about how we would be able to frame arguments of policy in the light of attention scarcity. That would have been a better way to get at what I was trying to understand at the time.

But, luckily, thought is about progress and learning, and not about being right – so what I have been doing in my academic reading and writing for the last three years at least is to emphasize Herbert Simon’s work, and the importance of understanding his major finding that with a wealth of information comes a poverty of attention and a need to allocate attention efficiently.

I believe this can be generalized, and that the information wealth we are seeing is just one aspect of an increasing complexity in our societies. The generalized Simon theorem is this: with a wealth of complexity comes a poverty of cognition and a need to learn efficiently. Simon, in his 1969 talk on this subject, notes that it is only by investing in artificial intelligence that we can do this, and he says that it is obvious to him that the purpose of all our technological endeavours is to ensure that we learn faster.

Learning, adapting to a society where our problems are an order of magnitude more complex, is key to survival for us as a species.
It follows that I think the current focus on digitization and technology is a mere distraction. What we should be doing is re-organizing our institutions and societies for learning more, and faster. This is where the theories of Hayek and others on knowledge coordination become helpful and important for us, and our ideological discussions should focus on whether we are learning as a society or not. There is a wealth of unanswered questions here, such as how we measure the rate of learning, what the opposite of learning is, how we organize for learning, and how technology can help or harm learning — questions we need to dig into and understand at a very basic level, I think.

So, looking back at my dissertation – what do I think?

I think I captured a key way in which we were wrong, and I captured a better model – but the model I was working with then was still fatally flawed. It focused on information as a thing not a process, and construed noise as gravel in the machinery. The focus on information also detracts from the real use cases and the purpose of all the technology we see around us. If we were, for once, to take our ambitions “to make the world a better place” seriously, we would have to think about what it is that makes the world better. What is the process that does that? It is not innovation as such, innovation can go both ways. The process that makes our worlds better – individually and as societies – is learning.

In one sense I guess this is just an exercise in conceptual modeling, and the question I seem to be answering is what conceptual model is best suited to understand and discuss issues of policy in the information society. That is fair, and a kind of criticism that I can live with: I believe concepts are crucially important and before we have clarified what we mean we are unable to move at all. But there is a risk here that I recognize as well, and that is that we get stuck in analysis-paralysis. What, then, are the recommendations that flow from this analysis?

The recommendations could be surprisingly concrete for the three policy areas we discussed, and I leave it as an exercise for the reader to think about them. How would you change the data protection frameworks of the world if the key concern was to maximize learning? How would you change intellectual property rights? Free expression? All are interesting to explore and to solve in the light of that one goal. I tend to believe that the regulatory frameworks we would end up with would be very different from the ones we have today.

As one part of my research as an adjunct professor at the Royal Institute of Technology I hope to continue exploring this theme and others. More to come.

Notes on attention, fake news and noise #2: On the non-linear value of speech and freedom of dialogue or attention

It has become more common to denounce the idea that more speech means better democracy. Commentators, technologists and others have come out to say that they were mistaken – that their belief that enabling more people to speak would improve democracy was wrong, or at the very least simplistic. It is worth analyzing what this really means, since it is a reversal of one of the fundamental hopes the information society vision promised.

The hope was this: that technology would democratize speech and that a multitude of voices would disrupt and displace existing, incumbent hierarchies of power. If the printing press meant that access to knowledge exploded in western society, the Internet meant that the production of knowledge, views and opinions now was almost free and frictionless: anyone could become a publisher, a writer, a speaker and an opinion maker.

To a large extent this is what has happened. Anyone who wants to express themselves today can fire up their computer, comment on a social network, write a blogpost or tweet and share their words with whoever is willing to listen – and therein lies the crux. We have, historically, always focused on speech because the scarcity we fought was one of voice: it was hard to speak, to publish, to share your opinion. But the reality is that free speech or free expression just form one point in a relationship – for free speech to be worth anything someone has to listen. Free speech alone is the freedom of monologue, perhaps of the lunatic raving to the wind or the sole voice crying out in the desert. Society is founded upon something more difficult: the right to free dialogue.

You may argue that this is a false and pernicious dichotomy: the dialogue occurs when someone chooses to listen, and no one is, today, restricted from listening to anyone, so why should we care about the listening piece of dialogue? The only part that needs to be safeguarded is, you may say, the right to speak. All else follows.

This is where we may want to dig deeper. If you speak, can everyone listen? Do they want to? Do you have a right to be listened to? Do you have a right to be heard that corresponds to your right to speak? Is there, in fact, a duty to listen that precedes the right to speak?

We enter difficult territory here, but with the increasing volume of noise in our societies this question becomes more salient than ever before. A fair bit of that noise is in fact speech, from parties that use speech to drown out other speech. Propaganda and censorship are difficult in a society characterized by information wealth and abundance, but noise that drowns out speech is readily available: not control, but excess, flooding and silence through shouting others down – those are the threats to our age.

When Zeynep Tufekci analyzes free speech in a recent Wired article, she notes that even if it is a democratic value, it is not the only one. There are other values as well. That is right, but we could also ask whether we have understood the value at play here in the right way. Tufekci’s excellent article goes on to note that there is a valuable distinction between attention and speech, and that there is no right to attention. Attention is something that needs to be freely given, and much of her article asks the legitimate question of whether current technologies, platforms and business models allow us to allocate attention freely. We could ask whether what she is saying implies that there is a freedom of attention somewhere here as well, one we need to examine.

When someone says that the relationship between free expression and the quality and robustness of a democracy is non-linear, they can be saying many different things. There is a tendency to think that what we need to accept is a balancing of free speech and free expression against other values that we have been neglecting. We could, however, equally say that we have misunderstood the fundamental nature and structure of the value we are trying to protect.

Just because the bottleneck used to be speech (and Tufekci makes this point as well), we focused there. What we really wanted was perhaps free dialogue, built on free speech and the right to freely allocate one’s attention as one sees fit. Or maybe what we wanted was the freedom to participate in democratic discourse, something that is, again, different.

Why, then, is this distinction important? Perhaps because the assumption of the constancy of the underlying value we are trying to protect – the idea that free speech is well understood and that we should just “balance” it – leads us to solution spaces where we unduly harm the values we would like to protect. By examining alternative legal universes where a right to dialogue, a right to free attention, a right to democratic discourse et cetera could exist, we examine and start from that value, rather than give up on it and enter into the language of balancing and restricting.

There is something else here that worries me, and that is that sometimes there is almost a sense that we are but victims of speech, information overload and distraction. That we have no choice, and that this choice needs to be designed, architected and prescribed for us. In its worst forms this assumption derives the need to balance speech from democratic outcomes and people’s choices. It assumes that something must be wrong with free speech because people are making choices we do not agree with, so they must be victims. They do not know what they are doing. This assumption – admittedly exaggerated here – worries me greatly, and highlights another complexity in our set of problems.

How do we know when free speech is not working? What are the indications that the quality of democracy is not increasing with the amount of speech available in a community? It cannot just be that we disagree with the choices made in that democracy, so what could we be looking for? A lack of commitment to democracy itself? A lack of respect for its institutions?
As we explore this further, and examine other possible consistent sets of rights around opinion making, speech, attention, dialogue and democratic discourse we need to start sorting these things out too.

Just how do we know that free speech has become corrosive noise and is eroding our democracy? And how much of that is technology’s fault and how much is our responsibility as citizens? That is no easy question, but it is an important one.

(Picture credit: John W. Schulze CC-attrib)

Notes on attention, fake news and noise #1: scratching the surfaces

What is opinion made from? This seems a helpful question to start off a discussion about disinformation, fake news and the similar challenges we face as a society. I think the answer is surprisingly simple: opinion is ultimately made from attention. In order to form an opinion we need to pay attention to issues, and to the questions we are facing as a society. Opinion should not be equated with emotion, even if it certainly draws on emotion (to which we also pay attention); it also needs a reasoned view in order to become opinion. Our opinions change, also through the allocation of attention, when we decide to review the reasons underlying them and the emotions motivating us to hold them.

You could argue that this is a grossly naive and optimistic view of opinion, and that what forms opinion is fear, greed, ignorance and malice – and that opinions are just complex emotions, nothing more, and that they have become even more so in our modern society. That view, however, leads nowhere. The conclusion for someone believing that is to throw themselves exasperated into intellectual and physical exile. I prefer a view that is plausible and also allows for the strengthening of democracy.

A corollary of the abovementioned is that democracy is also made from attention – from the allocated time we set aside to form our opinions and contribute to democracy. I am, of course, referring to an idealized and ideal version of democracy in which citizenship is an accomplishment and a duty rather than a right, and where there is a distinct difference between ”nationality” and ”citizenship”. The great empires of the world seem to always have had a deep understanding of this – Rome safeguarded its citizens and citizenship was earned. In contrast, some observers note that the clearest sign of American decline is that US citizenship is devolving into US nationality. Be that as it may — I think that there is a great deal of truth in the conception of democracy as made of opinion formed by the paying of attention.

This leads to a series of interesting questions about how we pay attention today, and what challenges we face when we pay attention. Let me outline a few, and suggest a few problems that we need to study closer.

First, the attention we have is consumed by the information available. This is an old observation that Herbert Simon made in a 1969 talk on information wealth and attention poverty. His answer then, remarkably, was that we need to invest in artificial intelligence to augment attention and allow for faster learning (we should examine the relationship between learning and democracy at some point as well: one way to think about learning is that it is what happens when we change our opinions) – but more importantly he noted that there is an acute need to allocate attention efficiently. We could build on that and note that at high degrees of efficiency in the allocation of attention, democratic discourse becomes impossible.

Second, we have learnt something very important about information in the last twenty years or so: the non-linear value of information presents some large challenges for us as a society. Information – at an abundance – collapses into noise, and its value can then quickly become negative; we need to sift through the noise to find meaning, and that creates filter costs that we have to internalize. There is, almost, a pollution effect here. The production of information by each and every one of us comes with a negative externality in the form of noise.
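The non-linear value claim can be made vivid with a toy model; all functional forms and numbers here are illustrative assumptions, not measurements. Suppose each additional item of information adds diminishing gross value, while the cost of filtering grows linearly with volume. Net value then rises, peaks, and eventually turns negative as filtering costs dominate.

```python
import math

def net_value(n_items: int, base_value: float = 1.0,
              filter_cost: float = 0.01) -> float:
    """Toy model: gross value grows logarithmically (diminishing
    returns), filter cost grows linearly with the volume to sift."""
    gross = base_value * math.log(1 + n_items)
    cost = filter_cost * n_items
    return gross - cost

# At moderate volumes net value is positive; at abundance it
# collapses below zero — the "pollution effect" of noise.
```

Under these assumptions the crossover is inevitable for any linear filter cost, however small, which is one way to state why abundance, not scarcity, becomes the policy problem.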

Third, the need for filters raises a lot of interesting questions about the design of such filters. The word ”filter” comes with a negative connotation, but here I only mean something that allows us to turn noise into information over which we can effectively allocate attention.

That attention plays a crucial role in the information society is nothing new, as we mentioned, and it has been helpfully emphasized by people like Tim Wu, Tristan Harris and others. There is often an edge in the commentary here that suggests that there is a harvesting of attention and monetization of it, and that this in some way is detrimental. This is worth a separate debate, but let it suffice for now that we acknowledge that this can certainly be the case, but also that the fact that attention can be monetized can be very helpful. In fact, good technology converts attention to money at a higher exchange rate and ensures that the individual reaps the benefits from that by finding what he or she is looking for faster, for example. But again: this is worth a separate discussion – and perhaps this is one where we need to dig deeper into the question of the social value of advertising as such – a much debated issue.

So, where does this land us? It seems that we need to combat distraction and allocate attention effectively. What, then, is distraction?

*

Fake news and disinformation are one form of distraction, and certainly a nefarious one in the sense that such distractions often detract from efforts to form opinions in a more serious way. But there are many other distractions as well. Television, games, gambling and everything else that exists in the leisure space is in a way a distraction. When Justice Brandeis said that leisure time is the time we need to use to become citizens, he attacked the problem of distraction from a much broader perspective than we sometimes do today. His notion was that when we leave work, we have to devote time to our other roles, and one of the key roles we play is that of the citizen. How many of us devote time every day or week to our citizen role? Is there something we can do there?

*

The tension between distraction and attention forces us to ask a more fundamental question, and that is if the distraction we are consumed by is forced or voluntary. Put in a different way: assume that we are interested in forming an opinion on some matter, can we do so with reasonable effort or are the distractions so detrimental that the formation of informed and reasoned opinion has become impossible?

At some level this is an empirical question. We can try: assume that you are making your mind up on climate change. Can you use the Internet, use search and social networks in order to form a reasoned opinion on whether climate change is anthropogenic? Or are the distractions and the disinformation out there so heavy that it is impossible to form that opinion?

Well, you will rightly note, that will differ from person to person. This is fair, but let’s play with averages: the average citizen who honestly seeks to make up his or her mind – can they on a controversial issue?

A quick search, a look at Wikipedia, a discussion with friends on a social network — could this result in a reasoned opinion? Quite possibly! It seems that anyone who argues that this is impossible today needs to carry the burden of evidence for that statement. Indeed, it would be extraordinary to argue that someone who wants to inform themselves no longer can, in the information society.

There are a few caveats to that statement, however. One is about the will itself. How much do we want to form reasoned opinions? This is a question that risks veering into elitism and top-down perspectives (I can already hear the answers along the lines of ”I obviously do, but others…”), so we need to tread carefully. I do think that there are competing scenarios here. Opinions have many uses. We can use them to advance our public debate, but if we are honest, a large use case for opinions is the creation of a group and the cohesion of that group. How many of our opinions do we arrive at ourselves, and how many do we accept as part of our belonging to a particular group?

Rare is the individual who says that she has arrived, alone, at all of her opinions. Indeed, that would make no sense, as it would violate Simon’s dictum: we need to allocate attention efficiently and we rely on others in a division of attention that is just a mental version of Adam Smith’s division of labor. We should! To arrive at all your own opinions would be so costly that you would have little time to do anything else, especially in a society that is increasingly complex and full of issues. The alternative would be to have very few opinions, and that seems curiously difficult. Not a lot of people offer that they have no opinion on a subject that is brought up in conversation, and indeed it would almost feel asocial to do that!

So group opinions are rational consequences of the allocation of attention, but how do we know if the group arrives at its opinion in a collectively rational way? It depends on the group, and how it operates, obviously, but at the heart of the challenge here is a sense of trust in the judgments of others.

The opinions we hold that are not ours are opinions we hold because we trust the group that arrived at them. Trust matters much more than we may think in the formation of opinion.

*

If distraction is one challenge for democratic societies, misallocation of attention is another. The difference is clear: distraction is when we try to but cannot form an opinion. Misallocation is when we do not want to form a reasoned opinion but are more interested in the construction of an identity or a sense of belonging, and hence want to confirm an opinion that we have adopted for some reason.

The forming and confirming of opinion are very different things. In the first case we shape and form our opinion and it may change over time; in the second we simply confirm an opinion that we hold without examining it at all. It is well known that we are prone to confirmation bias and that we seek information that confirms what we believe to be true, and this tendency sometimes wins over our willingness to explore alternative views – especially in controversial and emotional issues. That is unfortunate, but the question is how this relates to disinformation.

One answer could be this: the cost of confirmation bias falls when there is a ready provision of counter facts to all facts. Weinberger notes that the old dictum that you are entitled to your opinions, but not your facts, has become unfashionable in the information society since there is no single truth anymore. For every fact there is a counter-fact.

Can we combat this state of affairs? How do we do that? Can we create a repository and a source of facts and truths? How do you construct such an institution?

Most of us naturally think of the Wikipedia when we think of something like that – but there is naturally much in the Wikipedia that is faulty or incorrect, and this is not a dig against the Wikipedia, but simply a consequence of its fantastic inclusion and collaborative nature. Also – we know that facts have a half-life in science, and the idea of incontrovertible fact is in fact very unhelpful and has historically been used rather by theologians than by democrats. And yet we still need some institutional response to the flattening of the truth.

It is not obvious what that would be, but worth thinking about and certainly worth debating.

*

So individual will and institutional truth, ways of spending attention wisely and the sense of citizenship. That is a lot of rather vague hand-waving and sketching, but it is a start. We will return to this question in the course of the year, I am sure. For now, this just serves as a few initial thoughts.

What are we talking about when we talk about algorithmic transparency?

The term ”algorithmic transparency”, with variants and variations, has become more and more common in the many conversations I have with decision makers and policy wonks. It remains somewhat unclear what it actually means, however. As a student of philosophy I find that there is often a lot of value in examining concepts closely in order to understand them, and in the following I wanted to open up a coarse-grained view of this concept in order to understand it further.

At a first glance it is not hard to understand what is meant by algorithmic transparency. Imagine that you have a simple piece of code that manipulates numbers, and that when you enter a series it produces an output that is another series. Say you enter 1, 2, 3, 4 and that the output generated is 1, 4, 9, 16. You have no access to the code, but you can infer that the code probably takes the input and squares it. You can test this hypothesis – you decide to see if entering 5 gives you 25 in response. If it does, you are fairly certain that the code is something like ”take input and print input times input” for the length of the series.
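The inference in this example can be sketched in a few lines of Python. The black box and the hypothesised rule are of course invented for illustration – in the thought experiment we cannot read the black box at all:

```python
# A sketch of black-box inference: we probe a system we cannot open and
# corroborate a hypothesis about its rule.

def black_box(x):
    # Hidden from the observer: the code we want transparency into.
    return x * x

def hypothesis(x):
    # The rule inferred from seeing 1, 2, 3, 4 map to 1, 4, 9, 16.
    return x * x

# Each new probe that matches corroborates – but never proves – the hypothesis.
probes = [1, 2, 3, 4, 5]
corroborated = all(black_box(x) == hypothesis(x) for x in probes)
print(corroborated)  # True
```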

Now, you don’t _know_ that this is the case. You merely believe so and for every new number you enter that seems to confirm the hypothesis your belief may be slightly corroborated (depending on what species of theory of science you subscribe to). If you want to know, really know, you need to have a peek at the code. So you want algorithmic transparency – you want to see and verify the code with your own eyes. Let’s clean this up a bit and we have a first definition.

(i) Algorithmic transparency means having access to the code a computer is running, so that a human can verify what it is doing.

So far, so good. What is hard about this, then, you may ask? In principle we should be able to do this with any system and so be able to just verify that it does what it is supposed to and check the code, right? Well, this is where the challenges start coming in.

*

The first challenge is one of complexity. Let’s assume that the system you are studying has a billion lines of code and that to understand what the system does you need to review all of them. Assume, further, that the lines of code refer to each other in different ways and that there are interdependencies and different instantiations and so forth – you will then end up with a situation where access to the code is essentially meaningless, because access does not guarantee verifiability or transparency in any meaningful sense.

This is easily realized by simply calculating the time needed to review a billion-line piece of software (note that we are assuming here that software is composed of lines of code – not an obvious assumption, as we will see later). Say you need one minute to review a line of code – that makes for a billion minutes, and that is a lot. A billion seconds is 31.69 years, so even if you assume that you can verify a line a second the time needed is extraordinary. And remember that we are assuming that _linear verification_ will be exhaustive – a very questionable assumption.
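The arithmetic is easy to check:

```python
# Back-of-the-envelope check of the review-time estimate.
SECONDS_PER_YEAR = 31_557_600   # a Julian year of 365.25 days

lines = 1_000_000_000           # a billion lines of code

years_at_one_second = lines / SECONDS_PER_YEAR        # one line per second
years_at_one_minute = lines * 60 / SECONDS_PER_YEAR   # one line per minute

print(round(years_at_one_second, 2))   # 31.69 years
print(round(years_at_one_minute))      # 1901 years
```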
So we seem to have one interesting limitation here that we should think about.

L1: Complexity limits human verifiability.

This is hardly controversial, but it is important. So we need to amend our definition here, and perhaps think about computer-assisted verification. We end up with something like:

(ii) Algorithmic transparency is achieved by access to the code that allows another system to verify the way the system is designed.

There is an obvious problem with this that should not be glossed over. As soon as we start using code to verify code we enter an infinite regress. Using code to verify code means we need to trust the verifying code over the verified. There are ways in which we can be comfortable with that, but it is worth understanding that our verification is now conditional on the verifying code working as intended. This qualifies as another limit.

L2: Computer assisted verification relies on blind trust at some point.

So we are back to blind trust, but the choice we have is what system we have blind trust in. We may trust a system that we have used before, or that we believe we know more about the origins of, but we still need to trust that system, right?

*

So, our notion of algorithmic transparency is turning out to be quite complicated. Now let’s add another complication. In our proto-example of the series, the input and output were quite simple. Now assume that the input consists of trillions of documents. Let’s remain in our starkly simplified model: how do you know that the system – complex as it is – is doing the right thing given the data?

This highlights another problem. What exactly is it that we are verifying? There needs to be a criterion here that allows us to state that we have achieved algorithmic transparency or not. In our naive example above this seems obvious, since what we are asking about is how the system is working – we are simply guessing at the manipulation of the series in order to arrive at a rule that will allow us to predict what a certain input will yield in terms of an output. Transparency reveals if our inferred rule is the right one and we can then debate if that is the way the rule should look. The value of such algorithmic transparency lies in figuring out if the system is cheating in any way.

Say that we have a game: you win if you can guess what the next output will be. I show you the series 1, 2, 3, 4, and then the output 1, 4, 9, 16. Now I ask you to bet on what the next number will be as I enter 5. You guess 25, I enter 5, and the output is 26. I win the bet. You demand to see the code, and the code says: ”For every input print input times input, except if input is 5, then print input times input _plus one_”.

This would be cheating. I wrote the code. I knew it would do that. I put a trap in the code, and you want algorithmic transparency to be able to see that I have not rigged the code to my advantage. You are verifying two things: that the rule you have inferred is the right one AND that the rule is applied consistently. So it is the working of the system as well as its consistency, or its lack of bias in any way.
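The rigged program is tiny when written out – here as a Python sketch of the betting example. From the first four outputs it is indistinguishable from honest squaring; only access to the code (or probing input 5) reveals the trap:

```python
def rigged(x):
    # "For every input print input times input,
    #  except if input is 5, then print input times input plus one."
    if x == 5:
        return x * x + 1
    return x * x

print([rigged(x) for x in [1, 2, 3, 4]])  # [1, 4, 9, 16] – looks like squaring
print(rigged(5))                          # 26 – the trap springs, I win the bet
```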

Bias or consistency is easy to check when you are looking at a simple mathematical series, but how do you determine consistency in a system that contains a trillion data points and runs on, say, over a billion lines of code? What does consistency mean? Here is another limitation, then.

L3: Algorithmic transparency needs to define criteria for verification such that they are possible to determine with access to the code and data sets.

I suspect this limitation is not trivial.

*

Now, let’s complicate things further. Let’s assume that the code we use generates a network of weights that are applied to decisions in different ways, and that this network is trained by repeated exposure to data and its own simulations. The end result of this process is a weighted network with certain values across it, and perhaps they are even arrived at probabilistically. (This is a very simplified model, extremely so).
Here, by design, I know that the network will look different every time I ”train” it. That is just a function of its probabilistic nature. If we now want to verify this, what we are really looking for is a way to determine a range of possible outcomes that seem reasonable. Determining that will be terribly difficult, naturally, but perhaps it is doable. But at this point we start suspecting that maybe we are engaged with the issue at the wrong level. Maybe we are asking a question that is not meaningful.
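A toy sketch makes the point concrete. This is not a real training loop – just a stochastic update rule, invented for illustration, fitting a single weight to y = 2x – but it shows how the same procedure on the same data ends up with different weights depending on the random seed:

```python
# Two runs of the same stochastic "training" procedure on the same data
# end up with different weights, so verifying "the" network is not well
# defined: at best we can verify a range of reasonable outcomes.
import random

def train(data, seed, steps=500, lr=0.01):
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)           # random initialisation
    for _ in range(steps):
        x, y = rng.choice(data)          # stochastic sample order
        w -= lr * (w * x - y) * x        # gradient step on squared error
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # exactly y = 2x
w1 = train(data, seed=1)
w2 = train(data, seed=2)
print(abs(w1 - 2.0) < 0.01, abs(w2 - 2.0) < 0.01)
# Both runs land near 2.0, but the exact weights differ.
```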

*

We need to think about what it is that we want to accomplish here. We want to be able to determine how something works in order to understand if it is rigged in some way. We want to be able to explain what a system does, and ensure that what it does is fair, by some notion of fairness.

Our suspicion has been that what we need to do to do this is to verify the code behind the system, but that is turning out to be increasingly difficult. Why is that? Does that mean that we can never explain what these systems do?
Quite the contrary, but we have to choose an explanatory stance – to draw on a notion introduced by DC Dennett. Dennett, loosely, notes that systems can be described in different ways, from different stances. If my car does not start in the morning I can describe this problem in a number of different ways.

I can explain it by saying that it dislikes me and is grumpy – an _intentional_ stance, assuming that the system is intentional.
I can explain it by saying that I forgot to fill up on gasoline yesterday, so the tank is empty – a _functional_ or mechanical explanation.
I can explain it by saying that the wave functions associated with the car are not collapsing in such a way as to… or use some other _physical_ explanation of the car as a system of atoms or a quantum physical system.

All explanations are possible, but Dennett and others note that we would do well to think about how we choose between the different levels. One possibility is to look at how economical and how predictive an explanation is. While the intentional explanation is shortest, it gives me no way to predict what will allow me to change the system. The mechanical or functional explanation does – and the physical one would take pages upon pages to do in a detailed manner, and so is clearly uneconomical.
Let me suggest something perhaps controversial: the demand for algorithmic transparency is not unlike an attempt at explaining the car’s malfunctioning from a quantum physical stance.
But that just leaves us with the question of how we achieve what arguably is a valuable objective: to ensure that our systems are not cheating in any way.

*

The answer here is not easy, but one way is to focus on function and outcomes. If we can detect strange outcome patterns, we can assume that something is wrong. Let’s take an easy example. Say that an image search for physicist on a search engine leads to a results page that mostly contains white, middle-aged men. We know that there are certainly physicists that are neither male nor white, so the outcome is weird. We then need to understand where that weirdness is located. A quick analysis gives us the hypothesis that maybe there is a deep bias in the input data set where we, as a civilization, have actually assumed that a physicist is a white, middle-aged man. By only looking at outcomes we are able to understand if there is bias or not, and then form hypotheses about where that bias is introduced. These hypotheses can then be confirmed or disproven by looking at separate data sources, like searching in a stock photo database or using another search engine. Nowhere do we need to, or would we indeed benefit from, looking at the code. Here is another potential limitation, then.

L4: Algorithmic transparency is far inferior to outcome analysis in all sufficiently complex cases.

Outcome analysis also has the advantage of being openly available to anyone. The outcomes are necessarily transparent and accessible, and we know this from a fair amount of previous cases – just by looking at the outcomes we can have a view on whether a system is inherently biased or not, and if this bias is pernicious or not (remember that we want systems biased against certain categories of content, to take a simple example).
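The logic of outcome analysis can be sketched with entirely invented numbers – the point is only that nothing here requires access to the code of the system under scrutiny, just its outputs and a reference source:

```python
# Compare the output distribution of a system against a reference
# sample. All figures are made up for illustration.

def share(results, category):
    """Fraction of results falling in a given category."""
    return sum(1 for r in results if r == category) / len(results)

# Hypothetical top-100 image results for "physicist":
engine_results = ["white_male"] * 88 + ["other"] * 12
# Hypothetical reference sample from, say, a stock photo database:
reference_sample = ["white_male"] * 55 + ["other"] * 45

skew = share(engine_results, "white_male") - share(reference_sample, "white_male")
print(round(skew, 2))  # 0.33 – a large gap flags possible bias in the outcomes
```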

*

So, summing up. As we continue to explore the notion of algorithmic transparency, we need to focus on what it is that we want to achieve. There is probably a set of interesting use cases for algorithmic transparency, and more than anything I imagine that the idea of algorithmic transparency actually is an interesting design tool to use when discussing how we want systems to be biased. Debating, in meta code of some kind, just how bias _should be_ introduced in, say, college admission algorithms, would allow us to understand what designs can accomplish that best. So maybe algorithmic transparency is better for the design than the detection of bias?

Data is not like oil – it is much more interesting than that

So, this may seem to be a nitpicking little note, but it is not intended to belittle anyone or even to deny the importance of having a robust and rigorous discussion about data, artificial intelligence and the future. Quite the contrary – this may be one of the most important discussions that we need to engage in over the coming ten years or so. But when we do so our metaphors matter. The images that we convey matter.

Philosopher Ludwig Wittgenstein notes in his works that we are often held hostage by our images, that they govern the way we think. There is nothing strange or surprising about this: we are biological creatures brought up in three-dimensional space, and our cognition did not come from the inside, but it came from the world around us. Our figures of thought are inspired by the world and they carry a lot of unspoken assumptions and conclusions.

There is a simple and classical example here. Imagine that you are discussing the meaning of life, and that you picture the meaning of something as hidden, like a portrait behind a curtain – and that discovering the meaning then naturally means revealing what is behind that curtain and how to understand it. Now, the person you are discussing it with instead pictures it as a bucket you need to fill with wonderful things, and that meaning means having a full bucket. You can learn a lot from each other’s images here. But they represent two very different _models_ of reality. And models matter.

That is why we need to talk about the meme that “data is like oil” or any other scarce resource, like the spice in Dune (with the accompanying cry “he who controls the data…!”). This image is not worthless. It tells us there is value to data, and that data can be extracted from the world around us – so far the image is actually quite balanced. There is value in oil and it is extracted from the world around us.

But the key thing about oil is that there is not a growing amount of it. That is why we discuss “peak oil” and that is why the control over oil/gold/Dune spice is such a key thing for an analysis of power. Oil is scarce, data is not – at least not in the same way (we will come back to this).

Still not sure? Let’s do a little exercise. In the time it has taken you to read to this place in the text, how many new dinosaurs have died and decomposed and been turned into oil? Absolutely, unequivocally zero dinosaurs. Now, ask yourself: was any new data produced in the same time? Yes, tons. And at an accelerating rate as well! Not only is data not scarce, it is not-scarce in an accelerating way.

Ok, so I would say that, wouldn’t I? Working for Google, I want to make data seem innocent and unimportant while we secretly amass a lot of it. Right? Nope. I do not deny that there is power involved in being able to organize data, and neither do I deny the importance of understanding data as a key element of the economy. But I would like for us to try to really understand it and then draw our conclusions.

Here are a few things that I do not know the answers to, and that I think are important components in understanding the role data plays.

When we classify something as data, it needs to be unambiguous, and so needs to be related to some kind of information structure. In the old analysis we worked with a model where we had data, information, knowledge and wisdom – and essentially thought of that model as hierarchically organized. That makes absolutely no sense when you start looking at the heterarchical nature of how data, information and knowledge interact (I am leaving wisdom aside, since I am not sure whether that is a correct unit of analysis). So something is data in virtue of actually having a relationship with something else. Data may well not be an _atomic_ concept, but rather a relational concept. Perhaps the basic form of data is the conjunction? The logical analysis of data is still fuzzy to me, and seems important when we live in a noise society – since the absolute first step we need to undertake is to mine data from the increasing noise around us, and here we may discover another insight. Data may become increasingly scarce since it needs to be filtered from noise, and the cost of that may be growing. That scarcity is quite different from the one where there is only a limited amount of something – and the key to value here is the ability to filter.

Much of the value of data lies in its predictive qualities. That it can be used to predict and analyze in different ways, but that value clearly is not stable over time. So if we think about the value of data, should we then think in terms of a kind of decomposing value that disappears over time? In other words: do data rot? One of the assumptions we frequently make is that more data means better models, but that also seems to be blatantly wrong. As Taleb and others have shown, the number of correlations in a data set grows combinatorially as the variables grow linearly, and an increasing percentage of those correlations are spurious and worthless. That seems to mean that if big data is good, vast data is useless and needs to be reduced to big data again in order to be valuable at all. Are there breaking points here? Certainly there should be from a cost perspective: when the cost C of reducing a vast data set to a big data set is greater than the expected benefits in the big data set, the insights available are simply not worth the noise filtering required. And what of time? What if the time it takes to reduce a vast data set to a big data set is necessarily such that the data have decomposed and the value is gone? Our assumption that things get better with more data seems to be open to questioning – and this is not great. We had hoped that data would help us solve the problem.
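The spurious-correlation point is easy to demonstrate with pure noise. The sample sizes and the |r| threshold below are arbitrary; the point is only that the number of candidate correlations grows quadratically with the number of variables, and with enough candidates some look strong by chance alone:

```python
import random
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(42)
# 40 variables of 20 samples each – all pure Gaussian noise.
noise = [[rng.gauss(0, 1) for _ in range(20)] for _ in range(40)]

pairs = list(combinations(noise, 2))
strong = sum(1 for a, b in pairs if abs(pearson(a, b)) > 0.5)
print(len(pairs))  # 780 candidate correlations from just 40 variables
print(strong)      # some of them look "strong" by pure chance
```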

AlphaGo Zero seems to manage without human game seed data sets, at least. What is the class of tasks such that they actually don’t benefit from seed data? If that class is large, what else can we say about it? Are key crucial tasks in that set? What characterizes these tasks? And are “data agnostic” tasks evidence that we have vastly overestimated the nature and value of data for artificial intelligence? The standard narrative now is this: “the actor that controls the data will have an advantage in artificial intelligence and then be able to collect more data in a self-reinforcing network effect”. This seems to be nonsense when we look at the data agnostic tasks – how do we understand this?

One image that we could use is to say that models eat data. Humor me. Metabolism as a model is more interesting than we usually allow for. If that is the case we can see another way in which data could be valuable: it may be more or less nutritious – i.e. it may strengthen a model more or less if the data we look at becomes part of its diet. That allows us to ask complicated questions like this: if we compare an ecology in which models get to eat all kinds of data (i.e. an unregulated market) with ecologies in which the diet is restricted (a regulated market), and then we let both these evolved models compete in a diet-restricted ecology – does the model that grew up on an unrestricted diet have an insurmountable evolutionary advantage? Why would anyone be interested in that, you may ask. Well, we are living through this very example right now – with Europe an often soundly regulated market and key alternative markets completely unregulated – with the very likely outcome that we will see models that grew up on unregulated markets compete with those that grew up in Europe, in Europe. How will that play out? It is not inconceivable that the diet-restricted ones will win, by the way. That is an empirical question.

So, finally – a plea. Let’s recognize that we need to move beyond the idea that data is like oil. It limits our necessary and important public debate. It hampers us and does not help us understand this new complex system. And this is a wide open field, where we have more questions than answers right now – and we should not let faulty answers distract us. And yes, I recognize that this may be a fool’s plea – the image of data as oil is so strong and alluring – but I would not be the optimist I am if I did not think we could get to a better understanding of the issues here.

A note on complementarity and substitution

One of the things I hear the most in the many conversations I have on tech and society today is that computers will take jobs or that man will be replaced by machine. It is a reasonable and interesting question, but, I think, ultimately wrong. I tried to collect a few thoughts about that in a small essay here for reference. The question interests me for several reasons – not least because I think that it is partly a design question rather than something driven by technological determinism. This in itself is a belief that could be challenged on a number of fronts, but I think there is a robust defense for it. The idea that technology has to develop in the direction of substitution is simply not true if we look at all existing systems. Granted: when we can automate not just a task but cognition generally this will be challenged, but strong reasons remain to believe that we will not automate fully. So, more of this later. (Image: Robin Zebrowski)

Reading Notes I: Tegmark and substrate independence

Tegmark (2017:67) writes ”This substrate independence of computation implies that AI is possible: intelligence doesn’t require flesh, blood or carbon atoms.” How should we read this? The background is that he argues that computation is independent of what we use for hardware and software, and that what is required is only that the matter we compute in fulfills some very simple conditions like sufficient stability (what would intelligence look like if it were based on gases rather than more solid matter, one could ask – remembering the gas giants in Banks’s novels, by the way – sufficiently large gases may be stable enough to support computation?). But what is more interesting here is the quick transition from computation to intelligence. Tegmark does not violate any of his own assumptions here – he is exceptionally clear about what he thinks intelligence is and builds on a Simonesque notion of attaining goals – but there still seem to be a lot of questions that could be asked about the move from computation to intelligence. The questions this raises for me are the following:

(i) Is computation the same as intelligence (i.e. is intelligence a kind of computation – and if it is not what is it then?)

(ii) It is true that computation is substrate agnostic, but it is not substrate independent. Without any substrate there can be no computing at all, so what does this substrate dependence mean for intelligence? Is it not possible that the nature of the matter used for computation matters for the resultant computation? A very simple example seems to be the idea of computation at different temperatures and what extreme temperatures may lead to (but maybe Tegmark here would argue that this violates the stability condition).

(iii) In a way this seems to be assuming what is to be proven. What Chalmers and others argue is that while computation may be substrate agnostic, cognition or consciousness is not. If there was a way to show that intelligence is substrate specific – only certain classes of matter can be intelligent – what would that look like?

(iv) The question of consciousness is deftly avoided in the quoted sentence, but is there an aspect of observation, consciousness and matter somewhere that seems to matter? I know too little about the role of observation in quantum physics to really nail this down right now, but is it not possible that there exist certain kinds of matter that can observe, and others that cannot?

(v) Even if intelligence is substrate agnostic, as computation, may it not be dependent on certain levels of complexity in the organization of the computation and may it not be the case that these levels of complexity can only be achieved in certain classes of matter? That is, is there an additional criterion for intelligence, in addition to the stability criterion laid out by Tegmark, that needs to be taken into account here?

(vi) What would the world have to be like for intelligence NOT to be substrate agnostic? What would we call the quality that some classes of matter have and others lack, and that means that those classes can carry intelligence?

(vii) The close connection between computation and intelligence seems to open itself up to a criticism based on Wittgenstein’s notion of an ”attitude to a soul”. Is this just a trite linguistic gripe, or a real concern when we speak about intelligence?

(viii) It seems as if we can detect computation in matter, does this mean that we can detect intelligence just by detecting computation? Clearly not. What is it that we detect when we detect intelligence? This brings us back to the question of tests, of the Turing test et cetera. The Turing test has arguably been passed many times, but is not an interesting test at all – but is there a test for intelligence that can be reduced to a physical measurement? There certainly should be a test for computation that can be easily designed, right?

(ix) Intelligence is a concept that applies to action over a longer time than computation. Does the time factor change the possible equivalence between the concepts?

A lot to think about. Fascinating book so far.

Aspect seeing and consciousness I: What Vampires Cannot Do

In the novel Blindsight by Peter Watts, mankind has resurrected vampires (no, not a good idea) – in the book a real predator species that became extinct. One difference between vampires and humans is that vampires can see both aspects of a Necker cube at the same time – they are able to do hyper-threading and think several thoughts at the same time. In other words, vampires are capable of seeing two aspects of something – or more – simultaneously.

Wittgenstein studies this phenomenon in the second part of Philosophical Investigations, and one interpretation of his remarks is that he sees aspect seeing as a way to show how language can confound us. When we see only one aspect of something we forget that it can equally be something else, and that this is how we are confused. The duck-rabbit is not either duck or rabbit, it is ultimately both, it can be seen as both animals.


But maybe we can learn even more from his discussion of aspect seeing by examining the device Watts uses? The duck-rabbit, the Necker-cube and the old woman/young woman are all interesting examples of how we see one or the other aspect of something. But what would it mean to see both? Let’s assume for the moment that there is a being – a vampire as Watts has it – that can see both aspects at the same time. What would that be like?

Trivially we can imagine _two_ people who look at a Necker cube and see both aspects of it. That is not a hard thing to understand or accept. But a single person seeing both aspects at the same time, that seems more challenging, if not impossible. And maybe this is the thing to explore. What if the following holds true:

(i) Consciousness is limited to a single aspect in the world at a time.

We need to dig further, as this is a very imprecise way to put it; we want to find something more general and distinct to say here. When you are looking at a Necker cube you can only see one aspect at a time, and that is a necessary component of being a “you”. Conscious observation collapses the world to a single aspect out of a multitude of aspects.

That seems trivial. What we are now saying is that in order to see the world, you need to see the world in one specific way at a time. You cannot see it in different ways simultaneously. And that hits on something worth dwelling a bit on – the issue of time in aspect seeing. When you see an aspect of something you construct it in your head over time – it is like having lego pieces and assembling a specific lego construction. Just as you cannot assemble two lego constructions out of the same pieces at the same time, you need to limit yourself to one single aspect when several are offered.

This idea – that two simultaneous aspects cannot be constructed out of observation at the same time – points to consciousness as “single-threading” rather than “hyper-threading”, in Watts’s terminology. And there is no way to imagine a world in which you can make two different simultaneous lego constructions out of the same lego pieces; that simply is a violation of the way the world is. Now, that opens up the following question:

Q1: Is hyper-threading as described by Watts necessarily impossible in the same way that the simultaneously different lego constructions built from the same pieces are?

This in turn is an interesting question, since it seems to imply that we have a boundary condition for consciousness here – it is necessarily single-threaded, or should be treated as two different consciousnesses in the same body, as per our earlier observation that it is easy to imagine two observers seeing different aspects of the same thing.

We can then develop (i) into:

(ii) Consciousness is necessarily single-threaded.

What would this limitation imply, except that we cannot see a Necker cube in both ways at the same time? It would imply that the necessary reduction of several aspects into a single one is a prerequisite for us to call something individually conscious.

I suspect there is more here, and want to return to this later, perhaps in a more structured fashion.

”Is there a xeno-biology of artificial intelligence?” – draft essay

One of the things that fascinate me is the connections we can make between technology and biology in exploring how technology will develop. It is a field that I enjoy exploring, and where I am slowly focusing some of my research work and writing. Here is a small piece on the possibility of a xeno-biology of artificial intelligence. All comments welcome to nicklas.berildlundblad at gmail.com.

Autonomy, technology and prediction I: some conceptual remarks

”How would you feel if a computer could predict what you would buy, how you would vote and what kinds of music, literature and food you would prefer with an accuracy that was greater than that of your partner?”

Versions of this question have been thrown at me in different fora over the last couple of months. It contains much to be unpacked, and turns out to be a really interesting entry point into a philosophical analysis of autonomy. Here are a few initial thoughts.

  1. We don’t want to be predictable. There is something negative about that quality, and that is curious to me. While we sometimes praise predictability, we then call it reliability, not predictability. Reliability is a relational concept – we feel we can rely on someone – but predictability has nothing to do with relationships, I think. If you are predictable, you are in some sense a thing, a machine, a simple system. Predictable people lose some of their humanity. Take an example from popular culture – the hosts in Westworld. They are caught in loops that make them easy to predict, and in a key scene Dr Ford expresses his dislike for humanity by saying that the same applies to humans: we are also caught in our loops.
  2. The flip side of that, of course, is that no one would want to be completely unpredictable. Someone who at any point may throw themselves out the window, start screaming, steal a car or disappear into the wilderness to write poetry would also be seen as less than human. Humanity is a concept associated with a mix of predictability and unpredictability. To be human is to occasionally surprise others, but also to be relied upon for some things.
  3. To be predictable is often associated with being easy to manipulate. The connection between the two is not entirely clear cut, since it does not automatically follow from someone being predictable that they can be manipulated.
  4. One way to think about this is to think about the role of predictability in game theory. There are two perspectives here: one is that in order to make credible threats, you need to be predictable in the sense that you will enforce those threats under the circumstances you have defined. There are even techniques for this – you can create punishments for yourself, like the man who reputedly gave his friend 10 000 USD to donate to the US national socialist party (a party the man hated) if his friend ever saw him smoking. Commitment to a cause is nothing but predictability. Following Schelling, however, a certain unpredictable quality is also helpful in a game, when the rational thing to do is what favors the enemy. One apocryphal anecdote about Herman Kahn, who advocated thermo-nuclear war as a possibility, was that he was paid to do this so as to keep the Soviets guessing whether the US really could be crazy enough to entertain the idea of such a complete war. In games it is the shift between predictability and unpredictability – the bluff! – that matters.
  5. But let’s return to the question. How would we feel? Would it matter how much data the computer needed to make its predictions? Would we feel worse or better if it was easier to predict us? Assume it took only 200 likes from a social network to make these predictions – would that be horrifying or calming to you? The first reaction here may be that we would feel bad if it was in some sense easy to predict us. But let’s consider that: if it took only 200 likes to predict us, the predictions would be thin, and we could change easily. The prediction horizon would be short, and the prediction thin. Let’s pause and examine these concepts, as I think they are important.
  6. A prediction horizon is the length of time for which I can predict something. In predicting the weather, one question is for how long we can predict it – for a day? For a few days? For a year? Anyone able to do that – predict the weather accurately for a year – would have accomplished something quite amazing. But predicting the weather tomorrow? You can do that with 50% accuracy by saying that tomorrow will be like today. Inertia helps. The same phenomenon applies to the likes. If you are asked to predict what someone will do tomorrow, looking at what they did today is going to give you a pretty good idea. But it is not going to be a very powerful prediction, and it is not one that in any real sense threatens our autonomy.
  7. A prediction is thin if it concentrates on a few aspects of a predicted system. An example is predicted taste in books or music. Predicting what you will like in a new book or a new piece of music is something that can be done fairly well, but the prediction is thin and does not extend beyond its domain. It tells you nothing about who you will marry or if you will ever run for public office. A thick prediction cuts across domains and would enable the predictor to ask a broad set of questions about you, predicting the majority of your actions over the prediction horizon.
  8. There is another concept that we need as well. We need to discuss prediction resolution. The resolution of a prediction is about the granularity of the prediction. There is a difference between predicting that you will like Depeche Mode and predicting that you will like their third album more than the fourth, or that your favorite song will be ”Love in itself”. As resolution goes down, prediction becomes easier and easier. The extreme case is the Keynesian quip: in the long run we are all dead.
  9. So, let’s go back to the question about the data set. It obviously would be different if a small data set allowed for a thick, granular prediction across a long horizon or if that same data set just allowed for a short-horizon, thin prediction with low resolution. When someone says that they can predict you, you need to think about which one it is – and then the next question becomes whether it is better if a large data set is needed to do the same.
  10. Here is a possibility: maybe we can be relaxed about thin predictions over short horizons with low resolution based on small data sets (let’s call these a-predictions), because these will not affect autonomy in any way. But thick predictions over long horizons with high resolution, based on very large data sets are more worrying (let’s call these b-predictions).
  11. Here are a few possible hypotheses about these two classes of predictions.
    1. The possibility of a-predictions does not imply the possibility of b-predictions.
    2. Autonomy is not threatened by a-predictions, but by b-predictions.
    3. The cost of b-predictions is greater than the cost of a-predictions.
    4. Aggregated a-predictions do not become b-predictions.
    5. a-predictions are necessary in a market economy for aggregated classes of customers.
    6. a-predictions are a social good.
    7. a-predictions shared with the predicted actor change the probability of the a-predictions.
  12. There are many more possible hypotheses worth examining and thinking about here, but this suffices for a first exploration.
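The a-/b-prediction distinction above can be made concrete with a toy sketch: a prediction characterized by its horizon, thickness (domains spanned), resolution and underlying data set size, and a rule that flags only the thick, long-horizon, high-resolution, big-data case as autonomy-threatening. The field names and thresholds below are purely illustrative assumptions of mine, not anything the text commits to:

```python
# Toy model of the a-/b-prediction distinction. All thresholds are
# illustrative assumptions, not taken from the essay.
from dataclasses import dataclass

@dataclass
class Prediction:
    horizon_days: int   # prediction horizon: how far ahead it reaches
    domains: int        # thickness: how many domains it spans
    resolution: float   # granularity, from 0.0 (coarse) to 1.0 (fine)
    dataset_size: int   # observations the predictor is based on

def classify(p: Prediction) -> str:
    """Label a prediction 'a' (benign) or 'b' (autonomy-threatening)."""
    thick = p.domains > 3
    long_horizon = p.horizon_days > 365
    fine = p.resolution > 0.5
    big_data = p.dataset_size > 10_000
    # Only the conjunction of all four properties counts as a b-prediction.
    return "b" if (thick and long_horizon and fine and big_data) else "a"

# A thin, short-horizon music-taste prediction from 200 likes:
likes = Prediction(horizon_days=30, domains=1, resolution=0.3, dataset_size=200)
print(classify(likes))  # -> a
```

One design point worth noting: hypothesis 4 in the list (aggregated a-predictions do not become b-predictions) would fail in this sketch if aggregation simply summed `domains` and `dataset_size`, which is one way to see why that hypothesis needs arguing rather than assuming.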

(image: Mako)

Simon I: From computers to cognicity

In the essay ”The steam engine and the computer” Simon makes a number of important and interesting points about technological revolution. It is an interesting analysis and worthwhile reading – it is quite short – but I will summarize a few points, and throw out a concept idea.

Simon notes that revolutions – their name notwithstanding – take a lot of time. The revolution based on the steam engine arguably took more than 150 years to really change society. Our own information revolution is not even halfway there. We have sort of assumed that the information revolution is over, and innovation and productivity pessimism has become rampant in our public debate. Simon’s view would probably be that this is far too early to say – and he might add that the more impactful change comes in the second half of a revolution (an old truth that John McCarthy reminded me of when I interviewed him back in 2006, when AI celebrated 50 years. We still hovered at the edge of the AI winter then, and I remember asking him if he was not disappointed. He looked at me as if I were a complete idiot and said: ”Look, 50 years after Mendel discovered the idea of inheritance, genetics had gotten nowhere. Now we have sequenced the genome. Change comes in the second half of a hundred years for human discoveries.” I must say that, looking at the field now, the curmudgeonly comment seems especially accurate. It also makes me think that maybe there is a general rule here connected to biological time scales – human discoveries may have a similar arc of development across complex issues? Hm…). In 1997, 1.7% of the Earth’s population had Internet access. In 2007 that number was 20%, and today it is 49%. We are halfway there.

Simon’s other observation is that no revolution depends on a single technology, but on a slowly unfolding weave of technologies. This is in one way trivial, but in another way quite a helpful way to think about innovation. Most innovation pessimists tend to look at individual innovations and judge them trivial or uninteresting – but as they are connected in a weave of technology you can start to see other patterns. One pattern that Simon identifies for the first industrial revolution is this:

(i) steam engine — dynamo — electricity

And even though he does not predict it, he sees networking as something similar. From our vantage point we can see it quite clearly as a pattern too:

(ii) computer — internet — connectivity

But there are also new, intriguing patterns that we can start thinking about and exploring. Here is one that I think would merit some thinking:

(iii) computer — machine learning — cognicity.

The idea of cognicity – general purpose cognition available as openly as electricity – is one that could possibly rival that of electricity, and when added to connectivity the mix seems very interesting to analyze.

Simon also has an interesting point about education in the essay. He ridicules the fact that we have no clear idea of what education is, and says that we seem to be operating on the infection theory of education: gathering people in a classroom and spraying words at them, hoping that some of the words will infect the hosts. He also makes the point that computers seem to help us scale this theory, but that it is far from clear that this is indeed the best way of educating someone. It is hard not to read into this an implicit, possible criticism of MOOCs and their assumptions. Simon suggests that it is through immersive play that we learn, and he regrets asking organizations to first figure out what they would use computers for before investing in them – it is in individual experimentation that these devices really come into their own. This is also intriguing – Simon notes that computers are in a sense self-instructive, and while it is easy to protest that we need digital skills courses, it is intriguing to consider how billions of people learnt to use smartphones. Was it primarily through immersive play – experimenting with them – or through infection theory education?

Finally Simon makes a crucial point. Technological revolutions do not happen to us. We shape them. There is, in that observation, a world of difference between Simon and many who discuss technology, society and politics today.

”Valuable speech” – a note on legal mechanism design

I am reading a series of essays on free expression on the Internet. One of the authors repeatedly uses the ideas ”low value speech” and ”valuable speech”. I feel great unease. I wonder why, but think that it is because such a dichotomy assumes that we can say that this piece of speech is valuable and this other piece lacks value. Am I more comfortable with thinking about this problem in terms of ”speech” vs ”criminal threats / defamation”? Oddly I think so. I would like my speech with as few qualifiers as possible, and then I would like to define that which should not be protected as something else. As criminal defamation or illegal threats, or something else.

I think the reason is fairly straightforward: it imposes an intellectual discipline on the legislator, and associates limiting speech with a threshold test. It is a question about design.

In general there seem to be at least three legislative design strategies here: one is to try to categorize and qualify the speech as such according to its inherent value; one is to concentrate on the medium in which it is expressed (Mill actually seems to have leaned towards this strategy, favoring deliberative debate in the newspapers of his time over talk in the street); and one is to simply define everything as speech (and thus protected) that is not criminalized, making a point of differentiating between what is speech and what is not. All three systems can rule that something should not be protected, but in different ways – and it seems to me that the method, the algorithm, the mechanism design here really matters.

Perhaps we spend too little time thinking about legal mechanism design.

Organizing politically for the value of new technologies

One of the fundamental problems in ICT-policy is actually an intra-governmental problem. While everyone agrees on the importance of new technologies, it seems equally obvious that our way of measuring the information economy, well, sucks. That means that any serious ICT-policy work needs to start out with an internal discussion in government about what this new technology actually is and how much it is worth. I would argue (and have argued in this column, for example) that we can observe a very destructive pattern in the development of ICT-policies everywhere, and that is this:

(i) Everyone agrees on the importance but not the value of information and communication technology.

In fact, many of the measures we have used vastly underrepresent the new technologies and what they mean, and there are few if any ways to represent the increase in possible innovation capability brought about by these technologies. So while all politicians will agree that ICTs are very important to the future of the municipality/nation/region, the follow-up question – how important, i.e. what the value we are discussing actually is – goes unanswered. That leads to a secondary effect that is equally worrying:

(ii) We consistently overvalue the damage disruptive technologies do to incumbents and undervalue the new opportunities these technologies open up.

This is well-known in behavioural economics and simply a version of loss-aversion, but on a societal level. One effect of these two observations is a purely organizational observation:

(iii) ICT-policy work rarely results in a political and executive organization that accurately represents the value of the phaseshift in economics the new technologies enable.

This slows development, and leads to a number of baffling inefficiencies. It also leads to situations where a good and strong policy programme never gets executed. In the column above I argued that the ICT-policy department (in Sweden it is the Ministry of Enterprise and Industry) should be given a veto over proposals in government that will hurt the development of the new technologies. That is a kind of thought experiment that is admittedly provocative, but the alternative, frankly, is that ICT-policies get derailed by incumbent interests, budgetary concerns and other short-term, more effectively organized interests in government.

Interestingly this is not only a governmental problem. It is also observable in industry. One of the largest photo-film makers knew that photography would become digital, but the way it ”knew” this organizationally was through a ”future commission” that was actually set up twice, and whose results were dismissed as economically irresponsible and risky. Loss aversion in this case led to a massive loss of momentum as well as the near bankruptcy of the company. One of the people at Kodak was quoted as saying:

Kodak’s executive staff were simply not prepared to take the necessary risks required in the form of a DRP, “the difference between [Kodak’s] traditional business and digital was so great. The tempo is different. The kind of skills you need are different. Kay [Whitmore, President] and Colby [Chandler] would tell you that they wanted change, but they didn’t want to force pain on the organisation.”

That is exactly what is happening in ICT-policy. And the signals are there, just as they were with Kodak, but the pain of reorganization is doubly difficult to push through in a political organization, where requiring that the electorate feel and share this pain is simply near-impossible. Until the executive/political commitment exists, that is. And yet, this is just a case of (iii) above. The organization does not respond to assertions of ”importance”; it responds to assertions of value, and that also allows rational trade-offs.

It will be interesting to see how this plays out. One theory would be that state capitalist systems may be more resilient and adaptable, because they can make the changes quickly. On the other hand these economies may be even more vested in the old ways of measuring economic impact, and so completely fail to take account of the consumer surplus values and enabling aspects of new technologies. We will see.

The information revolution will reward those who follow the advice of Clausewitz, the relentless military genius, who is said to have remarked acidly: ”Amateurs discuss strategy, professionals discuss organization”.

Solving problems? You should be collecting them.

Problems are beautiful, and they are among the most interesting things you can come across. You should consider each problem you are faced with as if it were a rare and thoughtful gift (failures are like this too; as Karl Jaspers noted, failures are small ciphers sent to you from God). Often we are annoyed when faced with problems and we see them as things to solve and then forget, but I think that it is much more important to collect them and understand what different kinds of problems there are. And the categories just continue to amaze me. When creating a taxonomy for problems, I believe you reveal a lot about yourself as a person. The best interview question I have ever been faced with was ”How many different kinds of problems are there?” – the answer is almost certainly going to reveal a bunch about what is going on in your wetware. A couple of different possible answers help show this:

  1. Solvable and unsolvable. This is a pretty lame answer, admittedly, but it has a certain kind of basic charm. If this is how you think of your problems, you are either a math nerd or simply very, very pragmatic.
  2. Interesting and uninteresting. I like this much better. If we think about problems as interesting or uninteresting we at least acknowledge their inherent value. The problem is that I think the second category is empty. So you may be wrong.
  3. Deductive, inductive or abductive. An old semiotic, Peircean view of problems. This shows that you have an understanding of problems that flows from the structure of the problem rather than its substance.
  4. Legal, economic, mathematical, et cetera. Subject matter problems. This shows that you think of problems as domain dependent. That something is a problem is decided in the larger language game of the domain where the discourse is playing out.
  5. Infinite or finite. Some problems are ever evolving and are not essentially there to be solved; they are more like continuous games that need to be addressed all the time, and then evolve and change. Some problems have solutions that actually make them go away and disappear. This closely mirrors, of course, the categories of infinite and finite games. It shows that you think about problems as games, or at least as ways of engaging the world: we live through our problems. They make us real.
  6. Mine and somebody else’s. An old Douglas Adams joke. Some things in his lovely novels are obscured by Somebody Else’s Problem fields that make them, effectively, invisible. This shows that you think of problems as owned, or as things for which you should be accountable. Very responsible, but also somewhat limited.
  7. Natural and artificial. Some problems are made, others are found. The made problems are problems of human making, and often can be solved by fixing who does what. Found problems are much harder and also likely to remain constant across different teams. A made problem may very well be the consequence of a found problem, by the way. This way of thinking about problems is the natural scientist’s.
  8. Networked problems and stand-alone problems. Some problems occur because of the way a network of different factors interact. Some simply exist on their own. I find that those who make this distinction sometimes think that networked problems are intractable, whereas what can be handled on its own is solvable, or at least that networked problems require concerted (collective) action to solve.
  9. Primary and secondary problems. Some problems are effects and some are causes. Solving for the problems that are not the root problems only fixes so much. Responses along these lines recognize the ever-present risk of post hoc ergo propter hoc in building models of reality.
  10. Out of context problems and context problems. This last category really interests me. OCP was a term launched by sci-fi writer Iain M Banks in his novel Excession. OCPs are problems that you hardly even recognize as problems because they are so far outside of the context you operate in – as opposed to context problems, which you see as problems, recognize and have ways of solving. OCPs are NOT black swans as the Wikipedia entry argues, however. They are something much more interesting, something that challenges your entire context and world-view and is THUS a problem.

Wittgenstein famously noted that a philosophical problem has the form ”I don’t know my way about”. I think re-phrasing problems in that way, finding representations for them, models and analogies, is extremely interesting too. What are your favorite categories of problems? (I have not even mentioned things like Fermi problems, NP-completeness et cetera, so there is much still to be done here. I have started a category on my blog for problems, and will keep an eye open for more of them as we proceed.)

5 management books you probably did not think were management books

So, as a part of the skills I try to develop in my everyday work, I am a manager. I enjoy it tremendously, and am lucky enough to have a team that is simply amazing to work with. But that doesn’t mean that I get to be lazy about it. So I am trying to read management literature. Or I tried. Oh, man. That was a huge mistake. So — I think for the right audience, these books are probably amazing tools and simply wonderful reading, but for me they were more like having Bulgarian substitute coffee poured in my eyes whilst being beaten over the head with a rotten salmon. You get the idea. I quickly realized that I simply needed to read other books as if they were management books, and that worked just fine. The list I have compiled may be helpful for someone else, or not, I really don’t know. But here goes.

  1. Administrative Behaviour by Herbert Simon. This is probably the closest to a management book that I came. And this is a brilliant, brilliant tome. It contains much about management that is simply common sense, but tried, tested and in a language that only a Nobel laureate in economics who happened to invent cognitive science on the side, kill economic man and dabble in artificial intelligence could muster. Simply brilliant.
  2. Nicomachean Ethics by Aristotle. Know what happiness is? Have any idea of what motivates you? How should people behave to be virtuous and why do they do this? No idea? That could prove to be a problem in management. Because it turns out managing is about those people in your company (yes, them!) to a large degree. A robust model of man is a good thing to have, and Aristotle spent quite some time developing that work for you. As a plus you don’t get the contempt for everyone that saturates Plato’s writings (everyone not a philosopher, that is).
  3. Philosophical Investigations by Wittgenstein. So, what does a manager do? One thing a manager does is handle concepts (and, yes, people, but we dealt with them already). Concepts are tricky things. So intensely tricky that they require a bit of analysis from time to time. You should be able to do that. There is no better guide to picking apart the grammar of a concept than Wittgenstein. And he is eminently readable too. As a bonus you get a lifetime’s worth of therapy from philosophical problems, and just may find your way out of the fly bottle.
  4. The Art of Worldly Wisdom by Baltasar Gracián. Nietzsche referred to Gracián as the greatest author of aphorisms ever. That should be reason enough, alone, to read him. But the mix of cynicism, pure wisdom, smiling misanthropy and daring truths is a boon for anyone who wants to be challenged, take advice or merely enjoy the voice of a long-gone student of mankind who saw further and deeper than most. The blessing of an aphorism writer that you often disagree with is rare. And how dull to read a book full of aphorisms that makes you nod and say ”just so!”. <snickering>Oh, that would be management literature, that is right…</snickering>
  5. Labyrinths by Jorge Luis Borges. The manifold nature of reality, memory and of life is a good place to start your inquiries into anything. Borges is an amazing guide and an underestimated writer (even if you take that into account, recursively). For anyone in tech I would add the Cyberiad, by Stanislaw Lem. They serve the same purpose. They challenge assumptions and they build new theories of the world. Practicing that is no mean task, and you can find much worse company than Borges or Lem to do it with. But doing it, in any art form (you may prefer to listen to Scriabin or simply to enjoy paintings of, well, I wouldn’t know, I am sorely in need of more examples of artists that challenge assumptions in visual arts (I always default to Escher and Magritte)), is an important practice for anyone that wants to grow, I firmly believe.

So, I am not far gone on the path of management, but I am intent on traveling further, learning more and developing, so please add your own recommendations in the comments, or simply email me — thanks for the help! And if I have unfairly missed any traditional super books on management, well, I can change my mind. Right? Who knows, you may have found something in being slapped with rotten salmon that I did not. If so — out with it!

Privacy I: Neuro-narratives and neo-privacy

The design of privacy enhancing technologies roughly seems to fall into two categories: negotiation support technologies that allow for social signaling or information restriction technologies that allow more control over specific pieces or flows of information. In both cases, the object of protection is arguably the information itself. But privacy is clearly about more than restricting access to information.

At least theoretically it seems possible to protect or enhance privacy by focusing not on the information, but the use to which it is put. I would argue that we can make a distinction between theoretical privacy frameworks that focus on information, and those that focus on narratives. I will admit that the distinction is somewhat unclear, but bear with me.

If the object of protection is not information about me, but my narrative about myself, we end up with a slightly different set of privacy problems, problems that are much more about data protection in a sense. And some scientific findings seem to indicate that we are indeed hard-wired to understand ourselves and others through narratives. If this is the case, it seems privacy harms should be related to somehow disturbing or destroying those narratives. As stated in a recent blog post at NewScientist.com:

State-of-the-art neuro-imaging and cognitive neuropsychology both uphold the idea that we create our ”selves” through narrative. Based on a half-century’s research on ”split-brain” patients, neuroscientist Michael Gazzaniga argues that the human brain’s left hemisphere is specialised for intelligent behaviour and hypothesis formation. It also possesses the unique capacity to interpret – that is, narrate – behaviours and emotional states initiated by either hemisphere. Not surprisingly, the left hemisphere is also the language hemisphere, with specialised cortical regions for producing, interpreting and understanding speech. It is also the hemisphere that produces narratives.

Gazzaniga also thinks that this left-hemisphere ”interpreter” creates the unified feeling of an autobiographical, personal, unique self. ”The interpreter sustains a running narrative of our actions, emotions, thoughts, and dreams. The interpreter is the glue that keeps our story unified, and creates our sense of being a coherent, rational agent. To our bag of individual instincts it brings theories about our lives. These narratives of our past behaviour seep into our awareness and give us an autobiography,” he writes. The language areas of the left hemisphere are well placed to carry out these tasks. They draw on information in memory (amygdalo-hippocampal circuits, dorsolateral prefrontal cortices) and planning regions (orbitofrontal cortices). As neurologist Jeffrey Saver has shown, damage to these regions disrupts narration in a variety of ways, ranging from unbounded narration, in which a person generates narratives unconstrained by reality, to denarration, the inability to generate any narratives, external or internal.

Combining neurology and narratology seems to be a promising theoretical opening for privacy research.

Here is a possible way to think about privacy, then: privacy infringements are acts that significantly degrade my ability to create, disseminate and uphold my own narrative. That narrative in turn decides the autonomy, control and psychological and economic damage that I suffer. Narratology, the science studying narratives, argues that there is a difference between story and discourse in general: the elements, and the order in which they are retold (or told at all). This difference – known in Russian formalism as fabula and sujet; perhaps, in privacy research, the personal data and the identifying narrative – could fruitfully be studied much more in depth. Maybe it is only relevant to discuss privacy in terms of the sujet, the order of retelling the raw elements of the fabula.

If I want to tell the story that I am a solid upstanding citizen, it will hurt my story if you reveal that I was in fact convicted for heinous crimes a couple of years ago. Your revelation is less problematic if I am trying to tell the story that I have served my time, but am still carrying the guilt of those crimes. We could argue that in one case there is a harm, in the other there is no harm. The narrative I am telling decides, then, the existence and extent of any privacy harm.

But wait: the question then seems to be why I should have an unbounded right to my own narrative. Should I be the only one to decide what stories are told about me? That sounds dangerous and seems to threaten free expression.

The question about what constitutes a privacy harm in a narrative framework, then, needs to be a question about what stories we should be able to tell about ourselves and others. If, indeed, narratives are how we understand ourselves and others, then narratives need to play a much larger role in research about privacy and privacy enhancing technologies.

A corollary to this thought is that technologies that allow us to tell our stories are in fact privacy enhancing, since they reinforce our stories and narratives. Blogs, micro blogs and social networks are narrative tools.

Rather than seeing these tools as threats to privacy we may need to understand them as potentially very powerful privacy enhancing technologies. It all becomes a question of whether the narrative prerogative is allocated in them in a way that is conducive to a balanced telling of your story, or of the story that you identify with.