Man / Machine I: conceptual remarks.

How does man relate to machine? There is a series of questions here that I find fascinating and not a little difficult. I think the relationship between these two concepts is also determinative for a large set of issues that we are debating today, and so we would do well to examine this language game here.

There are, of course, many possibilities. Let’s look at a few.

First, there is the worn-out “man is a lesser machine” theme. The idea here is that the machine is a perfect man, and that we should be careful about building machines that can replace us. Or that we should ourselves strive to become machines in order to survive. In this language game the machine is perfection, eternity and efficiency; man is imperfection, ephemerality and inefficiency. The gleaming steel and ultra-rational machine is a better version of biological man. It is curious to me that this is the conceptual picture that seems strongest right now. We worry about machines taking over, machines taking our jobs and machines turning us all into paper clips (or at least Nick Bostrom does), because we see them as our superiors in every regard.

In many versions of this conceptual landscape evolution is also a sloppy and inefficient process, creating meat machines with many flaws and shortcomings — and machines are the end point. They are evolution mastered, and instead of being products of evolution, machines produce it as they see fit. Nature is haphazard and technology is deliberate. Any advantage that biology has over technology is seen as easy to design in, and any notion of man’s uniqueness is quickly quashed by specific examples of the machine’s superiority: chess, Jeopardy, Go, driving —

The basis of this conceptual landscape is that there are individual things machines do better than man, and the conclusion is that machines must be generally better. A car drives faster than a man can run, a computer calculates faster than a man can count and so: machine is generally superior to man.

That does not, of course, follow with any logical necessity. A dog’s sense of smell is better than man’s, and a dog’s hearing is better than ours. Are dogs superior to man? Hardly anyone would argue that, yet that same argumentative pattern seems to lead us astray when we talk about machines.

There is no right or wrong here, as far as I am concerned – but I think we would do well to entertain a broad set of conceptual schemas when discussing technology and humanity, and so I am wary of any specific frame being mistaken for the truth. Different frames afford us different perspectives and we should use them all.

The second, then, is that machine is imperfect man. This perspective does not come without its own dangers. The really interesting thing about Frankenstein’s monster is that there is a very real question of how we interpret the monster: as machine or man? As superior or inferior? Clearly superior in strength, the monster is mostly thought to be stupid and intellectually inferior to its creator.
In many ways this is our secret hope. This is the conceptual schema that gives us hope in the Terminator movies: surely the machine can be beaten; it has to have weaknesses that allow us to win over it with something distinctly human, like hope. The machine cannot be perfect, so it has to have a fatal flaw, an imperfection that will allow us to beat it?

The third is that machine is man and man just a machine. This is the La Mettrie view. The idea that there is a distinction between man and machine is simply wrong. We are machines and the question is just how we can be gradually upgraded and improved. There is, in this view, a whiff of the first perspective but with an out: we can become better machines, but we will still also be men. Augmentation and transcendence, uploading and cyborgs all inhabit this intellectual scheme.

But here we also have another, less often discussed, possibility. That indeed we are machines, but that we are what machines become when they become more advanced. Here, the old dictum from Arthur C. Clarke comes back and we paraphrase: any sufficiently advanced technology is indistinguishable from biology. Biology and technology meld, nature and technology were never distinct or different – technology is just slower, less complex nature. As it becomes more complex, technology becomes alive – but not superior.

Fourth, and rarely explored, we could argue simply that machine and man are as different as man and any tool. There is no convergence, no relationship. A hammer is not a stronger hand. A computer is not a stronger mind. They are different and mixing them up is simply ridiculous. Man is of one category, machine of another and they are incommensurable.

Again: it is not a question of choosing one, but of recognizing that they all matter in understanding questions of technology and humanity, I think. More to come.

Notes on attention, fake news and noise #4: Jacques Ellul and the rise of polyphonic propaganda part 1

Jacques Ellul is arguably one of the earliest and most consistent technology critics we have. His texts are due for a revival in a time when technology criticism is in demand, and even techno-optimists like myself would probably welcome that, because even if he is fierce and often caustic, he is interesting and thoughtful. Ellul had a lot to say about technology in books like The Technological Society and The Technological Bluff, but he also discussed the effects of technology on social information and news. In his bleak little work Propaganda: The Formation of Men’s Attitudes (New York, 1965 [1962]) he examines how propaganda draws on technology and how the propaganda apparatus shapes views and opinions in a society. There are many salient points in the book, and quotes that are worth debating.

That said, Ellul is not an easy read or an uncontroversial thinker. Here is how he connects propaganda and democracy, arguing that state propaganda is necessary to maintain democracy:

“I have tried to show elsewhere that propaganda has also become a necessity for the internal life of a democracy. Nowadays the State is forced to define an official truth. This is a change of extreme seriousness. Even when the State is not motivated to do this for reasons of actions or prestige, it is led to it when fulfilling its mission of disseminating information.

We have seen how the growth of information inevitably leads to the need for propaganda. This is truer in a democratic system than in any other.

The public will accept news if it is arranged in a comprehensive system, and if it does not speak only to the intelligence but to the ‘heart’. This means, precisely, that the public wants propaganda, and if the State does not wish to leave it to a party, which will provide explanations for everything (i.e. the truth), it must itself make propaganda. Thus, the democratic State, even if it does not want to, becomes a propagandist State because of the need to dispense information. This entails a profound constitutional and ideological transformation. It is, in effect, a State that must proclaim an official, general, and explicit truth. The State can no longer be objective or liberal, but is forced to bring to the overinformed people a corpus intelligentiae.”

Ellul says, in effect, that in a noise society there is always propaganda – the question is who is behind it. It is a grim world view, one in which a State that gives up the responsibility to engage in propaganda simply cedes it to someone else.

Ellul comments, partly wryly, that the only way to avoid this is to give citizens 3-4 hours a day to engage in becoming better citizens, by reducing the working day to 4 hours. It is a solution he himself seems to agree is simplistic and unrealistic, and it would require that citizens “master their passions and egotism”.

The view raised here is useful because it states clearly something that sometimes seems to underlie the debate we are having – that there is a necessity for the State to become an arbiter of truth (or to designate one), or someone else will take that role. The weakness in this view is a weakness that plagues Ellul’s entire analysis, however, and in a sense our problem is worse. Ellul takes, as his object of study, propaganda from the Soviet Union and Nazi Germany. His view of propaganda is largely monophonic. Yes, technology still pushes information on citizens, but in 1965 it did so unidirectionally. Our challenge is different and perhaps more troubling: we are dealing with polyphonic propaganda. The techniques of propaganda are employed by a multitude of parties, and the net effect is not to produce truth – as Ellul would have it – but to eliminate the conditions for truth. Truth is no longer viable in a set of mutually contradictory propaganda systems; it is reduced to mere feelings and emotions: “I feel this”. “This is my truth”. “This is the way I feel about it”.

In this setting the idea that the state should speak too takes on a radically different character, because the state, or any state-appointed arbiter of truth, just adds to the polyphony of voices and provides them with another voice to enter into polemic with. It fractures the debate even more, and allows for a special category of meta-propaganda that targets the way information is interpreted overall: the idea of a corridor of politically correct views that we have to exist within. Our challenge, however, is not the existence of such a corridor, but the fact that it is impossible to establish a coherent, shared model of reality and hence to decide what the facts are.

An epistemological community must rest on a fundamental cognitive contract, an idea about how we arrive at facts and the truth. It must contain mechanisms of arbitration that are institutions in themselves, independent of political decision making or commercial interest. The lack of such a foundation means that no complex social cognition is possible. That in itself is devastating to a society, one could argue, and it is what we need to think about.

It is no surprise that I take issue with Ellul’s assertion that technology is at the heart of the problem, but let me at least outline the argument I think Ellul would have to deal with if he were revising his book for our age. I would argue that in a globalized society, the only way we can establish that basic epistemological foundation to build on is through technology and collaboration within new institutions. I have no doubt that the web could carry such institutions, just as it carries Wikipedia.

There is an interesting observation about the web here, an observation that sometimes puzzles me. The web is simultaneously the most collaborative environment constructed by mankind and the most adversarial. The web and the Internet would not exist but for the protocol agreements that have emerged as their basis (this is examined and studied commendably in David Post’s excellent book Jefferson’s Moose). At the same time the web is a constant arms race around different uses of this collaboratively enabled technology.

Spam is not an aberration or anomaly, but can be seen as an instance of a generalized, platonic pattern in this space. A pattern that recurs throughout many different domains and has started to climb the semantic layers, from simple commercial scams to the semiosphere of our societies, where memes compete for attention and propagation. And the question is not how to compete best, but how to continue to engage in institutional, collaborative and, yes, technological innovation to build stronger protections and counter-measures. What is to disinformation what spam filters are to unwanted commercial email? It cannot be mere spam filters with new keywords; it needs to be something radically new and most likely institutional, in the sense that it requires more than just technology.

Ellul’s book provides a fascinating take on propaganda and is required reading for anyone who wants to understand the issues we are working on. More on him soon.

Notes on attention, fake news and noise #3: The Noise Society 10 years later

This February it is 10 years since I defended my doctoral thesis on what I then called the Noise Society. The main argument was that the idea of an orderly, domesticated and controllable information society – modeled on the post-industrial visions of Bell and others – was probably wrongheaded, and that we would see a much wilder society characterized by an abundance of information and a lack of control; in fact, we would see information grow to a point where its value collapsed as the information itself collapsed into noise. Noise, I felt then, was a good description not only of individual disturbances in the signal, but also of the overall cost of signal discovery. A noise society would face very different challenges than an information society.

Copyright in a noise society would not be an instrument of encouraging the production of information so much as a tool for controlling and filtering information in different ways. Privacy would not be about controlling data about us as much as having the ability to consistently project a trusted identity. Free expression would not be about the right to express yourself, but about the right not to be drowned out by others. The design of filters would become key in many different ways.

Looking back now, I feel that I was right in some ways and wrong in many, but that the overall conclusion – that the increase in information and the consequences of this information wealth are at the heart of our challenges with technology – was not far off target. What I miss in the thesis is a better understanding of what information does. My focus on noise was a consequence of accepting that information was a “thing” rather than a process. Information looks like a noun, but it is really a verb.

Revisiting these thoughts, I feel that the greatest mistake was not including Herbert Simon’s analysis of attention as a key concept in understanding information. If I had done that I would have been able to see that noise also is a process, and I would have been able to ask what noise does to a society, theorize that and think about how we would be able to frame arguments of policy in the light of attention scarcity. That would have been a better way to get at what I was trying to understand at the time.

But, luckily, thought is about progress and learning, and not about being right – so what I have been doing in my academic reading and writing for the last three years at least is to emphasize Herbert Simon’s work, and the importance of understanding his major finding that with a wealth of information comes a poverty of attention and a need to allocate attention efficiently.

I believe this can be generalized, and that the information wealth we are seeing is just one aspect of an increasing complexity in our societies. The generalized Simon theorem is this: with a wealth of complexity comes a poverty of cognition and a need to learn efficiently. Simon, in his 1969 talk on this subject, notes that it is only by investing in artificial intelligence that we can do this, and he says that it is obvious to him that the purpose of all of our technological endeavours is to ensure that we learn faster.

Learning, adapting to a society where our problems are an order of magnitude more complex, is key to survival for us as a species.
It follows that I think the current focus on digitization and technology is a mere distraction. What we should be doing is re-organizing our institutions and societies for learning more, and faster. This is where the theories of Hayek and others on knowledge coordination become helpful and important for us, and our ideological discussions should focus on whether we are learning as a society or not. There is a wealth of unanswered questions here, such as how we measure the rate of learning, what the opposite of learning is, how we organize for learning, how technology can help and how it harms learning — questions we need to dig into and understand at a very basic level, I think.

So, looking back at my dissertation – what do I think?

I think I captured a key way in which we were wrong, and I captured a better model – but the model I was working with then was still fatally flawed. It focused on information as a thing not a process, and construed noise as gravel in the machinery. The focus on information also detracts from the real use cases and the purpose of all the technology we see around us. If we were, for once, to take our ambitions “to make the world a better place” seriously, we would have to think about what it is that makes the world better. What is the process that does that? It is not innovation as such, innovation can go both ways. The process that makes our worlds better – individually and as societies – is learning.

In one sense I guess this is just an exercise in conceptual modeling, and the question I seem to be answering is what conceptual model is best suited to understand and discuss issues of policy in the information society. That is fair, and a kind of criticism that I can live with: I believe concepts are crucially important and before we have clarified what we mean we are unable to move at all. But there is a risk here that I recognize as well, and that is that we get stuck in analysis-paralysis. What, then, are the recommendations that flow from this analysis?

The recommendations could be surprisingly concrete for the three policy areas we discussed, and I leave it as an exercise for the reader to think about them. How would you change the data protection frameworks of the world if the key concern was to maximize learning? How would you change intellectual property rights? Free expression? All are interesting to explore and to solve in the light of that one goal. I tend to believe that the regulatory frameworks we would end up with would be very different from the ones that we have today.

As one part of my research as an adjunct professor at the Royal Institute of Technology I hope to continue exploring this theme and others. More to come.

Notes on attention, fake news and noise #2: On the non-linear value of speech and freedom of dialogue or attention

It has become more common to denounce the idea that more speech means better democracy. Commentators, technologists and others have come out to say that they were mistaken – that their belief that enabling more people to speak would improve democracy was wrong, or at the very least simplistic. It is worth analyzing what this really means, since it is a reversal of one of the fundamental hopes the information society vision promised.

The hope was this: that technology would democratize speech and that a multitude of voices would disrupt and displace existing, incumbent hierarchies of power. If the printing press meant that access to knowledge exploded in western society, the Internet meant that the production of knowledge, views and opinions now was almost free and frictionless: anyone could become a publisher, a writer, a speaker and an opinion maker.

To a large extent this is what has happened. Anyone who wants to express themselves today can fire up their computer, comment on a social network, write a blogpost or tweet and share their words with whoever is willing to listen – and therein lies the crux. We have, historically, always focused on speech because the scarcity we fought was one of voice: it was hard to speak, to publish, to share your opinion. But the reality is that free speech or free expression just form one point in a relationship – for free speech to be worth anything someone has to listen. Free speech alone is the freedom of monologue, perhaps of the lunatic raving to the wind or the sole voice crying out in the desert. Society is founded upon something more difficult: the right to free dialogue.

You may argue that this is a false and pernicious dichotomy: the dialogue occurs when someone chooses to listen, and no one is, today, restricted from listening to anyone, so why should we care about the listening piece of dialogue? The only part that needs to be safeguarded is, you may say, the right to speak. All else follows.

This is where we may want to dig deeper. If you speak, can everyone listen? Do they want to? Do you have a right to be listened to? Do you have a right to be heard that corresponds to your right to speak? Is there, in fact, a duty to listen that precedes the right to speak?

We enter difficult territory here, but with the increasing volume of noise in our societies this question becomes more salient than ever before. A fair bit of that noise is in fact speech, from parties that use speech to drown out other speech. Propaganda and censorship are difficult in a society characterized by information wealth and abundance, but noise that drowns out speech is readily available: not control, but excess, flooding and silence through shouting others down – those are the threats to our age.

When Zeynep Tufekci analyzes free speech in a recent Wired article, she notes that even if it is a democratic value, it is not the only one. There are other values as well. That is right, but we could also ask if we have understood the value at play here in the right way. Tufekci’s excellent article goes on to note that there is a valuable distinction between attention and speech, and that there is no right to attention. Attention is something that needs to be freely given, and much of her article asks the legitimate question of whether current technologies, platforms and business models allow us to allocate attention freely. We could ask whether what she is saying implies that there is a freedom of attention to be examined here as well.

When someone says that the relationship between free expression and the quality and robustness of a democracy is non-linear, they can be saying many different things. There is a tendency to think that what we need to accept is a balancing of free speech and free expression, and that there are other values that we are neglecting. We could, however, equally say that we have misunderstood the fundamental nature and structure of the value we are trying to protect.

Just because the bottleneck used to be speech, we focused there (and Tufekci makes this point as well). What we really wanted was perhaps free dialogue, built on free speech and the right to allocate one’s attention as one sees fit. Or maybe what we wanted was the freedom to participate in democratic discourse, something that is, again, different.

Why, then, is this distinction important? Perhaps because the assumption of the constancy of the underlying value we are trying to protect, the idea that free speech is well understood and that we should just “balance” it, leads us to solution spaces where we unduly harm the values we would like to protect. By examining alternative legal universes where a right to dialogue, a right to free attention, a right to democratic discourse et cetera could exist, we start from that value rather than give up on it and enter into the language of balancing and restricting.

There is something else here that worries me, and that is that sometimes there is almost a sense that we are but victims of speech, information overload and distraction. That we have no choice, and that this choice needs to be designed, architected and prescribed for us. In its worst forms this assumption derives the need to balance speech from democratic outcomes and people’s choices. It assumes that something must be wrong with free speech because people are making choices we do not agree with, so they must be victims. They do not know what they are doing. This assumption – admittedly exaggerated here – worries me greatly, and highlights another complexity in our set of problems.

How do we know when free speech is not working? What are the indications that the quality of democracy is not increasing with the amount of speech available in a community? It cannot just be that we disagree with the choices made in that democracy, so what could we be looking for? A lack of commitment to democracy itself? A lack of respect for its institutions?
As we explore this further, and examine other possible consistent sets of rights around opinion making, speech, attention, dialogue and democratic discourse we need to start sorting these things out too.

Just how do we know that free speech has become corrosive noise and is eroding our democracy? And how much of that is technology’s fault and how much is our responsibility as citizens? That is no easy question, but it is an important one.

(Picture credit: John W. Schulze CC-attrib)

Data is not like oil – it is much more interesting than that

So, this may seem to be a nitpicking little note, but it is not intended to belittle anyone or even to deny the importance of having a robust and rigorous discussion about data, artificial intelligence and the future. Quite the contrary – this may be one of the most important discussions that we need to engage in over the coming ten years or so. But when we do so our metaphors matter. The images that we convey matter.

Philosopher Ludwig Wittgenstein notes in his works that we are often held hostage by our images, that they govern the way we think. There is nothing strange or surprising about this: we are biological creatures brought up in three-dimensional space, and our cognition did not come from the inside, but it came from the world around us. Our figures of thought are inspired by the world and they carry a lot of unspoken assumptions and conclusions.

There is a simple and classical example here. Imagine that you are discussing the meaning of life, and that you picture the meaning of something as hidden, like a portrait behind a curtain – and that discovering the meaning then naturally means revealing what is behind that curtain and understanding it. Now, the person you are discussing it with instead pictures it as a bucket you need to fill with wonderful things, and for them meaning means having a full bucket. You can learn a lot from each other’s images here. But they represent two very different _models_ of reality. And models matter.

That is why we need to talk about the meme that “data is like oil” or any other scarce resource, like the spice in Dune (with the accompanying cry “he who controls the data…!”). This image is not worthless. It tells us there is value to data, and that data can be extracted from the world around us – so far the image is actually quite balanced. There is value in oil and it is extracted from the world around us.

But the key thing about oil is that there is not a growing amount of it. That is why we discuss “peak oil” and that is why the control over oil/gold/Dune spice is such a key thing for an analysis of power. Oil is scarce, data is not – at least not in the same way (we will come back to this).

Still not sure? Let’s do a little exercise. In the time it has taken you to read to this place in the text, how many new dinosaurs have died and decomposed and been turned into oil? Absolutely, unequivocally zero dinosaurs. Now, ask yourself: was any new data produced in the same time? Yes, tons. And at an accelerating rate as well! Not only is data not scarce, it is not-scarce in an accelerating way.

Ok, so I would say that, wouldn’t I? Working for Google, I want to make data seem innocent and unimportant while we secretly amass a lot of it. Right? Nope. I do not deny that there is power involved in being able to organize data, and neither do I deny the importance of understanding data as a key element of the economy. But I would like for us to try to really understand it and then draw our conclusions.

Here are a few things that I do not know the answers to, and that I think are important components in understanding the role data plays.

When we classify something as data, it needs to be unambiguous, and so needs to be related to some kind of information structure. In the old analysis we worked with a model where we had data, information, knowledge and wisdom – and essentially thought of that model as hierarchically organized. That makes absolutely no sense when you start looking at the heterarchical way in which data, information and knowledge interact (I am leaving wisdom aside, since I am not sure whether that is a correct unit of analysis). So something is data by virtue of actually having a relationship with something else. Data may well not be an _atomic_ concept, but rather a relational concept. Perhaps the basic form of data is the conjunction? The logical analysis of data is still fuzzy to me, and seems important when we live in a noise society – since the very first step we need to undertake is to mine data from the increasing noise around us, and here we may discover another insight. Data may become increasingly scarce since it needs to be filtered from noise, and the cost of that filtering may be growing. That scarcity is quite different from the one where there is only a limited amount of something – and the key to value here is the ability to filter.
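To make that last point a little more concrete, here is a minimal sketch (in Python with numpy; the signal, the noise levels and the precision target are my own illustrative assumptions) of how the cost of filtering a single datum out of noise grows with the noise around it:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 1.0          # the "datum" buried in the noise
precision = 0.05      # how tightly we want to pin it down

for noise_std in (0.5, 1.0, 2.0, 4.0):
    # the standard error of the mean is noise_std / sqrt(n); solve for n
    n_needed = int(np.ceil((noise_std / precision) ** 2))
    samples = signal + noise_std * rng.standard_normal(n_needed)
    estimate = samples.mean()
    print(f"noise std {noise_std:3.1f} -> ~{n_needed:6d} samples needed, "
          f"estimate {estimate:.3f}")
```

The number of observations needed grows with the square of the noise level, which is one simple way of seeing how the cost of filtering, rather than the sheer amount of raw material, can become the scarce thing.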

Much of the value of data lies in its predictive qualities, in the fact that it can be used to predict and analyze in different ways, but that value clearly is not stable over time. So if we think about the value of data, should we then think in terms of a kind of decomposing value that disappears over time? In other words: does data rot? One of the assumptions we frequently make is that more data means better models, but that also seems to be blatantly wrong. As Taleb and others have shown, the number of possible correlations in a data set grows much faster than the number of variables, and an increasing percentage of those correlations are spurious and worthless. That seems to mean that if big data is good, vast data is useless and needs to be reduced to big data again in order to be valuable at all. Are there breaking points here? Certainly there should be from a cost perspective: when the cost C of reducing a vast data set to a big data set is greater than the expected benefit of the big data set, then the insights available are simply not worth the noise filtering required. And what of time? What if the time it takes to reduce a vast data set to a big data set is necessarily such that the data have decomposed and the value is gone? Our assumption that things get better with more data seems to be open to questioning – and this is not great. We had hoped that data would help us solve the problem.
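A minimal sketch of the spurious-correlation point (in Python with numpy; the sample size, threshold and variable counts are my own illustrative choices): every variable below is pure, independent noise, so any strong-looking correlation is spurious by construction, and their number explodes as variables are added.

```python
import numpy as np

rng = np.random.default_rng(42)
n_obs = 100          # observations per variable
threshold = 0.2      # |correlation| above this looks "interesting"

for n_vars in (10, 50, 100, 200):
    # every column is independent noise: any strong correlation is spurious
    data = rng.standard_normal((n_obs, n_vars))
    corr = np.corrcoef(data, rowvar=False)
    # count correlations above threshold in the upper triangle only
    upper = corr[np.triu_indices(n_vars, k=1)]
    spurious = np.sum(np.abs(upper) > threshold)
    print(f"{n_vars:4d} variables -> {upper.size:6d} pairs, "
          f"{spurious:5d} spurious correlations above {threshold}")
```

With the number of observations held fixed, the number of variable pairs grows roughly with the square of the number of variables, and the count of "interesting" but meaningless correlations grows with it.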

AlphaGo Zero seems to manage without human game data as a seed. What is the class of tasks that actually do not benefit from seed data? If that class is large, what else can we say about it? Are key tasks in that set? What characterizes these tasks? And are “data agnostic” tasks evidence that we have vastly overestimated the nature and value of data for artificial intelligence? The standard narrative now is this: “the actor that controls the data will have an advantage in artificial intelligence and then be able to collect more data in a self-reinforcing network effect”. This seems to be nonsense when we look at the data agnostic tasks – how do we understand this?

One image that we could use is to say that models eat data. Humor me. Metabolism as a model is more interesting than we usually allow for. If that is the case we can see another way in which data could be valuable: it may be more or less nutritious – i.e. it may strengthen a model more or less if the data we look at becomes part of its diet. That allows us to ask complicated questions like this: if we compare an ecology in which models get to eat all kinds of data (i.e. an unregulated market) with ecologies in which the diet is restricted (a regulated market), and then we let both these evolved models compete in a diet-restricted ecology – does the model that grew up on an unrestricted diet have an insurmountable evolutionary advantage? Why would anyone be interested in that, you may ask. Well, we are living through this very example right now – with Europe an often soundly regulated market and key alternative markets largely unregulated – and with the very likely outcome that we will see models that grew up on unregulated markets compete, in Europe, with those that grew up in Europe. How will that play out? It is not inconceivable that the diet-restricted ones will win, by the way. That is an empirical question.
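One way to make that thought experiment concrete, and to see why it really is an empirical question, is a toy simulation along these lines (in Python with numpy; the linear setup, the feature counts and the choice of which features count as restricted are all my own illustrative assumptions, not a claim about real markets):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_features = 500, 200, 10
allowed = slice(0, 5)            # the features a regulated "diet" permits

# The world: a simple linear relationship over all ten features.
X = rng.standard_normal((n_train, n_features))
w_true = rng.standard_normal(n_features)
y = X @ w_true + 0.1 * rng.standard_normal(n_train)

def fit(X, y):
    # ordinary least squares via the pseudo-inverse
    return np.linalg.pinv(X) @ y

w_unrestricted = fit(X, y)               # trained on everything
w_restricted = fit(X[:, allowed], y)     # never saw the withheld features

# Both models now compete in a diet-restricted ecology: the withheld
# features are simply not observable at prediction time.
X_test = rng.standard_normal((n_test, n_features))
y_test = X_test @ w_true + 0.1 * rng.standard_normal(n_test)
X_seen = X_test.copy()
X_seen[:, 5:] = 0.0

mse_unrestricted = np.mean((X_seen @ w_unrestricted - y_test) ** 2)
mse_restricted = np.mean((X_test[:, allowed] @ w_restricted - y_test) ** 2)
print(f"unrestricted-diet model MSE: {mse_unrestricted:.3f}")
print(f"restricted-diet model MSE:   {mse_restricted:.3f}")
```

In this toy setup the answer depends entirely on how the withheld features relate to the ones that remain observable, which is exactly the sense in which the question is empirical rather than something the metaphor settles.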

So, finally – a plea. Let’s recognize that we need to move beyond the idea that data is like oil. It limits our necessary and important public debate. It hampers us and does not help us understand this new complex system. And this is a wide open field, where we have more questions than answers right now – and we should not let faulty answers distract us. And yes, I recognize that this may be a fool’s plea; the image of data as oil is so strong and alluring, but I would not be the optimist I am if I did not think we could get to a better understanding of the issues here.

A note on complementarity and substitution

One of the things I hear the most in the many conversations I have on tech and society today is that computers will take jobs or that man will be replaced by machine. It is a reasonable and interesting question, but, I think, ultimately wrong. I tried to collect a few thoughts about that in a small essay here for reference. The question interests me for several reasons – not least because I think that it is partly a design question rather than something driven by technological determinism. This in itself is a belief that could be challenged on a number of fronts, but I think there is a robust defense for it. The idea that technology has to develop in the direction of substitution is simply not true if we look at all existing systems. Granted: when we can automate not just a task but cognition generally this will be challenged, but strong reasons remain to believe that we will not automate fully. So, more of this later. (Image: Robin Zebrowski)

“Is there a xeno-biology of artificial intelligence?” – draft essay

One of the things that fascinate me is the connections we can make between technology and biology in exploring how technology will develop. It is a field that I enjoy exploring, and where I am slowly focusing some of my research work and writing. Here is a small piece on the possibility of a xeno-biology of artificial intelligence. All comments welcome to nicklas.berildlundblad at gmail.com.