Real and unreal news (Notes on attention, fake news and noise #7)

What is the opposite of fake news? Is it real news? What, then, would that mean? It seems important to ask, since our fight against fake news also needs to be a fight _for_ something. But this quickly becomes an uncomfortable discussion, as evidenced by how people attack the question. When we discuss what the opposite of fake news is we often end up defending facts – and we inevitably end up quoting Senator Moynihan, smugly saying that everyone has a right to their own opinions, but not to their own facts. This is naturally right, but it ducks the key question of what a fact is, and whether it can exist on its own.

Let’s offer an alternative view that is more problematic. In this view we argue that facts can only exist in relationship to each other. They are intrinsically connected in a web of knowledge and probability, and this web rests on a set of ontological premises that we call reality. Fake news – we could then argue – can exist only because we have lost our sense of a shared reality.

We hint at this when we speak of “a baseline of facts” or similar phrases (this is how Obama referred to the challenge when recently interviewed by David Letterman), but we stop short of admitting that we are ultimately caught up in a discussion about a fractured reality. Our inability to share a reality creates the cracks, the fissures and fragments in which truth disappears.

This view has more troubling implications, and should immediately lead us to also question the term “fake news”, since the implication is clear – something can only be fake if there exists a shared reality against which we can test it. The reason the term “fake news” is almost universally shunned by experts and people analyzing the issue is exactly this: it is used by different people to attack what they do not like. We see leaders labeling news sources as “fake news” as a way to demarcate against a rendering of the world that they reject. So “fake” comes to mean “wrong”.

Here is a key to the challenge we are facing. If we see this clearly – that what we are struggling with is not fake vs real news, but right vs wrong news – we also realize that there are no good solutions for the general problem of what is happening with our public discourse today. What we can find are narrow solutions for specific, well-described problems (such as actions against deliberately misleading information from parties that deliberately misrepresent themselves), but the general challenge is quite different and much more troubling.

We suffer from a lack of shared reality.

This is interesting from a research standpoint, because it forces us to ask how a society constitutes a reality, and how it loses it. Such an investigation would need to touch on things like reality TV and the commodification of journalism (à la Adorno’s view of music – it seems clear that journalism has lost its liturgy). One would need to dig into how truth has splintered and think hard about how our coherence theories of truth allow for this splintering.

It is worthwhile to pause on that point a little: when we understand the truth of a proposition to be its coherence with a system of other propositions, and not its correspondence with an underlying, ontologically more fundamental level, we open the door to several different truths, as long as we can imagine a set of coherent systems of propositions built on a few basic propositions – the baseline. What we have discovered in the information society is that the natural size of this necessary baseline is much smaller than we thought. The set of propositions we need in order to create alternate realities without seeming entirely insane is much smaller than we may have believed. And the cost of creating an alternate reality is falling as you gain more and more access to information, as well as to the creativity of others engaged in the same enterprise.

There is a risk that we underestimate the collaborative nature of the alternative realities that are crafted around us, the way they are the result of a collective creative effort. Just as we have seen the rise of massive open online courses in education, we have seen the rise of what we could call massive open online conspiracy theories. They are powered by, and partly created in, the same way – with massive open online role-playing games in an interesting middle position. In a sense, the unleashed creativity of our collaborative storytelling is what is fracturing reality – our narrative capacity has exploded over the last decades.

So back to our question. The dichotomy we are looking at here is not one between fake and real news, or right and wrong news (although we do treat it that way sometimes). It is in a sense a difference between real and unreal news, but with a plurality of unrealities that we struggle to tell apart. There is no Archimedean point that allows us to lift the real from the fake, no bedrock foundation, as reality itself has been slowly disassembled over the last couple of decades.

A much more difficult question, then, becomes whether we believe that we want a shared reality, or whether we ever had one. It is a recurring theme in songs, literature and poetry – the shaky nature of our reality, and the courage needed to face it. In the remarkable song “Right Where It Belongs” this is well expressed by Nine Inch Nails (and remarkably rendered in this remix – we remix reality all the time):

See the animal in his cage that you built
Are you sure what side you’re on?
Better not look him too closely in the eye
Are you sure what side of the glass you are on?
See the safety of the life you have built
Everything where it belongs
Feel the hollowness inside of your heart
And it’s all right where it belongs

What if everything around you
Isn’t quite as it seems?
What if all the world you think you know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see?

What if all the world’s inside of your head?
Just creations of your own
Your devils and your gods all the living and the dead
And you really oughta know
You can live in this illusion
You can choose to believe
You keep looking but you can’t find the ones
Are you hiding in the trees?

What if everything around you
Isn’t quite as it seems?
What if all the world you used to know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see?

The central insight here is one that underlies all of our discussions around information, propaganda, disinformation and misinformation, and that is the role of our identity. We exist – as facts – within the realities we dare to accept, and ultimately our flight into alternate realities and shadow worlds is an expression of our relationship to ourselves.

Towards a glass bead game (The Structure of Human Knowledge as Game I)

Hermann Hesse’s glass bead game is an intriguing intellectual thought experiment. He describes it in detail in his eponymous last novel:

“Under the shifting hegemony of now this, now that science or art, the Game of games had developed into a kind of universal language through which the players could express values and set these in relation to one another. Throughout its history the Game was closely allied with music, and usually proceeded according to musical and mathematical rules. One theme, two themes, or three themes were stated, elaborated, varied, and underwent a development quite similar to that of the theme in a Bach fugue or a concerto movement. A Game, for example, might start from a given astronomical configuration, or from the actual theme of a Bach fugue, or from a sentence out of Leibniz or the Upanishads, and from this theme, depending on the intentions and talents of the player, it could either further explore and elaborate the initial motif or else enrich its expressiveness by allusions to kindred concepts. Beginners learned how to establish parallels, by means of the Game’s symbols, between a piece of classical music and the formula for some law of nature. Experts and Masters of the Game freely wove the initial theme into unlimited combinations.”

The idea of the unity of human knowledge, the thin threads that spread across different domains, the ability to connect seemingly disparate intellectual accomplishments — can it work? What does it mean for it to work?

On one level we could say that it is simple – it is a game of analogy, and we only need to feel that there is a valid analogy between two different themes or things to assert them as “moves” in the game. We could say that the proof of the existence of an infinitude of primes is related to Escher’s paintings, and argue that the infinite is present in both. The game – at its absolute lower boundary – is nothing more than an inspiring, collaborative intellectual essay. A game, then, consists of first stating the theme you wish to explore, after which each player makes moves by suggesting knowledge that can be associated by analogy, in sequence, to the theme. This in itself can be quite interesting, I imagine, but it really is a lower boundary. The idea of the glass bead game being a game suggests that there is a way to judge progress in it, to juxtapose one game against another and argue that it is more masterful.

Think about chess – it is possible to argue that one game in a Game (capital G Game being the particular variant of gaming, like chess, go or a board game) is more exciting and valuable than another, is it not? On what basis do we actually do that? Is it the complexity of the game? The beauty of the moves? How unusual it is? The lack of obvious mistakes? Why is a game between Kasparov and Karpov more valuable in some sense than a game between me and a computer (if we ignore, for a moment, the idea that a game between humans would have an intrinsically higher value than one involving computers, something that seems dubious at best)? How do we ascribe value in the domain of games?

The aesthetic answer is only half-satisfying, it seems to me. I feel that there is also a point to be made about complexity, or about the game revealing aspects of the Game that were previously not clearly known. Maybe we could even state a partial answer by saying that any game that is unusual is more valuable than one that closely resembles already played games. Doing this suggests assigning a value to freshness or newness, or simply to variational richness. If we imagine the game space of a Game, we could argue that there is greater value to a game that comes from an unexplored part of that space. This idea – that the difference between a game and the corpus of played games could be a value in itself – is not a bad one, and has actually been suggested as an alternative ground for intellectual property protection in the guise of originality (there always has to be an originality threshold, but this goes beyond that). A piece that is significantly different from the rest (by mining the patterns of the corpus and producing a differential, say) could then be protected for longer, or with broader scope, than one that is just like every other work in the corpus.

So we could ascribe value through originality, through an analysis of the differential between the game and the corpus of played games (something like this seems to be going on in the admiration for AlphaGo’s games in the go community — there is a recognition that they represent an original – almost alien – way of playing go).

But originality only gets you so far in the glass bead game. I am sure no one has argued that Nietzsche’s theory of eternal recurrence can be linked to Joanna Newsom’s song Peach Plum Pear – but the originality of that association almost _lessens_ the value of the move in a glass bead game. There is an originality value function, but it exists within the boundaries of something else: a common recognition of the validity of the move that we are trying to make within the theme we are exploring. So there has to be consistency with the theme as well as originality within that consistency.
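
To make these two proto-rules concrete, here is a minimal sketch – entirely hypothetical – of how one might score a move if themes, moves and the corpus of played games could be embedded as vectors. The embeddings, weights and threshold are all invented assumptions, not a real scoring system:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two concept vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_move(move_vec, theme_vec, corpus_vecs, min_consistency=0.5):
    """Value a move as novelty against the corpus, gated by theme consistency."""
    consistency = cosine(move_vec, theme_vec)
    if consistency < min_consistency:
        return 0.0  # original but off-theme: the Nietzsche / Peach Plum Pear case
    novelty = 1.0 - max(cosine(move_vec, c) for c in corpus_vecs)
    return consistency * novelty

theme = np.array([1.0, 0.0])                            # the theme being explored
corpus = [np.array([0.9, 0.1]), np.array([0.8, 0.3])]   # moves already played
print(score_move(np.array([0.7, 0.7]), theme, corpus))  # on-theme and fresh
print(score_move(np.array([0.0, 1.0]), theme, corpus))  # 0.0: off-theme
```

The point of the sketch is only the structure: originality multiplies into the value, but only after consistency clears a threshold.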

Let’s examine an imaginary example game and see if we can reconstruct some ideas from it. Let us say that the theme is broad: the interplay between black and white in human knowledge. That theme is incredibly broad, but also specific enough to provide the _frame_ that we need in order to start working out possible moves. A valid move might be to associate Rachmaninov’s piece Isle of the Dead with Eisenstein’s principle for the use of color in film (“Hence, the first condition for the use of color in a film is that it must be, first and foremost, a dramatic factor. In this respect color is like music. Music in films is good when it is necessary. Color, too, is good when it is necessary.”). By noting that Rachmaninov wrote his piece after having seen Böcklin’s painting The Isle of the Dead – but only in a black and white reproduction – and adding that he was then disappointed with the color of the original, we could devise the notion of the use of black and white in non-visual arts and science, and then start to look for other examples of art and knowledge that seem to be inspired by or connected to the same binary ideas – testing ideas around two-dimensional Penrose tilings, the I Ching, the piano keys, understanding the relationship to chess and exploring the general architecture and design of other games like go, backgammon and othello… There is a consistency here, and you could argue that the moves are more or less original. The move from go to othello is less original than the move from Isle of the Dead to the I Ching (and then we could go back to other attempts to compose with the I Ching, in a return move to the domain of music, after which we could land with Leibnizian ideas inspired by that same book – it would seem that the binary nature of the I Ching could then be an anchor point in such a game).

It quickly becomes messy. But interesting. So the first two proto-rules of the game seem to be that we need originality within consistency. As we continue to explore possible rules and ideas we will at some point have to look at whether there is an underlying structure that connects them. I would be remiss if I did not also reveal why I am interested in this: I wonder if there is something akin to a deep semiotic network of symbols that could be revealed by expanding machine translation to the domain of human knowledge as a whole. As has been documented, machine learning can now use the deep structure of language to translate between two languages through an “interlingua”. At the heart of the glass bead game lies the deceptively simple idea that there is such an interlingua between all domains of human knowledge – but can that be true?
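
The interlingua intuition can also be sketched in code: if concepts from different domains could be embedded in one shared vector space, an analogical move becomes a nearest-neighbour query. The vectors below are invented for illustration – a real system would have to learn them – but the structure is the point:

```python
import numpy as np

# Invented embeddings of concepts from different domains in one shared space.
embedding = {
    ("music", "Bach fugue"):          np.array([0.90, 0.10, 0.40]),
    ("math", "infinitude of primes"): np.array([0.80, 0.20, 0.50]),
    ("art", "Escher print"):          np.array([0.85, 0.15, 0.45]),
    ("games", "go joseki"):           np.array([0.20, 0.90, 0.10]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def closest_moves(source, k=2):
    """Rank candidate analogies to `source` drawn from other domains."""
    src = embedding[source]
    candidates = [(key, cosine(src, vec))
                  for key, vec in embedding.items()
                  if key[0] != source[0]]  # only cross-domain moves count
    return sorted(candidates, key=lambda kv: -kv[1])[:k]

print(closest_moves(("math", "infinitude of primes")))
```

If no single space can host all domains without distortion, that in itself would be an answer to the question.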

The glass bead game – and the attempt to construct one – is a powerful plaything with which to start exploring that question.

Simone Weil’s principles for automation (Man / Machine VI)

Philosopher and writer Simone Weil laid out a few principles on automation in her fascinating and often difficult book The Need for Roots. Her view was positive, and she noted that among factory workers the happiest ones seemed to be those who worked with machines. She had strict views on the design of these machines, however, and they can be summarized in three general principles.

First, these tools of automation need to be safe. Safety comes first, and should also be weighed when thinking about what to automate first – the idea that automation can be used to protect workers is an obvious, but sometimes neglected one.

Second, the tools of automation need to be general purpose. This is an interesting principle, and one that is not immediately obvious. Weil felt that this was important – when it came to factories – because they could then be repurposed for new social needs and respond to changing social circumstances – most pressingly, and in her time acutely, war.

Third, the machine needs to be designed so that it is used and operated by man. The idea that you would substitute machine for man she found ridiculous for several reasons, not least because we need work to find purpose and meaning, and any design that eliminates us from the process of work would be socially detrimental.

All of Weil’s principles are applicable and up for debate in our time. I think the safety principle is fairly well accepted, but we should note that she speaks of individual safety and not our collective safety. In cases where automation technology could pose a challenge for broader safety concerns, Weil does not provide us with a direct answer. These need not be apocalyptic scenarios at all, but could simply be questions of systemic failures of connected automation technologies, for example. Systemic safety, individual safety and social safety are all interesting dimensions to explore here – are silicon / carbon hybrid models always safer, more robust, more resilient?

The idea of general purpose, easy-to-repurpose tools is something that I think is reflected in how we have seen 3D printing evolve. One idea of 3D printing is exactly this: that we get generic factories that can manufacture anything. But another observation close at hand is that you could imagine Weil’s principle as an argument for general artificial intelligence. Admittedly this is taking it very far, but there is something to it: a general AI & ML model can be broadly and widely taught, and we would avoid narrow guild experts emerging in our industries. That would, in turn, allow for quick learning and evolution as technologies, needs and circumstances change. General purpose technologies for automation would allow us to change and adapt faster to new ideas, challenges and selection pressures – and would serve us well in a quickly changing environment.

The last point is one that we will need to examine closely. Should we consider it a design imperative to design for complementarity rather than substitution? There are strong arguments for this, not least cost arguments. Any analysis of a process that we want to automate will yield a silicon-carbon cost function that gives us the cost of the process as different parts of it are performed by machines and humans. A hypothesis would be that for most processes this equation will see a distribution across the two, and only for very few will we see a cost equation where the human component is zeroed out – not least because human intelligence is produced at extraordinarily low energy cost and with great resilience. There is even a risk mitigation argument here: you could argue that always including a human element, or designing for complementarity, necessarily generates more resilient and robust systems, as the failure paths of AIs and human intelligence look different and are triggered by different kinds of factors. If, for any system, you can allow for different failure triggers and paths, you seem to ensure that the system self-monitors effectively and reduces risk.
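
As a toy illustration of what such a silicon-carbon cost function might look like – with entirely hypothetical tasks and per-task costs – consider:

```python
def process_cost(tasks, allocation, machine_cost, human_cost):
    """Total cost of a process given which tasks go to silicon and which to carbon."""
    return sum(machine_cost[t] if allocation[t] == "machine" else human_cost[t]
               for t in tasks)

def cheapest_allocation(tasks, machine_cost, human_cost):
    """Assign each task to whichever performer is cheaper."""
    return {t: "machine" if machine_cost[t] < human_cost[t] else "human"
            for t in tasks}

# Hypothetical per-task costs in arbitrary units.
tasks = ["sense", "classify", "decide", "explain"]
machine_cost = {"sense": 1.0, "classify": 0.5, "decide": 2.0, "explain": 9.0}
human_cost   = {"sense": 3.0, "classify": 4.0, "decide": 1.5, "explain": 1.0}

alloc = cheapest_allocation(tasks, machine_cost, human_cost)
print(alloc)   # a mix of machine and human tasks, per the hypothesis
print(process_cost(tasks, alloc, machine_cost, human_cost))
```

The hypothesis in the text is that for most real processes the optimum looks like this mix, rather than an allocation where the human column is zeroed out.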

Weil’s focus on automation is also interesting. Today, in many policy discussions, we see the emergence of principles on AI. One could argue that this is technology-centric principle making, that ethical and philosophical principles better suit the use of a technology, and that use-centric principles are more interesting. The use case of automation is admittedly a broad one, but an interesting one on which to test this and see if salient differences emerge. How we choose to think about principles also forces us to think about the way we test them. An interesting exercise is to compare with other technologies that have emerged historically. How would we think about principles on electricity, computation, steam? Or principles on automobiles, telephones and telegraphs? Where do we most effectively place principles to construct normative landscapes that benefit us as a society? Principles for driving, for communicating, for selling electricity (and for using it, certifying devices, and so on – we could actually have a long and interesting discussion about what it would mean to certify different ML models!)?

Finally, it is interesting to think about the function of work from a moral cohesion standpoint. Weil argues that we have no rights but for the duties we assume. Work is a foundational duty that allows us to build those rights, we could add. There is a complicated and interesting argument here that ties rights to duties to human work in societies from a sociological standpoint. The discussions about universal basic income are often conducted in sociological isolation, without thinking about the network of social concepts tied up in work. If there is, as Weil assumes, a connection between our work and duties and the rights a society upholds, on an almost metaphysical level, we need to re-examine our assumptions here – and look carefully at complementarity design as a foundational social design imperative for just societies.

Justice, markets, dance – on computational and biological time (Man / Machine V)

Are there social institutions that work better if they are biologically bounded? What would this even mean? Here is what I am thinking about: what if, say, a market is a great way of discovering knowledge, coordinating prices and solving complex problems – but only if it consists solely of human beings and is conducted at biological speeds? What if, when we add tools and automate these markets, we also lose their balance? What if we end up destroying the equilibrium that makes them optimized social institutions?

While this initially sounds preposterous, the question is worth examining. Consider the opposite hypothesis – that markets work at all speeds, wholly automated and without any human intervention. Why would this be more likely than there being certain limitations on the way the market is conducted?

Is dance still dance if it is performed at ultra-high speed by robots only? Or do we think dance is a biologically bounded institution?

It would be remarkable if we found that there is a series of things that only work in biological time, but break down in computational time. It would force us to re-examine our basic assumptions about automation and computerization, but it would not force us to abandon them.

What we would need to do is more complex. We would have to answer the question of what is to computers as markets are to humans. We would have to build new, revamped institutions that exist in computational time and we would have to understand what the key differences are that apply and need to be integrated into future designs. All in all an intriguing task.

Are there other examples?

What about justice? Is a court system a biologically bounded system? Would we accept a court system that runs in computational time, and delivers an ultra-fast verdict after computing the necessary data sets? A judgment delivered by a machine, rather than a trained jurist? This is not only a question of security – of whether we trust the machine to do what is right. We know for a fact that human judges can be biased, and that even their blood sugar levels can influence decisions. Yet we could argue that this need not reassure us here. We could argue that justice needs to unfold in biological time, because that is how we savour it. That is how it is consumed. The court does not only pass judgment, it allows all of us to see, experience, hear justice being done. We need justice to run in biological time, because we need to absorb it, consume it.

We cannot find any moral nourishment in computational justice.

Justice, markets, dance. Biological vs computational time and patterns. Just another area where we need to sort out the borders and boundaries between man and machine – but where we have not even started yet. The assumption that whatever is done by man can be done better by machine is perhaps not serving us too well here.

A note on the ethics of entropy (Man / Machine IV)

In a comment on Luciano Floridi’s The Ethics of Information, Martin Flament Fultot writes (Philosophy and Computers, Spring 2016, Vol. 15, No. 2):

“Another difficulty for Floridi’s theory of information as constituting the fundamental value comes from the sheer existence of the unilateral arrow of thermodynamic processes. The second law of thermodynamics implies that when there is a potential gradient between two systems, A and B, such that A has a higher level of order, then in time, order will be degraded until A and B are in equilibrium. The typical example is that of heat flowing inevitably from a hotter body (a source) towards a colder body (a sink), thereby dissipating free energy, i.e., reducing the overall amount of order. From the globally encompassing perspective of macroethics, this appears to be problematic since having information on planet Earth comes at the price of degrading the Sun’s own informational state. Moreover, as I will show in the next sections, the increase in Earth’s information entails an ever faster rate of solar informational degradation. The problem for Floridi’s theory of ethics is that this implies that the Earth and all its inhabitants as informational entities are actually doing the work of Evil, defined ontologically as the increase in entropy. The Sun embodies more free energy than the Earth; therefore, it should have more value. Protecting the Sun’s integrity against the entropic action of the Earth should be the norm.”

At the heart of this problem, Fultot argues, is that Floridi defines information as something good, and hence its opposite as something evil – and he takes the opposite of information and structure to be entropy (a move that can be discussed). But there seem to be a lot of different possibilities here, and the overall argument deserves to be examined much more closely, it seems to me.

Let’s ask a very simple question. Is entropy good or evil? And more concretely: do we have a moral duty to act so as to maximize or minimize the production of entropy? This question may seem silly, but it is actually quite interesting. If some of the recent surmises about how organization and life can exist in a universe that tends towards disorganization and heat death are right, the reason life exists – and will be prevalent in the universe – is that there is a hitherto undiscovered law of physics which essentially states that the universe not only evolves towards more entropy, but organizes itself so as to increase the speed with which it does so. Entropy accelerates.

Life appears, because life is the universe’s way of making entropy faster.

As a corollary, technology evolves – presumably everywhere there is life – because technology is a good way to make entropy faster. An artificial intelligence makes entropy much faster than a human being as it becomes able to take on more and more general tasks. Maybe there is even a “law of artificial intelligence and entropy” which states that any superintelligence necessarily produces more entropy than any ordinary intelligence, and that any increase in intelligence means an increase in the production of entropy? That thought deserves to be examined more closely and clarified (I hope to return to this in a later note — the relationship between intelligence and entropy is a fascinating subject).
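
One way to make the link between computation and entropy minimally concrete – an illustration I am adding here, not part of the speculation above – is Landauer’s principle: erasing one bit of information dissipates at least kT ln 2 of energy, so any computing intelligence has an entropy floor. A quick calculation (room temperature is an assumption):

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # assumed operating temperature, kelvin

# Landauer bound: minimum energy dissipated per erased bit of information.
energy_per_bit = k * T * math.log(2)
print(f"{energy_per_bit:.2e} J per erased bit")    # ~2.87e-21 J

# Corresponding entropy handed to the environment per erased bit.
entropy_per_bit = k * math.log(2)
print(f"{entropy_per_bit:.2e} J/K per erased bit")
```

On this bound, more computation means more erased bits, which means more entropy – a faint physical echo of the speculated “law”.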

Back to our simple, indeed simplistic, question. Is entropy good or evil? Do we have a duty to act to minimize it or to maximize it? A lot of different considerations crop up, and the possible theories and ideas are rich and complex. Here are a number of possible answers.

  • Yes, we need to maximize entropy, because that is in line with the nature of the universe and ethics, ultimately, is about acting in such a way that you are true to the nature and laws you obey – and indeed, you are a part of this universe and should work for its completion in heat death. (Prefer acting in accordance with natural laws)
  • No, we should slow down the production to make it possible to observe the universe for as long as possible, and perhaps find an escape from this universe before it succumbs to heat death. (Prefer low entropy states and “individual” consciousness to high entropy states).
  • Yes, because materiality and order are evil and only in heat death do we achieve harmony. (Prefer high entropy states to low).

And so on. The discussion here also leads to another interesting question: whether we can, indeed, have an ethics of anything other than our actions towards another individual in the particular situation and relationship we find ourselves in. A situationist reply here could actually be grounded in the kind of reductio ad absurdum that many would perceive an ethics of entropy to be.

As for technology, the ethical question then becomes this: should we pursue the construction of more and more advanced machines, if that also means that they produce more and more entropy? In environmental ethics the goal is sustainable consumption, but the reality is that from the perspective of an ethics of entropy there are no sustainable solutions – just solutions that slow down the depletion of organization and order. That difference is interesting to contemplate as well.

The relationship between man and machine can also be framed as one between low entropy and high entropy forms of life.

On not knowing (Man / Machine III)

Humans are not great at answering questions with “I don’t know”. They often seek to provide answers even where they know that they do not know. Yet one of the hallmarks of careful thinking is to acknowledge when we do not know something – and when we cannot say anything meaningful about an issue. This Socratic wisdom – knowing that we do not know – becomes a key challenge as we design systems with artificial intelligence components in them.

One way to deal with this is to say that it is actually easier with machines. They can give a numeric statement of their confidence in a clustering of data, for example, so why is this an issue at all? I think this argument misses something important about what it is that we are doing when we say that we do not know. We are not simply stating that a certain question has no answers above a confidence level; we can actually be saying several different things at once.

We can be saying…
…that we believe that the question is wrong, or that the concepts in the question are ill-thought through.
…that we have no data or too little data to form a conclusion, but that we believe more data will solve the problem.
…that there is no reliable data or methods of ascertaining if something is true or not.
…that we have not thought it worthwhile to find out or that we have not been able to find out within the allotted time.
…that we believe this is intrinsically unknowable.
…that this is knowledge we should not seek.

And these are just some examples of what we may be saying when we say “I don’t know”. Stating this simple proposition is essentially a way to force a re-examination of the entire issue to find the roots of our ignorance. Saying that we do not know something is a profound statement of epistemology, and hence a complex judgment – not a statement of confidence or probability.
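
For contrast, here is a minimal sketch of what the machine version of “I don’t know” usually amounts to – a confidence cutoff over a toy classifier (the labels and threshold are invented). It captures only the probabilistic reading, none of the other meanings listed above:

```python
def answer(probabilities, labels, threshold=0.8):
    """Return the most confident label, or abstain below the threshold."""
    best = max(range(len(labels)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return "I don't know"   # a canned cutoff, not an epistemic judgment
    return labels[best]

print(answer([0.55, 0.45], ["cat", "dog"]))   # "I don't know"
print(answer([0.95, 0.05], ["cat", "dog"]))   # "cat"
```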

A friend and colleague suggested, when we discussed this, that it actually makes for a nice version of the Turing test. When a computer answers a question by saying “I don’t know”, and does so embedded in the rich and complex language game of knowledge (as evidenced by it reasoning about its ignorance, I assume), it can be seen as intelligent in a human sense.

This Socratic variation of the Turing test also shows the importance of the pattern of reasoning, since “I don’t know” is the easiest canned answer to code into a conversation engine.

*

There is a special category of problems related to saying “I don’t know” that has to do with search satisfaction, and it raises interesting issues. When do you stop looking? In Jerome Groopman’s excellent book How Doctors Think there is an interesting example involving radiologists. The key challenge for this group of professionals, Groopman notes, is when to stop looking. You scan an x-ray, find pneumonia and … done? What if there is something else? Other anomalies that you need to look for? When do you stop looking?

For a human being that is a question of time limits imposed by biology, organization, workload and cost. The complex nature of the stopping calculation allows for different stopping criteria over time, and you can go on to really think things through when the parameters change. Groopman’s interview with a radiologist is especially interesting given that this is one field we believe can be automated to great benefit. The radiologist notes this looming risk of search satisfaction and essentially suggests using a check schema – tracing out the same examination irrespective of what it is that you are looking for, and then summarizing the results.
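
The check schema is easy to render as code – a sketch with hypothetical regions and a toy detector, just to show the structure of the countermeasure:

```python
# The fixed checklist is the point: every region gets examined, no matter
# what has already been found, instead of stopping at the first finding.
CHECKLIST = ["lungs", "heart", "bones", "soft tissue"]

def read_image(image, detect):
    """Run the full checklist, then summarize – never stop early."""
    return {region: detect(image, region) for region in CHECKLIST}

# Toy detector: flags any region present in the "image" (here just a set).
detect = lambda image, region: region in image
print(read_image({"lungs", "bones"}, detect))
# {'lungs': True, 'heart': False, 'bones': True, 'soft tissue': False}
```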

The radiologist, in this scenario, becomes a general searcher for anomalies that are then classified, rather than a specialized pattern recognition expert who seeks out examples of cancers – and in some cases the radiologist may only be able to identify the anomaly without understanding it. In one of the cases in the book the radiologist finds traces of something he does not understand – weak traces – that then prompt him to order a biopsy, not based on the picture itself, but on the absence of anything on a previous x-ray.

Context, generality, search satisfaction and gestalt analysis are all complex parts of when we know and do not know something. And our reactions to a lack of knowledge are interesting. The next step in not knowing is of course questioning.

A machine that answers “I don’t know” and then follows it up with a question is an interesting scenario — but how does it generate and choose between questions? There seems to be a lot to look at here – and question generation born out of a sense of ignorance is not a small part of intelligence either.

Hannah Arendt on politics and truth – and fake news? (Notes on attention, fake news and noise #6)

Any analysis of fake news would be incomplete without a reading of Hannah Arendt’s magnificent 1967 essay Truth and Politics. Arendt, in this essay, carefully examines the relationship between truth and politics, and makes a few observations that remind us of why the issue of “fake news” is neither new nor uniquely digital. It is but an aspect of the greater challenge of how we reconcile truth and politics.

Arendt not only anchors the entire discussion solidly in a broader context, she reminds us that this is a tension that has been with civilization since Socrates. “Fake news” is nothing other than yet another challenge that meets us in the gap between dialectic and rhetoric, and Socrates would be surprised and dismayed to find us thinking we had discovered a new phenomenon. The issue of truth in politics is one that has always been at the heart of our civilization and our democratic tradition.

Arendt notes this almost brutally at the beginning of her essay:

“No one has ever doubted that truth and politics are on rather bad terms with each other, and no one, as far as I know, has ever counted truthfulness among the political virtues. Lies have always been regarded as necessary and justifiable tools not only of the politician’s and the demagogue’s but also of the statesman’s trade.” (p 223)

It is interesting to think about how we read Arendt here. Today, as politics is under attack and we suffer from a surge of rhetoric and a decline of dialogue, we almost immediately become defensive. We want to say that we should not deride politics, that politics deserves respect, and that we should be careful not to further deepen people’s loss of faith in the political system of democracy — and all of this is both correct and deeply troubling at the same time. It shows us that our faith in the robustness of the system has suffered so many blows that we now shy away from the clear-eyed realization that politics is rhetoric first and dialogue only second (and bad politics never gets to dialogue at all).

Arendt does not mean to insult our democracy; she merely recognizes a philosophical analysis that has remained constant over time. She cites Hobbes’s remark that if power depended on it, the doctrine that the three angles of a triangle equal two angles of a square would have been suppressed and the books of geometry burned. This is what politics is – power – and we should not expect anything else. That is why the education of our politicians is so important, and their character key. Socrates’ sense of urgency when he tries to educate Alcibiades is key, and any reader of the dialogues is aware of the price of Socrates’ failure in what Alcibiades became.

Arendt also makes an interesting point on the difference between what she calls rational truths – the mathematical, the scientific – and factual ones, and points out that the latter are “much more vulnerable” (p 227). And factual truth is the stuff politics is made of, she notes.

“Dominion (to speak Hobbes’ language) when it attacks rational truth oversteps, as it were, its domain while it gives battle on its own ground when it falsifies or lies away facts.” (p 227)

Facts are fair game in politics, and always have been. And Arendt then makes an observation that is key to understanding our challenges, and is worth quoting in full:

“The hallmark of factual truth is that its opposite is neither error nor illusion nor opinion, not one of which reflects upon personal truthfulness, but the deliberate falsehood, or lie. Error, of course, is possible, and even common, with respect to factual truth, in which case this kind of truth is in no way different from scientific or rational truth. But the point is that with respect to facts there exists another alternative, and this alternative, the deliberate falsehood, does not belong to the same species as propositions that, whether right or mistaken, intend no more than to say what is, or how something that is appears to me. A factual statement – Germany invaded Belgium in August 1914 – acquires political implications only by being put in an interpretative context. But the opposite proposition, which Clemenceau, still unacquainted with the art of rewriting history, thought absurd, needs no context to be of political significance. It is clearly an attempt to change the record, and as such it is a form of _action_. The same is true when the liar, lacking the power to make his falsehood stick, does not insist on the gospel truth of his statement but pretends that this is his ‘opinion’ to which he claims his constitutional right. This is frequently done by subversive groups, and in a politically immature public the resulting confusion can be considerable. The blurring of the dividing line between factual truth and opinion belongs among the many forms that lying can assume, all of which are forms of action.
While the liar is a man of action, the truthteller, whether he tells a rational or factual truth, most emphatically is not.” (p 245)

Arendt is offering an analysis of our dilemma in as clear a way as can be. Lying is an action; telling the truth most emphatically is not, and the reduction of a falsehood to an opinion creates considerable confusion, to say the least. The insight that telling the truth is less powerful than lying, less of an action, is potentially devastating – liars have something at stake, and truth tellers sometimes make the mistake of thinking that relaying the truth is in itself enough.

But Arendt also offers a solution and hope — and it is evident even in this rather grim quote: she speaks of a politically immature public, and as she closes the essay she takes great pains to say that these lies, these falsehoods, in no way detract from the value of political action. In fact, she says that politics is a great endeavor, one worthy of our time, effort and commitment – but ultimately we also need to recognize that it is limited by truth. Our respect – as citizens – for truth is what preserves, she says, the integrity of the political realm.

As in the platonic dialogues, as in Hobbes, as everywhere in history – truth is a matter of character. Our own character, honed in dialogue and made resistant to the worst forms of rhetoric. This is not new – and it is not easy, and cannot be solved with a technical fix.

Link: https://idanlandau.files.wordpress.com/2014/12/arendt-truth-and-politics.pdf

Intelligence, life, consciousness, soul (Man / Machine II)

There is another perspective here that we may want to discuss: whether the dichotomy we are examining is a false, or at least less interesting, one. What if we find that both man and machine can belong to a broader class of things that we may want to call “alive”? Rather than asking if something is nature or technology, we may want to just ask if it lives.

The question of what life is and when it began is of course not an easy one, but if we work with simple definitions we may want to agree that something lives if it has a metabolism and the ability to reproduce. That, then, could cover both machines and humans – humans obviously, machines less obviously, but still solidly.

When we discuss artificial intelligence, our focus is on the question of whether something can be said to have human-level intelligence. But what if we were to argue that nothing can be intelligent in the human sense without also being alive? Without suffering under the same limitations and evolutionary pressures as we do?

Does this seem an arbitrary limitation? Perhaps, but it is no less arbitrary than the idea that intelligence is exhibited only through problem solving methods such as playing chess or go.

Can something be, I would ask, intelligent and not alive? In this simple question something fundamental is captured. And if we say yes – would it not then seem awkward to imagine a robot that is intelligent but essentially dead?

This conceptual scheme – life / intelligence – is one that has been afforded far too little attention. Max Tegmark’s brilliant book Life 3.0 is of course an exception, but even there it is simply assumed that life is life, even if it transcends the limitations (material and psychological) of life as we know it. Life is thought to be immanent in intelligence, and the rise of artificial intelligence is equated with the emergence of a new form of life.

But that is not a necessary relationship at all. One does not imply the other. And to make it more difficult, we could also examine the notoriously unclear concept of “consciousness” as a part of the exploration.

Can something be intelligent, dead and conscious? Can something be conscious and not live? Intelligent, but not conscious? The challenge that we face when we analyze our distinction between man and machine in this framework is that we are forced to think about the connection between life and intelligence in a new way, I think.

Man is alive, conscious and intelligent. Can a machine be all three and still be a machine?

We are scratching the surface here of a problem that Wittgenstein formulated much more clearly; in the second part of the Philosophical Investigations he asks if we can see a man as a machine, an automaton. It is a question with some pedigree in philosophy, since Descartes asked the same when he tried out his systematic doubt — looking out through his window he asked if he could doubt that the shapes he saw were fellow humans, and his answer was that indeed, they could be automatons wearing clothes, mechanical men and nothing else.

Wittgenstein notes that this is a strange concept, and that we must agree that we would not call a machine thinking unless we adopted an attitude towards this machine that is essentially an attitude as if towards a soul. Thinking is not a disembodied concept. It is something we say of human beings, and a machine that could think would need to be very much like a man, so much so that we would have an attitude like that towards a soul, perhaps. Here is his observation (Philosophical Investigations part II: iv):

“Suppose I say of a friend: ‘He is not an automaton’. — What information is conveyed by this, and to whom would it be information? To a human being who meets him in ordinary circumstances? What information could it give him? (At the very most that this man always behaves like a human being and not occasionally like a machine.)

‘I believe that he is not an automaton’, just like that, so far makes no sense.

**My attitude towards him is an attitude towards a soul. I am not of the opinion that he has a soul.**” (My bold).

The German makes the point even clearer, I think: “Meine Einstellung zu ihm ist eine Einstellung zur Seele. Ich habe nicht die Meinung, dass er eine Seele hat.” So for completeness we add this to our conceptual scheme – intelligence / life / consciousness / soul – and ask when a machine becomes a man.

As we widen our conceptual net, the questions around artificial intelligence become more interesting. And what Wittgenstein also adds is that for the more complex language game, there are no proper tests. At some point our attitudes change.

Now, the risk here, as Dennett points out, is that this shift comes too fast.

Notes on attention, fake news and noise #5: Are We Victims of Algorithms? On Akrasia and Technology.

Are we victims of algorithms? When we click on click bait and low-quality content – how much of the responsibility for that click is on us, and how much on the provider of the content? The way we answer that question may be connected to an ancient debate in philosophy about akrasia, or weakness of will. Why, philosophy asks, do we do things that are not good for us?

Plato’s Socrates has a rather unforgiving answer: we do those things that are not good for us because we lack knowledge. Knowledge, he argues, is virtue. If we just know what is right we will act in the right way. When we click the low-quality entertainment content and waste our time, it is because we do not know better. Clearly, then, the answer from a platonic standpoint is to ensure that we enlighten each other. We need a version of digital literacy that allows us to separate the wheat from the chaff, that helps us know better.

In fact, arguably, weakness of will did not exist for Socrates (hence why he seems so unforgiving, perhaps) but was merely ignorance. Once you know, you will act right.

Aristotle disagreed; his view was that we may hold opinions that are short-term and wrong, be affected by them, and hence do things that are not good for us. This view, later developed and adumbrated by Davidson, suggests that decisions are often made without the agent considering all the things that may have a bearing on a choice. Davidson’s definition is something like this: if someone has two choices, a and b, and does b knowing that all things considered a would be better than b, that is akrasia (not a quote, but a rendering of Davidson). Akrasia then becomes not considering the full set of facts that should inform the choice.

Having one more beer without considering the previous ones, or having one more cookie without thinking about the plate now being empty.

The kind of akrasia we see in the technological space may be more like that. We trade short-term pleasure against long-term gain. A classic Kahneman / Tversky challenge. How do we govern ourselves?

So, how do we solve that? Can the fight against akrasia be outsourced? Designed into technology? It seems trivially true that it can, and this is exactly what tools like Freedom and StayFocusd try to do (there are many other versions, of course). These apps block sites, or the Internet as a whole, for a set amount of time, and force you back to focus on what you were doing. They eliminate the distraction of the web – but they are not clearly helping you consume high quality content.
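
As a toy sketch of the access-level mechanism (hypothetical code, not how either product is actually built): block listed hosts until a self-imposed timer runs out.

```python
import time

BLOCKED = {"news.example.com", "feed.example.com"}  # a self-chosen blocklist
unblock_at = time.time() + 25 * 60                  # a 25-minute focus window

def allow(host):
    """Access-level akrasia fighting: a discrete, enforceable rule."""
    return host not in BLOCKED or time.time() >= unblock_at

print(allow("news.example.com"))  # False during the focus window
print(allow("docs.example.com"))  # True: unlisted sites pass through
```

Note that the rule is about access, not about what you read once you are through – which is exactly the distinction drawn below.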

That is a distinction worth exploring.

Could we make a distinction here between access and consumption? We can help fight akrasia at the access level, but it’s harder to do at the level of consumption. Like not buying chocolate so there is none in your fridge, versus refraining from eating the chocolate that is in the fridge? It seems easier to do the first – reduce access – than to control consumption. One is a question of availability, the other of governance. A discrete versus a continuous temptation, perhaps.

It seems easy to fight discrete akrasia, but sorting out continuous akrasia seems much harder.

*

Is it desirable to try? Assume that you could download a technology that would only show you high quality content on the web. Would you install it? A splinternet provider that offers “high-quality Internet only – no click bait or distractions”. It would not have to be permanent; you could set hours for distraction, or allocate hours to your kids. Is that an interesting product?

The first question you would ask would probably be why you should trust this particular curator. Why should you allow someone else to determine what is high quality? Well, assume that this challenge can be met by outsourcing it to a crowd, where you self-identify values and ideas of quality and you are matched with others of the same view. Assume also, while we are at it, that you can do this without the resulting filter bubble problem, for now. Would you – even under those assumptions – trust the system?

The second question would be how such a system could cope with a dynamic in which the rate of information production keeps doubling. Collective curation models need to deal with the challenge of marking an item as ok or not ok – but the largest category will be a third: not rated. A bet on collective curation is a bet that the value of the not-yet-curated will always be less than the cost of possible distraction. That is an unclear bet, it seems to me.
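
A minimal sketch of the three-bucket problem, with a hypothetical ratings store – the whole question is what the policy does with the dominant, unrated bucket:

```python
ratings = {"item-1": "ok", "item-2": "not_ok"}  # crowd verdicts so far

def curate(item_id, policy="block_unrated"):
    """Decide whether to show an item given its crowd rating, if any."""
    verdict = ratings.get(item_id, "not_rated")
    if verdict == "ok":
        return "show"
    if verdict == "not_ok":
        return "hide"
    # The bet: is unrated content worth more than the distraction it risks?
    return "hide" if policy == "block_unrated" else "show"

for item in ["item-1", "item-2", "item-3"]:
    print(item, curate(item))   # item-3 falls into the unrated bucket
```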

The third question would be what sensitivity you would have to deviations. In any collectively curated system a certain percentage of the content is still going to be what you consider low quality. How much such content would you tolerate before you ditch the system? How much content made unavailable, but considered high quality by you, would you accept? How sensitive are you to the smoothing effects of the collective curation mechanism, in both exclusion and inclusion? I suspect we are much more sensitive than we allow for.

Any anti-akrasia technology based on curation – even collective curation – would have to deal with those issues, at least. And probably many others.

*

Maybe it is worth also thinking about what it says about our view of human nature if we believe that solutions to akrasia need to be engineered. Are we permanently flawed, or is the fight against akrasia something that actually has side effects in us – character-building effects – that we should embrace?

Building akrasia away is different from developing the self-discipline to keep it in check, is it not?

Any problem that can be rendered as an akrasia problem – and that goes, perhaps, even for issues of fake news and similar content related conundrums – needs to be examined in the light of some of these questions, I suspect.

Man / Machine I: conceptual remarks.

How does man relate to machine? There is a series of questions here that I find fascinating and not a little difficult. I think the relationship between these two concepts is also determinative for a large set of issues that we are debating today, and so we would do well to examine this language game.

There are, of course, many possibilities. Let’s look at a few.

First, there is the worn-out “man is a lesser machine” theme. The idea here is that machine is a perfected man, and that we should be careful about building machines that can replace us. Or that we should strive to become machines ourselves in order to survive. In this language game the machine is perfect, eternal and efficient; man is imperfect, ephemeral and inefficient. The gleaming steel, ultra-rational machine is a better version of biological man. It is curious to me that this is the conceptual picture that seems strongest right now. We worry about machines taking over, machines taking our jobs and machines turning us all into paper clips (or at least Nick Bostrom does). Because we see them as our superiors in every regard.

In many versions of this conceptual landscape evolution is also a sloppy and inefficient process, creating meat machines with many flaws and shortcomings — and machines are the end point. They are evolution mastered: instead of being products of evolution, machines produce it as they see fit. Nature is haphazard and technology is deliberate. Any advantage that biology has over technology is seen as easy to design in, and any notion of man’s uniqueness is quickly quashed by specific examples of the machine’s superiority: chess, jeopardy, go, driving —

The basis of this conceptual landscape is that there are individual things machines do better than man, and the conclusion is that machines must be generally better. A car drives faster than a man can run, a computer calculates faster than a man can count and so: machine is generally superior to man.

That does not, of course, follow with any logical necessity. A dog’s sense of smell is better than man’s, and a dog’s hearing is better than ours. Are dogs superior to man? Hardly anyone would argue that, yet the same argumentative pattern seems to lead us astray when we talk about machines.

There is no wrong or right here, as far as I am concerned – but I think we would do well to entertain a broad set of conceptual schemas when discussing technology and humanity, and so I am wary of any specific frame being mistaken for the truth. Different frames afford us different perspectives, and we should use them all.

The second, then, is that machine is imperfect man. This perspective does not come without its own dangers. The really interesting thing about Frankenstein’s monster is that there is a very real question of how we interpret the monster: as machine or man? As superior or inferior? Clearly superior in strength, the monster is mostly thought to be stupid and intellectually inferior to its creator.

In many ways this is our secret hope. This is the conceptual schema that gives us hope in the Terminator movies: surely the machine can be beaten, it has to have weaknesses that allow us to win over it with something distinctly human, like hope. The machine cannot be perfect, so it has to have a fatal flaw, an imperfection that will allow us to beat it?

The third is that machine is man and man just a machine. This is the La Mettrie view. The idea that there is a distinction between man and machine is simply wrong. We are machines and the question is just how we can be gradually upgraded and improved. There is, in this perspective, a whiff of the first perspective but with an out: we can become better machines, but we will still also be men. Augmentation and transcendence, uploading and cyborgs all inhabit this intellectual scheme.

But here we also have another, less often discussed, possibility. That indeed we are machines, but that we are what machines become when they become more advanced. Here, the old dictum from Arthur C Clarke comes back and we paraphrase: any sufficiently advanced technology is indistinguishable from biology. Biology and technology meld, nature and technology were never distinct or different – technology is just slow and less complex nature. As it becomes more complex, technology becomes alive – but not superior.

Fourth, and rarely explored, we could argue simply that machine and man are as different as man and any tool. There is no convergence, no relationship. A hammer is not a stronger hand. A computer is not a stronger mind. They are different and mixing them up is simply ridiculous. Man is of one category, machine of another and they are incommensurable.

Again: it is not a question of choosing one, but recognizing that they all matter in understanding questions of technology and humanity, I think. More to come.

Notes on attention, fake news and noise #4: Jacques Ellul and the rise of polyphonic propaganda part 1

Jacques Ellul is arguably one of the earliest and most consistent technology critics we have. His texts are due for a revival in a time when technology criticism is in demand, and even techno-optimists like myself would probably welcome that, because even if he is fierce and often caustic, he is interesting and thoughtful. Ellul had a lot to say about technology in books like The Technological Society and The Technological Bluff, but he also discussed the effects of technology on social information and news. In his bleak little work Propaganda: The Formation of Men’s Attitudes (New York, 1965; French original 1962) he examines how propaganda draws on technology, and how the propaganda apparatus shapes views and opinions in a society. There are many salient points in the book, and quotes that are worth debating.

That said, Ellul is not an easy read or an uncontroversial thinker. Here is how he connects propaganda and democracy, arguing that state propaganda is necessary to maintain democracy:

“I have tried to show elsewhere that propaganda has also become a necessity for the internal life of a democracy. Nowadays the State is forced to define an official truth. This is a change of extreme seriousness. Even when the State is not motivated to do this for reasons of actions or prestige, it is led to it when fulfilling its mission of disseminating information.

We have seen how the growth of information inevitably leads to the need for propaganda. This is truer in a democratic system than in any other.

The public will accept news if it is arranged in a comprehensive system, and if it does not speak only to the intelligence but to the ‘heart’. This means, precisely, that the public wants propaganda, and if the State does not wish to leave it to a party, which will provide explanations for everything (i.e. the truth), it must itself make propaganda. Thus, the democratic State, even if it does not want to, becomes a propagandist State because of the need to dispense information. This entails a profound constitutional and ideological transformation. It is, in effect, a State that must proclaim an official, general, and explicit truth. The State can no longer be objective or liberal, but is forced to bring to the overinformed people a corpus intelligentiae.”

Ellul says, in effect, that in a noise society there is always propaganda – the question is merely who is behind it. It is a grim world view, in which a State that relinquishes the responsibility to engage in propaganda simply yields it to someone else.

Ellul comments, partly wryly, that the only way to avoid this is to reduce the working day to four hours and give citizens three to four hours a day to engage in becoming better citizens – a solution he admits is simplistic and unrealistic, since it would also require that citizens “master their passions and egotism”.

The view raised here is useful because it clearly states an assumption that sometimes seems to underlie the debate we are having – that there is a necessity for the State to become an arbiter of truth (or to designate one), or someone else will take that role. The weakness in this view is a weakness that plagues Ellul’s entire analysis, however, and in a sense our problem is worse. Ellul takes, as his object of study, propaganda from the Soviet Union and Nazi Germany. His view of propaganda is largely monophonic: technology pushed information on citizens, and in 1965 it did so unidirectionally. Our challenge is different and perhaps more troubling: we are dealing with polyphonic propaganda. The techniques of propaganda are employed by a multitude of parties, and the net effect is not to produce truth – as Ellul would have it – but to eliminate the conditions for truth. Truth is no longer viable within a set of mutually contradictory propaganda systems; it is reduced to mere feelings and emotions: “I feel this”. “This is my truth”. “This is the way I feel about it”.

In this situation the idea that the state should speak too plays out radically differently: the state, or any state-appointed arbiter of truth, just adds to the polyphony of voices and provides the others with one more voice to enter into polemic with. It fractures the debate even further, and allows for a special category of meta-propaganda that targets the way information is interpreted overall: the idea of a corridor of politically correct views that we have to exist within. Our challenge, however, is not the existence of such a corridor, but the fact that it has become impossible to establish a coherent, shared model of reality, and hence to decide what the facts are.

An epistemological community must rest on a fundamental cognitive contract, an idea about how we arrive at facts and truth. It must contain mechanisms of arbitration that are institutions in themselves, independent of political decision-making and commercial interest. The lack of such a foundation means that no complex social cognition is possible. That in itself is devastating to a society, one could argue, and it is what we need to think about.

It is no surprise that I take issue with Ellul’s assertion that technology is at the heart of the problem, but let me at least outline the argument I think Ellul would have to deal with if he were revising his book for our age. I would argue that in a globalized society, the only way we can establish that basic epistemological foundation is through technology and collaboration within new institutions. I have no doubt that the web could carry such institutions, just as it carries Wikipedia.

There is an interesting observation about the web here, one that sometimes puzzles me. The web is simultaneously the most collaborative environment constructed by mankind and the most adversarial. The web and the Internet would not exist but for the protocol agreements that have emerged as their basis (this is examined commendably in David Post’s excellent book Jefferson’s Moose). At the same time, the web is a constant arms race around different uses of this collaboratively enabled technology.

Spam is not an aberration or anomaly, but can be seen as an instance of a generalized, almost platonic pattern in this space – a pattern that recurs throughout many different domains and has started to climb the semantic layers, from simple commercial scams to the semiosphere of our societies, where memes compete for attention and propagation. And the question is not how to compete best, but how to continue to engage in institutional, collaborative and, yes, technological innovation to build stronger protections and counter-measures. What is to disinformation what spam filters are to unwanted commercial email? It will not be mere spam filters with new keywords; it needs to be something radically new, and most likely institutional in the sense that it requires more than just technology.
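To make the analogy concrete: the classic spam filter is not a keyword list but a statistical learner. Below is a minimal sketch of such a filter – a naive Bayes classifier in Python – offered purely as an illustration of the mechanism the analogy invokes; the function names and toy training data are my own hypothetical additions, not anything from the argument above.

```python
# A minimal naive Bayes "spam filter" sketch (hypothetical toy example).
# It learns per-word statistics from labeled examples and scores new text.
import math
from collections import Counter

def train(examples):
    # examples: list of (text, label) pairs, label in {"spam", "ham"}
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    total = sum(label_counts.values())
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(label_counts[label] / total)  # log prior
        n_words = sum(word_counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing: an unseen word must not zero out a class
            score += math.log((word_counts[label][word] + 1) / (n_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

examples = [
    ("cheap pills buy now", "spam"),
    ("exclusive offer buy cheap", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow at noon", "ham"),
]
wc, lc = train(examples)
print(classify("buy cheap pills now", wc, lc))  # -> "spam"
```

Even the toy version makes the limit of the analogy visible: the filter works only because spam and legitimate mail differ statistically at the surface level, and disinformation arguably offers no such stable signal – which is one way to read the claim that the answer must be institutional rather than a filter with new keywords.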

Ellul’s book provides a fascinating take on propaganda and is required reading for anyone who wants to understand the issues we are working on. More on him soon.

Notes on attention, fake news and noise #3: The Noise Society 10 years later

This February it is 10 years since I defended my doctoral thesis on what I then called the Noise Society. The main idea was that the vision of an orderly, domesticated and controllable information society – modeled on the post-industrial visions of Bell and others – was probably wrongheaded, and that we would instead see a much wilder society characterized by an abundance of information and a lack of control; in fact, we would see information grow to the point where its value collapsed, as the information itself collapsed into noise. Noise, I felt then, was a good description not only of individual disturbances in the signal, but also of the overall cost of signal discovery. A noise society would face very different challenges than an information society.

Copyright in a noise society would not be an instrument of encouraging the production of information so much as a tool for controlling and filtering information in different ways. Privacy would not be about controlling data about us as much as having the ability to consistently project a trusted identity. Free expression would not be about the right to express yourself, but about the right not to be drowned out by others. The design of filters would become key in many different ways.

Looking back now, I feel that I was right in some ways and wrong in many, but that the overall conclusion – that the increase in information, and the consequences of this information wealth, lies at the heart of our challenges with technology – was not far off target. What I miss in the thesis is a better understanding of what information does. My focus on noise was a consequence of accepting that information was a “thing” rather than a process. Information looks like a noun, but it is really a verb.

Revisiting these thoughts, I feel that my greatest mistake was not including Herbert Simon’s analysis of attention as a key concept in understanding information. Had I done that, I would have been able to see that noise, too, is a process; I could have asked what noise does to a society, theorized that, and thought about how to frame arguments of policy in the light of attention scarcity. That would have been a better way to get at what I was trying to understand at the time.

But, luckily, thought is about progress and learning, not about being right – so what I have been doing in my academic reading and writing for at least the last three years is to emphasize Herbert Simon’s work, and the importance of his major finding: that with a wealth of information comes a poverty of attention, and a need to allocate attention efficiently.

I believe this can be generalized, and that the information wealth we are seeing is just one aspect of an increasing complexity in our societies. The generalized Simon theorem is this: with a wealth of complexity comes a poverty of cognition, and a need to learn efficiently. Simon, in his 1969 talk on the subject, notes that it is only by investing in artificial intelligence that we can do this, and that it is obvious to him that the purpose of all our technological endeavours is to ensure that we learn faster.

Learning, adapting to a society where our problems are an order of magnitude more complex, is key to survival for us as a species.
It follows that I think the current focus on digitization and technology as such is a mere distraction. What we should be doing is re-organizing our institutions and societies to learn more, and faster. This is where the theories of Hayek and others on knowledge coordination become helpful and important, and our ideological discussions should focus on whether we are learning as a society or not. There is a wealth of unanswered questions here – how we measure the rate of learning, what the opposite of learning is, how we organize for learning, how technology helps learning and how it harms it – questions we need to dig into and understand at a very basic level, I think.

So, looking back at my dissertation – what do I think?

I think I captured a key way in which we had been wrong, and offered a better model – but the model I was working with then was still fatally flawed. It focused on information as a thing, not a process, and construed noise as gravel in the machinery. The focus on information also detracts from the real use cases, and from the purpose of all the technology we see around us. If we were, for once, to take our ambitions “to make the world a better place” seriously, we would have to think about what it is that makes the world better. What is the process that does that? It is not innovation as such; innovation can go both ways. The process that makes our worlds better – individually and as societies – is learning.

In one sense, I guess, this is just an exercise in conceptual modeling, and the question I seem to be answering is which conceptual model is best suited to understanding and discussing issues of policy in the information society. That is fair, and a kind of criticism I can live with: I believe concepts are crucially important, and before we have clarified what we mean we are unable to move at all. But there is a risk here that I recognize as well: that we get stuck in analysis paralysis. What, then, are the recommendations that flow from this analysis?

The recommendations could be surprisingly concrete for the three policy areas discussed above, and I leave it as an exercise for the reader to think them through. How would you change the data protection frameworks of the world if the key concern was to maximize learning? How would you change intellectual property rights? Free expression? All are interesting to explore and to solve in the light of that one goal. I tend to believe that the regulatory frameworks we would end up with would be very different from the ones we have today.

As one part of my research as an adjunct professor at the Royal Institute of Technology I hope to continue exploring this theme and others. More to come.