Notes on attention, fake news and noise #5: Are We Victims of Algorithms? On Akrasia and Technology.

Are we victims of algorithms? When we click on clickbait and low-quality content – how much of the responsibility for that click is ours, and how much lies with the provider of the content? The way we answer that question may be connected to an ancient debate in philosophy about akrasia, or weakness of will. Why, philosophy asks, do we do things that are not good for us?

Plato’s Socrates has a rather unforgiving answer: we do those things that are not good for us because we lack knowledge. Knowledge, he argues, is virtue. If we just know what is right, we will act in the right way. When we click the low-quality entertainment content and waste our time, it is because we do not know better. Clearly, then, the answer from a Platonic standpoint is to ensure that we enlighten each other. We need a version of digital literacy that allows us to separate the wheat from the chaff, that helps us know better.

In fact, arguably, weakness of will did not exist for Socrates (hence, perhaps, why he seems so unforgiving) but was merely ignorance. Once you know, you will act right.

Aristotle disagreed: in his view we may hold opinions that are short-term and wrong, be affected by them, and hence do things that are not good for us. This view, later developed and refined by Davidson, suggests that decisions are often made without the agent considering all the things that may have a bearing on a choice. Davidson’s definition is something like: “If someone faces two choices, a and b, and does b while knowing that, all things considered, a would be better than b, that is akrasia” (not a quote, but a rendering of Davidson). Akrasia then becomes not considering the full set of facts that should inform the choice.

Having one more beer without considering the previous ones, or having one more cookie without thinking about the plate now being empty.

The kind of akrasia we see in the technological space may be more like that: short-term pleasure weighed against long-term gain. A classic Kahneman and Tversky challenge. How do we govern ourselves?

So, how do we solve that? Can the fight against akrasia be outsourced? Designed into technology? It seems trivially true that it can, and this is exactly what tools like Freedom and StayFocusd actually try to do (there are many other versions, of course). These apps block sites, or the Internet as a whole, for a set amount of time, and force you back to focus on what you were doing. They eliminate the distraction of the web – but they are not clearly helping you consume high-quality content.

That is a distinction worth exploring.

Could we make a distinction here between access and consumption? We can help fight akrasia at the access level, but it is harder to do at the level of consumption. Like not buying chocolate so that there is none in your fridge, versus simply refraining from eating the chocolate that is in the fridge? It seems easier to do the first – reduce access – than to control consumption. One is a question of availability, the other of governance. A discrete versus a continuous temptation, perhaps.

It seems easy to fight discrete akrasia, but sorting out continuous akrasia seems much harder.

*

Is it desirable to try? Assume that you could download a technology that would only show you high-quality content on the web. Would you then install it? A splinternet provider that offers “quality Internet only – no clickbait or distractions”. It would not have to be permanent; you could set hours for distraction, or allocate hours to your kids. Is that an interesting product?

The first question you would probably ask is why you should trust this particular curator. Why should you allow someone else to determine what is high quality? Well, assume that this challenge can be met by outsourcing it to a crowd, where you identify your own values and ideas of quality and are matched with others who share them. Assume also, while we are at it, that you can do this without the resulting filter-bubble problem, for now. Would you – even under those assumptions – trust the system?

The second question would be how such a system can cope with a dynamic in which the rate of information production keeps doubling. Collective curation models need to deal with the challenge of marking an item as OK or not OK – but the largest category will be a third one: not rated. A bet on collective curation is a bet that the value of the unrated content you miss will always be less than the cost of the distraction you avoid. That is an unclear bet, it seems to me.

The third question would be what sensitivity you would have to deviations. In any collectively curated system, a certain percentage of the content is still going to be what you consider low quality. How much such content would you tolerate before you ditch the system? And how much content that you consider high quality, but that the system makes unavailable, would you accept losing? How sensitive are you to the smoothing effects of the collective curation mechanism, both in exclusion and inclusion? I suspect we are much more sensitive than we allow for.

Any anti-akrasia technology based on curation – even collective curation – would have to deal with those issues, at least. And probably many others.

*

Maybe it is also worth thinking about what it says about our view of human nature if we believe that solutions to akrasia need to be engineered. Are we permanently flawed, or is the fight against akrasia something that actually has beneficial side effects in us – character-building effects – that we should embrace?

Building akrasia away is different from developing the self-discipline to keep it in check, is it not?

Any problem that can be rendered as an akrasia problem – and that goes, perhaps, even for issues of fake news and similar content-related conundrums – needs to be examined in the light of some of these questions, I suspect.