The free will to make slightly worse choices (Man / Machine XI)

In his chapter on intellectronics, his word for what most closely resembles artificial intelligence, Stanislaw Lem suggests an insidious way in which the machine could take over. It would not be, he says, because it wants to terrorize us, but more likely because it will try to be helpful. Lem develops the idea of the control problem, and of the optimization problem, decades before they were rediscovered by Nick Bostrom and others, and he runs through the many different ways in which a benevolent machine might manipulate us simply in order to get better results for us.

This, however, is not the worst scenario. At the very end of the chapter, Lem suggests something much more interesting and – frankly – hilarious. He says that another, more credible, version of the machines taking over would look like this: we develop machines that are simply better at making decisions for us than we would be at making those very same decisions ourselves.

A simple example: your personal assistant can help you book travel, and knowing your preferences, and being able to weigh them against those of the rest of the family, the assistant has always booked top-notch vacations for you. Now you crave your personal freedom, so you book it yourself, and naturally – since you lack the combinatorial intelligence of an AI – the result is worse. You did not enjoy it as much, and the restaurants were not as spot on as they usually are. The bookstores you found were either closed or not very interesting, and of the three museums you went to, only one really captured the whole family's interest.

But you made your own decision. You exercised your free will. Yet what happens, says Lem, when that free will is nothing but the free will to make decisions that are always slightly worse than the ones the machine would have made for you? When your autonomy always comes at the cost of less pleasure? That – surmises Lem – would be a tyranny as insidious as any control environment or Orwellian surveillance state.

A truly intriguing thought, is it not?

*

As we examine it more closely we may want to raise objections: we could say that making our own decisions, exercising our autonomy, in fact always means that we enjoy ourselves a little bit more, and that there is utility in the choice itself – so we will never end up with a benevolent dictator machine. But does that ring true? Is it not rather the case that a lot of people feel there is real utility in not having to choose at all, as long as they feel they could have made a choice? Have we not seen sociological studies arguing that we live in a society that imposes so many choices on us that we all feel stressed by the sheer plethora of alternatives?

What if the machine could tell you which breakfast cereal, out of the many hundreds on the supermarket shelf, will taste best to you and at the same time be healthy? Would it not be great not to have to choose?

Or is there a value in self-sabotage that we are neglecting to take into account here? That thought – that there is value in making worse choices, not because we exercise our will, but because we do not like ourselves and are happy to be unhappy – well, it seems a little stretched. For sure, there are people like this, but as a general rule I don't find that argument credible.

Well, we could say, our preferences change so much that it is impossible for a machine to know what we will want tomorrow – so the risk is purely fictional. I am not so sure that is true. I would suggest we are much more patterned than we like to believe. We live, as Dr Ford in Westworld notes, in our little loops – just like his hosts. We are probably much more predictable than we would like to admit, in a large set of cases, although not all. It is unlikely, admittedly, that a machine would be better at making life choices around love, work and career – these are choices in which it is hard to establish a pattern (in fact, we arguably only establish those patterns in retrospect, when we tell ourselves autobiographical stories about our lives).

There is also the possibility that the misses would be so unpleasant that the hits would not matter. This is an interesting argument, and I think there is something to it. If you knew that your favorite candy tasted fantastic 9 times out of 10 and tasted like garbage every tenth time, without any way of predicting when that would be, would you still eat it? Where would you draw the line? Every second piece of candy? 99 out of 100? There is such a thing as disappointment cost, and if the machine is right on the money in 999 out of 1,000 cases – is the miss such that we would stop using it, or prefer our own slightly worse choices? In the end – probably not.

*

The free will to make slightly worse choices. That is one way in which our definition of humanity could change fundamentally in a society with thinking machines.