On not knowing (Man / Machine III)

Humans are not great at answering questions with "I don't know". They often seek to provide answers even where they know that they do not know. Yet one of the hallmarks of careful thinking is to acknowledge when we do not know something – and when we cannot say anything meaningful about an issue. This Socratic wisdom – knowing that we do not know – becomes a key challenge as we design systems with artificial intelligence components in them.

One way to deal with this is to say that it is actually easier with machines. They can give a numeric statement of their confidence in a clustering of data, for example, so why is this an issue at all? I think this argument misses something important about what we are doing when we say that we do not know. We are not simply stating that a certain question has no answer above a confidence level; we can actually be saying several different things at once.
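To see how thin that numeric notion of not knowing is, consider a minimal sketch of threshold-based abstention (the labels, probabilities and threshold here are invented for illustration, not anyone's actual system):

```python
# A toy classifier that "does not know" whenever its top confidence
# falls below a fixed threshold. Labels and threshold are illustrative.

def answer(probabilities: dict[str, float], threshold: float = 0.7) -> str:
    """Return the most probable label, or abstain below the threshold."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    return label if confidence >= threshold else "I don't know"

print(answer({"cat": 0.55, "dog": 0.45}))  # I don't know
print(answer({"cat": 0.91, "dog": 0.09}))  # cat
```

The abstention here is mechanical; what we mean when we say the same words is much richer.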

We can be saying…
…that we believe that the question is wrong, or that the concepts in the question are poorly thought through.
…that we have no data or too little data to form a conclusion, but that we believe more data will solve the problem.
…that there are no reliable data or methods for ascertaining whether something is true or not.
…that we have not thought it worthwhile to find out or that we have not been able to find out within the allotted time.
…that we believe this is intrinsically unknowable.
…that this is knowledge we should not seek.

And these are just some examples of what we may be saying when we say "I don't know". Stating this simple proposition is essentially a way to force a re-examination of the entire issue to find the roots of our ignorance. Saying that we do not know something is a profound epistemological statement, and hence a complex judgment – not a statement of confidence or probability.
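One way to make the difference vivid is to imagine a system that had to represent these kinds of ignorance explicitly. A hypothetical sketch – the category names simply encode the list above and are my invention, not a proposal for how such a system would actually work:

```python
# Distinguishing *kinds* of ignorance, rather than reporting low confidence.
# The categories mirror the list above; names are invented for illustration.

from enum import Enum, auto

class Ignorance(Enum):
    ILL_POSED_QUESTION = auto()        # the question or its concepts are confused
    INSUFFICIENT_DATA = auto()         # more data would plausibly settle it
    NO_RELIABLE_METHOD = auto()        # no trustworthy way to ascertain the answer
    NOT_WORTH_OR_OUT_OF_TIME = auto()  # not worth finding out, or time ran out
    INTRINSICALLY_UNKNOWABLE = auto()  # cannot be known at all
    SHOULD_NOT_BE_SOUGHT = auto()      # knowledge we should not seek

def explain(kind: Ignorance) -> str:
    """An "I don't know" that carries the root of the ignorance with it."""
    return f"I don't know – {kind.name.replace('_', ' ').lower()}"

print(explain(Ignorance.INSUFFICIENT_DATA))  # I don't know – insufficient data
```

Choosing the right category is, of course, exactly the complex epistemological judgment that the enum glosses over.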

A friend and colleague, discussing this, suggested that it actually makes for a nice version of the Turing test. When a computer answers a question by saying "I don't know" and does so embedded in the rich and complex language game of knowledge (as evidenced by its reasoning about it, I assume), it can be seen as intelligent in a human sense.
This Socratic variation of the Turing test also shows the importance of the pattern of reasoning, since "I don't know" is the easiest canned answer to code into a conversation engine.
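The canned version really is trivial – which is precisely why it proves nothing about intelligence:

```python
# The easiest "Socratic" conversation engine imaginable: no reasoning,
# no language game, just a canned answer to every question.

def converse(question: str) -> str:
    return "I don't know"
```

What the test demands is not the answer itself, but the reasoning about ignorance that surrounds it.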

*

There is a special category of problems related to saying "I don't know" that have to do with search satisfaction, and they raise interesting issues. When do you stop looking? In Jerome Groopman's excellent book How Doctors Think there is an interesting example involving radiologists. The key challenge for this group of professionals, Groopman notes, is when to stop looking. You scan an x-ray, find pneumonia and … done? What if there is something else? Other anomalies that you need to look for? When do you stop looking?

For a human being that is a question of time limits imposed by biology, organization, workload and cost. Because the calculation behind stopping is so complex, the stopping criteria can differ over time, and you can go on to really think things through when the parameters change. Groopman's interview with a radiologist is especially interesting given that this is one field we believe can be automated to great benefit. The radiologist notes this looming risk of search satisfaction and essentially suggests using a check schema – tracing out the same examination irrespective of what you are looking for, and then summarizing the results.
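As a sketch, the difference between search satisfaction and the check schema is the difference between stopping at the first finding and walking a fixed checklist to the end. The region names and the stand-in detector below are invented for illustration:

```python
# Search satisfaction vs. a check schema, as two search strategies.
# The checklist regions and the detector stub are illustrative only.

CHECKLIST = ["lungs", "heart", "bones", "soft tissue", "devices and lines"]

def find_anomalies(image: dict, region: str) -> list[str]:
    # Stand-in for a per-region detector; here the "image" is just a dict
    # of pre-labelled findings so that the sketch runs as written.
    return image.get(region, [])

def satisfied_search(image: dict) -> list[str]:
    """Stop as soon as something is found – the rest goes unexamined."""
    for region in CHECKLIST:
        findings = find_anomalies(image, region)
        if findings:
            return findings
    return []

def schema_search(image: dict) -> dict[str, list[str]]:
    """Examine every region regardless of earlier findings, then summarize."""
    return {region: find_anomalies(image, region) for region in CHECKLIST}

xray = {"lungs": ["pneumonia"], "bones": ["hairline fracture"]}
print(satisfied_search(xray))  # ['pneumonia'] – the fracture is missed
print(schema_search(xray))     # every region reported, fracture included
```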

The radiologist, in this scenario, performs a general search for anomalies that are then classified, rather than acting as a specialized pattern-recognition expert who seeks out examples of cancers – and in some cases the radiologist may only be able to identify an anomaly without understanding it. In one of the cases in the book, the radiologist finds traces of something he does not understand – weak traces – that prompt him to order a biopsy, based not on the picture itself but on the absence of anything on a previous x-ray.

Context, generality, search satisfaction and gestalt analysis are all complex parts of when we know and do not know something. And our reactions to a lack of knowledge are interesting: the next step in not knowing is, of course, questioning.

A machine that answers "I don't know" and then follows it up with a question is an interesting scenario – but how does it generate and choose between questions? There seems to be a lot to look at here – and question generation born out of a sense of ignorance is no small part of intelligence either.
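There are narrow, existing analogues to draw on. In active learning, for example, a system chooses its next question by expected information gain. A toy twenty-questions version of that idea (all hypotheses and questions here are invented):

```python
# Choosing the next question by how much it is expected to reduce our
# uncertainty over the remaining hypotheses. A toy twenty-questions sketch.

from math import log2

hypotheses = {"cat", "dog", "sparrow", "goldfish"}
questions = {
    "does it fly?":    {"sparrow"},
    "does it swim?":   {"goldfish"},
    "is it a mammal?": {"cat", "dog"},
}

def expected_entropy(yes_set: set, remaining: set) -> float:
    """Expected entropy over the hypotheses after a yes/no answer."""
    yes = yes_set & remaining
    no = remaining - yes
    total = len(remaining)
    def h(n: int) -> float:  # entropy of a uniform choice among n hypotheses
        return log2(n) if n > 0 else 0.0
    return (len(yes) / total) * h(len(yes)) + (len(no) / total) * h(len(no))

best = min(questions, key=lambda q: expected_entropy(questions[q], hypotheses))
print(best)  # is it a mammal? – the split that leaves the least uncertainty
```

But this is question selection from a fixed menu; generating the questions in the first place, out of a felt sense of ignorance, remains the harder and more interesting problem.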