Pursuant to the last note, it is interesting to ask the following question: if human discovery of a game space like the one in go centers on what could be a local maximum, and computers can help us find other maxima and so play in an “alien” way, i.e. a way that is not anchored in human cognition and ultimately, perhaps, in our embodied, biological cognition, should we not expect the same to be true of other bodies of thought?
Let’s say that a “body of thought” is the accumulated games in any specific game space, and suppose we agree that human-anchored bodies of thought seem to be quietly governed by our human nature. Is the same then true of philosophy? Anyone reading a history of philosophy is struck by the way concepts, ideas, arguments and methods of thinking remind you of different games in a vast game space. We don’t even need to deploy Wittgenstein’s notion of language games to see the fruitful application of that analogy across different domains of knowledge.
Can machine learning, then, help us discover “alien” bodies of thought in philosophy? Or does this require that a game space be reducible to a set of formalized rules? If so, imagine a machine programmed to play Hermann Hesse’s glass bead game: how would that work out?
In sum: have we underestimated the limiting effect our nature has on thinking across domains? Is there a real risk that what we hail as human knowledge and achievement is a set of local maxima?