Even more generally, training a little classifier that is sensitive to the energy signature of type errors has dissolved most philosophical confusions.
Could you explain that? maybe even, like, attempt the explanation five times with really high human-brain “repetition penalty”? This sounds interesting but I expect to find it difficult to be sure I understood. I also expect a significant chance I already agree but don’t know what you mean, maybe bid 20%.
The ideal version of this would be ‘the little book of type errors’, a training manual similar to Polya’s How to Solve It but for philosophy instead of math.
The example Adam opens the post with is a good one, outlining a seemingly reasonable chain of thoughts and then pointing out the type error. Though, yes, in an ideal world it would be five examples before pointing it out, so that the person has the opportunity to pattern-complete on their own first (much more powerful than just having it explained right away).
In the Sorites paradox, the problem specification conflates the different practical and conceptual uses of language to create an intuition mismatch, similar to the Ship of Theseus. A kind of essentialist confusion that something could ‘really be’ a heap. The trick here is to go through the problem formulation carefully at the sentence level and identify where a referent-reference relation has changed without comment. More generally, our day-to-day use of language elides the practical domain over which a word is matched against expected future physical states of the world. ‘That’s my water bottle’ -> a genie comes along, subatomically separates the water bottle, combines it with another one, and then reconstitutes two water bottles that superficially resemble your old one; which one is yours? ‘Yeah, um, my notion of ownership wasn’t built to handle that kind of nonsense, you sure have pulled a clever trick; doesn’t a being who can do that have better things to do with their time?’
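The Sorites slippage can be caricatured as a literal type error: a graded quantity gets silently cast to a crisp boolean, and the paradox lives in pretending that cast is lossless. A minimal sketch (the function names and the 1000-grain toy scale are my own illustration, not anything from the original discussion):

```python
def heapness(grains: int) -> float:
    """A graded degree of 'heap-ness' in [0, 1] (toy model)."""
    return min(grains / 1000, 1.0)

def is_heap(grains: int) -> bool:
    """The lossy cast: a graded quantity forced into a boolean.
    The 0.5 threshold is arbitrary -- that arbitrariness is the point."""
    return heapness(grains) > 0.5

# Removing one grain never changes heapness much, but somewhere
# along the way it flips is_heap, and the boolean type cannot say
# anything about why that particular grain mattered.
assert is_heap(1000)
assert not is_heap(1)
```

The referent-reference change happens exactly at the cast: the question ‘is it really a heap?’ asks for a fact about `is_heap` while the world only supplies `heapness`.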
Even more upstream, our drive towards this sort of error looks like a spandrel of compression. It would be convenient to say X is Y, or X = Y, and thereby simplify our model by reducing the number of required entities by one. This is often successful and helpful for transfer learning, but we’re also prone to handwaving lossy compression as lossless.