Firstly, I don’t really like the wording of the Koan. A more accurate statement of the fundamental problem, I think, is: “What rule could restrict our beliefs to propositions whose truth we can usefully discuss, without excluding any statements on which we would like to base our behavior according to whether or not they are true?” Unfortunately, on some level I do not believe there is a satisfactory answer here. Though it is quite possible that the problem lies with my wanting to base my behavior on the truth of statements whose truth cannot be meaningfully discussed.
To start with, let’s talk about the restriction to statements for which we can meaningfully discuss whether or not they are true. Given the context of the post, this is relatively straightforward. If truth is an agreement between our beliefs and reality, and if reality is the thing that determines our experiences, then it is only meaningful to talk about beliefs being true if there are some sequences of possible experiences that would confirm or falsify the belief. This is perhaps too restrictive a use of “reality,” but beliefs of this kind can certainly be meaningfully discussed.
Unfortunately, I would like to base my actions on beliefs that do not fall into this category. A statement like “the universe will continue to exist after I die” has no direct implications for my lifetime experiences, and thus would be considered meaningless. Fortunately, I have found a general transformation that turns such beliefs into beliefs that often do have meaning. The basic idea is, instead of asking directly about my experiences, to use Solomonoff induction to ask the question indirectly. For example, the question above becomes (roughly) “does the simplest model of my lifetime experiences contain things corresponding to objects existing at times later than anything corresponding to me?” This new statement could be true (as it is with my current set of experiences) or false (if, for example, I expected to die in a big crunch). For every statement I can think of, this rule transforms the statement A into a statement T(A) such that my naive beliefs about A are the same as my beliefs about T(A) (where the latter exist). Furthermore, T(A) seems to remain meaningless in the above sense only in cases where I naively believe A to be meaningless anyway, and thus not useful for determining my behavior. So in some sense, this transformation seems to work really well.
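The transformation T can be sketched concretely, though only as a toy: Solomonoff induction itself is uncomputable, so in the sketch below a small finite hypothesis class stands in for “all programs,” and an explicit length field stands in for description length. Every name, number, and hypothesis here is an illustrative assumption, not a real implementation.

```python
def T(observations, hypotheses, proposition):
    """Toy version of T(A): instead of asking whether A is directly testable
    against experience, ask whether A holds *inside* the simplest model
    that is consistent with the observations.

    hypotheses: list of (length, predicts, holds) tuples, where
      length        -- stand-in for description length (smaller = simpler)
      predicts(obs) -- does the model reproduce the observations?
      holds(prop)   -- is the proposition true within the model?
    """
    consistent = [h for h in hypotheses if h[1](observations)]
    if not consistent:
        return None  # no model fits; T(A) is undefined
    simplest = min(consistent, key=lambda h: h[0])
    return simplest[2](proposition)

# Two toy cosmologies that both fit a lifetime of observations:
obs = ["born", "lived", "observed stars"]
hypotheses = [
    # A universe that keeps running after the observer is gone (simpler here).
    (10, lambda o: True, lambda p: p == "world outlasts me"),
    # A universe that halts at the observer's death (assigned greater length).
    (25, lambda o: True, lambda p: p != "world outlasts me"),
]

print(T(obs, hypotheses, "world outlasts me"))  # True under the simpler model
```

The point of the sketch is only that the question “will the world outlast me?” gets answered by querying the winning model, not by waiting for a confirming experience, which no possible experience of mine could provide.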
Unfortunately, things are still not quite adding up to normality for me. The thing that I actually care about is whether or not people will exist after my death, not whether certain models contain people after my death. Thus even though this hack seems to be consistently giving me the right answers to questions about whether statements are true or meaningful, it does not seem to be doing so for the right reasons.
In case you were exposing a core uncertainty you had (“I want (a) people to exist after me more than I want (b) a model in which people exist after me, but my thinking incorporates (b) instead of (a), and that means my priorities are wrong”) and it’s still troubling you, I’d like to suggest the opposite: if you have a model that predicts what you care about, that’s perfect! Your model (I think) takes your experiences, feeds them into a Bayesian algorithm, and predicts the future; what better way is there to think? I mean, I lack such computing power and honesty... but if an honest computer takes my experiences and says, “Therefore, people exist after me,” then my best possible guess is that people exist after me, and I can improve the chance of that using my model.