Can you give an example of situations A, B, C for which your preferences are A > B, B > C, C > A? What would you do if you need to choose between A, B, C?
Sure. I’ll go to the grocery store, find three kinds of tomato sauce, look at A and B and pick B, then B and C and pick C, then C and A and pick A. And I’ll stare at them indecisively until my preferences shift. It’s sort of ridiculous: it can take something like a minute to decide. This is NOT the same as feeling indifferent, in which case I would just pick one and go.
I have similar experiences when choosing between entertainment options, transport, etc. My impression is that this is an experience that many people have.
If you google “intransitive preference” you get a bunch of references; this one cites the original experiments: http://www.stanford.edu/class/symbsys170/Preference.pdf
It seems to me that what you’re describing are not preferences but spur-of-the-moment decisions. A preference should be thought of as in CEV (coherent extrapolated volition): the thing you would prefer if you thought about it long enough, knew enough, were more the person you want to be, etc. The mere fact that you somehow decide between the sauces in the end suggests you’re not describing a preference. Also, I doubt that you have terminal values related to tomato sauce. More likely, your terminal values involve something like “experiencing pleasure”, and your problem here is epistemic rather than “moral”: you’re not sure which sauce would give you more pleasure.
You are using preference to mean something other than I thought you were.
I’m not convinced that the CEV definition of preference is useful. No actual human ever has infinite time or information; we are always deciding under computational and informational limits, and you can’t just define those limits away. And I’m not at all convinced that our preferences would converge even given infinite time. That’s an assumption, not a theorem.
When buying pasta sauce, I have multiple incommensurable values: money, health, and taste. And in general, when you have multiple criteria, there’s no paradox-free way to aggregate them into a single ranking (this is basically Arrow’s impossibility theorem). I suspect that’s the cause of my lack of a preference ordering.
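To make the mechanism concrete, here is a toy sketch (with invented per-criterion rankings, not anyone’s actual sauce preferences): if each criterion ranks the three sauces differently and each pairwise choice goes to a majority of the criteria, you get exactly the A > B, B > C, C > A loop described above.

```python
# Toy Condorcet-style cycle: three criteria "vote" on pairwise choices
# between sauces A, B, and C. The rankings below are invented for
# illustration only.

RANKINGS = {  # best to worst, one ranking per criterion
    "money":  ["A", "B", "C"],  # say A is cheapest
    "health": ["B", "C", "A"],  # say B is healthiest
    "taste":  ["C", "A", "B"],  # say C tastes best
}

def prefers(criterion: str, x: str, y: str) -> bool:
    """True if `criterion` ranks option x above option y."""
    order = RANKINGS[criterion]
    return order.index(x) < order.index(y)

def majority_choice(x: str, y: str) -> str:
    """Decide between x and y by a majority vote of the criteria."""
    votes_for_x = sum(prefers(c, x, y) for c in RANKINGS)
    return x if votes_for_x * 2 > len(RANKINGS) else y

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"{x} vs {y}: pick {majority_choice(x, y)}")
# Prints: A vs B: pick A / B vs C: pick B / C vs A: pick C -- a cycle,
# even though every individual criterion is perfectly transitive.
```

This is the standard Condorcet-cycle construction; the “voters” here are criteria rather than people, which is why Arrow-style aggregation problems can show up inside a single decision-maker.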
Of course. But rationality means your decisions should be as close as possible to the decisions you would make if you had infinite time and information.
Money is not a terminal value for most people. I suspect you want money because of the things it can buy you, not as a value in itself. I think health is also instrumental: we value health because illness is unpleasant, might lead to death, and generally interferes with taking actions to optimize our values. The unpleasant sensations of illness might well be commensurable with the pleasant sensations of taste. For example, you would probably pass up a gourmet meal if eating it meant getting cancer.
However, you cannot know what decisions you would make if you had infinite time and information. You can make guesses based on your ideas of convergence, but that’s about it.
A Bayesian never “knows” anything. She can only compute probabilities and expectation values.
Can she compute probabilities and expectation values with respect to decisions she would make if she had infinite time and information?
I think it should be possible to compute probabilities and expectation values of absolutely anything. However, to put this on a sound mathematical basis we need a theory of logical uncertainty.
On what basis do you think so? And what entity will be doing the computing?
I think so because, conceptually, a Bayesian expectation value is your “best effort” to estimate something. Since you can always make your “best effort”, you can always compute the expectation value. Of course, for this to fully make sense we must take limits on computing resources into account. So we need a theory of probability under limited computing resources, i.e. a theory of logical uncertainty.
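A minimal sketch of what a “best effort under a compute budget” estimate might look like (my own illustration, using ordinary sampling uncertainty as a stand-in; a real theory of logical uncertainty would have to handle uncertainty about deterministic facts, which this does not): the estimator always returns a number, and it also reports how much that number can be trusted given the resources spent.

```python
# Minimal sketch: a "best effort" expectation under a fixed compute budget.
# We estimate E[f(X)] for X ~ Uniform(0, 1) with f(x) = sin(1/x), a function
# with no elementary closed-form integral, using only `budget` samples, and
# report a standard error measuring how far this resource-limited answer
# may sit from the idealized infinite-compute one.

import math
import random
import statistics

def best_effort_expectation(f, budget: int, seed: int = 0):
    rng = random.Random(seed)
    samples = [f(rng.uniform(1e-9, 1.0)) for _ in range(budget)]
    estimate = statistics.fmean(samples)
    stderr = statistics.stdev(samples) / math.sqrt(budget)
    return estimate, stderr

f = lambda x: math.sin(1.0 / x)
for budget in (100, 10_000):
    est, err = best_effort_expectation(f, budget)
    print(f"budget={budget:>6}: estimate={est:+.4f} +/- {err:.4f}")
# An answer is always available; more compute just shrinks the error bar.
```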
Not quite. Conceptually, a Bayesian expectation is your attempt to rationally quantify your beliefs, which may or may not involve best efforts. That requires the beliefs to exist in the first place. I don’t see why it isn’t possible to have no beliefs with regard to some topic.
That’s not very meaningful. You can always output some number, but so what? If you have no information you have no information and your number is going to be bogus.
If you don’t believe that the process of thought asymptotically converges to some point called “truth” (at least approximately), what does it mean to have a correct answer to any question?
Meta-remark: Whoever is downvoting all of my comments in this thread, do you really think I’m not arguing in good faith? Or are you downvoting just because you disagree? If it’s the latter, do you think it’s good practice or you just haven’t given it thought?
There is that thing called reality. Reality determines what constitutes a correct answer to a question (for that subset of questions which actually have “correct” answers).
I see no reason to believe that the process of thought converges at all, never mind asymptotically to ‘some point called “truth”’.
How do you know anything about reality if not through your own thought process?
Through interaction with reality. Are you arguing from a brain-in-a-vat position?
Interaction with reality only gives you raw sensory experiences; by itself it doesn’t allow you to deduce anything. When you compute 12 × 12 and it turns out to be 144, you believe 144 is the one correct answer. Therefore you implicitly assume that tomorrow you won’t somehow realize the answer is 356.
And what does that have to do with knowing anything about reality? Your thought process is not a criterion of whether anything is true.
But it is the only criterion you are able to apply.
Not quite. I can test whether a rock is hard by kicking it.
But this byte-sized back-and-forth doesn’t look particularly useful. I don’t understand where you are coming from—to me it seems that you consider your thought processes primary and reality secondary. Truth, say you, is whatever the thought processes converge to, regardless of reality. That doesn’t make sense to me.
When you kick the rock, all you get is a sensory experience (a quale, if you like). You interpret this experience as a sensation arising from your foot. You assume this sensation is the result of your leg undergoing something called a “collision” with something called a “rock”. You deduce that the rock probably has a property called “hardness”. All of these are deductions you make using your model of reality. This model is generated from memories of previous experiences by a process of thought based on something like Occam’s razor.
OK, and how do we get from that to ‘the process of thought asymptotically converges to some point called “truth”’?
Since the only access to truth we might have is through our own thought, if the latter doesn’t converge to truth (at least approximately) then truth is completely inaccessible.
Why not? Granted, we have access to reality only through mental constructs, so any approximations to “the truth” are our own thoughts; but I don’t see any problem with stating that sometimes these mental constructs adequately reflect reality (= truth) and sometimes they don’t. I don’t see where this whole idea of asymptotic convergence is coming from. There is no guarantee that more thinking will get you closer to the truth; on the other hand, sometimes the truth is right there, easily accessible.
I apologize but this discussion seems to be going nowhere.
Agreed.