Can someone please post a link to a paper on mathematics, philosophy, anything, that explains why there’s this huge disconnect between “one-off choices” and “choices over repeated trials”? Lee?
Here’s the way across the philosophical “chasm”: write down the utility of each possible outcome of your action, use probability to compute the expected utility, and do this for every action available to you. Notice that if your preferences are incoherent, then after a while your expected utility is lower than it would be if they were coherent.
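As a minimal sketch of that procedure (the action names, probabilities, and utility numbers below are invented purely for illustration):

```python
# Expected utility: for each action, sum P(outcome) * U(outcome) over its
# possible outcomes, then prefer the action with the highest total.
actions = {
    "action_1": [(1.0, 10.0)],               # one certain outcome
    "action_2": [(0.5, 25.0), (0.5, -2.0)],  # a gamble over two outcomes
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities over an action's outcomes."""
    return sum(p * u for p, u in outcomes)

for name, outcomes in actions.items():
    print(name, "->", expected_utility(outcomes))  # 10.0 and 11.5

best = max(actions, key=lambda name: expected_utility(actions[name]))
print("preferred:", best)  # action_2: 0.5*25 + 0.5*(-2) = 11.5 > 10.0
```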
You might have a point if there existed a preference effector with incoherent preferences that could only ever effect one preference. I haven’t thought much about that case. But since your incoherent preferences will show up in lots of decisions, I don’t care whether this specific decision will be “repeated” (note: none are ever really repeated exactly) or not. The point is that you’ll just keep losing those pennies every time you make a decision.
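To make the penny-losing concrete, here is a toy money-pump sketch, assuming an agent with the cyclic preferences A > B > C > A who pays a penny for each trade it sees as an upgrade; the names and numbers are illustrative:

```python
# Money pump: an agent with cyclic preferences A > B > C > A pays a penny
# for each trade it regards as an upgrade, and a trader can walk it around
# the cycle forever. Maps current item -> the item the agent prefers to it.
prefers = {"B": "A", "C": "B", "A": "C"}

holding, pennies = "A", 100
for _ in range(30):             # the trader offers 30 "upgrades"
    holding = prefers[holding]  # agent happily trades up...
    pennies -= 1                # ...paying a penny each time
print(holding, pennies)         # "A", 70: back where it started, 30 pennies down
```

Three trades return the agent to its starting item, a penny per trade poorer; coherent (transitive) preferences admit no such cycle.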
Consider the standard example:
1. Save 400 lives, with certainty.
2. Save 500 lives, with 90% probability; save no lives, with 10% probability.
What are the outcomes? U(400 alive, 100 dead, I chose choice 1) = A, U(500 alive, 0 dead, I chose choice 2) = B, and U(0 alive, 500 dead, I chose choice 2) = C.
Remember that probability is a measure of what we don’t know: the plausibility that a given situation is (or will be) the case. If 1.0*A > 0.9*B + 0.1*C, then I prefer choice 1; otherwise, choice 2. Can you tell me what’s left out here, or thrown in that shouldn’t be? Which part of this do you disagree with?
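Plugging in one possible assignment of utilities, purely as an assumption for illustration (the argument itself only needs the inequality test, not these particular numbers):

```python
# One illustrative assignment of utilities to the three outcomes above;
# these numbers are assumptions, not part of the argument.
A = 400.0   # U(400 alive, 100 dead, I chose choice 1)
B = 500.0   # U(500 alive, 0 dead, I chose choice 2)
C = 0.0     # U(0 alive, 500 dead, I chose choice 2)

eu_choice_1 = 1.0 * A
eu_choice_2 = 0.9 * B + 0.1 * C

print("choice 1:", eu_choice_1)  # 400.0
print("choice 2:", eu_choice_2)  # 450.0
print("prefer:", "choice 1" if eu_choice_1 > eu_choice_2 else "choice 2")
```

With utility linear in lives saved, choice 2 comes out ahead (450 vs. 400); a reader who still prefers choice 1 is implicitly asserting A > 0.9*B + 0.1*C, i.e. a utility function that is not linear in lives.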
http://en.wikipedia.org/wiki/Prisoner%27s_dilemma#The_iterated_prisoners.27_dilemma
(just an example of such a disconnect, not a general defence of disconnects)
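For concreteness, here is a minimal sketch of that disconnect, assuming the textbook prisoner’s dilemma payoffs (T=5 > R=3 > P=1 > S=0) and two stock strategies; none of this is specific to the linked article, it is just illustration:

```python
# Textbook prisoner's dilemma payoffs for (my move, their move):
# T=5 > R=3 > P=1 > S=0. In one round, "D" dominates; over repeated
# rounds, a reciprocator does far better than a constant defector.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat1, strat2, rounds):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)  # each sees the other's history
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print("1 round, TFT vs defector: ", play(tit_for_tat, always_defect, 1))    # (0, 5)
print("100 rounds, TFT vs TFT:   ", play(tit_for_tat, tit_for_tat, 100))    # (300, 300)
print("100 rounds, TFT vs defect:", play(tit_for_tat, always_defect, 100))  # (99, 104)
```

One-shot, defection strictly wins; iterated against reciprocators, mutual cooperation (300 per player) dwarfs what a defector can extract (104).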