The problem with that in human practice is that it leads to people defending their ruined plans, saying, “But my expected performance was great!”
It’s true that people make this kind of response, but that doesn’t make it valid, or mean that we have to throw away the notion of rationality as maximizing expected performance, rather than actual performance.
In the case of failed trading companies, can’t we just say that despite their fantasies, their expected performance shouldn’t have been as great as they thought? And the fact that their actual results differed from their expected results should cast suspicion on their expectations.
Perhaps we can say that expectations about performance must be epistemically rational, and only then can an agent who maximizes their expected performance be instrumentally rational.
Achieving a win is much harder than achieving an expectation of winning (i.e. something that it seems you could defend as a good try).
Some expectations win. Some expectations lose. Yet not all expectations are created equal. Non-accidental winning starts with something that seems good to try (can accidental winning be rational?). At least, there is some link between expectations and rationality, such that we can call some expectations more rational than others, regardless of whether they actually win or lose.
An example SoullessAutomaton made was that we shouldn’t consider lottery winners rational, even though they won, because they should not have expected to. Conversely, all sorts of inductive expectations can be rational, even though sometimes they will fail due to the problem of induction. For instance, it’s rational to expect that the sun will rise tomorrow. If Omega decides to blow up the sun, my expectation will still have been rational, even though I turned out to be wrong.
Yet not all expectations are created equal. Non-accidental winning starts with something that seems good to try (can accidental winning be rational?).
In the real world, of course, most things are some mixture of controllable and randomized. Depending on your definition of accidental, it can be rational to take low-cost steps to position yourself to take advantage of possible events you have no control over. I wouldn’t call this accidental, however, because the average expected gain should be net positive, even if one expects (i.e., with confidence greater than 50%) to lose.
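The point that a positive expected value is compatible with expecting to lose can be shown with a quick sketch. The numbers here are hypothetical, purely for illustration:

```python
# Hypothetical bet: 30% chance to win $10, 70% chance to lose $2.
p_win, gain = 0.30, 10.0
p_lose, loss = 0.70, 2.0

expected_value = p_win * gain - p_lose * loss  # 3.0 - 1.4 = 1.6
print(expected_value)
```

The expected value is positive ($1.60 per play), even though on any single play you are more likely than not (70%) to lose. Repeated plays of this bet are a non-accidental win in the sense used above.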
I used the lottery as an example because it’s generally a clear-cut case where the expected gain minus the cost of participating is net negative and the controllable factor (how many tickets you buy) has extremely small impact.
Yes, and I liked your example for exactly this reason: the expected value of buying lottery tickets is negative.
I think that this shows that it is irrational to take an action where it’s clear-cut that the expected value is negative, even though due to chance, one iteration of that action might produce a positive result. You are using accidental the same way I am: winning from an action with a negative expected value is what I would call accidental, and winning with a positive expected value is non-accidental.
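The lottery case can be made concrete with the same kind of arithmetic. The ticket price, jackpot, and odds below are made-up round numbers, not any real lottery's figures:

```python
# Hypothetical lottery: $2 ticket, $1,000,000 jackpot, 1-in-10,000,000 odds.
ticket_cost = 2.00
jackpot = 1_000_000.0
p_win = 1 / 10_000_000

ev_per_ticket = p_win * jackpot - ticket_cost  # 0.10 - 2.00 = -1.90
print(ev_per_ticket)

# Buying n tickets just scales the loss linearly: the controllable
# factor changes the magnitude, not the sign, of the expectation.
ev_ten_tickets = 10 * ev_per_ticket  # -19.0
```

Any single ticket might win, but winning from this clearly negative expectation is what is being called accidental here.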
Things are a bit more complicated when we don’t know the expected value of an action. For example, in Eliezer’s examples of failed trading companies, we don’t know the correct expected value of their trading strategies, or whether they were positive or negative.
In cases where the expected value of an action is unknown, perhaps the instrumental rationality of the action is contingent on the epistemic rationality of our estimation of its expected value.
I like your definition of an accidental win; it matches my intuitive definition and is stated more clearly than I would have managed.
In cases where the expected value of an action is unknown, perhaps the instrumental rationality of the action is contingent on the epistemic rationality of our estimation of its expected value.
Yes. Actually, I think the “In cases where the expected value of an action is unknown” clause is likely unnecessary, because the accuracy of an expected value calculation is always at least slightly uncertain.
Furthermore, a second-order calculation of the expected value of expending resources to increase epistemic rationality should be possible. If acting on a proposition is irrational due to low certainty, and the second-order value of increasing that certainty is itself negative, the rational thing to do is shrug and move on.
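That second-order calculation can be sketched as a toy value-of-information computation. All the numbers are invented for illustration, and "perfect information" is an idealization (real evidence only shifts probabilities):

```python
# Hypothetical setup: prior 0.2 that an action succeeds (gain 100),
# otherwise it fails (loss 50). Perfect information about the outcome
# can be bought for 30.
p, gain, loss, info_cost = 0.2, 100.0, 50.0, 30.0

ev_act_blindly = p * gain - (1 - p) * loss  # 20 - 40 = -20: acting is irrational
ev_best_blind = max(ev_act_blindly, 0.0)    # best blind policy: don't act (EV 0)

# With perfect information, you pay the cost and act only in the
# success case, which happens with probability p:
ev_with_info = p * gain - info_cost         # 20 - 30 = -10

value_of_information = ev_with_info - ev_best_blind  # -10
if value_of_information <= 0:
    print("shrug and move on")  # buying certainty is itself negative-EV here
```

With these numbers, both acting under uncertainty and paying to resolve the uncertainty are negative in expectation, so the rational move is the one described above: do neither.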