Rationalists should not be predictably unlucky.
Yes, if it’s both predictable and changeable. Though I’m not sure why we’d call something that meets both those conditions ‘luck’.
Are you familiar with Richard Wiseman, who has found that “luck” (as the phrase is used by people in everyday life to refer to people and events) appears to be both predictable and changeable?
That’s an interesting result! It doesn’t surprise me that people frequently confuse which complex outcomes they can and can’t control, though. Do you think I’m wrong about the intension of “luck”? Or do you think most people are just wrong about its extension?
I think the definition of ‘luck’ as ‘complex outcomes I have only minor control over’ is useful, and so is the definition of ‘luck’ as ‘the resolution of uncertain outcomes.’ For both of them, I think there’s meat to the sentence “rationalists should not be predictably unlucky.” On the first definition, it means rationalists should exert a level of effort justified by the system they’re dealing with, and not be dissuaded by statistically insignificant feedback; on the second, it means rationalists should be calibrated, so that events at or below their 10th-percentile forecast (P_10) happen to them about 10% of the time, i.e. rationalists are not surprised when they lose money at the casino.
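As a minimal sketch of that calibration claim (the Normal-forecast setup below is my own illustrative assumption, nothing from Wiseman): if an agent’s forecast distribution matches reality, then outcomes at or below their P_10 forecast should land roughly 10% of the time.

```python
import random
from statistics import NormalDist

random.seed(0)
trials = 100_000
below_p10 = 0
for _ in range(trials):
    # The agent's belief about some uncertain quantity: Normal(mu, sigma).
    mu, sigma = random.uniform(-5, 5), random.uniform(0.5, 2.0)
    p10 = NormalDist(mu, sigma).inv_cdf(0.10)  # the agent's P_10 forecast
    outcome = random.gauss(mu, sigma)          # reality drawn from the same distribution
    if outcome <= p10:
        below_p10 += 1

print(f"P_10-or-worse frequency: {below_p10 / trials:.3f}")  # ~0.100 for a calibrated agent
```

Running it prints a frequency near 0.100; a frequency persistently above that would be exactly the “predictably unlucky” failure mode.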
Ahh, thanks! This helps me better understand what Eliezer was getting at. I was having trouble thinking my way into other concepts of ‘luck’ that might avoid triviality.
“Predictable” and “changeable” have limits, but people generally don’t know where those limits are. What looks like bad luck to one person might look like the probable consequences of taking stupid chances to another.
Or what looks like a good strategy for making an improvement to one person might look like knocking one’s head against a wall to another.
The point you and Eliezer (and possibly Vaniver) seem to be making is that “perfectly rational agents are allowed to get unlucky” isn’t a useful meme, either because we tend to misjudge which things are out of our control or because it’s just not useful to pay any attention to those things.
Is that a fair summary? And, if so, can you think of a better way to express the point I was making earlier about conceptually distinguishing rational conduct from conduct that happens to be optimal?
ETA: Would “rationality doesn’t require omnipotence” suit you better?
Theoretically speaking (rare though it would be in practice), there are circumstances where that might happen: a rationalist might simply refuse, on moral grounds, to use methods that would grant him an epistemic advantage.