Randomization only maximizes diversity if you have to make decisions under amnesia or coordinate without communication or some similar perverse situation. In any normal case, you’re better off choosing a deterministic sequence that’s definitely diverse, rather than leaving it to randomness and only probably getting a diverse set of outcomes.
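The claim that a deterministic sequence guarantees diversity while randomness only probably delivers it can be checked with a small simulation (the pool of ten options here is purely hypothetical):

```python
import random

ITEMS = list(range(10))  # hypothetical pool of 10 options

def deterministic_coverage(n):
    """A fixed round-robin schedule hits n distinct items with certainty."""
    return len(set(ITEMS[i % len(ITEMS)] for i in range(n)))

def random_coverage(n, trials=10_000):
    """Independent uniform picks only *probably* cover many distinct items."""
    total = 0
    for _ in range(trials):
        total += len({random.choice(ITEMS) for _ in range(n)})
    return total / trials

print(deterministic_coverage(10))          # always 10: diversity guaranteed
print(round(random_coverage(10), 2))       # roughly 6.5 distinct on average
```

Ten uniform draws from ten options are expected to cover only about 10·(1 − 0.9¹⁰) ≈ 6.5 distinct items, whereas the round-robin schedule covers all ten every time.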
Sure—but that seems rather tangential to the main point here.
The options were box A, box B, or a more expensive random choice. A random diet may not be perfect—but it was probably the best one on offer in this example.
If the agent already has a penny (which they must if they can afford to choose the third box), they could just flip the penny to decide which of the first two boxes to take and save themselves the money.
Unless you’re being a devil’s advocate, I don’t see any reason to justify a completely rational agent choosing the random box.
What—never? Say they can only make the choice once—and their answer determines which box they will get on all future occasions.
Then choice C isn’t a random mixture of choice A and choice B.
Preferring that there be randomness at a point where you otherwise wouldn’t get a decision at all is fine. What doesn’t happen is preferring one coin-flip in place of one decision.
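The arithmetic behind this point can be sketched as follows (the utility values are hypothetical, purely for illustration): a fair coin-flip’s expected utility is the average of the two options’ utilities, so it can never exceed simply taking the better option outright.

```python
def flip_utility(u_a, u_b):
    """Expected utility of a fair coin-flip between options A and B."""
    return 0.5 * u_a + 0.5 * u_b

def best_pure_utility(u_a, u_b):
    """Utility of deterministically choosing the better option."""
    return max(u_a, u_b)

# Hypothetical utilities for the two boxes.
u_a, u_b = 3.0, 5.0
print(flip_utility(u_a, u_b))       # 4.0
print(best_pure_utility(u_a, u_b))  # 5.0
```

Since the average of two numbers is never greater than their maximum, randomizing can at best tie a deterministic choice, and ties only occur when the agent is exactly indifferent.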
Not to be crass, but given the assumption that Wei_Dai is not saying something utterly asinine, does your interpretation of the hypothetical actually follow?
Hang on! My last comment was a reply to your question about when it could be rational to select the third box. I have already said that the original example was unclear. It certainly didn’t suggest an infinite sequence—and I wasn’t trying to suggest that.
The example specified that choosing the third box was the correct answer—under the author’s own proposed decision theory. Surely interpretations of what it was supposed to mean should bear that in mind.
I don’t believe we’re actually arguing about anything worth caring about. My understanding was that Wei_Dai was illustrating a problem with UDT1, in which case a single scenario where UDT1 gives an unambiguously wrong answer suffices. To disprove Wei_Dai’s assertion, one would have to show that no scenario of the kind proposed makes UDT1 give the wrong answer—not merely that not every such scenario does.
Are you sure you’re taking into account that he is UDT’s inventor and biggest fan? He certainly didn’t claim to be illustrating a problem with UDT.
...you’re right, I’m misreading. I’ll shut up now.