I don’t mind the downvote—but consider reversing it if my theory is proven right next chapter. :-)
If I know Vladimir at all, he will not, because to do so would be an error. Overconfidence is a function of your confidence and the information you had available at the time. Vladimir finding out that Eliezer happens to write the same solution you did does not significantly alter his perception of how much information you had when you wrote that comment.
Even if you win the lottery, buying the ticket was still a bad decision.
I understand your point, but I’m not sure the analogy is quite right. In the case of the lottery, where the probabilities are well known, making a bad bet is just bad (even if chance goes your way).
In this case, however, our estimated probabilities derive ultimately from our models of Eliezer in his authoring capacity. If Vladimir assigned a lower probability than I did to Harry using the solution I stated, and it turns out my theory is indeed correct, that is evidence that his model of Eliezer is worse than mine. So he should update his model accordingly, and indeed reconsider whether I was actually overconfident. (Of course, he may reach the conclusion that even with his updated model, I was still overconfident.)
I think Eliezer’s policy as expressed here is better.
And, looking at the context, not particularly relevant.
When they are not yet shown to be right, downvoting is perfectly reasonable. Changing your votes retrospectively is not always correct.
Unless Eliezer believes the information available to AK is sufficient to justify being ‘Very Sure’, I do not believe Eliezer’s actual or expressed policy suggests reversing votes if he gets lucky. In fact, my comment about lottery mistakes is a massively understated reference to what he has written on the subject (if I recall correctly).
Not that I advocate deferring to Eliezer here. If he thinks you can’t be overconfident and right at the same time, he is just plain wrong. Overconfidence is one of the most prevalent human biases.
I believe Eliezer’s policy is to criticize people when they’re wrong. If they say something right for the wrong reason, wait; they’ll say something wrong soon enough.
A number of reviewers said they learned important lessons in rationality from the exercise, seeing the reasoning that got it right contrasted to the reasoning that got it wrong. Did you?
What do you mean by ‘right’ here? Do you mean “made correct predictions about which decisions Eliezer would choose for Harry?” While exploring the solutions I am rather careful to keep evaluations of how practical, rational (and, I’ll admit, “how awesome”) a solution is completely distinct from predictions about which particular practical, rational and possibly awesome solution an author will choose. I tend to focus on the former far more because I hate guessing passwords.
I’ll respond again when I’ve had a chance to do more than skim the chapter and evaluate the reasoning properly.
Even if you win the lottery, buying the ticket was still a bad decision.
Nonsense. That’s like saying that two-boxing on Newcomb’s problem is “right”. If you win, you made the right decision. Your decision-making method may be garbage, but it’s garbage that did a good job that one time, and that’s enough not to regret it.
Actually, it’s a bad decision with respect to the information you had when you made it; unlike choosing one box over two, you couldn’t have expected to win the lottery.
I distinguish between the decision itself and the decision-making process. If you win, you made the right decision, and if you lose, you made the wrong one, and that is true without reference to which decision made the most sense at the time. The decision-making algorithm’s job is to give you the highest chance of making the right decision given your prior knowledge, but any such algorithm is imperfect when applied to a vague future. It’s perfectly possible to get the right decision from a bad algorithm or the wrong decision from a good algorithm.
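The distinction between a good process and a good one-off outcome can be illustrated with a quick simulation. This is just a sketch with invented payoffs, not a model of any real bet: a “bad” bet that wins 1% of the time and pays 50x the stake (theoretical EV of −0.49 per unit staked), against a “good” bet that wins 60% of the time and pays 1.5x (theoretical EV of +0.50).

```python
import random

random.seed(0)

def average_return(p_win, payout, stake=1.0, trials=100_000):
    """Average profit per trial of repeatedly taking the same bet."""
    total = 0.0
    for _ in range(trials):
        total += stake * payout if random.random() < p_win else -stake
    return total / trials

print(average_return(0.01, 50.0))  # negative on average, despite occasional big wins
print(average_return(0.60, 1.5))   # positive on average, despite frequent losses
```

Individual trials of the bad bet do win, which is the sense in which a bad algorithm can produce the “right” decision; the average is what indicts the process.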
Also, when we’re discussing things as vague as the intention of an author who is foreshadowing heavily, there’s an immense amount of room for judgement calls and intuition, because it’s not like we can actually put concrete values on our probabilities. The measure of a person’s judgement of such things is how often they’re ultimately right, so if he gets it right then I’d have to say that’s evidence that he’s doing his guessing well. How else are we supposed to judge a predictor? If he’s good then he’s allowed to put tight confidence intervals on, and if he’s bad then he’s not. We’ll get some evidence about how good he is on Tuesday.
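One concrete way to “judge a predictor”, as suggested above, is a proper scoring rule such as the Brier score. The forecasts and outcomes below are made up for illustration:

```python
# Brier score: mean squared error between stated probabilities and what
# actually happened (0 or 1). Lower is better; it rewards tight confidence
# only when that confidence turns out to be justified.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes  = [1, 1, 0, 1]            # hypothetical results
confident = [0.9, 0.9, 0.1, 0.95]  # tight, well-calibrated predictions
hedged    = [0.6, 0.6, 0.4, 0.6]   # vague predictions

print(brier_score(confident, outcomes))  # ≈ 0.008
print(brier_score(hedged, outcomes))     # ≈ 0.16
```

If the confident forecaster were poorly calibrated (say, 0.9 on things that don’t happen), the same rule would punish the tight intervals instead.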
I agree with the principle, but the lottery is a really poor example of this, since it implies ignorance.

But you are ignorant—you know the probabilities well enough, but you’re ignorant of which numbers will be drawn, which is the most important part of the whole operation. If I said for whatever reason “If I ever buy a lottery ticket, my numbers will be 5, 11, 17, 33, 36, and 42”, and those numbers come up next Friday, you will have been retrospectively wrong not to have bought, even if “never buy a ticket” is statistically the best strategy. We cannot make decisions retrospectively, of course, but if you randomly took a flier and bought a ticket for Friday’s draw, then... well, I’d sound pretty stupid if I made fun of you for it, you know?
you will have been retrospectively wrong not to have bought
Not really; before you know the outcome, saying “my numbers will be 5, 11, 17, 33, 36, and 42” is privileging the hypothesis (unless you had other information which allowed you to select that specific combination).
And even if those numbers, by pure chance, were correct, there is still a reason it was a bad decision (in the ‘maximizing expected utility’ sense) to buy a ticket. Which is what I meant when I said that you can’t have expected to win.
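The “maximizing expected utility” point is easy to make concrete. A minimal sketch, assuming a hypothetical 6-of-49 lottery with an invented ticket price and jackpot (not any real lottery’s figures), and ignoring smaller prizes:

```python
from math import comb

ticket_price = 2.0
jackpot = 10_000_000.0

# Probability of matching all 6 numbers out of 49: 1 in C(49, 6).
p_jackpot = 1 / comb(49, 6)  # 1 in 13,983,816

expected_value = p_jackpot * jackpot - ticket_price
print(f"P(jackpot): {p_jackpot:.3g}")
print(f"EV per ticket: {expected_value:.2f}")  # negative: a bad bet ex ante
```

The expected value stays firmly negative under these assumptions, which is exactly why the decision is bad regardless of any one draw’s outcome.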
I just needed an example using definite numbers (so you can judge retrospectively), and not a sequence that millions of people would pick, like 1, 2, 3, 4, 5, 6. For the sake of argument, assume I found them on the back of a fortune cookie. Or better yet, just stick a WLOG at the front of my sentence.
And I agree, buying lottery tickets implies a bad way to make decisions, even if you wind up winning. I’m hardly trying to shill for Powerball here. Just saying winning the lottery is always a good thing, even if playing it isn’t.
I think my problem is with this “Judge Retrospectively” thing. Here’s what I think:
Decisions are what’s to be judged, not outcomes. And decisions should be judged relative to the information you had at the time of making them.
In the lottery example, assuming you didn’t know what number would win, the decision to buy a ticket is Bad regardless of whether you won or not.
What I got from this:
you will have been retrospectively wrong not to have bought
Is that you think that if you had a (presumably random) number in mind but did not buy a ticket, and that number ended up winning, then your decision not to buy the ticket was Wrong and you should Regret it.
My problem is that this doesn’t make sense: we agree that playing a lottery is Bad (negative-sum game and all that), and we don’t seem to regret not having played the specific number that happened to win. Which is good, since (to me at least) Regretting decisions made with the full knowledge you had at the time of the decision seems Wrong.
If this is not what you meant and I’m just bashing a Straw Man, please tell me.
I think there’s a difference between a decision made badly and a bad decision. Playing the lottery is a decision made badly, because you have no special information and it’s -EV. But if you win, it’s a good decision, no matter how badly made it was—the correct response is “That was kind of dumb, I guess, but who cares?”.
Of course, the lottery example is cold math, so there’s no room for disagreement about probabilities. It’s rather different in the case of things like literary analysis, to get back to where we started.
I will not argue about the definition of ‘right decision’; that is at least ambiguous. Yet overconfidence in a given prediction is a property of the comment itself and the information on which it was based. New information doesn’t change it.