I’m pretty sure the solution is as follows (I’ve already posted it in the TV Tropes forum). It’s in ROT13, if anyone still wants to figure it out for themselves: Yhpvhf Znysbl pynvzrq gb unir orra haqre Vzcrevhf ol Ibyqrzbeg. Ibyqrzbeg jnf qrsrngrq ol Uneel Cbggre. Sebz Serq & Trbetr’f cenax jr xabj gung xvyyvat gur jvmneq gung unf lbh haqre gur Vzcrevhf phefr perngrf n qrog. Erfhyg: Yhpvhf Znysbl naq rirel bgure Qrngu rngre pynvzvat gb unir orra vzcrevbfrq ner abj haqre yvsr qrog gb Uneel Cbggre. Ur pna fgneg erqrrzvat.
Point of order: vg whfg fnlf “n qrog”, abg n yvsr-qrog.

“Vg jbhyq frrz,” fnvq Uneel, njr va uvf ibvpr, “gung bar Ze. Neguhe Jrnfyrl jnf cynprq haqre gur Vzcrevhf Phefr ol n Qrngu Rngre jubz zl sngure xvyyrq, guhf perngvat n qrog gb gur Aboyr Ubhfr bs Cbggre, juvpu zl sngure qrznaqrq or ercnvq ol gur unaq va zneevntr bs gur erpragyl obea Tvarien Jrnfyrl.
Also, it would need to be explained why no one ever thought of this before.
Yeah, I was going ‘wow, that might actually work’ and then it occurred to me that they already discussed whether they had any debts from Lucius they could call in. So unless this is so subtle that no one has ever called in such a debt before, someone must have been holding an idiotball.
EDIT: Logos01 suggests that the debt be invoked of all the Wizengamot members who also claimed to be Imperiused, to swing the vote on whether or not to convict. This might work, but I would personally dislike it as we have no idea how many such people there are.
Gurl qvfphffrq gur npghny qrogf, ohg gurl qvqa’g qvfphff guvf bar, abg rira nf n cbgragvnyvgl, fb V guvax vg qvq whfg fyvc gurve zvaqf, orpnhfr Uneel naq Qhzoyrqber qba’g oryvrir Yhpvhf gb unir orra haqre Vzcrevhf naq guhf gurl pbafvqre Ibyqrzbeg’f qrsrng gb or n oybj ntnvafg Yhpvhf, abg n snibhe gb Yhpvhf perngvat n qrog. Fb, lrnu, V guvax vg whfg qvqa’g pebff gurve zvaqf. Vg qvqa’g pebff zl zvaq rvgure gur jubyr cnfg jrrx, naq V jnf yrff ohfl (gubhtu yrff qrfcrengr sbe n fbyhgvba) guna Uneel be Nyohf jrer.
Lrnu, vg qvq gnxr zr abj bayl 10-15 zvahgrf be fb sbe zr gb pbzr hc jvgu vg, ohg V unq gur fvtavsvpnag nqinagntr bs xabjvat gurer rkvfgrq n fbyhgvba, gung V unq orra tvira fhssvpvrag vasbezngvba fhssvpvragyl sberfunqbjrq, naq gung gur fbyhgvba zbfg yvxryl qrcraqrq ba gur ynjf naq phfgbzf bs zntvpny Oevgnva, nf gur ynfg cnentencu bs gur puncgre vzcyvrf.
Still unusually speculative; we’re told previously that an Imperius debt is not a life-debt, so it already has a burden of improbability (did they misspeak or simply mean to imply that a debt of some sort is created without reference to how heavy it is?).
And the latter suggestion, while very clever, has the problem that it requires the numbers to work out, so we couldn’t conclude that it will work without numbers, so a fair author will not expect us to work it out without numbers, Eliezer is a fair author, and Eliezer hasn’t given us the numbers. (We don’t know what the margin for conviction is, or how much of the margin is former Death Eaters who used the Imperius defense, or that they all said it was Voldemort who Imperiused them and not, say, an unknown Death Eater whom Harry did not defeat.)
Well, we’ll see in a few days.
Not quite. We’re told it’s a debt, we don’t know what sort of debt it is.
Technically, the numbers don’t have to work out: Lucius is the one at whose request the trial is being held. If his debt can make him withdraw the charges or clear Hermione’s debt, that alone should suffice.
Still, while this is a clever idea, it doesn’t sound very “Taboo Trade-off” or “Think of the Wizengamot as individuals instead of wallpaper”.
You misunderstand: the point is that there are two possible debt strategies, and for one of them the numbers do have to work out.
I’d say Logos01′s strategy exemplifies thinking of them as individuals, actually...
How about: invoke Lucius’s life debt. Trade it for Hermione’s.
Great idea, but where’s the Taboo Trade-off?
Congratulations on correctly guessing (most of) the solution.
Downvoted for the overconfident “I’m pretty sure”.
I don’t mind the downvote—but consider reversing it if my theory is proven right next chapter. :-)
The great thing about being the author is that you get to go “BUURRRNNN” seven days before everyone else.
More seriously—I don’t think Aris Katsaris was being overconfident. Methods is meant to be solvable; correct solutions should snap firmly into place. The vast amount of overcomplication and hint-denial and stretching that goes on elsewhere shouldn’t make people less confident if they’re perceiving actual solutions, because those still snap just as firmly into place.
How sure are you?
85%
Bet?
I don’t know you. Can you get someone whose word I reasonably trust, like Alicorn or Nancylebov or Yvain or Eliezer to vouch for you?
Your concern is reasonable. The only person on these forums who has any reason to trust me with money is Mitchell_Porter. Would his word be sufficient?
If Mitchell vouches for you, I’m willing to make a bet specified as follows:
I’m willing to bet 7 of my dollars to every 3 of yours (to provide me with sufficient margin to make the bet profitable for me, including any uncertainty of follow-through), from a minimum of $35 of mine ($15 of yours) up to a maximum of $210 of mine ($90 of yours).
If invoking the debt Lucius owes to Harry is only part of Harry’s solution, that still counts as a successful prediction for me. It also doesn’t need to be called a “life-debt”; if it’s a lesser type of debt, that still counts. If Harry only threatens to invoke or redeem it, but doesn’t actually officially “invoke” or “redeem” it, that still counts. If Harry claims it as a debt but the Wizengamot disagrees that it is one, that still counts. (And if Eliezer states outright that I figured it out, of course I win then too.)
PayPal would be my preferred method of money transfer.
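The 7-to-3 odds can be sanity-checked with a toy expected-value model (this sketch assumes a clean two-outcome bet and ignores transfer costs and any uncertainty of follow-through):

```python
def expected_value(p_win: float, stake: float, payout: float) -> float:
    """Expected profit for a bettor risking `stake` to win `payout`."""
    return p_win * payout - (1 - p_win) * stake

# Aris risks $7 against $3 at his stated 85% confidence:
ev_at_85 = expected_value(0.85, stake=7, payout=3)  # 0.85*3 - 0.15*7 = +1.50

# Break-even confidence for 7-to-3 odds: he needs to win 7 times in 10.
break_even = 7 / (7 + 3)  # 0.70
```

At his stated 85% confidence the offer is positive-EV for him; the break-even point for 7-to-3 odds is 70%, so the extra margin covers the follow-through risk he mentions.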
I will take this bet, with the following stipulations:
I’m putting up $30 against your $70.
If Harry merely mentions the debt, you don’t win—it must be a significant part of the solution. (If necessary, “significant” can be decided by a mutually agreed-upon third party.)
If Eliezer congratulates you for thinking of a better solution than Harry’s, you don’t win.
If for some reason Mitchell doesn’t vouch for me, no one owes anyone anything.
Done.
Please PM paypal info.
The money has been received, thank you!
Awesome
You’re obviously a sock puppet (not a bad one, just an anonymous one.) So I just pictured Eliezer making a sock puppet account specifically to take bets on what’s going to happen in HPMoR.
My model of EY says that isn’t something he would do, but I find the concept hilarious, nonetheless. (And had many giggles while imagining scheming!Eliezer posting good plot ideas he DIDN’T use under a sock account, and then swooping in as another sock to offer bets on said idea, while laughing evilly (can’t ignore the Evil Laugh), and raking in the dough :P)
At Anna and Carl’s wedding, I advanced a MoR prediction, which Eliezer offered to confirm/deny iff I first made bets with all present, and I won something like $50 =)
I was present and permitted to not-bet.
I vouch. :-)
Voting up all comments in this exchange for being virtuous.
If I know Vladimir at all, then he will not, because to do so would be an error. Overconfidence is a function of your confidence and the information you had available at the time. Vladimir finding out that Eliezer happens to write the same solution you did does not significantly alter his perception of how much information you had when you wrote that comment.
Even if you win a lottery, buying the lottery ticket was still a bad decision.
I understand your point, but I’m not sure the analogy is quite correct. In the case of the lottery, where the probabilities are well known, making a bad bet is just bad (even if chance goes your way).
In this case, however, our estimated probabilities derive ultimately from our models of Eliezer in his authoring capacity. If Vladimir derives a lower probability than I did for Harry using the solution I stated, and my theory ends up being correct, that is evidence that his model of Eliezer is worse than mine. So he should update his model accordingly, and indeed reconsider whether I was actually overconfident or not. (Of course, he may reach the conclusion that even with his updated model, I was still overconfident.)
I think Eliezer’s policy as expressed here is better.
And, looking at the context, not particularly relevant.
When they are not yet shown to be right downvoting is perfectly reasonable. Changing your votes retrospectively is not always correct.
Unless Eliezer believes the information available to AK is sufficient to justify being ‘Very Sure’ I do not believe Eliezer’s actual or expressed policy suggests reversing votes if he is lucky. In fact my comment about lottery mistakes is a massively understated reference to what he has written on the subject (if I recall correctly).
Not that I advocate deferring to Eliezer here. If he thinks you can’t be overconfident and right at the same time, he is just plain wrong. Overconfidence is one of the most prevalent human biases.
I believe Eliezer’s policy is to criticize people when they’re wrong. If they say something right for the wrong reason, wait; they’ll say something wrong soon enough.
A number of reviewers said they learned important lessons in rationality from the exercise, seeing the reasoning that got it right contrasted to the reasoning that got it wrong. Did you?
What do you mean by ‘right’ here? Do you mean “made correct predictions about which decisions Eliezer would choose for Harry?” While exploring the solutions I am rather careful to keep evaluations of how practical, rational (and, I’ll admit, “how awesome”) a solution is completely distinct from predictions about which particular practical, rational and possibly awesome solution an author will choose. I tend to focus on the former far more because I hate guessing passwords.
I’ll respond again when I’ve had a chance to do more than skim the chapter and evaluate the reasoning properly.
Nonsense. That’s like saying that two-boxing on Newcomb’s problem is “right”. If you win, you made the right decision. Your decision-making method may be garbage, but it’s garbage that did a good job that one time, and that’s enough to not regret it.
Actually, it’s a bad decision with respect to the information you had when you made it; unlike with one-boxing instead of two-boxing, you can’t have expected to win the lottery.
I distinguish between the decision itself and the decision-making process. If you win, you made the right decision, and if you lose, you made the wrong one, and that is true without reference to which decision made the most sense at the time. The decision-making algorithm’s job is to give you the highest chance of making the right decision given your prior knowledge, but any such algorithm is imperfect when applied to a vague future. It’s perfectly possible to get the right decision from a bad algorithm or the wrong decision from a good algorithm.
Also, when we’re discussing things as vague as the intention of an author who is foreshadowing heavily, there’s an immense amount of room for judgement calls and intuition, because it’s not like we can actually put concrete values on our probabilities. The measure of a person’s judgement of such things is how often they’re ultimately right, so if he gets it right then I’d have to say that’s evidence that he’s doing his guessing well. How else are we supposed to judge a predictor? If he’s good then he’s allowed to put tight confidence intervals on, and if he’s bad then he’s not. We’ll get some evidence about how good he is on Tuesday.
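One standard way to judge a predictor of the kind described above is a proper scoring rule such as the Brier score; this sketch assumes all we have to go on are stated probabilities and binary outcomes (lower scores are better):

```python
def brier_score(predictions):
    """Mean squared error between stated probabilities and 0/1 outcomes.

    predictions: list of (stated_probability, outcome) pairs, outcome in {0, 1}.
    """
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# A single 85% prediction that comes true scores (1 - 0.85)**2 = 0.0225;
# the same prediction failing scores (0 - 0.85)**2 = 0.7225.
confident_and_right = brier_score([(0.85, 1)])
confident_and_wrong = brier_score([(0.85, 0)])
```

A predictor who routinely states tight probabilities and is usually right keeps this score low; tight confidence from a bad predictor gets punished heavily, which matches the “if he’s good then he’s allowed to put tight confidence intervals on” intuition.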
I agree with the principle, but the lottery is a really poor example of it, since it implies ignorance.
But you are ignorant—you know the probabilities well enough, but you’re ignorant of which numbers will be drawn, which is the most important part of the whole operation. If I said for whatever reason “If I ever buy a lottery ticket, my numbers will be 5, 11, 17, 33, 36, and 42”, and those numbers come up next Friday, you will have been retrospectively wrong not to have bought, even if “Never buy a ticket” is statistically the best strategy. We cannot make decisions retrospectively, of course, but if you randomly took a flier and bought a ticket for Friday’s draw, then...well, I’d sound pretty stupid if I made fun of you for it, you know?
Not really; before you know the outcome, saying “my numbers will be 5, 11, 17, 33, 36, and 42” is privileging the hypothesis (unless you had other information that allowed you to select that specific combination).
And even if those numbers were, by pure chance, correct, there is still a reason it was a bad decision (in the “maximizing expected utility” sense) to buy a ticket, which is what I meant when I said that you can’t have expected to win.
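To make “bad in the ‘maximizing expected utility’ sense” concrete, here is a toy 6-of-49 lottery; the ticket price and jackpot are made-up illustrative numbers, not any real lottery’s:

```python
import math

# One 6-of-49 draw: every ticket has the same 1-in-13,983,816 chance,
# regardless of which numbers you pick.
combos = math.comb(49, 6)  # 13,983,816 equally likely draws

ticket_price = 1.00       # assumed
jackpot = 5_000_000.00    # assumed

# Expected profit per ticket: negative, so buying is -EV.
ev = jackpot / combos - ticket_price
```

The expected value works out to roughly −$0.64 per ticket, which is why buying is a bad decision in the expected-utility sense even in the worlds where the ticket happens to win.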
I just needed an example with definite numbers (so you can judge retrospectively), and not a sequence that millions of people would pick, like 1, 2, 3, 4, 5, 6. For the sake of argument, assume I found them on the back of a fortune cookie. Or better yet, just stick a WLOG at the front of my sentence.
And I agree, buying lottery tickets implies a bad way to make decisions, even if you wind up winning. I’m hardly trying to shill for Powerball here. Just saying winning the lottery is always a good thing, even if playing it isn’t.
I think my problem is with this “Judge Retrospectively” thing. Here’s what I think:
Decisions are what’s to be judged, not outcomes. And decisions should be judged relative to the information you had at the time of making them.
In the lottery example, assuming you didn’t know what number would win, the decision to buy a ticket is Bad regardless of whether you won or not.
What I got from this:

you will have been retrospectively wrong not to have bought
Is that you think that if you had a (presumably random) number in mind but did not buy a ticket, and that number ended up winning, then your decision not to buy the ticket was Wrong and you should Regret it.
My problem is that this doesn’t make sense: we agree that playing the lottery is Bad (a negative-sum game and all that), and we don’t seem to regret not having played the specific number that happened to win. Which is good, since (to me at least) regretting decisions made with the full knowledge you had at the time seems Wrong.
If this is not what you meant and I’m just bashing a Straw Man, please tell me.
I think there’s a difference between a decision made badly and a bad decision. Playing the lottery is a decision made badly, because you have no special information and it’s -EV (negative expected value). But if you win, it’s a good decision, no matter how badly made it was; the correct response is “That was kind of dumb, I guess, but who cares?”.
Of course, the lottery example is cold math, so there’s no room for disagreement about probabilities. It’s rather different in the case of things like literary analysis, to get back to where we started.
I will not argue about the definition of “right decision”; that is at least ambiguous. Yet when it comes to overconfidence in a given prediction, that is a property of the comment itself and the information on which it was based. New information doesn’t change it.
I’m confused. “I’m pretty sure” is extremely vague. I would not expect to be able to confidently call something like that “overconfidence”. Is there some formalization of such terms that I’m missing?
Interesting… Ohg jbhyq univat n phefr erobhaq bss lbh ernyyl ubyq hc nf “qrsrngvat” va n pbheg bs Ynj? Fancr’f nanybtl bs n zna gevccvat ba n onol pbzrf gb zvaq. Fgvyy, vg zvtug yrnq gb na vairfgvtngvba vagb gur znggre, juvpu pbhyq fgnyy guvatf.
V qba’g guvax gung Uneel’f ntr vf eryrinag urer. Abobql qvfhchgrf Ibyqrzbeg’f qrngu jnf qhr gb uvf nggnpx ba Yvyl, Wnzrf, naq Uneel Cbggre; guhf, gur qrog jbhyq or gb gur Aboyr Ubhfr bs Cbggre, bs juvpu Uneel vf gur bayl yvivat zrzore. N qrog gb uvf Ubhfr jbhyq gurersber or n qrog gb uvz.
motherofgod.jpg
I think you’ve hit on it. Well done.