That’s remarkably confident. This doesn’t really read like Newsome to me (and how would one find out with sufficient certainty to decide a bet for that much?).
Just how confident is it, though? It’s a large figure, and colloquially people tend to conflate the size of a bet with the degree of confidence behind it; naming a bigger number is more a dramatic social move than a stronger probability claim. But ultimately, to offer a bet at even odds, all Mitchell needs is confidence that, if someone takes him up on the bet, he has a 50% or better chance of being correct. The size of the bet matters only indirectly, as an incentive for others to do more research before betting.
Mitchell’s actual confidence is some unspecified figure between 0.5 and 1 and is heavily influenced by how overconfident he expects others to be.
This would only be true if money had linear utility [1]. I, for example, would not take a $1000 bet at even odds even with 75% confidence of winning, because with my present financial status I just can’t afford to lose $1000. But I would take such a bet of $100.
The utility of winning $1000 is not the negative of the utility of losing $1000.
[1] or, to be precise, if it were approximately linear in the range of current net assets +/- $1000
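To make the nonlinearity concrete, here is a minimal sketch, assuming a hypothetical bettor with log utility and $1,150 in net assets (both figures invented purely for illustration): the same 75%-confidence even-odds bet is negative in expected utility at $1,000 stakes but positive at $100.

```python
import math

def log_utility_gain(wealth, stake, p_win):
    """Expected change in log-utility from an even-odds bet of `stake`."""
    return (p_win * math.log(wealth + stake)
            + (1 - p_win) * math.log(wealth - stake)
            - math.log(wealth))

# Hypothetical bettor: $1,150 in net assets, 75% confident of winning.
print(log_utility_gain(1150, 1000, 0.75) < 0)  # True: decline the $1000 bet
print(log_utility_gain(1150, 100, 0.75) > 0)   # True: take the $100 bet
```

The asymmetry between the utility of winning and losing $1000 falls straight out of the concavity of the log.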
From what I have inferred about Mitchell’s financial status, the approximation seemed safe enough.
Fair enough in this case, but it’s important to avoid assuming that the approximation is universally applicable.
In a case with extremely asymmetric information like this one, they actually are almost the same thing, since the only payoff you can reasonably expect is the rhetorical effect of offering the bet. Offering bets that the other party can refuse, and about which the other party has effectively perfect information, can only lose money (if money is the only thing the other party cares about and they act at least vaguely rationally).
Risk aversion and other considerations like gambler’s ruin usually mean that people insist on substantial edges over just >50%. This can be ameliorated by wealth, but as far as I know, Porter is at best middle-class and not, say, a millionaire.
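The wealth effect can be made quantitative under the same (assumed) log-utility model: the minimum win probability at which an even-odds bet is worth taking shrinks toward 50% as the bankroll grows. The bankroll figures below are invented for illustration.

```python
import math

def min_win_prob(wealth, stake):
    """Smallest win probability at which an even-odds bet of `stake`
    increases expected log-utility for a bettor holding `wealth`.
    Solves p*log(1+f) + (1-p)*log(1-f) = 0 for p, with f = stake/wealth."""
    f = stake / wealth
    return math.log(1 / (1 - f)) / math.log((1 + f) / (1 - f))

# $1000 at stake: a middle-class bankroll vs. a millionaire's (illustrative).
print(round(min_win_prob(5_000, 1000), 3))      # needs a real edge over 0.5
print(round(min_win_prob(1_000_000, 1000), 3))  # barely above 0.5
```

This is the same reason Kelly-style bettors insist on a substantial edge when the stake is a large fraction of their wealth.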
So your points are true and irrelevant.
We obviously use the term ‘irrelevant’ to mean different things.
I have no idea who this Newsome character is, but I bet US$1 that there’s no easy way to implement the answer to the question without invading someone’s privacy, so I’m not going to play.
Agree on a trusted third party (gwern, Alicorn, NancyLebowitz … high-karma longtimers who showed up in this thread), and have AK call them on the phone, confirming details, then have the third party confirm that it’s not Will_Newsome.
… though the main problem would be, do people agree to bet before or after AK agrees to such a scheme?
How would gwern, Alicorn or NancyLebowitz confirm that anything I said by phone meant AspiringKnitter isn’t Will Newsome? They could confirm that they talked to a person. How could they confirm that that person had made AspiringKnitter’s posts? How could they determine that that person had not made Will Newsome’s posts?
At the very least, they could dictate an arbitrary passage (or an MD5 hash) to this person who claims to be AK, and ask them to post this passage as a comment on this thread, coming from AK’s account. This would not definitively prove that the person is AK, but it might serve as a strong piece of supporting evidence.
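As a sketch of that challenge–response idea (the names and the use of a random token are assumptions for illustration; the comment only specifies dictating a passage or an MD5 hash):

```python
import hashlib
import secrets

# The trusted third party invents an unguessable passage over the phone...
passage = secrets.token_hex(16)  # stands in for an arbitrary dictated passage
digest = hashlib.md5(passage.encode()).hexdigest()

# ...the caller posts the passage (or its digest) from AK's account,
# and anyone can then check that the post matches what was dictated.
def post_matches(posted_text, expected_digest):
    return hashlib.md5(posted_text.encode()).hexdigest() == expected_digest

print(post_matches(passage, digest))     # True
print(post_matches("impostor", digest))  # False
```

As the comment says, this only binds the phone caller to the account; it cannot, by itself, establish who wrote the account’s earlier posts.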
In addition, once the “AK” persona and the “WillNewsome” persona have each posted a sufficiently large corpus of text, we could run some textual-analysis algorithms on it to determine whether their writing styles are similar; Markov chains are surprisingly good at this (considering how simple they are to implement).
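A minimal version of that Markov-chain comparison might look like this. The two corpora below are invented stand-ins for the posters’ comment histories; the model is an add-one-smoothed character-bigram chain, and a lower cross-entropy means the disputed text reads more like that corpus.

```python
import math
from collections import Counter

def train(text):
    """Add-one-smoothed character-bigram model of `text`."""
    pairs = Counter(zip(text, text[1:]))  # counts of (char, next_char)
    contexts = Counter(text[:-1])         # counts of each context char
    return pairs, contexts, len(set(text))

def cross_entropy(model, test_text):
    """Average bits per character-bigram of `test_text` under `model`."""
    pairs, contexts, vocab = model
    bigrams = list(zip(test_text, test_text[1:]))
    total = sum(-math.log2((pairs[(a, b)] + 1) / (contexts[a] + vocab + 1))
                for a, b in bigrams)
    return total / max(1, len(bigrams))

# Toy corpora standing in for each account's comment history (invented).
ak_corpus = "i knit and i pray and i knit some more " * 50
wn_corpus = "the decision theory of acausal trade is subtle " * 50
disputed = "i pray and knit and pray"

ak_model, wn_model = train(ak_corpus), train(wn_corpus)
print(cross_entropy(ak_model, disputed) < cross_entropy(wn_model, disputed))  # True
```

Real stylometry would use larger n-grams, function-word frequencies, and much bigger corpora, but even this toy version captures the basic likelihood-comparison idea.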
The problem of determining a person’s identity on the Internet, and doing so in a reasonably safe way, is an interesting challenge. But in practice, I don’t really think it matters that much, in this case. I care about what the “AK” persona writes, not about who they are pretending not to be.
How about doing this already, with all the stuff they’ve written before the original bet?
I know Will Newsome in real life. If a means of arbitrating this bet is invented, I will identify AspiringKnitter as being him or not by visual or voice for a small cut of the stakes. (If it doesn’t involve using Skype, telephone, or an equivalent, and it’s not dreadfully inconvenient, I’ll do it for free.)
A sidetrack: people seem to be conflating AspiringKnitter’s identity as a Christian and as a woman. Being female is an important part of not being Will Newsome, but suppose AspiringKnitter were a male Christian and not Will Newsome. Would that make a difference to any part of this discussion?
More identity issues: My name is Nancy Lebovitz with a v, not a w.
Sorry ’bout the spelling of your name, I wonder if I didn’t make the same mistake before …
Well, the biggest thing AK being a male non-Will Christian would change is that he would lose an easy way to prove to a third party that he’s not Will Newsome and thus win a thousand bucks (though the important part is not exactly being female; it’s having a recognizably female voice on the phone, which is still pretty highly correlated).
Rationalist lesson that I’ve derived from the frequency that people get my name wrong: It’s typical for people to get it wrong even if I say it more than once, spell it for them, and show it to them in writing. I’m flattered if any of my friends start getting it right in less than a year.
Correct spelling and pronunciation of my name is a simple, well-defined, objective matter, and I’m in there advocating for it, though I cut people slack if they’re emotionally stressed.
This situation suggests that a tremendous amount of what seems like accurate perception is actually sloppy filling in of blanks. Less Wrong has a lot about cognitive biases, but not so much about perceptual biases.
This is a feature, not a bug. Natural language has lots of redundancy, and if we read one letter at a time rather than in word-sized chunks we would read much more slowly.
I think you have causality reversed here. It’s the redundancy of our languages that’s the “feature”—or, more precisely, the workaround for the previously existing hardware limitation. If our perceptual systems did less “filling in of blanks,” it seems likely that our languages would be less redundant—at least in certain ways.
I think redundancy was originally there to counteract noise, of which there was likely a lot more in the ancestral environment, and as a result there’s more-than-enough of it in such environments as reading text written in a decent typeface one foot away from your face, and the brain can then afford to use it to read much faster. (It’s not that hard to read at 600 words per minute with nearly complete understanding in good conditions, but if someone was able to speak that fast in a not-particularly-quiet environment, I doubt I’d be able to understand much.)
Yeah, I agree with that.