Because crazy smart people don't consistently reach correct solutions. It's not surprising when they're right, but it's not surprising when they're wrong, either. There are very few people I know such that I'm surprised when they seem to get something wrong, and the key factor in that judgment is high sanity, more than high intelligence.
I’m also beginning to have a very strange thought that a reddit-derived blog system with comment upvoting and karma is just a vastly more effective way of researching decision-theory problems than publication in peer-reviewed journals.
Chess world champions are sometimes notoriously superstitious, but you can still rely on the consistency of their chess moves.
They really ought to be; what's the rational value in putting in the time and effort to become a world champion at chess?
I played it semi-seriously when I was young, but gave it up when getting to the next level would have required studying more than playing. Most of the people I know who were good at a competitive intellectual game dropped out of school to pursue it, because they couldn't sustain that level of study for both the game and school.
I find it rather difficult to believe that pursuing chess over school is the rationally optimal choice, so I wouldn’t be remotely surprised to find that those who get to that level are irrational or superstitious when it comes to non-chess problems.
Chess provides very strong objective feedback on what does and doesn’t work.
… as opposed to what?
Psychotherapy—recommended reading is Robyn Dawes’ House of Cards.
Does not surprise me a bit.
OTOH it raises the question: does believing in God make you a less reliable priest?
No, you can't. In 2006, world chess champion Vladimir Kramnik accidentally left himself open to mate in one while playing against the computer program Deep Fritz (http://www.chessbase.com/newsdetail.asp?newsid=3509). Even the very best individual humans are subject to simple mistakes of kinds that computers simply don't make.
This is irrelevant. Human players make mistakes. The question is whether being superstitious makes them make more mistakes.
It's not just chess. Here are two 9-dan go players, one of them misreading and killing his own group: http://www.youtube.com/watch?v=qt1FvPxmmfE
Such spectacular mistakes are not entirely unknown in go, even in top-level title matches.
In pro-level shogi it's even worse: illegal moves (which are an instant loss) are supposedly not at all uncommon.
The original question was not whether humans make mistakes (they do in every area; this is undisputed) but whether irrationality in one domain makes them more unreliable in others.
No, the original question was whether we should be surprised when humans make mistakes, and what influences the probability of their doing so. The occasional grandmaster blunder shows that even for extremely smart humans within their field of expertise, the human mind effectively has a noise floor, i.e., some minimum small probability of making stupid random decisions. Computers, on the other hand, have a much lower noise floor (and can be engineered to make it arbitrarily low).
You shouldn't be surprised that a chess world champion has made a mistake at some point over the course of their entire career. However, given a specific turn, you should be surprised if the world champion made a mistake on that turn. That is, on any given turn, you can rely on their making a good move. You can't rely on it with perfect confidence, of course, but that wasn't the claim.
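To put rough numbers on that noise-floor point (every figure below is invented for illustration, not data about any actual player):

    # Invented numbers: even a tiny per-move blunder probability accumulates
    # over a long career, so a career-level blunder shouldn't surprise you,
    # while any single move stays very reliable.
    p_blunder = 1e-5                 # assumed per-move chance of a gross blunder
    moves = 40 * 2000                # ~40 moves/game over ~2000 serious games

    p_clean_career = (1 - p_blunder) ** moves
    print(1 - p_clean_career)        # ~0.55: better-than-even odds of at least one blunder

So a per-move reliability of 99.999% is entirely compatible with expecting a blunder somewhere in a career.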
Even chess computers can blunder, it seems.
Surely Hanson's favorite (a market) is worth a try here. You're more raging (as he does) against the increasingly obvious inefficiency of peer-reviewed journals than discovering Reddit + mods as a particularly good solution, no?
An interesting question: is there any way to turn a karma system into a prediction market?
The obvious way to me is to weight a person’s voting influence by how well their votes track the majority, but that just leads to groupthink.
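Here's a minimal sketch of that weighting scheme, just to make the failure mode concrete (the names and the 1.1/0.9 update factors are mine, purely hypothetical):

    # Hypothetical majority-tracking karma: voters who side with the weighted
    # majority gain influence; dissenters lose it.
    from collections import defaultdict

    weights = defaultdict(lambda: 1.0)   # everyone starts with equal influence

    def tally(votes):
        """votes: dict mapping voter -> +1 (upvote) or -1 (downvote)."""
        score = sum(weights[v] * d for v, d in votes.items())
        majority = 1 if score >= 0 else -1
        # Reward voters who sided with the weighted majority, penalize the rest.
        for voter, d in votes.items():
            weights[voter] *= 1.1 if d == majority else 0.9
        return majority

The feedback loop is visible right in the update rule: agreeing with the current majority buys you influence, which makes the next majority verdict even harder to overturn.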
The key to prediction markets, as far as I can tell, is that predictions unambiguously come true or false and so the correctness of a prediction-share can be judged without reference to the share-price (which is determined by everyone else in what could be a bubble even) - but there is no similar outside objective check on LW postings or comments, is there?
I’d love to do a real money prediction market. Unfortunately western governments seek to protect their citizens from the financial consequences of being wrong (except in state sponsored lotteries… those are okay), and the regulatory costs (financial plus the psychic pain of navigating bureaucracy) of setting one up are higher than the payback I expect from the exercise.
UBC runs a non-profit elections prediction market, and it generally does better than the average of the top 5 pollsters.
In the popular-vote market, you pay $1 for one share each of CON, LIB, NDP, Green, and Other, and you can trade the shares as on a stock market.
The final payout per share is $1 × the percentage of the popular vote that party gets.
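Concretely, settlement looks like this (the vote shares below are made up):

    # Hypothetical settlement of the popular-vote market; percentages invented.
    vote_share = {"CON": 0.36, "LIB": 0.30, "NDP": 0.18, "GRN": 0.10, "OTH": 0.06}

    payout = {party: 1.00 * share for party, share in vote_share.items()}
    print(payout["LIB"])                      # a LIB share settles at $0.30
    print(round(sum(payout.values()), 2))     # 1.0: a full $1 bundle settles back to $1

Since a bundle of one share of each party always settles to exactly $1, the relative prices people trade at amount to a forecast of the vote split.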
There are other markets, such as a seat market and a majority market.
The majority market pays 50/50 if no majority is reached, and 100/0 otherwise, which makes it pretty awkward in some respects. If you're predicting a minority government, generally the most profitable action is to trade for shares of the loser. This is probably the main reason it's restricted to the two parties with a chance of winning one. If it were the same 5-way system, trading LIB and CON for GREEN, OTHER, and NDP to exploit a minority government would probably bias the results: in a minority the payout would be 20/20/20/20/20, but many traders would be willing to practically throw away shares of GREEN, OTHER, and NDP because they "know" those parties have a 0% chance of winning a majority. This leads to artificial devaluation and bad prediction information.
By trading 1 share of CON for 5 GREEN and 5 OTHER, you make 10 times the money in a minority government, and that payoff is what you're after; the trade isn't really saying you think the combined chance of Green and Other winning a majority is 1/6th that of the Conservatives winning.
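The arithmetic behind that trade, using the hypothetical 20/20/20/20/20 minority payout from above:

    # Why the swap pays off under a minority government (hypothetical 5-way rule:
    # no majority means every share pays 20 cents).
    minority_payout_cents = 20
    hold_con = 1 * minority_payout_cents       # keep the 1 CON share: 20 cents
    swap = (5 + 5) * minority_payout_cents     # 5 GREEN + 5 OTHER shares: 200 cents
    print(swap // hold_con)                    # 10: ten times the money

The exploit exists precisely because the traders dumping GREEN and OTHER are pricing majority odds, not minority payouts.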
Of course they still have this problem with the Liberals and Conservatives, where trading out of a party at a favorable rate might just be a bet on a minority.
I think the problem with a prediction market is that you need a payout mechanism that values the shares at the close of business. For elections there is a reasonable structure; for situations where there isn't a clear resolution or termination, it gets much more complicated.
You should be emailing people like Adam Elga to invite them to participate, then.
How relevant is the voting, as opposed to just the back and forth?
I think the voting does send a strong signal to people to participate (there’s a lot more participation here than at OB). If this is working better than mailing lists, it may be the karma, but it may also be that it can support more volume by making it easier to ignore threads.