I came here to refer you to John Holt, but since User:NancyLebovitz already did that, I’ll just add that I’m amused that your handle is Petruchio.
HonoreDB
Unfortunately you need access to a comparably sized bunch of estimates in order to beat the market. You can’t quite back it out of a prediction market’s transaction history. And the amount of money to be made is small in any event because there’s just not enough participation in the markets.
Irrationality Game
Prediction markets are a terrible way of aggregating probability estimates. They only enjoy the popularity they do because of a lack of competition, and because they’re cheaper to set up due to the built-in incentive to participate. They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (a weighted average of estimates, with each predictor weighted by Bayesian updating on their past track record). The performance problems of prediction markets are not just due to liquidity issues, but would inevitably crop up in any prediction market system due to bubbles, panics, hedging, manipulation, and either overly simple or dangerously complex derivatives. 90%
Hanson and his followers are irrationally attached to prediction markets because they flatter libertarian sensibilities. 60%
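The “naive histocratic algorithm” above can be sketched concretely. This is a minimal illustration under my own assumptions (inverse-mean-Brier weighting, made-up predictor names); the comment doesn’t pin down a specific weighting rule, so treat this as one plausible instance, not the method being claimed:

```python
# Minimal sketch of a performance-weighted aggregator of the kind the
# comment alludes to. The weighting rule (inverse mean Brier score) and
# all names are illustrative assumptions.

def brier_score(p, outcome):
    """Squared error of a probability estimate against a 0/1 outcome."""
    return (p - outcome) ** 2

def predictor_weights(history):
    """Weight each predictor by inverse mean Brier score on past questions.

    history: {name: [(probability, outcome), ...]}
    """
    weights = {}
    for name, records in history.items():
        mean_brier = sum(brier_score(p, o) for p, o in records) / len(records)
        weights[name] = 1.0 / (mean_brier + 1e-6)  # better track record -> more weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def aggregate(estimates, weights):
    """Weighted average of the current round's probability estimates."""
    return sum(weights[name] * p for name, p in estimates.items())

history = {
    "alice": [(0.9, 1), (0.8, 1), (0.2, 0)],   # well calibrated so far
    "bob":   [(0.9, 0), (0.1, 1), (0.5, 0)],   # poorly calibrated so far
}
weights = predictor_weights(history)
print(aggregate({"alice": 0.7, "bob": 0.3}, weights))
```

The aggregate lands much closer to the better-calibrated predictor’s estimate, which is the whole point of the scheme: unlike a flat average, a predictor’s influence grows with demonstrated accuracy.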
Yup. The propositions need to be such that you can get more confident than that.
My girlfriend says that a common case of motivated cognition is witnesses picking someone out of a lineup. They want to recognize the criminal, so given five faces they’re very likely to pick one even if the real criminal’s not there, whereas if people are leafing through a big book of mugshots they’re less likely to make a false positive identification.
She suggests a prank-type exercise where there are two plants in the class. Plant A, who wears a hoodie and sunglasses, leaves to go to the bathroom, whereupon Plant B announces that they’re pretty sure Plant A is actually $FAMOUS_ACTOR here incognito. Plant A pokes their head in, says they need to go take a call, and leaves. See who manages to talk themselves into thinking that really is the celebrity.
This seems like it’ll be easiest to teach and test if you can artificially create a preference for an objective fact. Can you offer actual prizes? Candy? Have you ever tried a point system, and if so, how did people react?
Assume you have a set of good prizes (maybe chocolate bars, or tickets good for 10 points) and a set of less-good prizes (Hershey’s kisses, or tickets good for 1 point).
Choose a box: Have two actual boxes, labeled “TRUE” and “FALSE”. Before the class comes in, the instructor writes a proposition on the blackboard, such as “The idea that carrots are good for your eyesight is a myth promoted as part of a government conspiracy to cover up secret military technology” or “A duck’s quack never echoes, and nobody knows why.” If the instructor believes that the proposition is true, the instructor puts a bunch of good prizes in the TRUE box and nothing in the FALSE box. Otherwise, the instructor fills the FALSE box with less-good prizes. The class comes in, and the instructor explains the rules. Then she spends 5 minutes trying to persuade the class that she believes the proposition. After that, people who think she actually believes it line up at the TRUE box, and everyone else lines up at the FALSE box. Everyone who guessed right gets a prize from their box. If you guess TRUE and you’re right, your prize is better than if you guess FALSE and are right. Repeat this for a few propositions, and it’s at least a useful test for whether you can separate what you want from what seems plausible.
It seems likely that God would create multiple realities, populated by different sorts of people and/or with different True Religions, to feed a diverse set of people into a shared heaven. So the recursive realities would have a pyramid or lattice structure. If God has limited knowledge of the realities he’s created, there could even be cycles.
God is, himself, in a world filled with vague, ambiguous, sometimes contradictory hints towards a divine meta-reality. He’s confused, anxious, and doesn’t trust his own judgment. So he’s created the Abrahamic world in order to identify the people who somehow manage to arrive at the truth given a similar lack of information. One of our religions is correct—guess right and you go to Heaven to help God try to get to Double Heaven.
Okay, I see that that’s what you’re saying. The assumption then (which seems reasonable but needs to be proven?) is that the simulated humans, given infinite resources, would either solve Oracle AI [edit: without accidentally creating uFAI first, I mean] or just learn how to do stuff like create universes themselves.
There is still the issue that a hypothetical human with access to infinite computing power would not want to create or observe hellworlds. We here in the real world don’t care, but the hypothetical human would. So I don’t think your specific idea for brute-force creating an Earth simulation would work, because no moral human would do it.
I’m slightly worried that even formally specifying an “idealized and unbounded computer” will turn out to be Oracle-AI-complete. We don’t need to worry about it converting something valuable into computronium, but we do need to ensure that it interacts with the simulated human(s) in a friendly way. We need to ensure that it doesn’t modify the human to simplify the process of explaining something. The simulated human needs to be able to control what kinds of minds the computer creates in the process of thinking (we may not care, but the human would). And the computer should certainly not hack its way out of the hypothetical via being thought about by the FAI.
a papercut doesn’t leave much, if any, blood on the paper… as the paper moves away fast enough that blood doesn’t even have time to flow onto it.
It is possible to engineer, though, if you’re manipulating the paper with great telekinetic precision. I accidentally bloodstained a book that way when I was about Harry’s age.
N-player rock-paper-scissors variants. They generally involve everybody standing in a circle facing inward shaking their fists three times and chanting in unison, and looking back I feel like they do have a community-building effect. But they bypass the filter because they’re competitive, and are presumably appealing to LW people because they involve memorizing a large ruleset and then trying to game it.
This looks like it loses in the Smoking Lesion problem.
Having seen the exchange that probably motivated this, one note: in my opinion, events can be linked both causally and acausally. The linked post gives an example. I don’t think that’s an abuse of language; we can say that people are simultaneously communicating verbally and non-verbally.
Incidentally, the best way to make conditional predictions is to convert them to explicit disjunctions. For example, in November I wanted to predict that “If Mitt Romney loses the primary election, Barack Obama will win the general election.” This is actually logically equivalent to “Either Mitt Romney or Barack Obama will win the 2012 Presidential Election,” barring some very unlikely events, so I posted that instead, and so I won’t have to withdraw the prediction when Romney wins the primary.
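The equivalence claimed above can be checked by brute force. The little world model here (two named candidates plus an “other” catch-all, and the rule that Romney can only win the general if he wins the primary) is my own simplification for illustration, not something from the comment:

```python
# Enumerate possible worlds and compare the conditional prediction
# "(Romney loses primary) -> (Obama wins general)" with the disjunction
# "Romney or Obama wins the election". The world model is an assumption.
from itertools import product

disagreements = []
for romney_wins_primary, winner in product([True, False], ["Romney", "Obama", "Other"]):
    if winner == "Romney" and not romney_wins_primary:
        continue  # impossible world: can't win the general without the nomination
    conditional = romney_wins_primary or (winner == "Obama")
    disjunction = winner in ("Romney", "Obama")
    if conditional != disjunction:
        disagreements.append((romney_wins_primary, winner))

print(disagreements)
```

The only world where the two statements come apart is the one where Romney wins the primary and a third candidate wins the general, which is exactly the “very unlikely events” caveat: bar that world, and the conditional and the disjunction are logically equivalent.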
it didn’t treat mild belief and certainty differently;
It did. Per the paper, the confidences of the predictions were rated on a scale from 1 to 5, where 1 is “No chance of occurring” and 5 is “Definitely will occur”. They didn’t use this in their top-level rankings because they felt it was “accurate enough” without that, but they did use it in their regressions.
Worse, people get marked down for making conditional predictions whose antecedent was not satisfied!
They did not. Per the paper, those were simply thrown out (as people do on PredictionBook).
They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?
I agree here, mostly. Looking through the predictions they’ve marked as hedging, some seem like sophistry but some seem like reasonable expressions of uncertainty; if they couldn’t figure out how to properly score them they should have just left them out.
If you think you can improve on their methodology, the full dataset is here: .xls.
This objection is not entirely valid, at least when it comes to Krugman. Krugman scored 17/19 mainly on economic predictions, and one of the two he got wrong looks like a pro-Republican prediction.
From their executive summary:
According to our regression analysis, liberals are better predictors than conservatives—even when taking out the Presidential and Congressional election questions.
From the paper:
Krugman...primarily discussed economics...
In my discipline? I guess
That’ll save the ancient programmers of the 1950s some time.
If I were trying to build up programming from scratch, it’d get pretty hairy.