Woah, awesome! I would love to see something like this for the whole collection.
Twist them the way you’re twisted.
Or rather, don’t, unless you think they have so much agency that this change in temperament will improve their utility despite massively reducing their level of satisfaction.
Suppose I think, after doing my accounts, that I have a large balance at the bank. And suppose you want to find out whether this belief of mine is “wishful thinking.” You can never come to any conclusion by examining my psychological condition. Your only chance of finding out is to sit down and work through the sum yourself.
-- C. S. Lewis
The market isn’t particularly efficient. For example, if you bought “No” on all the presidential candidates to win, it would cost $16.16, but would be worth at least $17 for a 5% gain. Of course, after paying the 10% fee on profits and 5% withdrawal fee you would be left with a loss, which is why this opportunity still exists.
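To make the arithmetic concrete (a quick sketch assuming 18 candidates, so at least 17 of the “No” contracts settle at $1, and guessing at exactly which amounts the fees apply to):

```python
# Back-of-envelope check of the arbitrage above. Assumes 18 candidates
# and guesses at the fee bases; both are assumptions, not confirmed details.
cost = 16.16    # total price of one "No" share on every candidate
payout = 17.00  # minimum settlement value: every "No" but one pays $1

gross_profit = payout - cost                     # $0.84, about a 5% gain
after_profit_fee = payout - 0.10 * gross_profit  # 10% fee on profits
net = after_profit_fee * 0.95 - cost             # 5% fee on withdrawal

print(f"gross gain: {gross_profit / cost:.1%}")  # ~5.2%
print(f"net result: ${net:+.2f}")                # about -$0.09, a small loss
```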
Does this affect the accuracy of the market? Serious question; I do not understand the nitty-gritty economics very well.
Just as a little bit of a counterpoint, I loved the 2006-2010 ebook and was never particularly bothered by the length. I read the whole thing at least twice through, I think, and have occasionally used it to look up posts and so on. The format just worked really well for me. This may be because I am an unusually fast reader, or because I was young and had nothing else to do. But it certainly isn’t totally useless :P
Oh, I see, haha. Yes, that makes more sense, and your point is well-taken.
Why would anyone bother to send in false data about their finger-length ratios?
Working from memory, I believe that when asked about AI in the story, Eliezer said “they say a crackpot is someone who won’t change his mind and won’t change the subject—I endeavor to at least change the subject.” Obviously this is non-binding, but it still seems odd to me that he would go ahead and do the whole thing that he did with the mirror.
This makes some sense, but if Quirrell could bamboozle the map, surely he wouldn’t do so in such a way as to reveal vitally important and damaging secrets to his enemies.
I think the word Gunnar was going for was “Yudkowskyesquely”, unfortunately.
In my opinion the gamma function is by far the stupidest. IME, the off-by-one literally never makes equations clearer; it only obfuscates the relationship between continuous and discrete things (etc.) by adding in an annoying extra step that trips up your intuition. Seems like simple coordination failure.
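To spell out the off-by-one in question: the standard definition satisfies Γ(n) = (n − 1)!, whereas Gauss’s Π function, one historical alternative, lines up with the factorial directly:

$$\Gamma(z) = \int_0^\infty t^{\,z-1} e^{-t}\,dt, \qquad \Gamma(n) = (n-1)!$$

$$\Pi(z) = \int_0^\infty t^{\,z} e^{-t}\,dt = \Gamma(z+1), \qquad \Pi(n) = n!$$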
If that effect came as a surprise, it couldn’t have been the reason for the split.
Thanks!
The wiki of a million lies
As clever as this phrase is, it is tragically ambiguous. I’m guessing 65% chance Wikipedia, 30% RationalWiki, 3% our local wiki, 2% other. How did I do?
Is it really a “bad question”? Shouldn’t a good calibrator be able to account for model error?
Yayy! I was having a shitty day, and seeing these results posted lifted my spirits. Thank you for that! Below are my assorted thoughts:
I’m a little disappointed that the correlation between height and P(supernatural)-and-similar didn’t hold up this year, because it was really fun trying to come up with explanations for it that weren’t prima facie moronic. Maybe that should have been a sign it wasn’t a real thing.
The digit ratio thing is indeed delicious. I love that stuff. I’m surprised there wasn’t a correlation to sexual orientation, though, since I seem to recall reading that that was relatively well-supported. Oh well.
WTF was going on with the computer games question? Could there have been some kind of widespread misunderstanding? In any case, it’s pretty clearly poorly-calibrated Georg, but the results from the other questions are horrendous enough on their own.
On that subject, I have to say that even more amusing than the people who gave 100% and got it wrong are the people who put down 0% and then got it right—aka, really lucky guessers :P
Congrats to the Snicket fan!
This was a good survey and a good year. Cheers!
Damn, I didn’t intend to hit that Retract button. Stupid mobile. In case it wasn’t clear, I do stand by this comment aside from the corrections offered by JoshuaZ.
In the Bayesian view, you can never really make absolute positive statements about truth anyway. Without a simplicity prior you would need some other kind of distribution. Even for computable theories, I don’t think you can ever have a uniform distribution over possible explanations (math people, feel free to correct me on this if I’m wrong!); you could have some kind of perverse non-uniform but non-simplicity-based distribution, I suppose, but I would bet some money that it would perform very badly.
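For what it’s worth, the parenthetical checks out: a uniform distribution over countably many hypotheses would have to give each one the same probability p, and no choice of p makes the total come out to 1:

$$\sum_{i=1}^{\infty} p = \begin{cases} 0, & p = 0 \\ \infty, & p > 0 \end{cases}$$

So any prior over a countably infinite hypothesis space has to favor some hypotheses over others, even if not by simplicity.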
I haven’t looked into it much myself, but a couple of people have mentioned RibbonFarm as being something like that.
I mean, you could run correlations with Openness to experience or with age, right? The sample size is probably too small for a lot of interesting analysis, but I’m sure one could do some; something like the sketch below, for instance.
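A minimal sketch of what that might look like, assuming the survey export is a CSV readable by pandas; the file name and every column name here (survey_results.csv, BigFiveO, Age, DigitRatio) are hypothetical stand-ins for whatever the real data uses:

```python
# Hypothetical sketch: correlate a survey variable of interest against
# Openness and age. All names below are stand-ins, not the real schema.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_results.csv")
target = "DigitRatio"  # whichever variable you are curious about

for col in ("BigFiveO", "Age"):
    sub = df[[col, target]].dropna()
    r, p = pearsonr(sub[col], sub[target])
    print(f"{target} vs {col}: r={r:+.2f}, p={p:.3f}, n={len(sub)}")
```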