In the notation of that post, I’d say I am interested mostly in the argument over “Whether a Bayesian or frequentist algorithm is better suited to solving a particular problem”, generalized over a wide range of problems. And the sort of frequentism I have in mind seems to be “frequentist guarantee”—the process of taking data and making inferences from it on some quantity of interest, and the importance to be given to guarantees on the process.
Meni_Rosenfeld
How not to sort by a complicated frequentist formula
Would it? Maybe the question (in its current form) isn’t good, but I think there are good answers for it. Those answers should be prominently searchable.
What is the best paper explaining the superiority of Bayesianism over frequentism?
Except it’s not really a prediction market. You could know the exact probability of an event happening, which is different from the market’s opinion, and still not be able to guarantee profit (on average).
Tel Aviv Self-Improvement Meetup Group
but the blue strategy aims to maximize the frequency of somewhat positive responses while the red strategy aims to maximize the frequency of highly positive responses.
It’s the other way around.
I guess the Umesh principle applies. If you never have to throw food away, you’re preparing too little.
If you haven’t already, you can try deepbit.net. I did, and it’s working nicely so far.
Thanks, will do.
Where in the world is the SIAI house?
do you know that group? do you want their contact info?
No, and no need—I trust I’ll find them should the need arise.
I’m interested in being there, but that’s a pretty long drive for me. Is there any chance of holding it in Tel Aviv instead?
At the risk of stating the obvious: The information content of a datum is its surprisal, the negative logarithm of the prior probability that it is true. If I currently assign a 1% chance to the cat in the box being dead, discovering that it is dead gives me 6.64 bits of information.
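A minimal sketch of that computation (the function name is mine):

```python
import math

def surprisal_bits(prior_probability: float) -> float:
    """Information gained, in bits, upon learning that an event occurred,
    given the prior probability assigned to it: -log2(p)."""
    return -math.log2(prior_probability)

# A 1% prior yields about 6.64 bits on discovering the event is true.
print(round(surprisal_bits(0.01), 2))  # 6.64
```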
Eliezer Yudkowsky can solve EXPTIME-complete problems in polynomial time.
Sorry, I’m not sure I know how to answer that.
Just in case anyone didn’t get the joke (rot13):
Gur novyvgl gb qvivqr ol mreb vf pbzzbayl nggevohgrq gb Puhpx Abeevf, naq n fvathynevgl, n gbcvp bs vagrerfg gb RL, vf nyfb n zngurzngvpny grez eryngrq gb qvivfvba ol mreb (uggc://ra.jvxvcrqvn.bet/jvxv/Zngurzngvpny_fvathynevgl).
When Eliezer Yudkowsky divides by zero, he gets a singularity.
Now that I’ve looked it up, I don’t think it really has the same intuitions behind it as mixed strategy NE. But it does have an interesting connection with swings. If you try to push a heavy pendulum one way, you won’t get very far. Trying the other way you’ll also be out of luck. But if you push and pull alternately at the right frequency, you will obtain an impressive amplitude and height. Maybe it is because I’ve had firsthand experience with this that I don’t find Parrondo’s paradox all that puzzling.
I forgot to link in the OP. Then remembered, and forgot again.
This seems to use specific parameters for the beta distribution. In the model I describe, the parameters are tailored per domain. This is actually an important distinction.
I think using the lower bound of an interval makes every item “guilty until proven innocent”: with no data, we assume the item is of low quality. In my method, an item with no data is assigned the mean quality of all items (and it is important that we calibrate the parameters for the domain). Which is better is debatable.
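A sketch of the posterior-mean score described above, under a Beta prior with pseudocounts; the specific parameter values here are illustrative, not calibrated to any domain:

```python
def smoothed_mean(positive: int, total: int, alpha: float, beta: float) -> float:
    """Posterior mean of an item's positive-vote rate under a
    Beta(alpha, beta) prior. alpha / (alpha + beta) is the domain-wide
    mean quality; alpha + beta controls how quickly observed votes
    override the prior."""
    return (positive + alpha) / (total + alpha + beta)

# With no data, an item starts at the prior (domain) mean, not at zero,
# which is the "innocent until proven guilty" behavior described above.
print(smoothed_mean(0, 0, alpha=3.0, beta=1.0))    # 0.75
print(smoothed_mean(90, 100, alpha=3.0, beta=1.0))  # moves toward 0.9
```

In contrast, a lower-confidence-bound score (as in the linked sorting formula) starts every new item near zero and rises only as votes accumulate.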