I like it, but that ‘no way to make it go up’ is a problem.
I agree; the typical human brain balks and runs away when faced with a scale of merit whose max-point is 0.
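For concreteness (this is only my guess at the proposal under discussion, a plain log score), a quick sketch of why such a scale tops out at zero:

```python
import math

# Assuming the parent proposal is a log score: you add ln(p) honor,
# where p is the probability you assigned to whatever actually happened.
# Since p <= 1, ln(p) <= 0, so honor can never go up -- even a perfect
# p = 1 call merely adds 0.
for p in (1.0, 0.9, 0.5, 0.1):
    print(p, math.log(p))  # 0.0 at best, increasingly negative below 1
```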
To what extent do we want ‘honour’ to be a measure of calibration and to what extent a measure of predictive power?
Yes.
In other words, my honor as an epistemic rationalist should be a mix of calibration and predictive power. An amusing but arbitrary formula might be to give yourself 2x honor when your binary prediction with probability x comes true and to dock yourself |ln(1−x)| honor when it doesn’t. If you make 20 predictions at each of p = 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, and 0.95, for a total of 200 predictions a day, and you are perfectly calibrated, you would expect to lose about 3.4 honor each day.
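A minimal transcription of that rule, taking “dock yourself ln(1−x)” to mean losing that amount (my reading, since ln(1−x) is itself negative):

```python
import math

# The (self-described arbitrary) honor rule above: gain 2x when a
# binary prediction made with probability x comes true; when it fails,
# the change is ln(1-x), which is negative -- a dock that grows with x.
def honor_change(x: float, came_true: bool) -> float:
    """Signed honor change for one binary prediction made with probability x."""
    if came_true:
        return 2 * x          # reward scales with the confidence staked
    return math.log(1 - x)    # negative: the dock for a failed prediction

print(honor_change(0.9, True))   # 1.8
print(honor_change(0.9, False))  # about -2.3
```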
There’s gotta be a way to fix this so that a perfectly calibrated person would gain a tiny amount of honor each day rather than lose it. It might not be elegant, though. Got any ideas?
> I agree; the typical human brain balks and runs away when faced with a scale of merit whose max-point is 0.
Zero does seem more appropriate either as a minimum or a midpoint. If everything is going to be negative then flip it around and say ‘less is good’! But the main problem I have with only losing honor based on making predictions is that it essentially rewards never saying anything of importance that could be contradicted. That sounds a bit too much like real life for some reason. ;)
> There’s gotta be a way to fix this so that a perfectly calibrated person would gain a tiny amount of honor each day rather than lose it. It might not be elegant, though. Got any ideas?
The tricky part is not so much making up the equations but in determining what criteria to rate the scale against. We would inevitably be injecting something arbitrary.
You’re supposed to have a probability for everything. The closest you can come to not guessing is to give every possibility equal probability, in which case you’d lose honor even faster than normal.
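A quick sketch of that point, assuming the log-style dock discussed above: refusing to guess by splitting your probability evenly still costs honor, and more of it than a well-calibrated confident guess would.

```python
import math

# Honor lost per question at a 50/50 "non-guess" vs. the expected change
# for someone calibrated at 90%, under a plain log score.
uninformed = math.log(0.5)                             # always -0.69...
informed = 0.9 * math.log(0.9) + 0.1 * math.log(0.1)   # expected, if calibrated
print(uninformed, informed)  # the uninformed score is more negative
```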
You could give yourself honor equal to the square of the probability you gave, but that means you’d have an incentive to phrase it in as many questions as possible. After all, if you gave a single probability for what happens over your entire life, you couldn’t get more than one point of honor. With the system I mentioned first, you’d lose exactly the same honor either way.
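The splitting incentive can be checked directly (assuming two independent sub-predictions that both come true): a squared-probability reward pays more for split questions, while a log-based score is unchanged because ln(p1·p2) = ln(p1) + ln(p2).

```python
import math

# One combined prediction of a conjunction vs. two separate predictions.
p1, p2 = 0.9, 0.9
combined = p1 * p2  # probability assigned to the conjunction

# Squared-probability reward: splitting pays more.
reward_combined = combined ** 2
reward_split = p1 ** 2 + p2 ** 2
print(reward_combined, reward_split)

# Log score: the total is identical however you slice the question.
log_combined = math.log(combined)
log_split = math.log(p1) + math.log(p2)
print(math.isclose(log_combined, log_split))
```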